Hi everyone. Sorry for the small delay; there was a small glitch with the transport, but we are all back. So we have the pleasure to welcome Glen Bang from the University of Innsbruck, who will continue the introduction to the foundations of quantum computing. After this course there will be a welcome reception, which is going to take place at the address where we had lunch. Okay? Thank you very much.

So, as you probably know, I am going to speak today about adiabatic quantum computation. Let me start with a few sentences on the literature. For an overview there is the recent review by Albash and Lidar, "Adiabatic quantum computation", Reviews of Modern Physics, 2018. There you will find basically all the references on the topic, including the latest results and the latest theorems. But it is not very self-contained; it is a review, not an introduction. If you want to read something self-contained, I recommend the "Lecture Notes on Quantum Algorithms" by Andrew Childs. They give a self-contained introduction to the topic and, if you are willing to read them in depth, also rigorous proofs of the statements I will mention.

Until now you have probably seen the gate model of quantum computing. In that model the system undergoes a sequence of gates that form a circuit; I will go fairly quickly over this because you have already seen it. The final state is measured and you obtain an output. This is a digital model of quantum computation, because you apply these discrete gate elements.
Now I am going to discuss adiabatic quantum computation, and this model is quite different: instead of being digital, it is analog. The system is evolved with a certain time-dependent Hamiltonian; you can still make a measurement at the end and always obtain an output. Here the dynamics is governed by the Schrödinger equation, implemented by this time-dependent Hamiltonian, and the output should again be the solution of your computational problem.

So why would we take this approach, and why is it called adiabatic quantum computation? The reason is that the goal, in the end, is to prepare the system in the ground state of the final Hamiltonian H(T), where T is the final time. We want the output state |ψ(T)⟩ to be, approximately, the ground state of H(T), and we want that ground state to encode the solution of the problem. The reason we do this is that in physics we more or less have techniques to prepare ground states of Hamiltonians, and the idea is to apply those techniques here to come up with a new algorithm and a new model of computation.

The question, then, is how we can prepare the ground state of a given Hamiltonian within a given time using the Schrödinger time evolution. This can be done by relying on the adiabatic theorem, which is the backbone of adiabatic quantum computation. So let me state it. Before stating it, let us discuss the ingredients. We have a Hamiltonian H(t) which is time dependent, so its spectrum is also time dependent: we can solve the time-independent Schrödinger equation, that is, the eigenvalue problem, at each fixed time.
We then get a spectrum, given by energies E_n(t) and eigenvectors |n(t)⟩ as functions of time. They satisfy the eigenvalue equation: applying the Hamiltonian to an eigenstate just multiplies it by its eigenvalue,

H(t) |n(t)⟩ = E_n(t) |n(t)⟩.

So if we diagonalize the Hamiltonian at each time and look at the spectrum as a function of time, we have energy levels: the lowest level E_0(t), belonging to the ground state |0(t)⟩, then E_1(t) for the state |1(t)⟩, and so on for all the levels of the Hamiltonian. We make the hypothesis that these levels do not cross; in particular, since |0(t)⟩ is the ground state, E_n(t) > E_0(t) for all n ≥ 1 and all t, meaning that a crossing with the ground level cannot happen. Such a crossing is not contemplated within the theorem as I will state it now.

With this hypothesis we can start stating what the theorem says. The theorem says that if we start from an initial state |ψ(0)⟩ = |0(0)⟩, the ground state at time 0, and then let the system evolve with the Schrödinger equation, in the end the state of the system at time t, |ψ(t)⟩, is going to be approximately proportional to the instantaneous ground state |0(t)⟩.
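The no-crossing hypothesis is easy to check numerically for a concrete model. Here is a minimal sketch with an assumed toy two-qubit Hamiltonian (my own example, not from the lecture): diagonalize H(t) on a grid of times and verify that the gap between the two lowest levels stays strictly positive.

```python
# Sketch of the no-crossing hypothesis (assumed toy model): sample the
# instantaneous spectrum E_n(t) of a time-dependent two-qubit Hamiltonian
# and check that the ground level never touches the first excited level.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
I2 = np.eye(2)
XX = np.kron(X, I2) + np.kron(I2, X)
D = np.diag([0.0, 1.0, 2.0, 5.0])        # an arbitrary diagonal part

def H(t):                                 # t runs over [0, 1]
    return (1 - t) * (-XX) + t * D

gaps = []
for t in np.linspace(0.0, 1.0, 201):
    E = np.linalg.eigvalsh(H(t))          # eigenvalues in ascending order
    gaps.append(E[1] - E[0])              # instantaneous gap E_1(t) - E_0(t)

print(min(gaps))                          # stays strictly positive: no crossing
```

Because the off-diagonal part couples the levels, the crossings are avoided and the minimal gap stays finite, which is exactly the hypothesis the theorem needs.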
For this conclusion to hold, we need a sufficiently slow evolution, and I will write it here because it is quite important: if the Hamiltonian changes sufficiently slowly, the evolved state always remains in the instantaneous ground state. I can thus use this idea to prepare the ground state of a final Hamiltonian starting from the ground state of an initial Hamiltonian.

Before describing in detail how we can use this in quantum computing, I want to spend a few minutes discussing the proof and the bounds related to this theorem. To give a quantitative statement, we need to be a bit more precise than what I said above. We consider a Hamiltonian of the form H(t) = H(s) with the rescaled variable s = t/T, where T is the total evolution time. In this case we can rewrite the Schrödinger equation in terms of s, and we get, for |ψ(s)⟩,

d|ψ(s)⟩/ds = -(iT/ħ) H(s) |ψ(s)⟩.

So by doing this change of variables we get a factor of T here, and the change of variables is quite relevant because this parameter allows us to slow down the evolution. The Hamiltonian goes from the initial point H(s=0) to the final point H(s=1), and in terms of the physical time this happens between 0 and T. By sending T to infinity I make the dynamics slower and slower, because it takes more time to reach the final Hamiltonian. So this is the setting.
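To make the role of T concrete, here is a minimal numerical sketch (my own toy model, not from the lecture): one qubit with H(s) = (1-s)X + sZ, integrated in the rescaled variable s by short exact-exponential steps, with ħ = 1. A slower sweep (larger T) ends up much closer to the final ground state.

```python
# Numerical sketch of the rescaled Schrodinger equation (assumed toy model):
# integrate d|psi>/ds = -i T H(s) |psi> and compare the result with the
# instantaneous ground state at s = 1.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def H(s):
    return (1 - s) * X + s * Z

def evolve_step(psi, Hs, phase_time):
    """Apply exp(-i * phase_time * Hs) to psi via diagonalization."""
    w, v = np.linalg.eigh(Hs)
    return (v * np.exp(-1j * phase_time * w)) @ (v.conj().T @ psi)

def ground_state_fidelity(T, steps=2000):
    """Start in the ground state of H(0), evolve for total time T, and
    return the squared overlap with the ground state of H(1)."""
    ds = 1.0 / steps
    psi = np.linalg.eigh(H(0.0))[1][:, 0]           # ground state of H(0)
    for k in range(steps):
        psi = evolve_step(psi, H((k + 0.5) * ds), T * ds)
    ground_final = np.linalg.eigh(H(1.0))[1][:, 0]  # ground state of H(1)
    return float(abs(ground_final.conj() @ psi) ** 2)

# A slower sweep tracks the instantaneous ground state better.
print(ground_state_fidelity(T=1.0))
print(ground_state_fidelity(T=50.0))
```

With T = 1 the sweep is strongly diabatic and a large part of the population is left behind; with T = 50 the fidelity with the final ground state is close to one, which is the content of the theorem.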
With this setting it is possible to start attempting to prove the theorem. I am not going to give a rigorous proof, which you can find in the lecture notes by Childs, because this is maybe not the right place for it; I just want to give some idea of how the proof works and what comes out of it. Beyond knowing that as T → ∞ the dynamics becomes adiabatic, so that the state stays in the ground state, we would like to know how large T has to be for this statement to hold.

The first part of the proof is to find an exact adiabatic evolution. The theorem says that when the slowness condition holds, the state will be approximately the instantaneous ground state; but we first want to define an alternative Hamiltonian that implements the adiabatic dynamics exactly. One standard choice (this is my reconstruction of the form used in the lecture notes) is

H_A(s) = H(s) + (iħ/T) [Ṗ(s), P(s)],

where P(s) is the projector onto the instantaneous ground state and the dot denotes d/ds. To be precise we should write the s everywhere, but I may drop it where it is implicit: everything here except T is s dependent. The first statement is that this Hamiltonian implements the exact adiabatic dynamics. What does that mean? It means that if |ψ_A(s)⟩ satisfies the Schrödinger equation with H_A, with the initial condition that we start from the ground state, then the solution is |ψ_A(s)⟩ = c(s) |0(s)⟩, a coefficient times the instantaneous ground state, with everything kept in terms of s.
So this is a solution of the equation, with some appropriate c(0). The first step, then, is defining a Hamiltonian that does the job exactly, implementing an exact adiabatic evolution. The next thing to do for the proof is to show that the two evolution operators, U_A and U, are close. Here U_A is the evolution operator induced by the exact adiabatic Hamiltonian, satisfying

dU_A/ds = -(iT/ħ) H_A(s) U_A(s),

and U is the evolution implemented by our original Hamiltonian, satisfying

dU/ds = -(iT/ħ) H(s) U(s).

Again, both of these are functions of s; as I said, everything except T is basically a function of s. So I know the differential equations satisfied by these two evolution operators, and the idea is to show from this that the difference between them becomes small as T goes to infinity. To summarize: after defining the exact adiabatic evolution, the main point is to show that it is actually close to the evolution induced by the original time-dependent Hamiltonian when the process is slow enough, that is, when T → ∞. Finally, the third step is to put quantitative conditions, saying how large T must be. These are the steps that lead to the proof. I have not proved any of them yet, but I want to start by proving at least the first one, because it will give us the state, and the phases, that are actually implemented by this slow evolution in the limit T → ∞.
So let me look at the first step. The idea is to show that |ψ_A(s)⟩ = c(s)|0(s)⟩ satisfies the Schrödinger equation with our H_A(s). Let us write that explicitly: we need

d/ds [c(s)|0(s)⟩] + (iT/ħ) H_A(s) c(s)|0(s)⟩ = 0,

which is just the Schrödinger equation with everything brought to one side (I keep the s for consistency, and I have reinstated an ħ that I forgot, giving the 1/ħ in front of the Hamiltonian term). We can start writing this out and see what comes out. Since there are going to be a few derivatives, I use the notation ẋ = dx/ds. The first part is

d/ds [c|0⟩] = ċ|0⟩ + c|0̇⟩.

For the second part we have to put in the Hamiltonian. Using the eigenvalue equation — since |0⟩ is an eigenvector of H(s), and by linearity the coefficient c goes through — the H term gives (iT/ħ) E_0 c|0⟩. Then there is the commutator term, and that is the object we would like to simplify. Let us compute it applied to |0⟩, recalling that P|0⟩ = |0⟩ and P = |0⟩⟨0|; a short computation gives

[Ṗ, P]|0⟩ = Ṗ|0⟩ - P Ṗ|0⟩ = |0̇⟩ - ⟨0|0̇⟩|0⟩.
Now we insert this into the equation. I am not going to do the computation bit by bit; let me just give you the result. When you insert this, all the terms in the equation vanish except those that multiply the eigenstate |0⟩ directly, and we are left with an equation for the coefficient alone:

ċ + (iT/ħ) E_0 c + ⟨0|0̇⟩ c = 0.

This equation has a solution which is an exponential of a dynamical phase and of a geometric phase, times the value fixed by the initial condition:

c(s) = exp( -(iT/ħ) ∫₀ˢ E_0(s') ds' ) · exp( iγ(s) ) · c(0),

where the geometric phase is the integral

γ(s) = i ∫₀ˢ ⟨0(s')|0̇(s')⟩ ds'.

So the state evolves acquiring these two different phases. One is simply what you would expect from the usual time-independent Hamiltonian, where you acquire a phase proportional to the energy times the time; here E_0 depends on time, so you have to integrate it. The second term is the geometric phase, which plays an important role, for example, when you consider cycles that return to the initial point.
In our case, though, we are mainly interested in the dynamical phase. I just want to mention that if you proceed with proving the theorem, you find that the oscillations induced by the dynamical phase are what essentially allow you to prove it: the state carries rapidly oscillating phases, these phases appear in the exponentials inside the integrals you have to compute, and you integrate by parts to obtain a bound on the error. I will skip this part because it is quite technical, and just give you the result for the bound. If you finally attempt to estimate how far your evolution is from the exact adiabatic evolution, you find a bound of the form

ε · T ≤ C · max_s ( ‖Ḣ(s)‖ / Δ(s)² ),

that is, the error times the total time is bounded by a constant times the norm of the derivative of the Hamiltonian divided by the square of the gap, maximized over s. Here the gap is the difference between the two lowest levels,

Δ(t) = E_1(t) - E_0(t),

the difference between the E_1 state and the E_0 state; this quantity is defined for each value of s, and, as written explicitly above, you have to take the maximum.
This is what the theorem states: it defines the error, telling us how close the evolved state is to the state implemented by the exact adiabatic Hamiltonian. If I call this error ε, the time needed to achieve an error ε is

T(ε) ≈ (C/ε) · max_s ( ‖Ḣ(s)‖ / Δ(s)² ).

This tells us that if we want to keep a certain fixed error — for example, we want the final state to be close, up to ε, to the ground state of the final Hamiltonian — we have to increase the time, because when we reduce ε the time increases. On the other hand, it also tells us that at fixed ε we can achieve sufficiently good performance by choosing the time large enough. And which physical quantities define how large this time has to be? The derivative of the Hamiltonian and the gap. If, for example, the derivative is constant for some reason, the time needed to reach a given error ε scales with the inverse square of the gap. So this is the adiabatic approximation: this equation tells us how much time we need to achieve a certain precision. Now I want to briefly describe how this is used in adiabatic quantum computation. I am going to describe the algorithm, and tomorrow I will show practical examples; today is just about understanding what the algorithm is and what its theoretical conditions are.
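As a sketch, the quantities in this estimate can be evaluated numerically on a grid for an assumed toy single-qubit interpolation (my own example; the constant C is set to 1 for illustration, since its true value depends on the details of the proof).

```python
# Sketch of the runtime estimate T(eps) ~ (C/eps) * max_s ||H'(s)|| / Delta(s)^2,
# evaluated on an s-grid for an assumed toy interpolation, with C = 1.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def H(s):
    return (1 - s) * X + s * Z

def runtime_estimate(eps, grid=1000):
    dH = Z - X                      # dH/ds is constant for a linear schedule
    worst = 0.0
    for s in np.linspace(0.0, 1.0, grid):
        evals = np.linalg.eigvalsh(H(s))
        gap = evals[1] - evals[0]   # Delta(s) = E_1(s) - E_0(s)
        worst = max(worst, np.linalg.norm(dH, 2) / gap**2)
    return worst / eps

print(runtime_estimate(0.01))
```

The estimate grows as 1/eps, and the dominant contribution comes from the point along the path where the gap is smallest, exactly as the bound says.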
We start from a certain initial Hamiltonian, say H_i, and we want to end in a final Hamiltonian that encodes the solution of a problem. To do this it is easier to restrict ourselves to combinatorial optimization problems: we consider the problem of minimizing a function h : {0,1}ⁿ → ℝ, that is, min_z h(z). This is an optimization problem because we are given a classical function and we want to find the string z that minimizes it. Many problems can actually be written in this form, including quite complicated, NP-hard ones — for example satisfiability problems and the traveling salesman problem, some of which I will discuss tomorrow. Also the Grover problem that you already saw can be written in this shape.

If a problem can be recast as finding the minimum of such a function, that in turn can be recast as finding the ground state of a certain Hamiltonian. We define a diagonal Hamiltonian as a sum over all possible strings z,

H_f = Σ_z h(z) |z⟩⟨z|,

so that the elements of the diagonal are given exactly by h and the eigenvectors are exactly the computational basis states. Since z is a string of n values, each 0 or 1, |z⟩ can be seen as a state of a set of n qubits, so this Hamiltonian lives in the space of n qubits. And since the Hamiltonian is diagonal and we already know the energies, the ground state of the final Hamiltonian is the state with minimal energy — in this case, the state with minimal h(z).
So it is also the state |z*⟩ whose string z* minimizes our function: the ground state we want at the end of the evolution, at time T, is |z*⟩, with z* the minimizer of our initial problem. We can then use the adiabatic theorem to build a Hamiltonian evolution that produces this ground state. To do this we consider the following interpolating Hamiltonian H(s); let me give you a simple form, which can be changed:

H(s) = s H_f + (1 - s) H_i,

and I recall that s = t/T. This is a particular case where I have chosen a linear schedule; as we will see, this may not necessarily be the best one. At s = 0 the Hamiltonian is H(0) = H_i, and at s = 1 it is H(1) = H_f. If I am able to prepare the system in the initial state — which is one of the requirements of the adiabatic theorem, so I must be able to prepare the ground state of H_i — then the adiabatic theorem assures us that evolving with this Hamiltonian generates the solution we are looking for. This is essentially how things work. As for the time I need: we said that the required time is dictated by the derivative of the interpolating Hamiltonian and by its minimal gap, so I can get the runtime of the algorithm, at fixed precision, by computing that bound. The analysis of this kind of algorithm for a given problem thus rests on the Hamiltonian that interpolates between an initial one and a final one.
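The interpolation can be sketched for two qubits; note that the lecture does not fix H_i, so the transverse-field choice below, whose ground state |+...+⟩ is easy to prepare, is an assumption on my part, as is the toy diagonal H_f.

```python
# Sketch of the interpolation H(s) = s*H_f + (1-s)*H_i for n = 2 qubits,
# with the common (assumed) choice H_i = -sum_i X_i.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
I2 = np.eye(2)

Hi = -(np.kron(X, I2) + np.kron(I2, X))   # initial Hamiltonian
Hf = np.diag([2.0, 0.0, 1.0, 3.0])        # assumed diagonal cost Hamiltonian

def H(s):
    return s * Hf + (1 - s) * Hi

# Boundary conditions of the schedule: H(0) = H_i and H(1) = H_f.
assert np.allclose(H(0.0), Hi) and np.allclose(H(1.0), Hf)

# Ground state of H_i is the uniform superposition over all bit strings,
# which is the easy-to-prepare starting point of the algorithm.
_, vecs = np.linalg.eigh(Hi)
print(np.round(np.abs(vecs[:, 0]), 3))    # all amplitudes have modulus 0.5
```

The design choice here is exactly the one described in the text: the hard part of the problem sits entirely in the diagonal H_f, while H_i is chosen only for having a known, easily preparable ground state.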
The final Hamiltonian should encode your problem of interest, and the initial one should have a ground state that is sufficiently easy to prepare, so that we can actually do this ground-state preparation. After the evolution, if the time is longer than a running time that depends inversely on the square of the minimal gap of the Hamiltonian along the evolution path, the system returns the ground state of the final Hamiltonian, which is also the solution of our initial problem. This is just the theoretical side — how the algorithm should work in theory. Tomorrow I will discuss explicit examples of the algorithm and show how it actually works in practice for some specific problems.