First, I will tell you why and how we want to apply it, and then explain how it actually works, because without the motivation it may be a bit difficult to understand why we care about such an operation. In advance I can just say that the operator Fourier transform is a sort of operator analog of phase estimation: you can think of it as a Heisenberg-picture version of quantum phase estimation, where instead of estimating the energy of a state we label an operator by the energy changes it induces. The motivation is Metropolis sampling and the preparation of Gibbs states. In a classical system, the Gibbs state describes a system in thermal equilibrium: it is a distribution over configurations in which the probability of each configuration is weighted by a Boltzmann factor, e to the minus beta times the energy, where beta is the inverse temperature parameter. As you lower the temperature, the distribution concentrates more and more on the low-energy configurations. And as you increase the energy, the exponential becomes very, very small, and so it's very unlikely that you find your state in a high-energy state. This is the intuitive meaning of it.

And then the quantum analog of this is that your state is basically diagonal in the basis of your Hamiltonian, so it is a distribution over energy eigenstates of your Hamiltonian, and formally you can write it as e to the minus beta times the Hamiltonian, normalized. If you diagonalize your Hamiltonian, then it is the same thing as before: an ensemble over the eigenstates of the Hamiltonian, weighted by the energies as described by the Hamiltonian of your system. Okay, so this is a Gibbs state. In this talk I will have many more physics connections, so if I say something that maybe is unclear, then please interrupt and ask.

Okay, so first I would like to start by reviewing the classical discrete Metropolis-Hastings algorithm. The objective in general is that you want to sample from some target distribution, which is proportional to some vector that you are given. In our case it will be just this Gibbs distribution; it doesn't have to be normalized, but it has to be non-negative, and you want to sample from the distribution proportional to it. An example would be: you have an n-spin Ising model, so you have basically just bit strings of length n, and your energy function is defined by this classical Hamiltonian. Everything is diagonal in the computational basis, that's why it's classical. You have some two-body terms and some single-body terms, and according to this formula you can compute the energy of any configuration; the z_i and z_j are just the i-th and j-th coordinates of your bit string z. And now the target distribution would be just the Gibbs distribution defined by this energy function.
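As a minimal concrete illustration of the target distribution just described (not from the talk; the quadratic Ising form of the energy and the tiny system size are assumptions made for the example):

```python
import numpy as np
from itertools import product

def ising_energy(z, J, h):
    """Energy H(z) = sum_{i<j} J_ij z_i z_j + sum_i h_i z_i of a spin configuration z (entries +/-1)."""
    n = len(z)
    two_body = sum(J[i, j] * z[i] * z[j] for i in range(n) for j in range(i + 1, n))
    one_body = sum(h[i] * z[i] for i in range(n))
    return two_body + one_body

def gibbs_distribution(J, h, beta):
    """Exact Gibbs distribution over all 2^n configurations; only feasible for small n."""
    n = len(h)
    configs = [np.array(z) for z in product([-1, +1], repeat=n)]
    weights = np.array([np.exp(-beta * ising_energy(z, J, h)) for z in configs])
    return configs, weights / weights.sum()   # the normalization Z is just the sum of the weights

# tiny example: 3 spins with ferromagnetic nearest-neighbour couplings and no field
n = 3
J = np.zeros((n, n)); J[0, 1] = J[1, 2] = -1.0
h = np.zeros(n)
configs, p = gibbs_distribution(J, h, beta=1.0)   # p concentrates on the aligned configurations
```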
And so the dimension of this space, the size of the configuration space, would be 2 to the lowercase n, because you have n spins. Okay, so this is just some background, and this Metropolis algorithm actually works more generally: it doesn't have to be the Gibbs distribution, any such target vector tau suffices. And here is the algorithm. It starts with some symmetric exploratory Markov chain. They call it exploratory because it's a good idea to start with a Markov chain which rapidly converges and visits lots of parts of the configuration space to begin with. A very good example in this classical case would be: just pick a random spin of these n spins and flip its value from plus one to minus one or back. So you start with this very easy process. For example, if you just do this random bit flip, then because it's a symmetric process, its stationary distribution is the uniform distribution, and this is not what we want; we want a random walk on this configuration space such that it will converge to the Gibbs distribution. And this is what the Metropolis-Hastings algorithm does. It says: okay, we will modify the original Markov chain. Let's first try to make a transition from z to z prime according to this Markov chain, with the probability described there. If my likelihood actually went up, then you accept it. So tau of z prime being larger than tau of z means that it is a more likely state; in physics language, for a Gibbs state that would correspond to a lower-energy state. So if your likelihood in the target distribution is higher, then it's a good move and you accept it. However, if the likelihood of this new state would be smaller than where you were, then you only accept this move with the ratio of the likelihoods, tau of z prime over tau of z, and otherwise you reject it, with probability 1 minus tau of z prime over tau of z. So in particular, if z prime is much more unlikely than z, then this number will be very close to zero and you almost always reject.

The good thing about this algorithm is that this modified Markov chain, which applies this Metropolis-Hastings rule, has some very nice properties. First of all, the stationary distribution of this Markov chain will be the target distribution, and we achieve this all without knowing the actual normalization of this vector, which is good. It being the stationary distribution means that, hopefully, the chain will converge to it as you apply your Markov chain for some time. For analyzing these kinds of properties, it is very useful to know that this Markov chain is reversible with respect to the target distribution, which is also called the detailed balance property; I won't write it here, but there will be some exercises about this in the exercise class if you haven't seen the concept. Moreover, in some sense this Metropolis version of your walk is the closest Markov chain to the original one which has the target distribution as its stationary distribution. And, well, it's also nice in practice, because it turns out that often this process just converges rapidly to the stationary distribution, and then you can sample from your target distribution by running this modified Markov chain only for a short time.
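Here is a minimal sketch of the discrete Metropolis-Hastings step just described (my own illustration; the single random spin flip is the exploratory proposal from the talk, and the tiny 3-spin chain is an assumed toy target):

```python
import numpy as np

def metropolis_step(z, tau, rng):
    """One Metropolis-Hastings update: propose a symmetric random spin flip, then accept or reject."""
    z_new = z.copy()
    z_new[rng.integers(len(z))] *= -1          # exploratory move: flip one randomly chosen spin
    ratio = tau(z_new) / tau(z)                # only the ratio of *unnormalized* target weights is needed
    if ratio >= 1 or rng.random() < ratio:     # accept if the likelihood went up, else with prob. tau(z')/tau(z)
        return z_new
    return z                                   # reject: stay where you were

# usage with an unnormalized Gibbs weight for a 3-spin ferromagnetic chain
rng = np.random.default_rng(0)
beta, n = 1.0, 3
energy = lambda z: -1.0 * (z[0] * z[1] + z[1] * z[2])
tau = lambda z: np.exp(-beta * energy(z))
z = rng.choice([-1, 1], size=n)
for _ in range(1000):                          # run the chain; samples approach the Gibbs distribution
    z = metropolis_step(z, tau, rng)
```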
Okay. Now, you can also define a continuous-time version of this Metropolis-Hastings algorithm, which is less well known, but it will be more useful in the quantum case, so I wanted to review this classical continuous-time version. Now you have a continuous-time Markov chain with a symmetric generator L. This generator is such that its off-diagonal entries are non-negative numbers that describe jump rates: something like, during a small time period, how likely is it that I jump from my current state to the next one. It's a little bit similar to, I don't know, radioactive decay. If you are modeling radioactive decay, then you have the state where the atom is still intact and the decayed one, and there is some decay rate; the probability decays continuously, and you just need to exponentiate these rates to know how many of your original atoms remain. And to make sure that this process, when you take the generator and exponentiate it, preserves probabilities, you need to make sure that once you jumped away from somewhere, the probability of staying there is reduced: you sum up all the jump rates out of a particular state and take that with a minus sign in the diagonal entry, because that's how much you decay in the original position. This defines a Laplacian matrix, which corresponds to a weighted directed graph, and that is very natural, because you often draw pictures of Markov chains as walks on a graph, and indeed they are mathematically essentially equivalent, especially in this reversible case.

Okay, so the continuous-time Metropolis-Hastings is very similar to the discrete one. We just need to modify the jump rates, as opposed to the stochastic matrix which described the discrete jumps. If a jump happens to increase the likelihood in the target distribution, then you always accept it, so for the modified Laplacian you take the same entry at that position. However, if this infinitesimal jump decreases the likelihood, then once again you only accept it with the ratio of the two probabilities, which is less than one in this case, and otherwise reject the move, which means that you are correspondingly decreasing the diagonal entry in your Laplacian. And once again, if you take this Metropolis version of your continuous-time Markov chain, then you get similar properties as in the discrete case: the stationary distribution, when you exponentiate this continuous process, will be the target distribution, and once again we don't need the normalization of this distribution or this vector. Once again you get a reversible process, which is detailed balanced, and once again it is in some sense the closest generator to the original one, and similarly it often converges very fast in physically motivated examples. Okay, so there is a very tight connection between the discrete and the continuous versions, but the continuous one will be somehow more friendly to the quantum case, as I will tell you on the next slide.
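A minimal classical sketch of the continuous-time construction just described (my own illustration; the convention that the generator acts on probability column vectors, with each column summing to zero, is a choice made for the sketch):

```python
import numpy as np
from scipy.linalg import expm

def metropolis_generator(L, tau):
    """Metropolis-modify a symmetric generator L: damp the jump rates that decrease the target weight tau."""
    d = L.shape[0]
    L_new = np.zeros_like(L, dtype=float)
    for z_new in range(d):
        for z in range(d):
            if z_new != z:
                accept = min(1.0, tau[z_new] / tau[z])      # accept likelihood increases, damp decreases
                L_new[z_new, z] = L[z_new, z] * accept       # modified jump rate z -> z_new
    L_new -= np.diag(L_new.sum(axis=0))                      # decay terms: columns sum to zero (probability preserved)
    return L_new

# toy check that the target distribution is stationary for the exponentiated modified generator
rng = np.random.default_rng(0)
d = 4
A = rng.random((d, d)); L = (A + A.T) / 2; np.fill_diagonal(L, 0)   # symmetric exploratory jump rates
L -= np.diag(L.sum(axis=0))
tau = rng.random(d)                                                 # unnormalized target weights
pi = tau / tau.sum()
P_t = expm(5.0 * metropolis_generator(L, tau))                      # time-5 transition matrix
print(np.linalg.norm(P_t @ pi - pi))                                # ≈ 0: pi is a fixed point
```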
So let's try to do quantum Metropolis sampling. In this case the objective function, the Hamiltonian, will be non-commuting, and that causes a lot of trouble. For example, the example that I showed you was a classical spin system; if the single-site terms are written with Z operators, then it is exactly the same as the classical case, nothing changes, but if I replace them by X operators, then suddenly my Hamiltonian has non-commuting terms and everything becomes potentially much more difficult.

Okay, so now the goal is to prepare the quantum Gibbs state on a quantum computer, and let's try to describe an analog of the discrete-time Metropolis-Hastings algorithm. Once again you start with some symmetric exploratory quantum process, which is a quantum channel, and once again you can do the same as before: you pick a random spin and flip it, that is, you apply this X_j operator at a random location j. The quantum Metropolis algorithm similarly modifies this quantum channel; this was a Nature paper by Temme et al., a bit more than 10 years ago. Once again you just look at the energies of your states: if after the move your energy decreased, then you got to a more likely state in the Gibbs distribution, and therefore you accept it. Here I just explicitly use the diagonalization of my Hamiltonian into its energy levels. And if my new energy after the jump actually went up, then similarly we can apply the same Metropolis rule as before, only accept this move with the corresponding probability, and otherwise reject. So this is, as I described, basically just a random walk on the eigenstates, so it's basically the same thing as in the classical situation, just you are doing this walk in a different basis. And because of this view that it's actually just a random walk on the eigenstates, the same proof works showing that this actually converges to the Gibbs distribution, and hopefully it converges rapidly for physically motivated examples.

But now comes the difficult part: because it's a non-commuting system, it can be quite challenging to actually compute the energy. So how do you decide, for the state you are currently in, what the energy of that state is? Well, the best you can do in general is just use phase estimation, which we talked about quite a bit in the previous days. Phase estimation unfortunately will only approximate your energy, and we have some uncertainty, and that causes trouble, because you cannot exactly compute the acceptance and rejection probabilities when you only have an approximation of your energy. Another problem, which is similarly difficult or maybe even more difficult, is that in the classical case it was very easy to revert a move: you recorded where you were before the jump, you did the jump, compared the likelihoods or the energies, and then if you decided to go back, you just went back and forgot the last step. But in the quantum case you can't quite do that, because of no-cloning: it would mean that you copy your state, do a jump, and keep either of the copies, and this is of course impossible in general. There is a solution to this in some sense, using something called the Marriott-Watrous rewinding technique, but that also has a difficult interplay with the uncertainty in phase estimation, and it's kind of messy and hard to prove that it works right, so it's very difficult to analyze and not so nice to work with. But this was the best technique at the time, so that's how they described it, and this was the original approach. And so many of the later attempts at Gibbs sampling tried to deal with these two problems: this rewinding problem and its interplay, and also the fundamental difficulties coming from the ambiguity of the energy estimation algorithm. In the original approach they used a certain version of phase estimation which was shift invariant and boosted; but boosting is a difficult property, and this shift invariance roughly meant
that shifting your initial state just shifts the estimate that you get. That actually turned out not to play well with boosting, and there seems to be some technical issue with how that works, so the particular technical tool that they relied on in some part of the algorithm seems to be impossible to implement due to some topological reasons. It remains to be seen whether there is an easy fix, but the current version seems to have issues with the phase estimation uncertainty. Then there was a version of this Metropolis algorithm which, being a random walk, tried to use quantum walk techniques to speed it up, and they just assumed that phase estimation is perfect, that it gives you the energy exactly. But this is unphysical because of the energy-time uncertainty principle: if you have some quantum system that you interact with for time t, then basically the best precision that you can get in your energy estimates is something like 1 over t. This is the same thing as we have seen in phase estimation, so learning the energies exactly is simply unphysical. Then more recently Wocjan and Temme showed an alternative approach, which is very nice and uses this continuous-time quantum Metropolis algorithm, which in the physics language corresponds to so-called Davies generators. But once again they got into trouble with this uncertainty about the energies, and to solve it they assumed that the spectrum has some periodic gaps aligned with the phase estimation mesh that they use; they called this a rounding promise. But this is once again unphysical: if you have a many-body system, then it often has an essentially continuous spectrum. Very recently there was a further improvement where they could resolve some of these problems in this continuous-time setting by randomly shifting the mesh of phase estimation and showing that with high probability their mesh doesn't align badly with what you want to estimate; using these kinds of arguments they could improve on this rounding promise problem.

And now, in this paper with my coauthors Chi-Fang (Anthony) Chen, Michael Kastoryano, and Fernando Brandão, we made one more step within this continuous framework and basically solved these issues. The solution was to use this very nice Gaussian-smoothed version of phase estimation that I talked about, which gives very nice properties of your estimator of the energies, and the other ingredient was to use the operator Fourier transform, this Heisenberg picture of phase estimation. These two new ingredients seem to tackle all the technical issues that we have a long history of struggling with, really a challenging combination of difficulties. So what is the operator Fourier transform, and why do we need it?

Okay, I hope that you will bear with me. So now I will describe the continuous-time quantum processes and their Metropolis version, and for this I need a quantum analog of the infinitesimal jump rates that made up the Laplacian, and of how we got to the Markov chain by exponentiating it. The Gibbs state is a quantum state; it's a mixed state, not a pure state, but it's a perfectly valid quantum state, and you wish to prepare this mixed state. In general, once you have the Gibbs state, you can measure observables and learn properties of it, which, if you have some system that you want to understand better, tells you the behavior of your physical system at different temperatures and so on. So Gibbs sampling is just preparing this state so that
you can study and understand the properties of your physical system; that's one of the main motivations. This could be helpful, for example, for high-temperature superconductors, for which the theory is not very well understood: if you have a tool for simulating things and understanding how they behave at different temperatures, that might just help you gain more intuition and understanding. But also for other condensed matter systems and in materials science, understanding the different temperature-dependent properties of your material is very important, and for that it's a fundamental primitive just to be able to prepare the state which represents your physical system to begin with.

Well, I don't know, a quantum state is an ensemble of pure states, and you want to prepare this ensemble properly so that the measurement statistics recover the Gibbs state statistics. One way of doing it is to prepare a purification of your state and then say that if I trace out the second register, then the density operator of the first register describes the Gibbs state, and then you can do whatever you want with it; we have such purified algorithms, but also ones where you are simulating the Gibbs state in this ensemble sense. Well, yeah, so if you had really good phase estimation and could very precisely nail down the energies, then what you would do is: you have your state, and suppose it is an energy eigenstate; you apply some local operation, which in general brings you to a superposition of other energy eigenstates, but now you can do phase estimation again and it basically tells you in which energy eigenstate you ended up. So it's a random walk on energy eigenstates that way, and you know exactly which energy you end up at after each of these steps. That would very closely approximate the statistics of the true Gibbs state, so in a statistical sense you have actually prepared the Gibbs state, although in every instance you would seemingly just get a pure state with a particular energy; the statistics of the pure states that you end up with actually correspond to the Gibbs distribution, and that's totally fine. Yeah, it's kind of difficult to see in this discrete picture, but let me explain the continuous one, and then lots of these issues are actually resolved; but also maybe we can take it offline if there are more detailed questions later.

Okay, so what did we have in the classical case? You had some infinitesimal jumps from one state to another, and the collection of these was put into a Laplacian matrix, and you exponentiated that. So we need something like this in the quantum setting. There the distribution is replaced by a density matrix rho, and you want to do some kind of jumps, and it turns out that the jumps can be described this way: you have your rho and you sandwich it with some operators K_j and their daggers, and this means that you transition from your current state to a new one; these K_j operators are the infinitesimal jump rates, now operators because we work with quantum systems. Then, to make everything trace preserving, so that your quantum state doesn't just disappear, you also need to add this term which corresponds to decay; that's how much you are jumping away, that's the decay term, and it corresponds to the diagonal part of your Laplacian matrix. So this is completely analogous, there is just slightly more to write down; just think about it as jump rates, transition rates and decay rates. So mathematically this is the description of these continuous-time quantum channels.
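Here is a minimal sketch of the generator just described, written directly as a map on density matrices (my own illustration; the specific jump operator in the example is an arbitrary choice):

```python
import numpy as np

def lindblad_generator(rho, Ks):
    """L(rho) = sum_j ( K_j rho K_j^dag - 1/2 {K_j^dag K_j, rho} ).

    The first term is the jump: rho sandwiched between K_j and its dagger.  The second,
    anticommutator term is the decay part that plays the role of the diagonal of the
    classical Laplacian and keeps the trace (total probability) fixed.
    """
    out = np.zeros_like(rho, dtype=complex)
    for K in Ks:
        KdK = K.conj().T @ K
        out += K @ rho @ K.conj().T - 0.5 * (KdK @ rho + rho @ KdK)
    return out

# sanity check on a random 1-qubit density matrix: the generator is traceless on states,
# so exponentiating it gives a trace-preserving (probability-preserving) evolution
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
rho = A @ A.conj().T; rho /= np.trace(rho)
K = np.array([[0, 1], [0, 0]], dtype=complex)            # an example decay-like jump operator
print(abs(np.trace(lindblad_generator(rho, [K]))))       # ≈ 0
```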
If you want to know, when you run this stochastic quantum process for time t, what the quantum channel corresponding to time-t evolution is, then once again you just exponentiate this operator, which is now a superoperator, because it maps density operators to density operators; you exponentiate this superoperator, which is an even larger matrix, and that gives a new superoperator describing your quantum channel. So this is completely analogous to the classical case, but the dimension is much higher.

And now here comes how you would get the Metropolis modification of the jumps, and this is how you get the generator of the process. It is also very similar to what happens in nature when you have thermalization: you put your quantum system weakly in contact with a big bath and you wait, and what nature does, and how it drives your system to the thermal state, which is the Gibbs state, will be very similar to the Metropolis algorithm. It's basically the same thing, maybe some weight choice is slightly different, but essentially the same thing happens. So, in the Metropolis algorithm, whenever we change the energy, we want to accept changes that decrease the energy and only accept energy-increasing changes with exponentially small probability; this is the Metropolis rule. So what should we do? This K was the jump operator, the transition operator, and we need to decompose it into parts. I denote by the delta part of this operator K the part of the matrix that changes the energy by delta; so if you apply this part of the matrix to a state, it increases its energy by delta. Any transition in your matrix induces some energy change, so if you sum over all the energy changes, then you recover the original matrix. To understand this, just imagine that your matrix K is written in the eigenbasis of your Hamiltonian; then each part really just corresponds to particular matrix elements in this table, and you are decomposing it according to how big an energy change each particular matrix element induces.

Okay, so now your jump operators are decomposed according to how big an energy change they make, and once you know how big the energy change is, you can apply the Metropolis step: if the energy change is negative, then you accept it, that's this term here, and otherwise, if the energy actually went up, you damp it by a weight that is exponentially small in the energy increase. This means that your infinitesimal jumps are reduced in the case where they increase the energy, and there is also a decay part, which is just the same thing as before, the thing that ensures trace preservation.

Okay, so here comes the need for the operator Fourier transform. Once again, the energy differences are called Bohr frequencies in physics, and as I told you, we can decompose any matrix K according to the set of Bohr frequencies: K_delta is the part of the transitions that changes the energy by delta, any transition induces some energy change, and any energy change can only come from differences of energies of your Hamiltonian, so if you sum up you recover the original matrix. What we want to do is somehow label the different parts of your transition matrix with the different transition energies, the different energy differences that they induce. So ideally we would have some qubits initialized in the zero state, because that's what we usually do, and we have our operator, and we want to label the operator with the corresponding energy change.
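Here is a minimal numerical sketch of this exact decomposition, K as the sum over delta of K_delta, where K_delta collects the matrix elements between eigenstates whose energies differ by delta, together with the Metropolis damping of the energy-raising parts (my own illustration; putting half of the Boltzmann exponent on the amplitude, so that the induced transition rate picks up the full factor e^{-beta*delta}, is a convention chosen for the sketch, not something spelled out in the talk):

```python
import numpy as np

def bohr_decomposition(K, H):
    """Split K into pieces K_delta that change the energy of H by exactly delta (the Bohr frequencies)."""
    energies, V = np.linalg.eigh(H)
    K_eig = V.conj().T @ K @ V                                 # K written in the eigenbasis of H
    pieces = {}
    for a, Ea in enumerate(energies):
        for b, Eb in enumerate(energies):
            delta = round(Ea - Eb, 10)                         # energy change of the transition |E_b> -> |E_a>
            pieces.setdefault(delta, np.zeros_like(K_eig))[a, b] = K_eig[a, b]
    return {d: V @ P @ V.conj().T for d, P in pieces.items()}  # back to the original basis

def metropolis_weighted_jumps(K, H, beta):
    """Metropolis rule on the jump operator: keep energy-lowering parts, exponentially damp energy-raising ones."""
    return {d: (Kd if d <= 0 else np.exp(-beta * d / 2) * Kd)
            for d, Kd in bohr_decomposition(K, H).items()}

# sanity check: the unweighted pieces sum back to the original operator
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)); H = (H + H.T) / 2
K = rng.standard_normal((4, 4))
pieces = bohr_decomposition(K, H)
print(np.linalg.norm(sum(pieces.values()) - K))                # ≈ 0
```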
This is what we want to do, and if we could do this exactly, then Metropolis sampling would work exactly and everything would be very nice. Unfortunately, as I told you, because of the energy-time uncertainty you cannot do this, but you can do something approximately similar, and this is what we will do. But once more, in the perfect case: if you could perfectly label the energy difference of your jump, then once you have this representation, this decomposition of your operator, you could just further reduce the parts of the matrix where the energy went up by the Metropolis weights. So with perfect energy estimation you could do this perfectly and exactly get the right jump operators, and after doing this you would get the Metropolis-modified jump operators, which give you the new Laplacian, sorry, the Lindbladian, which is what the quantum analog of the Laplacian matrix is called for these systems.

Okay, and now we come to the operator Fourier transform and what it is. Once again, the aim is to have a method to decompose an operator according to the energy differences of the transitions that it makes, and once again, just as in phase estimation, we will only be able to do it approximately, because of the uncertainty in how we can learn the energies. So this is the circuit. I am telling you that it turns your matrix K into roughly this decomposition, where these omega-bars are good approximations of your Bohr frequencies, your energy changes, and when you measure this register and see some particular omega value, you know that, because of this decomposition, the operator roughly changed the energy by omega. So this is the goal, and I would like to explain why this holds approximately; this is the story of how to understand the operator Fourier transform.

At the beginning we have this zero state and we prepare some amplitudes; think about preparing Gaussian amplitudes, for example, that's very nice, as we talked about. So you have some amplitudes in this register, and you have your matrix. What happens here is that, controlled on this register, you apply Hamiltonian simulation e to the minus i H t, where the time t is controlled; then you apply your matrix K on the system of interest; then once again you do Hamiltonian simulation, but now with the opposite sign of the time, so you apply e to the i H t; and finally you do a Fourier transform on the time register. So this is how it goes, and let's understand what's happening when you apply this Hamiltonian time evolution before and after K (and I am sorry, this tensor product on the slide is in the wrong place, it should be after the t, I didn't notice this typo). So you have e to the i H t, times K, times e to the minus i H t, with various t's here. Let's understand the effect of this time evolution backwards and forwards, sandwiching K: I am telling you that if you decompose your operator by the Bohr frequencies, then it is just a simple phase that gets added here. To see this, suppose that your operator is just a rank-one matrix which makes a transition from an eigenstate psi to psi prime. When you apply the Hamiltonian evolution with minus the time, it picks up the energy of psi, but with a minus sign, and after the transition you do a second Hamiltonian evolution, where now the eigenvalue, the energy of psi prime, matters; and because you take the time with the opposite sign, it turns out that you can just pull these phases
through, and the difference of the energies comes into the picture. Because psi is an energy eigenstate, we know that this is just a phase added on this side, and also just a phase added by this other operator, and the two phases we can pull together, and we get the difference of the energies induced by this rank-one transition. When we decompose the operator K like this, the same thing happens: we basically just decompose our matrix into such rank-one transitions, or some linear combinations of them, but the main point is that the part of the transition matrix which induces energy change delta picks up a phase factor e to the i delta t, which is a scalar. So this is what's happening when you do this time evolution before and after your operator.

And now I'm just rewriting the summation: I exchange the sums and regroup terms, nothing changes; I just move the summation over the Bohr frequencies to the beginning and the summation over t after it, and I can see that all the t dependence is now transferred into this time register. So here I have the original amplitudes f of t, multiplied by some phase which depends on the Bohr frequency that you have. But we have seen that a phase multiplication becomes just a shift after the Fourier transform, so after you apply the quantum Fourier transform, what you end up with is a shifted version of the Fourier transform of the amplitudes f, shifted by the particular delta that you have. And if we use these nice Gaussian weights, then after the Fourier transform we get a function which is peaked at the right value: delta is the energy transition that we wanted to estimate, and these amplitudes peak around it, so we will see an omega value which is very close to the actual energy difference, and the corresponding transition is indeed applied. So far I can only say that this is peaked; if I regroup my entire expression by the energy difference labels omega, then I get something, K_omega, and this is the operator Fourier transform of K. Formally, when I move from the operator K to this sum of omega tensor K_omega, I can think of it as a function of omega, and that's my operator Fourier transform. If I choose my Gaussian weights right, then, intuitively speaking, this K_omega should roughly represent the part of the matrix where an energy difference of omega is induced by doing this operation. So this is the role of the operator Fourier transform: you decompose your operator, approximately labeling how big an energy change it induces.

And the good thing here is that we work with the operators, so we never encounter any problems related to no-cloning: we just transform the operators and label the jumps there, we never actually change the state, so there is no issue with no-cloning; we just analyze and Fourier transform the operator, and that's perfectly fine, nothing violates no-cloning there. And this is why this continuous picture is much nicer: here we just modify the infinitesimal jumps, and that induces the right dynamics, and there is no cloning at all. Okay, any questions here? Because I think it's a difficult concept. Yeah? Okay.
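As a small numerical illustration of the mechanism just described (my own toy check, with an assumed discretization: a diagonal Hamiltonian, a Gaussian window f(t) on a finite time grid, and a discrete Fourier transform standing in for the time-register QFT): conjugating by the time evolution multiplies the matrix element of the transition from E_b to E_a by the phase e^{i(E_a - E_b)t}, so the windowed Fourier transform concentrates it at omega close to the Bohr frequency E_a - E_b, up to the roughly 1/T width of the window.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
energies = np.sort(rng.uniform(-1, 1, dim))            # eigenvalues of H (H taken diagonal for simplicity)
K = rng.standard_normal((dim, dim))                    # some jump operator

T, M = 60.0, 512                                       # total time window and number of grid points
ts = np.linspace(-T / 2, T / 2, M, endpoint=False)
f = np.exp(-ts**2 / (2 * (T / 10)**2))                 # Gaussian amplitudes on the time register
f /= np.linalg.norm(f)
omegas = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(M, d=T / M))

a, b = 3, 0                                            # look at a single transition |E_b> -> |E_a>
bohr = energies[a] - energies[b]
signal = f * np.exp(1j * bohr * ts) * K[a, b]          # e^{iHt} K e^{-iHt} restricted to this matrix element
spectrum = np.fft.fftshift(np.fft.fft(signal))         # Fourier transform over the time register
peak = omegas[np.argmax(np.abs(spectrum))]
print(f"Bohr frequency {bohr:+.3f}  vs  peak of |K_omega| at {peak:+.3f}")
```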
So now the question is: okay, I can decompose my operators like that, but what is it useful for? For this we need to understand how to represent these infinitesimal jump operators, this Lindbladian, and how to make use of it on a quantum computer. As I told you, in the Metropolis algorithm we begin with some nice exploratory process which is symmetric; for example K_j, the j-th jump operator, could just be: take the j-th qubit and flip it with an X operator, something like that. In general, the only thing that we need is some dilation of these jump operators: a unitary matrix, a quantum circuit, which labels the different jump operators with the corresponding labels j, and this is just the input; think of K_j as being just X_j, flipping the j-th qubit. After applying the operator Fourier transform to all of these operators X_j, or K_j in general, we further decompose these K_j operators and label them with the approximate energy change labels omega, so now we have one more label, j and omega, and a corresponding operator K_{j,omega}. So we had maybe as many jump operators as qubits, and now we have many, many jump operators that also carry the energy difference estimates, but that's still a Lindbladian, just with many more jump operators. Okay, so suppose that someone gives us a dilation of this modified thing, these modified jumps. Well, I can just look at this label omega and say: okay, let's assume that this is indeed how much the energy changed, and let's slightly reduce, by the Metropolis weight, the transition matrices which increase the energy; then I get a nice approximate representation of all the jump operators for the Metropolis-modified quantum process.

Okay, and how do we use that? I claim that this quantum circuit approximates very well a tiny, delta-length time step of your Lindbladian evolution, of this quantum stochastic process. What does it do? I'll try to explain this at an intuitive level. There are some ancilla qubits which are just storing your labels, and what happens is that you have your quantum state of interest, which can in general be a mixed state rho, and you apply this dilation operator of all the jumps, and then you will see some jump labels and so on. In particular, when all these label qubits end up in zero afterwards, that means that you successfully applied some valid jump operator; if they are non-zero, then it is something we didn't intend, just some garbage in the encoding. So you have rho, you apply U, and it means that in superposition you applied a lot of jump operators, and maybe you didn't succeed and didn't do anything interesting. Now suppose that you did apply your jump operators, which is signaled by these qubits being all zero; then we only want to measure this weakly, so it's kind of like only accepting these jumps with a tiny probability. That means we implement the measurement of these jumps by inducing a slight rotation on this ancilla qubit: we rotate it towards the state one with only a square-root-of-delta amplitude, so delta probability. If that happens, it means that we actually did want to do a tiny jump, only a delta-strong jump, and this qubit is set to one, and we say: okay, we did a small jump, good job. Otherwise, we actually don't want to do this jump, so we run the circuit backwards, which means that we reverse the effect of the operation and erase all the garbage and other stuff that we did wrong. The claim is that this gives a delta-time-step approximation of your Lindblad evolution, and this is another application of the quantum Zeno effect: the nice thing is that if you are only making these jumps delta-strong, then you can erase the rest of the operations, not perfectly, but much better than the progress that you made.
It turns out that the erasure of all these ancillas, into which you put a lot of garbage, will be delta-squared precise, so you manage to do a delta time step of the evolution while only making a delta-squared mess. This is the usual way Trotterized evolution scales, and how all first-order methods scale: they ensure that you can make delta progress while only doing delta-squared damage. Then, if your target is, say, to simulate your system for time t, you need to divide the time t into many, many slices and make sure that all the small mistakes you make, which are quadratically smaller than the progress, don't add up too badly. If you do the mathematics, it turns out that you need something like t squared over epsilon slices, where t is the evolution time you want and epsilon is the precision. Now, this is not optimal, because the complexity then scales quadratically with time; but actually, because in each of these individual steps you only do something interesting with a tiny probability, in a way you can compress many of these steps together and do them in one go, and using this trick, which needs more complicated circuits, you can actually solve this simulation problem with linear dependence on time and logarithmic dependence on precision. So that's much nicer than what is here, but it requires more complicated circuits, and this more general argument is also based on such a building block. This nice linear dependence on time was originally shown by Cleve and Wang, I think in 2016, in their Lindbladian evolution simulation paper. I realize that this circuit is kind of complicated; I think it's enough if you understand it at a high level, but in case someone wants to understand the details of why this delta versus delta-squared issue comes in and how the quantum Zeno effect can be utilized, I actually put up a proof here, and it will be on my website after the talk; but this is only for those who are really interested in the topic. The key thing is that here is a delta and here is a delta squared, and we are happy about that.

Okay, so now we have described a way of doing a Metropolis version of your Markov chains, and it actually closely resembles what nature does, and that's really exciting, because nature typically thermalizes quickly: if you leave your water bottle in the sun, it will pretty quickly get hot. Similarly, a quantum system, if it's not too exotic, we hope will relatively quickly thermalize, and this Metropolis algorithm that I showed you closely resembles the best models that we have of thermalization in nature. So in principle, if you have a quantum system which thermalizes quickly in nature, then the variant of this algorithm should also very quickly prepare the Gibbs state that you care about. I didn't talk about this, but the Gaussian damping still introduces some error in the energy estimation; it still makes some mistakes, but we were able to bound them, and basically we can show that if your system rapidly thermalizes, then you don't need too high a precision to avoid all issues in this process, and everything approximately works through from the beginning to the end. Well, all of this depends on the process thermalizing quickly in nature, which also means that we would only need to simulate this Lindbladian evolution for a relatively short time.
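To connect this back to the stationarity claim, here is a minimal numerical check (my own sketch, in the idealized limit of an exact Bohr decomposition rather than the Gaussian-smoothed one, with a Hermitian jump operator and the Metropolis rates min(1, e^{-beta*delta})): the Gibbs state is then an exact fixed point of the Metropolis-weighted generator.

```python
import numpy as np

def bohr_pieces(K, energies):
    """Split K (written in the eigenbasis of H) into pieces labelled by the exact Bohr frequencies."""
    pieces = {}
    for a, Ea in enumerate(energies):
        for b, Eb in enumerate(energies):
            d = round(Ea - Eb, 10)
            pieces.setdefault(d, np.zeros((len(energies),) * 2, dtype=complex))[a, b] = K[a, b]
    return pieces

def metropolis_lindbladian(rho, K, energies, beta):
    """Metropolis-weighted generator: rate min(1, e^{-beta*delta}) on the delta-Bohr piece of the jump K."""
    out = np.zeros_like(rho, dtype=complex)
    for d, Kd in bohr_pieces(K, energies).items():
        gamma = min(1.0, np.exp(-beta * d))          # accept energy drops, exponentially damp energy increases
        KdK = Kd.conj().T @ Kd
        out += gamma * (Kd @ rho @ Kd.conj().T - 0.5 * (KdK @ rho + rho @ KdK))
    return out

rng = np.random.default_rng(2)
dim, beta = 4, 1.3
energies = np.sort(rng.uniform(-1, 1, dim))          # spectrum of H (working directly in its eigenbasis)
A = rng.standard_normal((dim, dim)); K = A + A.T     # Hermitian jump operator, so K_{-delta} = K_delta^dagger
gibbs = np.diag(np.exp(-beta * energies)); gibbs /= np.trace(gibbs)
print(np.linalg.norm(metropolis_lindbladian(gibbs, K, energies, beta)))   # ≈ 0: the Gibbs state is stationary
```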
A major open question is how you bound this thermalization time for systems of interest, but we are hopeful about this, and we can get some motivation from the classical side of things, where this Metropolis sampling was described many decades ago. People started using it because it is very useful in computational problems, but only more recently have we been able to prove that it actually converges for particular systems, whereas people have been using it in high-performance computation for ages; we have only been able to scratch the surface and show particular examples where we can prove that it converges and gives the right solution. So that seems to be an even more difficult problem, and it converges in most cases although we can only prove it in a fraction of the cases where it is applied; still, it is a very promising heuristic algorithm.

An interesting feature of this is that we are basically simulating a thermodynamic system, and a thermodynamic system thermalizes by basically doing random interactions, which is noise, driving quantum systems to the thermal state. So what happens if I try to run this algorithm on a noisy quantum computer? Well, it's kind of unclear, but there is some hope that maybe some version of it would be noise resilient, because noise would just be an additional source driving you to the Gibbs state. That is the hope. The earlier proposals about how to prepare Gibbs states, from decades ago, had ideas like: okay, we know what happens in nature, I have a big system and a small system, I couple them, the small system thermalizes, wonderful, so let's just build a huge quantum computer and simulate both the small system and the big system, and then the small system will naturally be in the Gibbs state. Now we all know that getting a large number of good qubits is extremely difficult, so this is not going to be the way people simulate thermodynamics, by simulating a huge bath. But this formalism, this Lindbladian quantum Metropolis algorithm, is basically a way of simulating the effect of coupling to a huge bath without ever mentioning the bath. In particular, if you look at this circuit, there is only a very small number of qubits describing the jump operators, so that number of qubits is logarithmic in the system size or something like that, plus something to record your energy differences, which also doesn't need to be too precise, so once again logarithmic, and then one or two more qubits for the implementation. What this means in practice is that if you have a system of, I don't know, n qubits, then you need something like n plus 15 qubits in total to simulate the entire thing, and now, when we have very, very few qubits available on a quantum computer, it's very nice that you only need something like a 15-qubit overhead on top of the actual interesting qubits that you care about. For this reason I think that this is a very interesting algorithm, and maybe this is the most useful application that I showed you, and maybe there is a hope to get a quadratic improvement in carbon capture and do something useful for humanity. I hope that this is an appropriate end to this 5-week, or rather 5-day, journey, and I hope that you enjoyed exploring quantum algorithms for the Fourier transform. Thank you.