Okay, let's get started with the next part of the morning session. I realized I've spent two hours talking about classical quantum chemistry rather than quantum-computing quantum chemistry, so we're going to start talking about that now. Someone asked me a very good question in the break: where do the qubits come into configuration interaction? Basically, the qubits can represent occupation numbers. If you look at these configurations here, this would be a 12-qubit problem: for the reference, the first six qubits would be in |1⟩ and the final six qubits in |0⟩. All the possible combinations of ones and zeros — the 2^N possible qubit configurations — mean the qubits can represent all the configurations this way. It's actually interesting: the configuration interaction space stays at six particles, but the qubit register can obviously hold all zeros as well, so the configuration space scales less severely, for a given particle number, than the whole qubit space (there's a tiny counting sketch at the end of this paragraph). Okay, and the matrix elements of this operator are just the typical second-quantised ones that you've seen before — this guy.

Okay, so let's start talking about some quantum computing stuff. Because the matrix scales combinatorially, we have to truncate it for real problems, and notice that we can form our basis using the second-quantised representation. We can form the excitations by starting from the Hartree-Fock state, which is the |111000⟩ state, and then form an excited configuration on top of that by annihilating an electron in an occupied orbital and creating one in a virtual orbital — you'll see an a and an i, where a is the occupied orbital and i is the virtual. This is the excitation operator; we're generating the basis here, and we call these T's the excitation operators. So we've got our singles excitations and our doubles excitations. This truncation is why you might hear it called CISD: configuration interaction with singles and doubles. It's precisely because keeping all possible excitations would give a gigantic matrix we can't fit on our computer, so we truncate at the singles and doubles level to make it manageable.

This is where coupled cluster comes in — very briefly: it basically takes this linear operator and exponentiates it, and by doing that you get more of the wave function back for the same cost, because you pick up the cross terms between excitations from the expansion of the exponential. So coupled cluster gives you more bang for your buck compared to the same T operator. Now, coupled cluster motivated the first quantum chemistry ansatz for quantum computing, and this is unitary coupled cluster. Basically you take this T, which is non-unitary, and exponentiate T minus its Hermitian conjugate, e^(T − T†), and this is now a unitary coupled cluster object. The problem is that when you try to solve this with the classical coupled cluster machinery, the expansion is non-terminating. But there's a nice trick we can play on a quantum computer. So here's what the objects look like: the t_i are the cluster amplitudes, the expansion coefficients, and the unitary form is just this.
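Going back to the counting argument at the start of this section — a tiny Python illustration (my own, not from the slides) of the 12-qubit example: every 12-bit string is a valid qubit configuration, but only the strings with exactly six ones are valid six-electron configurations.

```python
from itertools import combinations

n_orbitals = 12   # spin orbitals -> qubits
n_electrons = 6   # fixed particle number in the CI expansion

# Every 12-bit string is a possible qubit configuration (2**12 of them),
# but only the strings with exactly six 1s are six-electron configurations.
configs = ["".join("1" if i in occ else "0" for i in range(n_orbitals))
           for occ in combinations(range(n_orbitals), n_electrons)]

print(configs[0])         # '111111000000' -> the Hartree-Fock reference
print(len(configs))       # 924 = C(12, 6) configurations
print(2 ** n_orbitals)    # 4096 states in the full qubit Hilbert space
```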
To get this to work on a quantum computer, we do a trick called Trotterization, which takes the exponentiated operator and expands it to some Trotter number p. Typically p = 1 is fine, so we just take a single Trotter step, and then we get this. Basically, what this does is break up each excitation in the expansion into its own exponentiated unitary, and then you apply them in series as a product. Now this has a form that can be implemented directly on a quantum computer. |ψ0⟩, remember, is our Hartree-Fock wave function — the initialization state, if you want, the reference state; the |11110000...⟩ state with the lowest orbitals occupied. We act on that with our product of exponentials, and that's our unitary coupled cluster ansatz. One thing: the ordering of the terms can be important here, and it can give a better answer — there was some work by Garnet Chan showing that you can actually get the exact wave function if you get the ordering right.

All right, so how do we get this onto a computer — a nice ion trap, say? I'm not biased, but they are the best. What we have to do is deal with these t_ij terms, which are built from fermionic creation and annihilation operators. Now, you may have heard of something called the Jordan-Wigner transform. We need to map these fermionic creation and annihilation operators to Pauli operators, which can then be implemented directly on a quantum computer. You might have seen this before — this is the Jordan-Wigner mapping. A fermionic operator becomes a product of Z's times a flip operator: the Z's keep track of the antisymmetry, so if you're acting on orbital four, you have Z acting on qubits zero, one, two and three, and then the actual flip is done by (X − iY)/2 for the creation operator and (X + iY)/2 for the annihilation operator, each sitting behind its Z string.

Applying this to our excitation operators, we get some quite gnarly things. You can see where the (X − iY) and (X + iY) factors come in, because we have two creation and annihilation operators per single excitation. Basically, for the singles you have the Z strings and this quadratic pair of Pauli terms, and for the doubles you have the Z strings and a quartic set of eight Pauli strings. So from the Jordan-Wigner transform of the fermionic excitations, two fermionic operators give you two Pauli strings here, and the doubles give you eight — the doubles scale much worse in this picture. And there are lots of ways to generate these things; packages like OpenFermion will do it for you.

Now we have to exponentiate these Pauli strings. There's a really famous way to do this, and if you take one thing away from this lecture, this should be it, because the Pauli gadget is the most powerful primitive in quantum computing, in my opinion. It shows up everywhere, and it's really easy to make algorithms with.
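If you don't want to do the Jordan-Wigner algebra by hand, a package will generate the Pauli strings for you. A minimal sketch — assuming OpenFermion is installed; the operator indices and coefficient are just illustrative — for the anti-Hermitian single-excitation generator of the kind that appears in T − T†:

```python
from openfermion import FermionOperator, jordan_wigner

# Anti-Hermitian single-excitation generator t * (a_2^dag a_0 - a_0^dag a_2),
# the kind of term that gets exponentiated in unitary coupled cluster.
t = 0.1
generator = FermionOperator("2^ 0", t) - FermionOperator("0^ 2", t)

# Map the fermionic operators to qubit (Pauli) operators; the Z on qubit 1
# is the Jordan-Wigner string that keeps track of the antisymmetry.
qubit_op = jordan_wigner(generator)
print(qubit_op)
# Two Pauli strings, X0 Z1 Y2 and Y0 Z1 X2, with purely imaginary coefficients.
```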
So we've got this set of Pauli strings to exponentiate, coming from the Jordan-Wigner transform of our fermionic operators. It might not be that obvious to you, but this circuit is a multi-qubit RZ rotation. I encourage you, in the break, to work through it: take the version with no rotation, send in a |1111⟩ state, and you'll see that the CNOT ladder acts as a multi-qubit parity operation. If you then add a rotation in the Z basis here, you get this really powerful e^(−iθ/2 Z⊗Z⊗Z) — an exponentiated, parameterized Pauli rotation. This is really useful for many things: Hamiltonian simulation, for example, and you can exponentiate your cluster operators this way. And we can change that from Z⊗Z⊗Z to any Pauli string just by putting a basis rotation on each side of our CNOT ladder, and then we get any Pauli gadget we want. So now you can exponentiate any Pauli word, and the coding exercises are basically building a Hamiltonian simulation out of this structure. It's very powerful, because now you have access to exponentiated Hamiltonians and cluster operators.

Okay, so now let's talk about VQE. As you saw, the quantum chemistry Hamiltonian has N^4 terms — it's a fully interacting problem — so VQE is probably not going to be very applicable to quantum chemistry, in my opinion. But it's still going to be useful for state preparation and things like that, so it's worth learning about. VQE is essentially this: you have a state preparation — the ansatz that I showed you before; this is your wave function, and it has some parameters you can change. You then have to measure the Hamiltonian, and you can do that term by term as individual Pauli strings — I'll explain that in a second. So you apply your ansatz, measure, and repeat for each Hamiltonian term. The ansatz can be anything: it can be unitary coupled cluster, it can be hardware-efficient, it can be whatever ansatz you want; it just determines how expressible it is — how accurately it will represent the ground state. But we're not there yet: so far this isn't even VQE; this is just a single energy evaluation for one set of parameters. We haven't changed anything — VQE is the process of updating the parameters.

So if we look at the first point — these are quite old slides, I need to update them — you can see these CNOT ladders. Each of these is one of the excitation operators that we showed: you take the fermionic creation and annihilation operators, turn them into a set of exponentiated Pauli strings — these Pauli gadgets — and you get chains of Pauli gadgets with a parameter at the bottom of each CNOT ladder. That's the unitary coupled cluster ansatz as a circuit. By changing these parameters here, here and here, you change the energy, basically.
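Each of those parameterized CNOT-ladder blocks is exactly the Pauli gadget from before. As a minimal numerical check — my own sketch, assuming Qiskit and SciPy are available, not a circuit from the slides — that the ladder with an RZ at the bottom implements e^(−iθ/2 Z⊗Z⊗Z):

```python
import numpy as np
from scipy.linalg import expm
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator

theta = 0.37
Z = np.diag([1.0, -1.0])
ZZZ = np.kron(np.kron(Z, Z), Z)

# Pauli gadget: the CNOT ladder collects the parity onto the last qubit,
# RZ(theta) applies the phase, and the ladder is then uncomputed.
qc = QuantumCircuit(3)
qc.cx(0, 1)
qc.cx(1, 2)
qc.rz(theta, 2)
qc.cx(1, 2)
qc.cx(0, 1)

gadget = Operator(qc).data
target = expm(-1j * theta / 2 * ZZZ)   # exp(-i theta/2  Z (x) Z (x) Z)
print(np.allclose(gadget, target))     # True

# Conjugating the ladder with single-qubit basis rotations (H for X,
# Rx(pi/2) for Y) on chosen qubits gives any Pauli gadget you like.
```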
People like unitary coupled cluster because it has a physical interpretation: it's representing the excitations of the cluster operator out of the Hartree-Fock state, so you can actually get a picture of what the orbitals are doing. If you're just using a hardware-efficient ansatz — some random mess of entanglement and rotations — it's very unphysical. It's a bit like tensor network methods versus CI in the classical space: with tensor networks you lose all this information; it's just a mass of nonlinear parameters.

Okay. So we've just prepared our state with our ansatz; we now want to take our operator — how do we measure our operator in this setting? You apply the same Jordan-Wigner transform. Our state was prepared from exponentiated fermionic operators, but the Hamiltonian also contains fermionic operators, so you have to Jordan-Wigner those into something you can apply on the computer. For the weights and coefficients here, when you apply the Jordan-Wigner transform — some program will probably spit this out for you, but if you do it by hand you end up with something like this — some mixing happens between the terms. For H2 you end up with 15 Pauli terms, and you have to measure them. The weightings are these coefficients multiplying each Pauli string; they're related to the integrals, but they're not the same. And the Pauli strings are related to the fermionic operators, but again not the same — the slide is just showing what they correspond to. It's similar to measuring reduced density matrix elements.

So basically now, in order to calculate the total energy of the system, you have to prepare your state, measure the terms in red — these Pauli strings applied to the qubits — multiply each one by its weight, the interaction coefficient, and then loop over all the terms in the Hamiltonian. In the chemistry setting these coefficients are the classically computed integrals; if it were a Fermi-Hubbard model they'd be the model parameters. What I'm trying to say is that the only quantum part of this calculation is the Pauli expectation values; the coefficients sit outside, and the sum is done in classical post-processing.

So how do we measure these Pauli expectation values? They're kind of the Pauli version of the density matrix elements, and it's quite simple — it's the technique known as operator averaging. In your quantum computing framework you might just click "measure, calculate expectation value", but what's actually happening? If you're measuring a Pauli Z⊗Z⊗Z, for example, you measure the three qubits in the Z basis and look at the parity of the outcome: 001 has parity 1, 000 has parity 0, et cetera. The parity of each shot, measured in the basis corresponding to the Pauli — Z, X or Y on each qubit, from the previous equation — corresponds to the eigenvalue of the Pauli: parity 0 gives +1 and parity 1 gives −1. By measuring many times you'll get a mixture of parities 1 and 0 in the shot outcomes, which gives you a mixture of +1 and −1 eigenvalues. You average that over the number of shots, multiply by the weighting coefficient, and that gives you the contribution of that Pauli term to the energy. Okay — it's quite a subtle thing.
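In post-processing terms, operator averaging is only a few lines. A small sketch with made-up shot counts and a hypothetical coefficient (not numbers from the lecture):

```python
# Made-up shot counts from measuring three qubits in the Z basis.
counts = {"000": 480, "011": 260, "101": 210, "111": 50}
weight = -0.4804   # hypothetical coefficient of the Z Z Z term in the Hamiltonian

total_shots = sum(counts.values())
expectation = 0.0
for bitstring, n in counts.items():
    parity = bitstring.count("1") % 2          # even parity -> +1, odd -> -1
    expectation += ((-1) ** parity) * n / total_shots

energy_contribution = weight * expectation     # one term of  sum_i c_i <P_i>
print(expectation, energy_contribution)

# For X or Y factors you would rotate the basis before measuring
# (H for X, S-dagger then H for Y) and process the parities the same way.
```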
And then VQE itself — this is from the classic paper by Romero, Babbush and colleagues — is basically that loop, and the Rayleigh-Ritz variational principle saves you again: you can basically just tweak the parameters in your state preparation, and if the energy goes down you're safe, because the variational principle keeps you above the true ground-state energy. What this is showing is that you prepare your state, measure it and get the energy from the method I just showed, and then you change the parameters so that the energy goes down — with a gradient optimizer or whatever optimizer you use. You're changing the parameters so that the total energy of the system goes down, and the key point is that you have this measurement of Pauli strings at every step, summing over every term in the Hamiltonian. You just keep changing the parameters in your ansatz until the energy reaches a minimum. But obviously there's a large amount of choice in the ansatz: for the same operator, one ansatz might reach a low energy and another might not, and on another occasion an ansatz might get stuck in a local minimum from a different initial set of parameters, which is bad. And the reason I say this — and it is my opinion — is not that VQE won't work or won't be useful, but calculating energies is certainly not the right use case for it, because of the number of measurements you need. On quantum computers we have a finite budget of measurement cost, and these variational algorithms require a huge number of measurements: you're looping over every term in the Hamiltonian and then changing the parameters at each step. There are lots of reviews now saying that with plain VQE it would take hundreds of years to do anything useful for chemistry.

So that brings me to quantum Krylov methods, which I think are a genuinely useful application of quantum computing for this stuff — probably the nearest-term application of NISQ devices for chemistry, I would say. They leverage the classical-quantum balance in a slightly different way, which I'll explain. With VQE you're putting the classical cost into the gradient optimizer and the quantum cost into the shot measurements — the operator averaging for the energy; here you do something slightly different. So, going back to exact diagonalization, or configuration interaction in chemistry: we have this exponentially scaling basis — it's combinatorial, but it's bounded by 2^N — and the matrix obviously scales with it. This is bad. What Krylov methods do — and it's a very general method, it's really cool — is this: you again start with the Hartree-Fock state, but rather than expanding your basis via excitations, you expand your basis via powers of a function of the Hamiltonian, applied K times. And typically the dimension K is much smaller than the dimension of the previous matrix — on the order of 100, maybe less — whereas before your basis was scaling exponentially. The basis now just scales like this instead.
So you have your wave function as a linear combination of your reference state plus your first state, which is the first power of the function of the Hamiltonian applied to it, et cetera. And you still have a generalized eigenvalue problem, but it's much smaller. Now obviously the complexity is being hidden inside this function of H — we'll talk about that in one second. If you think about the Krylov generalized eigenvalue problem, the matrix elements H_ij now have the reference bra and ket, with the function of H to the i on the left and the function of H to the j on the right and the Hamiltonian in between, and you end up with this matrix problem. The overlap matrix elements S_ij are formed in exactly the same way, except without the H in the middle: just the function of H to the i against the function of H to the j. So clearly this is different: even though the basis is smaller, and the eigenvalue problem you need to solve is smaller, these matrix elements are much more complex. We can't just get them by operator averaging; these functions require a bit more thought to implement. There are a number of different ways to do it — three famous ones.

The first is real-time evolution: the function of H that you apply is the time evolution operator, so f(H)^j is just the j-th power of the time evolution, e^(−iHjt). Time evolution seems to be very popular because there are many proposals for doing it efficiently on quantum computers — admittedly more in a fault-tolerant setting — but you can see that there are ways to do time evolution quite cheaply if you just have small t and low j. It's obviously more expensive than a VQE circuit, but it's definitely cheaper than phase estimation. And if anyone's confused about what a function of a matrix is: it's defined through the singular value decomposition, with the function acting on the singular values. If you think about it, the SVD is just a rotation, a scaling and a rotation, so you can act with the function on the scaling part and the rotations on each side are fine.

Another famous Krylov function of H is imaginary time evolution, which is basically the same thing but missing the i. This propagates all the states: you get a changing linear combination in the eigenvector space of the Hamiltonian. I've done a lot of work in both of these areas. Imaginary time evolution, if you're not familiar with it, is essentially a way to propagate to the ground state — a quantum way of doing optimization. If you propagate in imaginary time far enough, you'll only end up with the ground state. That's a non-unitary evolution, because if you think of your initial state as a linear combination of all possible eigenvectors, the ground state is just one of them, so you're killing all the other eigenvectors apart from one: it's not a rotation, it's a projection. That makes it difficult to implement on a quantum computer — but you can do it, and I put a paper out yesterday on it.
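Before moving on to the third option, here's what the real-time-evolution version of this looks like as a purely classical toy calculation — my own numpy sketch, with a small random Hermitian matrix standing in for the Hamiltonian:

```python
import numpy as np
from scipy.linalg import expm, eig

rng = np.random.default_rng(0)
dim, K, dt = 16, 6, 0.3            # toy Hilbert space, Krylov dimension, time step

A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2           # random Hermitian stand-in for the Hamiltonian

psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)       # reference state |psi0>

U = expm(-1j * H * dt)
basis = [psi0]
for _ in range(K - 1):
    basis.append(U @ basis[-1])    # |phi_k> = exp(-i H k dt) |psi0>
V = np.column_stack(basis)

Hk = V.conj().T @ H @ V            # H_ij = <phi_i| H |phi_j>
Sk = V.conj().T @ V                # S_ij = <phi_i | phi_j>

evals = eig(Hk, Sk)[0]             # small generalized eigenvalue problem
print(np.min(evals.real))                # Krylov estimate of the ground-state energy
print(np.min(np.linalg.eigvalsh(H)))     # exact ground-state energy, for comparison
```

In a real quantum Krylov calculation the H and S elements come from circuits rather than dense linear algebra, and the overlap matrix usually has to be regularized because the basis vectors become nearly linearly dependent.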
And then finally — this is the really fashionable topic at the moment in quantum computing — Chebyshev polynomials. The way I think about them: you know the double-angle formula that you learn at A level, or whatever high school diploma you did; if you apply it recursively n times, you get the Chebyshev polynomials, basically. And it's recently been shown that these can be implemented by Grover-like reflections applied iteratively — I know Calum may have given you a short introduction to that. Finding and implementing these functions on quantum computers is a really open research area, so this is quite a hot topic at the moment.

Okay, and you can take this further and make the whole thing unitary — you can even change the eigenvalue problem itself to be over a unitary. So you take real-time evolution, and then rather than keeping the Hamiltonian operator in the middle, you make the operator itself the unitary e^(−iHt), so you solve a unitary generalized eigenvalue problem rather than the original one. The eigenvalues are related: the λ_n here are the solutions of the e^(−iHt) problem, and they're related to the eigenvalues of the original Hamiltonian by this equation. So we're working in the time-evolution Krylov space — we're using time evolution as the Krylov function — but then we also replace what was the H in the matrix elements with e^(−iHt). What you can do, which is really cool, is that because these are all exponentials you can just combine the powers, so everything is built from the same, much smaller and simpler set of objects. This is really powerful: you reduce the complexity of your problem massively, because the overlap and the Hamiltonian matrix elements are calculated from the same set of objects — I think you go from a quadratic to a linear scaling in the number of distinct elements. That's what I'm trying to say here: the overlap and the unitary Hamiltonian elements, whatever you want to call them, come from the same set, and they're generated from this smaller set here, which are just transition matrix elements — you take your reference state, act on it with a time evolution operator, and take the overlap with the reference.

And these can be implemented very easily — and, if you think medium term, cheaply-ish — on quantum computers with the Hadamard test: you take that time evolution operator, control it, do the Hadamard test with it, and by doing that for different values of k you can generate this object. It's a complex value, so we need the real and imaginary parts, and that's quite easy to do: to get the imaginary part of the Hadamard test expectation you do basically the same Hadamard test, but with an S gate on the ancilla. The value itself comes out as the difference of the ancilla outcome probabilities, P(0) − P(1). I can show you the proof of this, but showing that this gives the expectation value you want is quite straightforward — you just have to combine the shots in the correct way.
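Here's what that looks like at the statevector level — my own numpy sketch, not code from the lecture: P(0) − P(1) on the ancilla gives the real part of ⟨ψ0|e^(−iHt)|ψ0⟩, and inserting an S-dagger phase on the ancilla before the controlled evolution gives the imaginary part.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim, t = 8, 0.7

A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

U = expm(-1j * H * t)      # the controlled time evolution (the expensive box)

def hadamard_test(phase):
    # Ancilla branches after the first Hadamard and the optional phase gate:
    # (|0>|psi> + phase |1> U|psi>)/sqrt(2); the final Hadamard recombines them.
    branch0 = (psi + phase * (U @ psi)) / 2.0
    branch1 = (psi - phase * (U @ psi)) / 2.0
    return np.vdot(branch0, branch0).real - np.vdot(branch1, branch1).real

re = hadamard_test(1.0)    # no phase gate: P(0) - P(1) = Re <psi|U|psi>
im = hadamard_test(-1j)    # S-dagger on the ancilla: P(0) - P(1) = Im <psi|U|psi>
print(re + 1j * im, np.vdot(psi, U @ psi))   # the two should agree
```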
That just comes down to how you process the shots, but it's straightforward — basically the difference of the 0 and 1 counts divided by the number of shots. So now you can see what I mean when I say this leverages the quantum-classical balance in a different way. The quantum part is now this small set of Hadamard tests, where you have to do this controlled time evolution operation — and there's a lot of complexity hidden inside that box. Trotterization is the simplest way, and I'll speak about that later, in the fourth lecture, but there's a whole field looking at different ways to improve time evolution. Time evolution is really a subroutine for many algorithms, and learning how to do time evolution is crucial if you want to work in quantum computing. The classical part is this matrix problem that you're offloading everything to, but that's quite small now. So the quantum balance weighs more heavily than in VQE, but it sits in these controlled time evolution operations. I've also got a paper on this, if people are interested, under the names variational phase estimation and variational fast forwarding, where we basically try to do some approximate compilation of these objects to solve the same problem.

Okay, so we've got 15 minutes for phase estimation — that is not enough time to do it justice. We're now moving on from the hybrid methods, where there's a balance between the classical part and the quantum part, into the fully quantum algorithms, and the famous one of these is called quantum phase estimation. Many of you may have read about this — Nielsen and Chuang is really good — and we can use it in quantum chemistry to great advantage. So what is quantum phase estimation? Whenever we have a unitary and an eigenstate of that unitary, a phase is generated by the operation — we call it the eigenphase. Now we can use that property in quantum chemistry: if we have an eigenstate of the Hamiltonian and we apply e^(−iHt) to it, we get this object, which is a phase containing E_j times t. The idea of phase estimation is essentially: can we exploit this, implement it, and get that E_j out? And this goes back to what I said before about how phase estimation for quantum chemistry uses time evolution as its main primitive. There are a lot of papers on phase estimation that study the complexity of estimating energy values as you change the method of Hamiltonian simulation — the Google papers like to use qubitization and the Chebyshev polynomial framework, LCU, et cetera. I'll speak about all of these next session.

Okay, so what's actually happening when we think about phase estimation? As a physics person, a quantum chemist, I like to think about physical problems rather than taking a computer science approach. If we do have an eigenstate of the system, and we apply this transition matrix element and measure it — time runs along here, and this is the real and imaginary part — you get this perfect spiral, where the phase just propagates in time. It's quite nice: the energy is essentially related to the spacing between the turns of the spiral, which is really cool.
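A quick numerical version of that picture — my own sketch, not from the slides: for an eigenstate, ⟨ψ|e^(−iHt)|ψ⟩ stays on the unit circle and just winds at a rate set by the energy, whereas for a superposition (the case discussed next) the competing phases make the spiral messy and pull it inside the unit circle.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
dim = 6
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2
evals, evecs = np.linalg.eigh(H)

eigenstate = evecs[:, 0]                          # exact eigenstate, energy evals[0]
mixture = (evecs[:, 0] + evecs[:, 1] + evecs[:, 2]) / np.sqrt(3)   # superposition of three

for t in np.linspace(0.0, 2.0, 5):
    U = expm(-1j * H * t)
    clean = np.vdot(eigenstate, U @ eigenstate)   # winds as exp(-i E0 t), unit magnitude
    messy = np.vdot(mixture, U @ mixture)         # competing phases: magnitude drops below 1
    print(t, np.isclose(clean, np.exp(-1j * evals[0] * t)), abs(messy))
```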
Now, if you don't have an eigenstate, and you have some linear combination of eigenstates — which is what we have in practice most of the time, because if we could form the exact ground state we'd have solved the problem already, and this is one of the things that has caused a lot of debate about whether phase estimation is useful or not — you can see the difference. With a linear combination of eigenstates you basically get weightings of the competing phases of the different eigenstates, and that results in this kind of messy spiral, where you've got all these different phases coming in, weighted by each eigenstate's amplitude. But this is how I think about phase estimation — it's a really nice physical way of looking at it.

Okay, so the canonical form of phase estimation — the old-school way that you'll see in Nielsen and Chuang: to really understand the motivation for it, you have to understand the quantum Fourier transform, so I'll do my best to explain this. We obviously know the Fourier expansion: any function can be expressed as an infinite weighted sum of sines and cosines, and you can see that here. Well, obviously in computers we don't have an infinite matrix; we have a finite-dimensional matrix and a finite-dimensional vector, so we have to truncate this to some realistic size, and that's essentially what happens — you get an approximation here. Now, this underpins most of modern signal processing, and there are lots of people who say that the fast Fourier transform is the most powerful algorithm of the past 200 years. You can take this sine-and-cosine sum — I'm going quite quickly — and rewrite it with complex exponentials, and what that means is that your Fourier expansion can be represented as a sum of exponentials where the exponentials have a specific power relation. So the vector y can be formed by acting with these ω^(nk) factors, where ω is a complex exponential and n and k are the indices up here, divided by N. It basically looks like magic that you can trust this, but it does work.

I like to think about these things in matrix form, so you can think about the quantum Fourier transform as a linear map, a basis transformation: you're basically rotating the input basis into this new basis, where nothing happens to the first component, the second one gets multiplied by ω, then ω squared, and you get these powers going down — the power of one, the power of two, the power of three. The way I think about these problems is as a matrix-vector product, where this linear map is a unitary matrix, and we know that a unitary can be implemented on a quantum computer. So the game of the quantum Fourier transform is: implement the discrete Fourier transform unitary on a quantum computer. If we take a simple three-qubit example, the classical discrete Fourier transform would be an eight-dimensional matrix. You can see here that the first line goes as the first power, the next line as the second power, and when you get to ω to the power of eight you loop back around — that's this equation here; you always loop back around in the Fourier transform, and you end up with these ones. And we aim to implement this as a three-qubit unitary, because it's a 2^3 by 2^3 matrix.
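Here's that three-qubit example as an explicit matrix — a small numpy sketch (mine, not from the slides) of the 8-by-8 discrete Fourier transform unitary with entries ω^(jk)/√8, showing the powers of ω wrapping around and the matrix being unitary, which is what lets us look for it as a quantum circuit.

```python
import numpy as np

N = 8                                  # 2**3: the three-qubit example
omega = np.exp(2j * np.pi / N)         # primitive N-th root of unity

j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = omega ** (j * k) / np.sqrt(N)      # DFT unitary, entries omega^(j*k) / sqrt(N)

print(np.allclose(omega ** 8, 1.0))               # the powers of omega wrap around
print(np.allclose(F.conj().T @ F, np.eye(N)))     # F is unitary
print(np.allclose(F, F.T))                        # and symmetric: F[j, k] = F[k, j]
```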
Okay, so now, to really understand the quantum Fourier transform, you have to understand binary notation and how you get integer values from binary strings. This is something I struggled a lot with, because computer scientists are quite fluent with this kind of stuff, but I was certainly not. In typical binary notation, x_k represents the bit value, k is the position, and 2^k is the weight of that position. So we can get the integer value by just summing our x_k times 2^k: for example, 110 is six — one times four, plus one times two, plus zero times one. Now, the quantum Fourier transform uses fixed-point binary fraction notation. This is the same idea, but we shift the point and work with fractions. It's a bit confusing, but now the positions are one, two, three, four to the right of the point, and we weight them by negative powers of two, 2^(−k) for position k. So the position weights are a half, a quarter, an eighth, a sixteenth, and we multiply each by its bit value: one times a half is a half, one times a quarter is a quarter, zero times an eighth is zero, one times a sixteenth is a sixteenth, and we add these together. That binary fraction represents our decimal, and the more bits you have, the higher the resolution you get. Whenever there's a square bracket in my notes, like [0.x1 x2 x3 x4], that's a binary fraction — so just be really careful with that. This really did confuse me a lot, so I'll leave it for a second, because it's really central.
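The binary-fraction notation in a couple of lines of Python (my own sketch): [0.x1 x2 x3 x4] means x1/2 + x2/4 + x3/8 + x4/16, so more bits means a finer-grained fraction.

```python
def binary_fraction(bits):
    """[0.x1 x2 x3 ...] -> x1/2 + x2/4 + x3/8 + ..."""
    return sum(int(b) * 2.0 ** -(pos + 1) for pos, b in enumerate(bits))

print(binary_fraction("1101"))   # 1/2 + 1/4 + 0 + 1/16 = 0.8125
print(binary_fraction("11"))     # 0.75: fewer bits, coarser resolution
print(int("110", 2))             # 6: the ordinary integer reading of 110
```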
Okay, this next part is quite heavy, so try to follow the derivation or at least get the main idea. The main idea is that you map between two states, and the map is given by these equations. Remember our three-qubit example: we have e^(2πi jk/N), and N in the three-qubit case is 2^3, so N = 8. These exponential powers that we had are indexed over N, which we write in qubit form as 2^n, and we normalize by that as well — when I say qubit form I just mean powers of two. Now jk/2^n is a fraction, and we can express it using the binary fractions I just showed — one eighth, two eighths, three eighths, et cetera. We write k as a bit string, with each bit k_l either one or zero, and then the sum in the exponent becomes a sum over the bits k_l, so it's quite natural to break this exponential up into its constituent tensor product, which is what we do now. For these slides, don't worry about trying to follow every step; the main idea is that you use the binary fraction notation to split up the coefficient, break the state into a tensor product with one factor per bit, and then use this very powerful idea: when a bit k_l is zero, you have an exponential of zero, which is always one, so all the phase terms attached to the zeros vanish and you just get phases on the ones — and what you're left with is a product state, in quantum information terms. There's actually been a lot of work suggesting, or proving, that the quantum Fourier transform applied to a basis state is classically simulable, which is well known — it doesn't by itself generate strongly entangled states.

So what you can do now, for all these binary fractions you have for the j's, is take out the whole-number part — e^(2πi) to an integer power is just one — and there will always be a leading fraction: the halves can be represented as the simplest binary fraction, the quarters as the next term, the eighths like this. I encourage you to work through it. You end up with this binary fraction mapping: each qubit picks up e^(2πi times a binary fraction). I'm rushing through this, but when you expand the binary fraction, which is a sum, into its exponential factors, you can see where the quantum Fourier transform circuit comes from: this factor is just a Hadamard, and these are the controlled rotations through 2π/2^2, 2π/2^3 and so on. Again, I encourage you to work through it. So basically you map from the bit-string state into the Fourier-basis state; and the quite subtle step in Nielsen and Chuang is the mapping back and forth between the integer labelling and the bit labelling, which is very confusing. The generalization is this, and you can see you get higher and higher resolution fractions.

The real light-bulb moment is this: if you have an input state in this form for a given phase φ — that is, a state already in the Fourier basis — then the (inverse) quantum Fourier transform will map it to a bit string, so you can read out the binary fraction of the phase from that input Fourier-basis state. The problem is getting the system into the Fourier basis so the phase can be read out. That's the game of quantum phase estimation: how do we get the phase into the structure that can be read out by the quantum Fourier transform? And that's what phase kickback is, really — you might have heard this called phase kickback. When we have a controlled unitary acting on an eigenstate, the unitary is applied only on the 'one' branch of the control; but because the target is an eigenstate, the phase that gets generated factors out, so the eigenstate is essentially untouched and the phase is kicked back onto the ancilla. This is how we get the input state of the quantum Fourier transform into the form we want — this is the game we play to read the phase out of the Fourier basis into the bit basis. And, as we noticed, there were powers of the phase in the input state of the inverse quantum Fourier transform, so we get those by applying powers of the controlled unitary, U^(2^m).
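Putting phase kickback and the inverse transform together, here's a toy numpy simulation — my own sketch, not from the slides — of textbook phase estimation for a unitary with eigenphase 2πφ and a six-bit readout register: the controlled powers U^(2^m) load e^(2πiφk) onto the register, and the inverse Fourier transform reads out the binary fraction of φ.

```python
import numpy as np

t_bits = 6                       # size of the readout (ancilla) register
N = 2 ** t_bits
phi = 0.828125                   # eigenphase of U = e^(2 pi i phi); 0.110101 in binary

# After the Hadamards and the controlled powers U^(2^m) act on an eigenstate,
# phase kickback leaves the register in sum_k e^(2 pi i phi k) |k> / sqrt(N).
register = np.exp(2j * np.pi * phi * np.arange(N)) / np.sqrt(N)

# The inverse quantum Fourier transform maps the Fourier basis back to bit strings.
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F_inv = np.exp(-2j * np.pi * j * k / N) / np.sqrt(N)
probs = np.abs(F_inv @ register) ** 2

best = int(np.argmax(probs))
print(format(best, f"0{t_bits}b"))   # most likely readout: 110101
print(best / N, "vs", phi)           # binary fraction estimate of the phase
```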
So let's have a look at this example. You can see we applied a Hadamard first, then we applied these powers of controlled unitaries, and we ended up with these powers of controlled phases — and remember, this is an eigenstate here. Now we've got this Fourier-basis state on the register, so we apply the inverse quantum Fourier transform to map it back out, read off the bit string, and the phase can then be extracted via this binary fraction equation here. So how are we going to apply that to eigenvalues? The idea is the same, but our controlled unitary is now the controlled time evolution operator, and the phase that we're trying to calculate is E_j times t. We're trying to extract this in the same way, with the inverse quantum Fourier transform after successive powers of controlled time evolution operators — this is again an example of why time evolution is so important and why Trotterization is really useful. So again we have this example: we apply these successive powers of controlled time evolution, we get this phase, e^(−iE_j t), and then we can extract the energy from that.

Okay, so the problem with canonical phase estimation is that it's very expensive. You saw that you have these controlled time evolution operators, and the time evolution operators themselves are really expensive things; there's lots of work trying to reduce that cost. The other main point is the state you need: obviously, to define the algorithm you'd want the eigenstate, but you don't strictly need it as long as you have a significant overlap — I think over a half — because successive applications of the algorithm will then boost the signal of the correct phase. So, as I said, controlled time evolution is really important, and there are lots of different ways of doing it, which I'll explain. One of the other problems is that you obviously need lots of ancillas, and you probably — I mean, you almost definitely — will need fault-tolerant compilation and error correction for these algorithms to work. Okay, I'm going to stop there. Any questions?

Q: As you mentioned, VQE doesn't really have useful applications for near-term NISQ devices. So at which points do Krylov methods, or quantum phase estimation, show their advantages — in time complexity, say?

A: For the near term, quantum Krylov methods are the best, I think. But again, it depends how you compile these functions of H. I think the simplest Trotterized time evolution operator combined with Krylov is a simple way to go — a controlled Hadamard test with a Trotterized time evolution operator is probably the most useful near-term circuit primitive. I don't know if that answers your question.

Q: The second part: at which points does quantum phase estimation show advantages? You mentioned QPE has practical applications — what about the problem encoding enables that?

A: There are lots of studies in complexity theory of the precision needed for phase estimation. It's quite hard to compare them to Krylov methods, because they're not the same thing. I would say, if you have enough ancillas, phase estimation will win every time, because you can get really precise. I think Krylov methods are probably slightly better for now, but phase estimation is a long way off anyway.
Q: I had another question as well. Garnet Chan works on tensor network states for classical simulations of quantum chemistry, right? So is there a quantum equivalent — like tensor network ansätze — and do they work well?

A: Great question. There are two ways to approach this. The first is that all quantum circuits are tensor networks: if you've got a quantum circuit, you've got a tensor network, so that's always true. But there's also work by my colleagues — Michael Foss-Feig, Reza, people like that — on quantum tensor networks, which is a slightly different approach. As I'm sure you're aware if you know about tensor networks, in matrix product states and MERA the bond dimension is a limiting factor: the dimension of the matrix multiplications between the tensors needs to be truncated to the point that it will fit on a classical computer, and that has sometimes been shown to scale exponentially for certain problems. Quantum tensor networks are a way of dealing with the size of the bond dimension on the quantum computer. If you read my colleagues' work — it's very recent, actually — they use a quantum MERA-style approach, and that's a way of leveraging the exponential scaling of the qubit Hilbert space combined with the classical tensor network. It's a new method.

Q: Thank you very much for the wonderful lecture. I'm not sure if this is basic knowledge, but is there a smart way of constructing the ansatz, or is it just a heuristic, trial-and-error approach?

A: There have been some papers showing that in the exact limit you can get an ansatz that reaches the exact ground state — for example the famous symmetry-preserving ansatz paper — but I think you need exponentially many parameters for that. I personally think the internal symmetry groups give a good argument for why you can reduce the number of parameters. That's not a complete answer, but the internal symmetries of the wave function often mean that parts of it don't need to talk to each other, because they're in different irreducible representations. In matrix problems that often shows up as block diagonalization, where you have lots of zeros. So maybe an ansatz using total spin, for example, could take advantage of that — I don't know if you're familiar with spin eigenfunctions, but you can couple spin eigenfunctions in a tree structure, and I always thought it would be cool to map a quantum circuit ansatz onto that. If you just want to go for the heuristic, you can do this variational compilation approach, where you successively add gates — these are the ADAPT-style ansätze, and I've got work on this as well — where you make the cost function the overlap with some state that you want and keep adding gates until you get closer. But that's very heuristic. Making ansätze is a very difficult problem. It's the same in tensor networks — in that space you're just throwing MPS or MERA at problems which don't necessarily have that inherent structure.

Q: Thank you for the talk. I wanted to ask, first, a technical question about the Hadamard test: there was a W operation, and I just wanted to ask which operation that is.

A: That's a phase gate, so that will give you the imaginary part of the expectation value.

Q: Okay, and this guy? ... Okay, perfect. Also, on the same slide it says the wave function scales linearly with the qubit number — I was wondering if this means an advantage with respect to classical numerical methods?

A: Yeah — so this is probably badly worded.
What I'm trying to say is that you can represent an exponential number of basis states with a linear number of qubits, because you have 2^N possible combinations of ones and zeros, whereas classically you have to store this explicitly as a vector — an exponentially scaling vector. So you have a linear object to store rather than an exponentially scaling object. But there are a lot of other problems with storing the quantum state; it's not as simple as this.

Q: And a last question, about phase estimation. Since with phase estimation we get a bit representation that is only an approximation of the real eigenvalue, I was wondering if it's possible to apply some rescaling to the operator in order to get integer eigenvalues, and so an exact bit representation of the eigenvalue — can we change the time step in the phase?

A: That's a very good idea. I'm not sure how easily that would apply to the canonical form, because that algorithm is kind of a recipe that you can't really touch, but I actually have some slides on some modern approaches to phase estimation that I'll talk about next session, which do use a similar argument: you use different time-step lengths, which can still recover the phase. It's not quite a rescaling, but it's a different way to get the phase.

Q: Okay, thank you so much. Another question: can we go back to the phase gadget?

A: The Pauli gadget — my favourite primitive.

Q: You suggested that people put in a bunch of kets and analyze what happens. Shouldn't the students instead use the stabilizer formalism to analyze these circuits?

A: I like to think about everything in terms of states, because I don't like operators flying around — call it masochism. But if anyone's interested, around the break I'll show people how to decompose the circuit with the stabilizer formalism and show that it does implement the Pauli gadget. Okay, let's pick this up in the next session.