OK, good morning, everyone. Welcome to day number three. Today we're going to have a day of quantum chemistry from our senior research scientist, Nathan Fitzpatrick. He's coming from the UK; he's at Quantinuum. Just so you know today's schedule: this morning we have lectures with 10-minute breaks, then lunch here at 1.30. Then we take the shuttle or walk down to the lab at the Adriatico Guesthouse, and at 2 PM we have the quantum chemistry labs. So make sure everything is charged; the charging points are down in the lab. Without any further delay, let's start our day on quantum chemistry. Thank you.

Hello. I guess everyone can hear me. So I'm going to talk about quantum computational chemistry. I hope you're looking forward to four hours of me lecturing today; I am looking forward to it. The idea is essentially to start from ground zero. I imagine some of you from a computer science background haven't done quantum chemistry before, and hopefully even for the physicists there might be some new material. Quantum chemistry calculations are, I think, quite mysterious to a lot of physicists, and I'll try to give you some of the intuition behind them.

So why study chemistry? At the basic level, it is the question of why atoms stick together, which leads to the study of chemical reactions, and the study of chemical reactions is essentially the analysis of bond-breaking and bond-forming processes. You may have seen these curly-arrow drawings around; maybe they gave you nightmares in your undergraduate degree. They are extremely qualitative, and it's kind of amazing that they describe chemical reactions so well. But you'll see that a simple analysis starting from atomic orbitals gives you this picture.
And it leads to a huge amount of chemical and biological complexity. In particular, if you think about how the atomic orbitals interact, and about the electron density, you can build up quite a good picture of chemical bonding. But going a bit deeper: why study chemistry on computers? Essentially, we study energy as a function of nuclear geometry, using this thing called the potential energy surface, where each point on the surface is a quantum chemistry calculation. It can be semi-empirical; the level of theory is your choice, and obviously you'll get better results with a more advanced level of theory. The minima of the surface, these points A and C, are what we call equilibrium geometries. This is where the molecule is stable, where it likes to sit; these are the relaxed geometries. But to form a bond we've got to go from A to C, and we need to go via B. B is known as a transition state; that's where the bond breaking happens. So if we go along this reaction coordinate, where the reaction coordinate is this hydrogen, the white atom, being passed between the two oxygens, you can see it doesn't like being in the middle: there's a high energy there. To get from A to C via B, we need to increase the energy of the system. Studying these barriers accurately is particularly important for biological processes. For example, this is the oxidation of glucose, which is important for us being alive. Without an enzyme, it requires a lot more energy to go from this equilibrium geometry of glucose to carbon dioxide and water. So ideally, we want to understand why enzymes can reduce the energy barrier. The problem is that enzymes and these biological systems are huge, with loads of degrees of freedom, so we can't really study all the quantum interactions.
This is one of the motivations for quantum computing: it allows us to break the scaling problem, because n qubits give us 2^n degrees of freedom. Typically, our best current results are quite inaccurate for these strongly correlated systems. The really famous one is FeMoco in nitrogenase. FeMoco is the reactive center of the nitrogenase enzyme, which is found in the soil; it converts nitrogen and hydrogen into ammonia, which is important for fertilizers. The industrial alternative is the Haber process, which uses around 2% of the world's energy on a day-to-day basis. So if we can figure out why the reaction barrier is so low for FeMoco, we could save a huge amount of energy. This comparison, FeMoco versus the Haber process, is kind of the poster child for why we need quantum computing.

OK, so briefly, the potential energy surface generalizes to high dimensions. Previously I showed you a two-dimensional one; here we have a three-dimensional one, and in general it has 3N − 6 dimensions, because every atom contributes three coordinates, x, y, z, but global translation and rotation remove six degrees of freedom from the problem. The minima are the reactants and products, and the saddle points are the transition states. You can find and classify these numerically and formally with Hessian analysis, where the Hessian is the matrix of second-order derivatives. Interestingly, molecular properties come from the slopes and curvature of the potential energy surface: for example, if you've got a very steep well, the system falls down to the equilibrium geometry very quickly, so the force involved is larger. But for the purposes of this talk, we're not going to worry much more about potential energy surfaces, even though, in a sense, potential energy surfaces are all of chemistry.
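As a concrete illustration of that Hessian analysis, here is a minimal sketch in Python. The "surface" below is a made-up two-dimensional double well, not a real molecular PES, and the finite-difference step is an arbitrary choice; the point is just that counting negative Hessian eigenvalues distinguishes equilibrium geometries from transition states.

```python
import numpy as np

def hessian(f, x, h=1e-4):
    """Central finite-difference Hessian of a scalar function f at point x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

def classify(f, point):
    """Count negative Hessian eigenvalues: 0 -> minimum, 1 -> transition state."""
    eigs = np.linalg.eigvalsh(hessian(f, np.asarray(point, dtype=float)))
    n_neg = int(np.sum(eigs < -1e-6))
    return {0: "minimum", 1: "transition state"}.get(n_neg, "higher-order saddle")

# Toy double-well "surface": minima at (+/-1, 0), a saddle point at the origin.
f = lambda p: (p[0]**2 - 1)**2 + p[1]**2

print(classify(f, (1.0, 0.0)))   # -> minimum (an "equilibrium geometry")
print(classify(f, (0.0, 0.0)))   # -> transition state (the barrier)
```

The same bookkeeping, zero negative eigenvalues for a relaxed geometry and exactly one for a transition state, is what electronic structure codes do on the real 3N − 6 dimensional surface.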
But in order to get a good potential energy surface, we need to be able to calculate a single-point energy for a given geometry, so for the rest of the talk we're just going to focus on one grid point. If you use the cheap methods, like DFT or semi-empirical ones, you'll get a potential energy surface, but it might not be the right one. To get accurate potential energy surfaces, we need very high-accuracy methods, like quantum computing, or exact diagonalization if you're familiar with that.

OK, so I'm going to start with something you may have seen before: I just want to introduce the quantum chemistry Hamiltonian. The quantum chemistry Hamiltonian is the fully interacting fermion problem, where essentially we have these one-body interactions and these two-body interactions, together with fermionic creation and annihilation operators, which I'll introduce later. If you squint, you can see that it's like the Fermi-Hubbard model, except that rather than having nearest-neighbor interactions, we have all possible global interactions. And rather than having simple parameterized weightings for these interactions, these h and g coefficients are spatially dependent objects which contain a huge amount of information; there have been many, many years of development just to find these parameters. This shape here, this orbital, corresponds to just one of these indices: each individual index of h_pq and g_pqrs carries the shape and geometry of a molecular orbital. So I'm going to spend quite a lot of the first lecture and a half basically on obtaining these molecular orbitals, because in quantum computing, if you want to be naive, you can just start here.
You may have used PySCF or some other driver, which gives you your quantum chemistry Hamiltonian, which you then hand to VQE or phase estimation or whatever. But actually obtaining these weightings is a huge amount of work, and to understand the chemistry, you need to understand where the weightings of the interactions come from. So we're going to talk about orbitals, basis sets, integral evaluation, and most importantly, Hartree-Fock theory.

I'm going to start from the total basics, since there may be people here who haven't studied quantum mechanics at a high level; obviously, there are lots of people from theoretical computer science in this field. Essentially, we're only going to worry about the time-independent Schrödinger equation, because the quantum chemistry Hamiltonian has no time-dependent term. The time-independent Schrödinger equation will always have this form: you'll have a kinetic energy term and a potential term. In three dimensions, the kinetic energy term is related to del squared. That part will always be the same, but the behavior of the system is encoded in the potential of the Schrödinger equation. So depending on your application, you'll have a different V, and that will determine the overall wave function, because this is essentially a differential equation. For a lot of the toy systems, you can use differential equation methods to solve for psi, and with different boundary conditions, or different terms in your Hamiltonian, you end up with a different solution. The moral of the story is that the Schrödinger equation really depends on the potential and the boundary conditions. So you may have seen it written like this. This is what we call the Hamiltonian eigenvalue
equation, where we group the operator terms. I should say this is the reduced Planck constant, h bar, and this is the mass of your particle; we're just talking about a single particle here. We can group all these terms into this H operator, and acting on the wave function psi it gives back the same wave function, weighted by this energy eigenvalue. That is what we're trying to solve for: the wave function, given a potential and some boundary conditions. So Schrödinger, when he came up with his wave equation in 1926, released a series of papers solving the time-independent Schrödinger equation using very powerful differential equation methods, which had come from Laplace and people like that, looking at planetary motion and so on. So even though these equations look extremely scary, the theory was quite well known by that point. The ubiquitous example, which you probably all solved in your first or second year physics course, is the particle in a box. The one-dimensional particle in a box is the simplest example of how the potential and the boundary conditions govern the wave function. There are some slightly more difficult ones, such as the free particle in three dimensions, which introduces separation of variables and the idea of product wave functions, and then there's the stationary hydrogen atom, which is the one we're really going to focus on today. I don't have time to go through all of these, so some of the exercises will be to solve them if you haven't before. But what we're really going to focus on is the solutions of the hydrogen atom, because that's where all of chemistry can be built up from. So the first analysis of a solution of the Schrödinger equation is this quite simple model in one dimension.
You have these infinite potential walls, and zero potential between 0 and L. The kinetic energy is represented by this cool drawing I made. So the Schrödinger equation looks like that. It's a bit confusing, because there's no potential term in this form of the equation, and this m here is the mass of the electron, or whichever particle you choose. The potential is instead encoded in the boundary conditions of the solution. One of the exercises will be to solve this, but essentially, the moral of the story is that when you solve this equation, you end up with quantized energy solutions, indexed by this parameter n. The rest of these are constants, and L is fixed as well, so given a fixed box length and a fixed mass, n is the only thing that can change. What this represents is standing waves between the two walls, and each standing wave, each harmonic, has a different energy. This was really the first indication from Schrödinger that you get quantized energies out of the Schrödinger equation.

Now, the stationary hydrogen atom is where quantum chemistry really started, in my opinion. Here we have a single electron: you have the kinetic energy operator, del squared, and you have this V, which is a spherically symmetric potential; it just goes as 1 over r. And this is the mass of the proton, and the mass of the electron up here. So essentially, the potential felt by an electron around the nucleus depends on a 1/r term. Now, you can try to solve this problem in Cartesian coordinates, but it's not very natural, because you have a spherical potential, right? So you switch to spherical polar coordinates. It looks terrifying, but as I said before, this is all quite well-known theory from planetary motion and things like that.
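Before moving on to hydrogen, those particle-in-a-box energies, E_n = n²π²ℏ²/(2mL²), are easy to evaluate directly. A minimal sketch for an electron; the 1 nm box width is an arbitrary illustrative choice:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def box_energy(n, L, m=M_E):
    """Energy of the n-th standing wave in a 1D infinite well of width L."""
    return n**2 * math.pi**2 * HBAR**2 / (2 * m * L**2)

L = 1e-9  # a 1 nm box
for n in (1, 2, 3):
    print(f"E_{n} = {box_energy(n, L) / EV:.3f} eV")
# The spectrum scales as n^2: E_2 = 4 E_1, E_3 = 9 E_1
```

Note that only n varies; everything else is fixed by the box, which is exactly the point about boundary conditions doing the quantizing.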
So essentially, rather than x, y, z, you have the radial dependence, r; you have theta, the angle from the vertical axis; and then you have phi, the azimuthal angle, I think that's how you say it. You basically have rotations around the equator and rotations around the vertical axis. The kinetic energy operator then looks like this, and V is just 1 over r. Notice that you have three groups of terms in your kinetic energy operator. Again, don't try to solve this now; this is all textbook stuff, I'm just trying to give you the main ideas. You have the r terms here, the theta terms here, and the phi terms here. What this means is that when you have a Hamiltonian like this, or any linear differential equation with non-interacting terms, you can solve for each variable on its own, using a technique called separation of variables. This is actually really important for Hartree-Fock and the single-electron wave function later on. Basically, it means you can solve an independent system of equations and then form the total solution as a product of those independent solutions, which is very powerful. You can see that here: we assume a product wave function, with a term for r, a term for theta, and a term for phi, and we substitute it into the previous equation. It looks a bit nasty because of the extra potential term, but you can do some rearranging, divide through by the product wave function, and you get this nice equation, which can be solved by separation of variables. Now, this takes five chapters in most textbooks.
So if you are interested in looking into this, I highly recommend the book by Linus Pauling, Introduction to Quantum Mechanics. It was written in 1935, but in my opinion it's the best description of how to solve the hydrogen atom, and it really gives you the idea of how these problems were solved back then, before computers. A lot of modern quantum chemistry textbooks just jump into the numerical solutions, which don't give you as much insight into the problem. OK, so here are the three equations you get from separation of variables: you've got your phi equation, your theta equation, and your radial equation. You can solve these using some quite intense differential equation methods. I've jumped a lot of steps here, but in summary, this is the solution of that product equation. Just as the wave functions in the particle-in-a-box solution were quantized by a quantum number indexing the harmonics, here we have a similar idea, with three quantum numbers: the principal quantum number n, which comes from the radial equation; the angular momentum quantum number l; and the magnetic quantum number m, which is the azimuthal degree of freedom. It looks quite scary, but as I said, a lot of the theory of these polynomials was worked out in the 1800s, and the main tool is the spherical harmonics, because the spherical symmetry of the potential makes everything easier.
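To summarize where those five textbook chapters end up, the separated solution is a product of a radial part and a spherical harmonic, with the familiar quantized hydrogen energies:

```latex
\psi_{nlm}(r,\theta,\phi) = R_{nl}(r)\, Y_l^m(\theta,\phi),
\qquad
E_n = -\frac{m_e e^4}{2(4\pi\varepsilon_0)^2 \hbar^2}\,\frac{1}{n^2}
    \approx -\frac{13.6\ \text{eV}}{n^2},
```

with n = 1, 2, 3, …, l = 0, …, n − 1, and m = −l, …, +l.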
OK, so now you may notice that when you solve this, because we have no extra electrons, the energy depends only on the principal quantum number n, even though the wave function contains all three quantum numbers n, l, and m. In this non-interacting, single-electron picture, the solutions of the Schrödinger equation for the hydrogen atom, the particle in a spherically symmetric potential, have energies labelled by n alone. If we go to larger atoms, helium, lithium, et cetera, this theory does break down a little bit, because you need to take into account the interactions with the inner electrons. This is called screening, if you've heard of that. Quantitatively it breaks down, but qualitatively it still works quite well. So now you can start to do some analysis with these wave functions, looking at the density, where the density is essentially psi squared. I've jumped ahead slightly here: if you take that really scary equation I showed you and put in this psi_nlm, where n, l, m are these indices, then for principal quantum number one there's only one solution, and for principal quantum number two we have the s orbital here, which is 2 0 0, and so on. Anything with l = 0, that is psi_n00, is known as an s orbital, and it's a totally symmetric solution. You can see here that psi_100 is parameterized by this radial exponential: it's an exponential in the radius, so it's a spherically symmetric 3D object, a sphere, basically. And the same applies to the 2s, but now you have this product term here, which gives us nodal behavior.
So the 2s, rather than being a simple sphere, is this sphere with an inner sphere inside it, and the inner sphere comes from that product term. Then when you get to 2 1 0 and 2 1 1, anything with l = 1, that's a p orbital, and that's when you get these dumbbell-like shapes. What I'm trying to show is that the shapes of the hydrogen orbitals all come from the solutions of the spherically symmetric single-particle wave function. You can do some analysis on these with things called radial distribution functions. The radial distribution function essentially uses the Born rule: you take the density of your wave function solutions, basically the square, and look at it as a function of the radius. Here we're looking at the 1s, this 1 0 0 solution; remember, this is the simplest one, the spherically symmetric ball, just an exponential decaying symmetrically along the radius. If we analyze this along the radius, you can see that the probability of finding the electron has this distribution, and you can see where it starts decaying as you move away from the nucleus. And this is the 2s, which is equally symmetric, but with this nodal character here. Each step up in principal quantum number gives you an extra node in your wave function, which in the s orbital picture is a ball within a ball within a ball, if that makes sense. But the main point to take away is that the solutions of the hydrogen atom give you the shapes of where the electrons like to sit.
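As a sketch of that radial analysis: the 1s radial distribution function, P(r) = 4πr²|ψ|², can be built numerically, and its peak lands at the Bohr radius. This works in atomic units, where the Bohr radius a₀ = 1; the grid is an arbitrary choice:

```python
import numpy as np

# 1s hydrogen orbital in atomic units: psi = e^{-r} / sqrt(pi), with a0 = 1
r = np.linspace(1e-4, 12.0, 24001)        # radial grid (arbitrary choice)
psi_1s = np.exp(-r) / np.sqrt(np.pi)
P = 4.0 * np.pi * r**2 * psi_1s**2        # radial distribution function

r_peak = r[np.argmax(P)]
print("most probable radius:", round(r_peak, 3))  # peaks at the Bohr radius, r = 1

# Sanity check: P should integrate to 1 (trapezoidal rule by hand)
norm = float(np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(r)))
print("normalization:", round(norm, 4))
```

The r² factor is why the most probable radius is not at the nucleus even though the density |ψ|² itself is largest there: there is simply more shell volume at larger r.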
They give you a rough first approximation of how the electrons like to behave in larger molecules, and this is really the starting point of quantum chemistry. Having a good understanding of these atomic orbital shapes really gives you some intuition for chemistry: if you think about a lab chemist, they're always thinking about these shapes and how they can fit together, things like that. This is really where chemical intuition comes from. And this was actually experimentally verified in 2013, I think using photoionization microscopy. So roughly 90 years after Schrödinger's equations, only about nine years ago, they were actually able to verify this; these are images of excited hydrogen orbitals, which is really cool. So essentially, in the presence of no extra electrons, the single-electron picture of the hydrogen atom is essentially correct, up to relativistic effects. And what's very, very interesting is that the solutions of the hydrogen atom predict the structure of these orbitals. It turns out that the periodic table structure, which was determined before the solutions of the hydrogen atom, is grouped into essentially s, p, d and f blocks, which correspond to the n, l, m solutions of the Schrödinger equation. So Schrödinger essentially verified the structure of the periodic table, which had been determined by Mendeleev. You can see here, going down the table, the principal quantum number increases: n = 1, 2, 3, 4. This block here is called the s block; for these elements, the outer electrons all sit in the spherically symmetric s orbitals. So the periodic table structure hinges on the valence electrons, the outermost electrons.
The elements in this block are known as the s block, because the outer electrons are all in s orbitals. And this block here is known as the p block, where the outer electrons are all in these dumbbell-like p orbitals. Shall I go back? Yeah, so these are the s orbitals, and these are the p orbitals here: that's one p orbital, a dumbbell; there's another p orbital, another dumbbell; and another one. Then in the middle you get what are called the d orbitals. Essentially, each block groups together elements whose outer electrons sit in orbitals with the same quantum number solutions of the hydrogen atom, which is very cool. And then you've got the lanthanides and the actinides down here, which are the f orbitals. And you can see this quantitatively, rather than just qualitatively, with this analysis: this is the experimental ionization energy, the energy it takes to take your outermost electron and remove it. That's essentially telling you the energy of your last filled orbital. And you can see it works quite well: from hydrogen to helium, this is the first principal quantum number, and it just goes up monotonically; then from lithium to neon, the second principal quantum number; then the third principal quantum number. And then things start to get a bit confusing when we have the d orbitals and the f orbitals.
But you can see the blocks do increase monotonically, which is what is expected from the solutions of the hydrogen atom. The problem is that once we include the electronic interactions of the inner electrons, the theory breaks down at the quantitative level, although it remains roughly qualitatively correct. The monotonic increase is basically because, as you go along a period at the same principal quantum number, you're adding more protons to the nucleus, so the attraction of the electrons to the protons increases, and it requires more energy to ionize them. But this whole approximation we've used so far neglects the electron-electron interactions, and of course relativistic effects for the heavy elements. For example, gold would not be gold-colored if we didn't include the relativistic interactions, because the core electrons travel at a significant fraction of the speed of light. So this is what is really seen; you might have seen this in your chemistry classes at A level. From the solutions we just derived, all the orbitals with n = 2 would be at the same level, and all those with n = 3 would be at the same level. But what actually happens, because the p orbitals have these dumbbell shapes and are more diffuse, is that when you include the core electrons, the p orbitals shift. This figure is slightly confusing: ionization energy increases upwards, so up here means more energy to ionize, and this is a larger energy gap. The p orbitals end up less strongly bound to the nucleus, and this is what's called shielding.
The p orbitals are shielded more: they feel the repulsion of the inner electrons more strongly, and that's why their ionization energy is lower. OK, so to fix all this, we need to include the electron-electron interactions, and this is where Hartree-Fock theory, and all the motivation for Hartree-Fock, comes in. Now, for molecular energies, there are lots of terms in the Hamiltonian which are neglected: vibrational, rotational, translational, and nuclear spin contributions, which all decrease in magnitude. We basically neglect all of these and just focus on the electronic energy. OK, so now we're starting to think about molecules. These are all the terms of the molecular electronic Hamiltonian, just for the electronic energy. I hope you like my drawing; it's color coded. We have the kinetic energy operator of the electrons; I should say we're using the Born-Oppenheimer approximation here, so we're assuming the nuclei are fixed in position and the electrons are just whizzing around these fixed nuclear positions. The kinetic energy is represented in yellow, and the nucleus-electron interaction in green. You can see this is the Coulomb term, the 1 over r; the prefactor should be the electron charge squared times the nuclear charge, the number of protons in the nucleus, which is Z_A. And r_iA is the distance between nucleus A and electron i.
And then we have this red term, which is the two-body electron-electron interaction. Because these are moving electrons, you can't solve this term easily; together with the nucleus, this is a three-body problem, so it becomes very difficult. And then you have a fourth term, the nucleus-nucleus interaction, but we tend to remove that: since the nuclei are fixed, it's not going to change, so you can just calculate it classically and add it in at the end. So the problem we're actually trying to solve with the electronic Hamiltonian is this one. This should have the charge squared as well, I apologize; these are the Coulomb, charge-charge interactions. So now we have three terms in the molecular electronic Hamiltonian: the Coulomb attraction between the positive nuclei and the negative electrons, the kinetic energy of the moving electrons, and the electron-electron interaction. The electron-electron term is what is really difficult to solve; most of modern quantum chemistry is actually about trying to solve this term rather than the others. Now, notice that if the nuclei are fixed in position, they're not variables, so we have a single-electron term here and a single-electron term here. What that means is that we basically have a non-interacting set of terms in the first part of the equation. This can be solved with a product-type approach: when you have a separable differential equation, you can use separation of variables and solve the individual terms one at a time. This last term is not separable, though, because you have these electron-electron cross terms; it's essentially because the instantaneous positions of the electrons affect each other.
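Collecting the color-coded pieces just described, in atomic units the Hamiltonian being discussed reads (with i, j labelling electrons and A, B labelling nuclei):

```latex
\hat{H} =
\underbrace{-\sum_i \tfrac{1}{2}\nabla_i^2}_{\text{electron kinetic energy}}
\;-\;\underbrace{\sum_i \sum_A \frac{Z_A}{r_{iA}}}_{\text{electron--nucleus attraction}}
\;+\;\underbrace{\sum_{i<j} \frac{1}{r_{ij}}}_{\text{electron--electron repulsion}}
\;+\;\underbrace{\sum_{A<B} \frac{Z_A Z_B}{R_{AB}}}_{\text{fixed nuclear repulsion (a constant)}}
```

Under Born-Oppenheimer, the last term is a constant for a given geometry, which is why it can be computed classically and added at the end, and the third term is the non-separable one that all the trouble comes from.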
This is incredibly hard, so what we do is just get rid of it as an initial qualitative approximation. The result is what's called the linear combination of atomic orbitals (LCAO) Hamiltonian: we literally just have the kinetic energy of the electrons and the Coulomb interaction between the electrons and the nuclei. You can see the sums in this simple picture: you have these four sums here, represented by the arrows, and then two terms for kinetic energy. All right, so now we've got a separable differential equation, a separable linear operator. That means we can use separation of variables and solve the global molecular picture with a single-electron product wave function. The way we go about solving this molecular problem, with this very simplified Hamiltonian we've picked, is to use the solutions of the hydrogen-like atom, these orbital shapes, as a basis for the Hamiltonian. Now, I must mention something called the Pauli exclusion principle, which means that the solutions of the hydrogen atom I showed you before each have space for two electrons. That's because no two electrons can share a repeating set of quantum numbers, and there is a quantum number I've ignored so far, called spin. Spin is a fundamental property of the universe, and it has this SU(2) symmetry. What we have here is what's called a spin orbital. This is really important; you might have heard the term in a lecture before. In a spin orbital, this psi here is the spatial orbital wave function.
The spatial orbital is that hydrogen-atom shape, and it has space for two electrons; the individual electronic wave function is called the spin orbital. It has two variables: r, the position, and sigma, a discrete spin variable, typically representing spin +1/2 or -1/2. And we often group them together into a combined position-spin variable called x. Okay, so that's what it means. We then take the Hamiltonian and solve it in the single-electron, single-spin-orbital picture: we take one of those single-electron Hamiltonians for each electron, and it's totally valid to solve each one separately and then form the total molecular wave function as a product. And then, as you can tell by the name, in the linear combination of atomic orbitals we form a molecular orbital from a linear combination of atomic orbitals. So you can see here we have a weighted sum over atomic orbitals, the spin-orbital-like things I just showed you, with a linear weighting coefficient for each one. It's a crude quantitative picture, but it's an adequate way of getting qualitative results, and it motivates all the arrow-pushing you saw before. So let's consider the simplest molecular system now; we're doing chemistry finally. We've got two hydrogens interacting, we take the simplest single-electron solutions, and due to symmetry arguments you can isolate just the s orbitals. That's group theory; please do ask me about it if you're interested, it's actually my favorite topic. So you isolate the s orbitals and then you can solve the interaction.
You basically take the two s orbitals, one on each hydrogen, and let them interact. These are spin orbitals, so each holds a single electron: the spatial part is the s orbital, with one spin occupation on each side, and you solve it. Now, when we introduce a basis and apply it to the Hamiltonian I showed, you get what's called the secular equation. I don't have time to derive it in this lecture, but essentially, whenever you have a Hamiltonian and you introduce a basis, you form a well-behaved minimization problem, and that's due to what's known as the Rayleigh-Ritz variational principle. There's an exercise in the lab on this. What it gives you is that this generalized eigenvalue equation drops out, and its lowest solution is always the global minimum of the energy for that basis, which is extremely powerful. You see this throughout quantum chemistry, in exact diagonalization methods, things like that. The secular equation and the variational principle are two of the founding principles of quantum chemistry. So what does this look like from a mathematical setting? This H here is called the Hamiltonian matrix. But we've got a non-orthogonal basis: the atomic orbitals on a single atom are orthogonal to each other, but orbitals on different atoms are not centered on the same point in space, so they're not actually orthogonal wave functions. That's where this overlap matrix comes in, which accounts for the non-orthogonality. So this is a generalized eigenvalue problem. Now, what do these matrix elements look like?
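Before looking at what the individual matrix elements are, here's a minimal sketch of solving that secular equation, the generalized eigenvalue problem HC = ESC, with scipy. The numbers alpha, beta and s below are made-up illustrative values, not computed integrals:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 1s-1s matrix elements for two hydrogen atoms (illustrative only)
alpha = -1.0   # on-site element  <phi_A|H|phi_A>
beta = -0.8    # coupling element <phi_A|H|phi_B>
s = 0.4        # overlap          <phi_A|phi_B>

H = np.array([[alpha, beta], [beta, alpha]])
S = np.array([[1.0, s], [s, 1.0]])

# Generalized eigenvalue problem H C = E S C (the secular equation);
# eigh handles the non-orthogonal overlap matrix directly via its second argument
E, C = eigh(H, S)
print(E)   # the lowest eigenvalue is the variational minimum in this basis
```

For this symmetric two-by-two case the solutions are the textbook bonding and anti-bonding energies, (alpha + beta)/(1 + s) and (alpha - beta)/(1 - s).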
So a lot of quantum chemistry really comes down to evaluating these Hamiltonian matrix elements. We're working in a first-quantized picture here, so the distance dependence lives in the wave functions, which is not actually the case in second quantization, but it gives you an idea of how these elements are built up. There's a whole field of first-quantized quantum chemistry on quantum computers too. Basically, what you see here is the wave function on each side: your wave function, your operator, your wave function on the right, integrated over all space. If we just take the s orbitals, we've shown that these come from the radial, spherically symmetric solutions of the Schrodinger equation, on both sides, and the operator is sandwiched between them. The operator has the kinetic energy term, and remember we're in a single-electron picture, so that's the kinetic energy of one electron, plus that electron's Coulomb attraction to the nuclei of the first hydrogen and the second hydrogen. It's a pretty nasty-looking integral, but you can actually solve it numerically with something like scipy.integrate. I think it would be really cool if someone wanted to write a solver for this for the s orbitals; these are all functions you can integrate, you just need to discretize the radial function on a numerical grid.
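In that spirit, here's a minimal sketch of such a quadrature integration with scipy.integrate, using a single normalized 1s Gaussian orbital rather than the s orbitals on the slide. The Gaussian is chosen because the analytic answers are easy to check: in atomic units, the kinetic energy is 3*alpha/2 and the nuclear attraction is -2*sqrt(2*alpha/pi); the exponent alpha = 1.0 is an arbitrary choice:

```python
import numpy as np
from scipy.integrate import quad

alpha = 1.0                              # arbitrary Gaussian exponent (a.u.)
Nc = (2 * alpha / np.pi) ** 0.75         # 3D normalization constant

psi = lambda r: Nc * np.exp(-alpha * r**2)
dpsi = lambda r: -2 * alpha * r * psi(r)    # radial derivative d(psi)/dr

# Kinetic energy <T> = (1/2) * integral of |dpsi/dr|^2 * 4*pi*r^2 dr  (s orbital)
T, _ = quad(lambda r: 0.5 * dpsi(r) ** 2 * 4 * np.pi * r**2, 0, np.inf)

# Nuclear attraction <V> = -integral of psi^2 * (1/r) * 4*pi*r^2 dr (Z=1 at origin)
V, _ = quad(lambda r: -psi(r) ** 2 * 4 * np.pi * r, 0, np.inf)

print(T)   # analytic value: 3*alpha/2 = 1.5
print(V)   # analytic value: -2*sqrt(2*alpha/pi), about -1.596
```

The same pattern, wave function on each side with the operator sandwiched in between and integrated over all space, is how every element of the Hamiltonian matrix is built.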
And then we have the overlap matrix, which has quite an intuitive picture, because in this first-quantized picture the distance dependence lives on the wave function rather than on the operator, like it does in second quantization. So you get a genuinely physical overlap picture: you have these exponentially decaying functions with cusps at the nuclei, and the overlap element increases with how much they physically overlap. Okay, I'll just quickly do a few more slides and then we'll break. So, you'll notice that equation was an eigenvalue equation, and we had two basis functions, so there will be two solutions for the s orbitals on hydrogen. We solve the one-electron picture and then fill it up with two electrons: one solution for each electron, and they're the same because the spatial orbitals are the same. So you have two interacting hydrogen atoms, and you get what's called a bonding molecular orbital, this is super important now, and an anti-bonding molecular orbital. We take the two spin orbitals and form molecular orbitals. It's quite confusing, because you get two molecular spin orbitals down here, one spin up and one spin down, and two more up here, but those are vacant. The sigma-g is the bonding orbital, the positive linear combination of the two atomic orbitals, and the sigma-u-star is the negative combination; g and u stand for gerade and ungerade, which is German for even and odd: symmetric or anti-symmetric under inversion through the center.
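That physical overlap picture can be checked numerically. A minimal sketch, integrating two 1s Slater orbitals (zeta = 1) on a 3D grid and comparing against the known closed-form 1s-1s overlap e^(-R) * (1 + R + R^2/3); the distance R = 1.4 bohr is just a typical H2 bond length:

```python
import numpy as np

R = 1.4  # internuclear distance in bohr (a typical H2 bond length)

# Cartesian grid; 0.125 bohr spacing is enough for a rough check
pts = np.linspace(-6.0, 8.0, 113)
dV = (pts[1] - pts[0]) ** 3
x, y, z = np.meshgrid(pts, pts, pts, indexing="ij")

# Normalized 1s Slater orbitals on nucleus A (origin) and nucleus B (at z = R)
rA = np.sqrt(x**2 + y**2 + z**2)
rB = np.sqrt(x**2 + y**2 + (z - R) ** 2)
psiA = np.exp(-rA) / np.sqrt(np.pi)
psiB = np.exp(-rB) / np.sqrt(np.pi)

S_numeric = np.sum(psiA * psiB) * dV
S_exact = np.exp(-R) * (1 + R + R**2 / 3)  # closed-form 1s-1s Slater overlap
print(S_numeric, S_exact)
```

Moving the nuclei further apart makes both numbers decay towards zero, which is exactly the "less physical overlap, weaker interaction" picture.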
Now you get this negative combination here, and that's really the bonding and anti-bonding picture. It's not normally this simple when you have lots more electrons and spin orbitals, but in a two-spin-orbital system it's quite nice to see. So just to summarize: you join up these two atomic orbitals, you form these two molecular orbitals, and then you can look at the Born-rule densities for this new molecular orbital picture. If you look at the bonding orbital here, you can multiply it out into the two atomic orbitals if you want. Under the Born rule you get these peaked nuclear cusps, and the bonding interaction shows up as this favorable density in the center of the bond; that's why you get this joined electron cloud. These are all just mathematical solutions. Then you have the anti-bonding orbital, which is the negative combination: a positive and a negative lobe of the same function, so the wave function has this nasty node. And when you take the Born rule, you're squaring it, so you get this unfavorable region in the center where the density vanishes, where the electrons are saying "get away from me". That's why you see the anti-bonding behavior, and that's why you have the positive and negative colors here; these are called the phases. So the summary is: you can build up a very simple molecular bonding picture with the hydrogen-like atomic orbitals. You barely have to think about the math, really; it basically boils down to: if they have the same shape and they overlap with the same color, the same phase, there will be a favorable interaction.
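A minimal one-dimensional sketch of those Born-rule densities, using two Gaussians as stand-in atomic orbitals (the positions and widths here are arbitrary illustrative choices, not solutions of anything):

```python
import numpy as np

x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]
g1 = np.exp(-((x + 1.0) ** 2))   # "atomic orbital" on the left nucleus
g2 = np.exp(-((x - 1.0) ** 2))   # "atomic orbital" on the right nucleus

# Bonding (+) and anti-bonding (-) combinations, normalized on the grid
bond = g1 + g2
anti = g1 - g2
bond /= np.sqrt(np.sum(bond**2) * dx)
anti /= np.sqrt(np.sum(anti**2) * dx)

mid = len(x) // 2                # the midpoint between the two "nuclei"
print(bond[mid] ** 2)            # favorable density built up between the nuclei
print(anti[mid] ** 2)            # node: the density vanishes at the midpoint
```

The anti-bonding density is exactly zero at the midpoint, the node, while the bonding density piles up between the two centers.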
So this is where chemists get so much of their chemical intuition from: they're implicitly solving the linear combination of atomic orbitals in their heads, asking which orbitals overlap favorably, and that gives the bonding picture. So here you have the two s orbitals favorably interacting to form what we call a sigma bond, because it's cylindrically symmetric around the bonding axis. And when you have two favorably interacting p orbitals, you get the positive phase delocalizing favorably along the top and the negative phase delocalizing favorably along the bottom. So you get these two sausages, almost, above and below the bonding axis; that's a pi bond, with 180-degree rotational symmetry. And if you're interested in group theory, these labels all come from point-group symmetry, from the irreducible representations of the finite groups. Now there's an infinite number of solutions for the hydrogen atom, right? So you can start to form all these bonding pictures between two hydrogen-like atoms, and it's quite intuitive: favorable interaction where the same phase overlaps, unfavorable where it doesn't. The p orbitals are all the same shape, but they're on different axes, so they can spatially interact in slightly different ways: you get sigma-bonding p's, pi-bonding p's, anti-bonding pi's, anti-bonding sigma's. And you can see the pi combinations interact less strongly: pi bonds are weaker than sigma bonds because they have less physical overlap, and that comes directly from the overlap term in the secular equation having less magnitude for those combinations. Yeah, so this is the chemical intuition that people sort of have.
And it's basically just orbital overlap; that's all you think about. So benzene is these six carbon atoms. Carbon is in the p block, so it has these p orbitals: px, py and pz. Now take all the pz's from the carbons; you have six of them, so this is a six-dimensional basis, one pz for each carbon atom. And then you get these favorable interactions, the pi-bonding interactions of benzene, this delocalized cloud. And this is really cool, because if you start to think about graphene, you can see why it has all these delocalization properties: it's basically an extrapolation of this to tessellations of these rings. Okay, I'm going to stop there for now. Any questions?

Maybe it's a bit early, but I need help building my chemical intuition. If we look at the diagrams of the atomic orbitals, where you have light and dark patches, and I want to make them fit together, should I be trying to put light on top of light, or light on top of dark?

You want them to overlap with the same colors. You can actually see it here: this one is the most strongly bonding because, vertically, you have six same-color interactions on top and then, for the negative phase, six same-color interactions on the bottom. Then the bond becomes less strongly bonding here, because there's more mixing: up here you've got one positive, one negative, two positive, two negative. And this one is the most weakly bonding, the most strongly anti-bonding, because everything is completely out of phase. The more lobes of different colors touching, the worse, basically.
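That six-pz picture for benzene can be sketched with a simple Hückel model. This isn't derived in the lecture, it's the standard textbook tight-binding sketch, and alpha = 0, beta = -1 are just illustrative values for the usual on-site and nearest-neighbour parameters:

```python
import numpy as np

n = 6                      # six carbon pz orbitals around the ring
alpha, beta = 0.0, -1.0    # Hückel on-site and nearest-neighbour parameters

H = alpha * np.eye(n)
for i in range(n):
    j = (i + 1) % n        # ring connectivity: each carbon couples to the next
    H[i, j] = H[j, i] = beta

E = np.sort(np.linalg.eigvalsh(H))
print(E)  # alpha + 2*beta, then two degenerate pairs, then alpha - 2*beta
```

The lowest level, alpha + 2*beta, is the fully in-phase combination, the delocalized pi cloud; the degenerate pairs in the middle are the partially in-phase combinations.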
I noticed, when you were showing the ionization energies of various elements, that indium and gallium have really low ionization energies. Does this have something to do with why they're used in III-V semiconductors?

I'm not an expert there, but I believe that's more to do with fabrication, with the fact that they form materials, like gallium arsenide, that are easy to fabricate. As for the low ionization energy: I don't have a periodic table in my head, it's been a long time since I've actually done chemistry properly, so where's gallium? Someone help me. Right, there it is: in that row, after the d shell, gallium is the first element of the p block. The principal quantum number has changed, and a single electron has just been pushed up into a new p orbital, so it's easy to peel that electron off. Yes, exactly. You can actually see the start of the p-block organization there: six p-block columns, gallium the first of them, running along to krypton. And as the nuclear charge grows across the row, the nucleus holds onto its electrons much more strongly.

Okay, one more quick question. Thank you for the wonderful lecture. So, in the linear combination of atomic orbitals, there is the secular equation, right, and you have these variational parameters, the coefficients C. Could you just take all possible atomic orbitals and make one big linear combination of them, and, by brute force or machine learning, go as far up as you can and learn everything?

You could just include all of them if you wanted to, yes, and solve it.

And would there be any significance to it? What would the implications be?

You'll see the symmetry.
If you were naively to throw in all the spatial orbitals, and you were to machine-learn it, you would see them group into symmetry-related objects: the Hamiltonian would become block-diagonal over symmetry groups, the irreducible representations. So if you threw all the orbitals at it, the machine learning would just rediscover the physical symmetries present in your problem, and that would be represented by the blocks of this Hamiltonian: you'd end up with eigenvectors containing a lot of zeros, where symmetry doesn't allow interaction.

Okay, good. So let's have a quick break and thank our speaker for the first hour. We'll be back soon.

Okay, shall we start? I'll continue. So far we've only spoken about the non-interacting electronic picture, where we basically threw away all the electron-electron terms in the Hamiltonian, so that we could use the separable single-electron idea and solve the total molecular wave function in single-electron pieces. Now, to take the theory further, we want to reintroduce the electron-electron interaction, and what we use is a mean field. The true electron-electron repulsion is essentially unsolvable in this single-electron picture, but what you can do is add in a mean-field term. I quite like this figure, because it shows you what the mean field is actually doing; it makes a lot of intuitive sense. The electron density peaks around the nuclei, right? We saw that in the previous plot.
And this is the three-dimensional picture: the mean electron density is always greatest around the nuclei, because the electrons are attracted to them by the Coulomb interaction. So the idea is quite simple: can we treat that density as a fixed background potential in our Hamiltonian? You add this extra mean-field term, and formally the mean field is defined through the density: you integrate the interaction against the density, which essentially gives you a three-dimensional map of this electron-nuclear structure. And notice that we've integrated out all the electron variables here, so once we have the density, this term is just a constant one-electron potential. So let's assume we're given the density for now. Then we can solve this mean-field Hamiltonian with the separable methods from before: the global molecular wave function becomes a product of single-electron wave functions. And this is the famous Hartree product. This was introduced in the late 1920s, I think, by Hartree, who was an expert in numerically solving differential equations from ballistics work in World War I; he was at Cambridge, and he basically applied all that knowledge to the differential equations of the Schrodinger equation. So: a product of single-electron wave functions. And because the mean-field operator is still a one-electron operator, you've integrated the other electrons out over all space, we can use this product form. But it's not anti-symmetric, and electrons are indistinguishable, so there's a lot wrong with it; I'm just trying to show you the motivation. It turns out that, when you're given a density, it is solvable. So say we're given some electron density in space.
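In practice, as we'll see in a second, the density you put in and the orbitals you get out have to agree with each other, so this gets solved by iteration. A toy sketch of that self-consistent loop, on a made-up two-site model (the matrix h, the U term, and all the numbers are purely illustrative, not the real Hartree equations):

```python
import numpy as np

h = np.array([[0.0, -1.0],
              [-1.0, 0.0]])   # fixed one-electron part (kinetic + nuclear)
U = 1.0                        # strength of the toy mean-field repulsion

def fock(n):
    # mean-field term: each site feels a potential from the average density there
    return h + U * np.diag(n / 2)

n = np.array([1.5, 0.5])       # deliberately lopsided initial density guess
for it in range(100):
    eps, C = np.linalg.eigh(fock(n))
    n_new = 2 * C[:, 0] ** 2   # two electrons fill the lowest orbital
    if np.max(np.abs(n_new - n)) < 1e-10:
        break                  # the field is now self-consistent
    n = 0.5 * n + 0.5 * n_new  # damped update to help convergence

print(n)                       # settles onto the symmetric density [1, 1]
```

From a deliberately lopsided starting density, the loop settles onto the symmetric density, at which point the field generated by the orbitals reproduces itself.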
We can solve this here, because this mean-field term can be solved: this is the potential term and this is the kinetic term. But you might notice that when you solve it, you get a new set of spin orbitals, and therefore a new density: it's a non-linear equation. For one input set of spin orbitals you solve the equation, and you get a different set of spin orbitals out. So what you have to do is keep iterating over this, and this is what's called the self-consistent field: the field term has to become consistent with the orbitals it generates. Now I should say, there's no basis here so far; these are integro-differential equations, and they are directly solvable in some cases. The famous one is, again, the spherically symmetric case: you can actually solve the Hartree, or Hartree-Fock, equations for a single atom. So, for example, fluorine, with lots of inner core electrons, you can still solve essentially exactly because of the spherical symmetry; if you look online there are some very good resources showing this. But as soon as you go away from spherical symmetry to a molecular picture, you have to start solving things with a basis, in the same way we used the linear combination of atomic orbitals, although, due to numerical problems, those bare atomic orbitals aren't suitable, and I'll come on to why that is in a second. But as I said before, this is a very crude picture, I apologize. And when you go to helium, you hit the first interesting case.
Helium is the first case of the Schrodinger equation where you have two electrons in the core. Theoretically you get both symmetric and anti-symmetric solutions, but in reality only the anti-symmetric solutions were being observed. So this was the first experimental hint that anti-symmetry is needed. Okay, so what is anti-symmetry? It's quite simple: because the physical solutions were anti-symmetric, they realized the electronic wave function has to be anti-symmetric with respect to particle exchange. Why? I just put "because of the universe". Apparently if you go into quantum field theory there is a deeper explanation, but it's essentially an observed fact for our purposes. So, going back to the previous method, the Hartree product: we keep this single-electron picture, but we give it the constraint that it has to be anti-symmetric with respect to particle exchange. So we take the simple two-electron, two-spin-orbital Hartree product, and because there are two terms you get this normalization factor of one over root two. And, sorry, there's a mistake on the slide here: these x's should be swapped. But basically you see you have this swapped-over term, which is the exchange term. And that can be expressed very neatly as a determinant: if you take the determinant of this matrix, you get the equation from the previous slide, and we then represent it compactly as a ket, where the ket stands for the determinant. Now this generalizes to n electrons, n spin orbitals, and it has all the same properties.
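A tiny numerical check of that anti-symmetry, with two made-up single-particle functions standing in for spin orbitals (these are illustrative 1D functions, not solutions of anything):

```python
import numpy as np

# Two arbitrary "spin orbitals" for illustration
phi1 = lambda x: np.exp(-x**2)
phi2 = lambda x: x * np.exp(-x**2)

def slater(x1, x2):
    # 2x2 Slater determinant wave function, with the 1/sqrt(2) normalization
    return np.linalg.det(np.array([[phi1(x1), phi1(x2)],
                                   [phi2(x1), phi2(x2)]])) / np.sqrt(2)

a, b = 0.3, 1.1
print(slater(a, b), slater(b, a))   # exchanging the two electrons flips the sign
print(slater(a, a))                 # two electrons at the same point: zero (Pauli)
```

The sign flip under exchange and the vanishing amplitude for two electrons in the same state are exactly the determinant properties described next.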
You know with determinants: if you swap two columns or two rows, the sign flips, things like that. So what this is showing is that this single-electron product wave function can actually account for the anti-symmetry. As you exchange the columns, you get the sign flipping, and exchanging the columns is equivalent to exchanging the electrons, in this bra-ket notation. Okay, so what does anti-symmetry do to our operator? I don't have time to derive the Hartree-Fock equations, so you'll just have to take this from me. If you go back to this equation, the mean-field equation, we have this mean-field term where we're integrating over all the spin orbitals, all their positions. In the presence of anti-symmetry, this result falls out: your mean-field term splits into what's called the Coulomb term, which is the electron-electron interaction with a clear physical understanding, and then you've got the exchange term. Now the exchange term doesn't have a classical physical interpretation; it's purely a facet of the necessary anti-symmetry in the equation. But you can see the difference: in the Coulomb term you're integrating over the density of the other electron, whereas in the exchange term, see how the two particles have been exchanged inside the integral. So it looks like a two-particle operator, but you're actually integrating out the variable, so it's still a mean-field, one-electron operator, which is a bit confusing. I think it's quite remarkable, really, that you can have a one-electron operator that still accounts for anti-symmetry, when anti-symmetry is a property of two particles.
So this is the solution for the whole molecule, with the sum over i, where i runs over the spin orbitals, and j runs over the spin orbitals as well. You've got the Coulomb term, which is the interaction with the average rather than the instantaneous point-charge positions, that's the density, and then the exchange term, which exists only because of anti-symmetry. And this causes a lot of problems; one of the exercises will be to try to understand it a bit more. We can then take this Hartree-Fock molecular operator and, because it's a one-electron operator, use the same trick of separable differential equations and solve in a single-electron picture. And this gives us what's known as the Fock operator. This is really important. So what is the Fock operator? Inside it we have the one-electron kinetic term and the one-electron electron-nuclear potential, and then we have the mean-field potential term, which is our stand-in, in this picture, for the electron-electron interaction; it splits into the Coulomb term and the exchange term. And it's quite strange, because the sums run over all the j's, while i is the solution you're trying to find. So again you have this kind of feedback loop, where you're solving for i, and then the i's go back into the equation. So once more we have an iterative self-consistency condition, but now we've got this anti-symmetrized mean-field operator.
So you can think of Hartree-Fock as just an anti-symmetrized mean field, but it must be solved iteratively, and that's why you see the term self-consistent field. Okay. Now, for the purposes of the previous discussion we were treating this as a differential equation that could be solved analytically. But we know that it can't be, for molecules: for multiple nuclear centers there's no analytical closed-form solution. So we have to solve what are called the Roothaan equations, which are essentially the secular equation, but with the Fock operator, and non-linear, because we have this feedback loop. So we introduce what's known as a finite basis. Now, as I said before, we can't use the bare linear combination of atomic orbitals, due to numerical problems with orthogonality, but we can still use the same ideas in the generalized secular equation. The problem is we need to find a basis. So, what is a basis? This is something quite subtle, and I think it's where a lot of people get confused in quantum chemistry. We have a spin orbital, and this cartoon is quite nice: the black curve here is the exact wave function, and we introduce a basis, these red circles, these etas, each with a weighting. The more etas you have, the better your approximation to the overall wave function will be. So the basis is like a set of building blocks, your LEGO, essentially: you combine them into the shapes of the orbitals that we showed in the hydrogen picture. You have this molecular bonding picture, but it's made up of these smaller blocks. It really is quite subtle.
So the pictures we showed for benzene and so on: you build those from a smaller set of what we call basis functions. If you've got more balls, a larger basis, you'll get a more accurate answer; and if you have different shaped balls, typically different angular-momentum shapes, you'll get a better fit too. So there's a huge amount of research in quantum chemistry into building basis sets for different purposes: for example STO-3G, 6-31G, double-zeta basis sets, things like that. Typically, rather than balls, they're proper mathematical functions, and we're trying to fit the exact wave function with this finite basis. The classic example is Gaussians: you combine a load of Gaussians. In the 70s and 80s people spent a long time fitting these basis functions, tweaking the Gaussian parameters quite rigorously; there's roughly a paper per basis set. So now we essentially have a fixed set of LEGO blocks that we apply to our problem, and we can build our bonding orbitals from a set of Gaussians, for example, or Slater-type orbitals. And what's nice, because of what I said about the variational principle, is that the secular equation always finds the minimum energy for a finite basis. So you can obtain your bonding orbitals by directly solving the Fock secular equation. It's very powerful and very cool. And this is why we can't skip straight to the quantum computing yet.
You need to do all this classical, old-school quantum chemistry first to build the shapes of the orbitals, because these numerically fitted basis functions carry the parameters that build up the orbital shapes, like those of benzene or H2. Okay, so, the two main examples of basis functions, for molecules at least. First, Slater-type orbitals, which are exponential in a single power of r. We know from Kato's cusp theorem that the wave function has these finite peaks, the nuclear cusps, and that is exactly the behavior of the Slater functions, right? The problem is that they require a lot more mathematical work when you start applying them in the secular equation. So typically, purely for numerical purposes, we use Gaussian-type orbitals instead, and the reason is that the product of two Gaussians is another Gaussian. So you can take these four-center Gaussian integrals and collapse them down to a single Gaussian, which makes the numerics much easier. You can see in this figure: this is the Slater-type orbital, and these are Gaussians approximating it; with a combination of Gaussians you can fit the Slater-type functions quite accurately. So what Pople and colleagues did was basically fit a load of Gaussians to these Slater functions. STO-3G is a Slater-type orbital made of three Gaussians: a Slater-type orbital through Gaussians. Okay, so let's take this further; this is where it gets quite advanced. We take the spin orbital that we had before and the Fock equation, the Fock operator; for the energy, we're integrating, this is a bra, operator, ket, just not shown in Dirac notation, over real space for a single spin orbital.
Now, when we expand out the operator in terms of the spin orbitals, we get the one-electron term plus the exchange and Coulomb terms. Now, when we apply the basis expansion, say we've picked some basis functions eta, we substitute that into the previous equation and the Fock operator takes this form. Okay, so you notice we've now got extra structure: the G's are these orbital integrals, and the C's are the weightings of each basis function term. Okay, so the Fock operator just got a lot more complicated, and, really importantly, the Fock operator now has the orbital coefficients C inside it in the basis expansion. You can now see why this is non-linear: you solve F, which has C in it, to get C. You have to keep iterating until it stops moving. Now, going back to our benzene example: we're not using atomic orbitals now, we're using this arbitrary basis expansion, say STO-3G. By solving the Fock operator through the Roothaan equations, each eigenvector solution is one of the spin orbitals. Okay, so say we had some rough p-orbital-like basis functions, made of Gaussians; the linear combination of those all adds up to the delocalized p orbital here. These are all mathematical objects now, but you can see we end up with the same result, and again, because this is the secular equation, which comes from the Rayleigh-Ritz variational principle, this is the minimum energy for each spin orbital. So just to clarify: inside the Fock operator we have these basis function overlaps, and these eta are just Gaussians here, so you're doing four-center Gaussian overlaps.
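The secular step itself is just a generalized eigenvalue problem, FC = SCε. A minimal numerical sketch (with a made-up symmetric "Fock" matrix and overlap matrix, not real chemistry integrals; in practice this solve sits inside the self-consistent loop because F depends on C):

```python
import numpy as np
from scipy.linalg import eigh

# Toy Roothaan step: solve F C = S C diag(eps) for fake F and S.
rng = np.random.default_rng(0)
n = 4
F = rng.standard_normal((n, n))
F = F + F.T                                  # symmetric "Fock" matrix
M = rng.standard_normal((n, n))
S = M @ M.T + n * np.eye(n)                  # positive-definite "overlap"

eps, C = eigh(F, S)                          # generalized eigenproblem
assert np.allclose(F @ C, S @ C @ np.diag(eps))
assert np.all(np.diff(eps) >= -1e-12)        # orbital energies come out sorted
```

Each column of `C` plays the role of one spin orbital's expansion coefficients over the basis functions.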
So there's quite a lot of hard-fought numerics going on here. So, to summarize: Hartree-Fock is essentially an antisymmetrized mean-field approach, and it's used to get a rough electronic structure. Hartree-Fock really does explain a lot of the chemistry quite well. You can see here, by solving the Roothaan equations (I did this with Gaussian a long time ago), it gets the chemistry correct: this is what you see in molecules, and the qualitative picture is correct from Hartree-Fock most of the time. The problem is when you want that last 1% of the energy, to get a transition state correct; that's when you need higher-order methods. But Hartree-Fock is responsible for most of the chemical structure you see in the higher-order methods, if that makes sense. The orbital shapes all come from the Hartree-Fock calculation, and that's why it's so important. And, most importantly, these C eigenvectors are the shapes of the spin orbitals. You need to do a Hartree-Fock calculation before a higher-order quantum computing calculation to get the spin orbital shapes that carry the spatially dependent interactions into the second-quantized Hamiltonian. What I mean by Hartree-Fock being passed on to second quantization, for configuration interaction, coupled cluster or quantum computing calculations, brings us back to the quantum computing discussion at the beginning of this lecture. Now you might understand why you have to really respect Hartree-Fock to get the interaction parameters. So here you can see your second-quantized chemistry Hamiltonian. The indices i and j label spin orbitals, which are themselves linear combinations of basis functions, okay? So when you multiply out the spin orbitals in their basis expansions, all the linear combination coefficients come out into the integrals.
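One way to see "the interaction parameters contain the orbitals" concretely: the one-electron integrals in the molecular-orbital basis are the atomic-orbital integrals rotated by the Hartree-Fock coefficient matrix C. A toy numpy sketch (both matrices here are made up, standing in for real AO integrals and MO coefficients):

```python
import numpy as np

# h_ij (MO basis) = sum over mu, nu of C[mu, i] * h_ao[mu, nu] * C[nu, j]
rng = np.random.default_rng(1)
n = 4
h_ao = rng.standard_normal((n, n))
h_ao = h_ao + h_ao.T                        # fake AO-basis integrals
C, _ = np.linalg.qr(rng.standard_normal((n, n)))  # fake orthonormal MO coefficients

h_mo = C.T @ h_ao @ C                       # the rotation bakes C into h_ij

# The trace is invariant under the rotation, but the individual h_ij
# are not: they depend on the orbital shapes through C.
assert np.isclose(np.trace(h_mo), np.trace(h_ao))
```

The two-electron tensor g_ijkl transforms the same way, with four copies of C.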
So the h_ij terms are rank-two tensors, and the g_ijkl term is the rank-four tensor of these Gaussian integrals; the double sum contains the Coulomb and exchange terms. The point is, you might do your quantum computing calculation and you'll get a chemistry Hamiltonian whose terms contain all the information about the shapes of the orbitals. That's the moral of the story: your interaction parameters in second quantization contain the Hartree-Fock orbital expansion coefficients. So that's why you need to do this lower level of theory first, and I'll explain why in a second. So, Hartree-Fock: almost perfect. At equilibrium geometry it typically gets something like 99% of the energy correct, but at bond-breaking and bond-forming lengths it breaks down, and I'll show you why. And most importantly, strong electron interactions, strong correlation, are not treated well by Hartree-Fock; that's where the more accurate methods, including quantum computing, are used. So here's an example of where Hartree-Fock starts to break down. You've got your two hydrogen atoms at equilibrium geometry, and you can see here that as you stretch the bond, Hartree-Fock deviates badly from the exact answer. There's this other method called configuration interaction, and the exact curve here is the infinite-basis limit. Very interesting: the CI uses the same basis as the Hartree-Fock, but it gets the dissociation limit right. So what is configuration interaction? Going back to our earlier picture: the spin orbital solutions are the same as before, in the benzene example, but now we think about the ground state as a single-determinant wave function, the ground state of hydrogen, H2.
We have two electrons in two spin orbitals: one sigma_g solution times another sigma_g solution. The sigma_g ground-state solution, as we showed before, has this s-orbital-like character: it's the positive linear combination of two s orbitals. Now if we just roughly multiply the two together, we get this quadratic product, and when you expand it out you get four cross terms, okay? The four cross terms end up being: both electrons on one atom, both electrons shared, both electrons shared, both electrons on the other atom. But if we think about genuinely dissociated bonds, the electrons want to be on individual atoms; they don't want to be piled onto one, okay? So you can see there are really unphysical allowed solutions in this expansion: we've got these ionic terms here which shouldn't be there in the long-separation limit, where the electrons should be separate. So the single-determinant picture is not adequate, and that's why we have to use this method called CI. And CI is really what motivates quantum computing too; it's the same problem. You start to introduce extra terms in the expansion. What do I mean by that? Extra determinants. So you have this many-electron, multi-determinant wave function now. What you do is take the solutions of Hartree-Fock, so each one of these lines here, one, two, three, four, these are solutions to the Fock matrix, eigenvectors of the Fock matrix. And here we've got sigma bonding g, sigma bonding g.
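The four cross terms are just the term-by-term expansion of (s_A + s_B)(s_A + s_B), one factor per electron. A tiny sketch that enumerates them and separates the "ionic" terms (both electrons on one atom) from the "covalent" ones:

```python
from itertools import product

# sigma_g ~ s_A + s_B for each electron; expand the two-electron product.
orbital = ["s_A", "s_B"]
terms = [f"{p}(1){q}(2)" for p, q in product(orbital, orbital)]

ionic = [t for t in terms if t.count("s_A") != 1]     # both electrons on one atom
covalent = [t for t in terms if t.count("s_A") == 1]  # one electron on each atom
assert len(terms) == 4 and len(ionic) == 2 and len(covalent) == 2
```

At dissociation only the two covalent terms are physical; the two ionic ones are the troublemakers.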
And here we've got sigma_g bonding multiplied by sigma_u star, antibonding; then antibonding with bonding; and then two antibonding. So we introduce these extra determinants. This seems a bit arbitrary, why are we doing this? But look what happens when you combine the totally bonding and totally antibonding determinants in a linear combination. This is very important now: we've gone from the single-electron picture to the many-electron basis. This is now a wave function of two electrons. Okay, so we're out of the single-electron picture: we have this many-electron wave function as a linear combination of these two determinants, each with a weighting parameter. What happens when you work through the maths is that the coefficients arrange themselves so that the unphysical terms cancel: the two determinants contribute equally, the long-separation ionic terms cancel, and you end up with the truly physical solution, only the separated s_A s_B terms survive. And if you were to solve the secular equation for this many-electron wave function, this is the solution you'd get, but you can also see it in an intuitive way. So now the bonding is described correctly at long distances: you only have these separated terms, and that's the physical answer. This is really the motivation for configuration interaction, and configuration interaction is the complete generalization of this trick. So now we've left Hartree-Fock behind, okay? Hartree-Fock gives us each one of these black lines; it gives us the spin orbital shapes.
And then we have this multi-determinant wave function, which here is a six-electron wave function. And then we introduce all possible combinations of excited determinants. Okay, so as you saw in the H2 example, we added one extra determinant; in the general case, you take all combinations of all possible excitations. So here we have what's called the reference wave function, which is the ground state, the lowest occupations of the Hartree-Fock solutions. And then here we have the single excitations; I should say this is a set, all possible singles, this diagram just shows one of them. So A is the hole, and R is the particle, so you get this particle-hole excitation here. And then we have the same for the doubles, all possible double excitations, and the full solution goes to all possible excitations. This is equivalent to exact diagonalization, which you might have heard of in physics; chemists call it configuration interaction, because if you think about it, you're introducing all these extra electronic configurations. And then the wave function is essentially this: you have the C_0 reference, which is just the Hartree-Fock wave function, by the way, the product of the single-electron Hartree-Fock solutions, and then the weighted singles and the weighted doubles, summed over the occupied and virtual orbitals, et cetera. We often call these the occupied space and the virtual space. Then we get our old friend the secular equation: because we have a basis and a Hamiltonian, we can apply the secular equation, and the solution is optimal for the ground and excited states.
But whereas before the problem scaled with the size of the basis set you chose, those Gaussian functions, here you have a combinatorially scaling matrix, okay? That's really bad, okay. A lot of quantum chemistry over the past 40 years has been devoted to solving this equation efficiently; it's just a generalized eigenvalue problem. If you were to naively solve it by direct diagonalization, you can only really do up to about 10 spin orbitals. The largest known numerical solutions use iterative diagonalization, Krylov-subspace methods, which, rather than solving the matrix exactly, approximately solve it using matrix-vector products and iterations of those. The largest solution of that kind, with Lanczos-type methods, is 22 electrons in 44 spin orbitals. So you can't do any large molecules with this, interestingly, and the amount of effort they put in to get one extra spin orbital was crazy, because you hit the combinatorial scaling wall. That's really motivating, to me, for quantum computing: the basis scales combinatorially, actually slightly less than 2^N at fixed particle number, so you can really see why we need quantum computing methods. Quantum chemistry has been stuck here for about 40 years trying to solve this problem; we're not going to get past it with classical methods, really. All the new classical methods are, in a sense, hacks to solve this slightly more efficiently: how can we keep the interesting part of the wave function? But if we really want to solve this problem fully, we need quantum computers to go to an interesting system.
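The combinatorial scaling is easy to make concrete: for N electrons in M spatial orbitals, choosing the alpha and beta occupations independently gives a binomial-squared count of determinants. A quick stdlib sketch (the 22-electron / 44-spin-orbital record above corresponds to 22 spatial orbitals with 11 alpha and 11 beta electrons):

```python
from math import comb

def fci_dim(n_spatial, n_alpha, n_beta):
    """Number of determinants in the full CI space."""
    return comb(n_spatial, n_alpha) * comb(n_spatial, n_beta)

assert fci_dim(2, 1, 1) == 4              # H2 in a minimal basis: 4 determinants
dim = fci_dim(22, 11, 11)                 # the record calculation's space
assert dim == comb(22, 11) ** 2           # roughly 5e11 determinants
```

Half a trillion basis states for 22 electrons is why one extra orbital is such hard-won ground.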
Now, there are lots of arguments in quantum chemistry that we don't need to treat the whole system, only the sites of interest, active spaces and things like that. Garnet Chan will probably tell you about his methods, perturbation theory, the chromium dimer, DMET for example. And the argument is fair: is it necessary to solve this for everything? No. But some systems have very strong electron correlation where it is needed, and nitrogenase, the enzyme that does nitrogen fixation in the soil, is a good example of this. And very importantly, this gives you excited states as well. Okay, I'll briefly talk about second quantization; I briefly alluded to it, but probably not everyone is familiar with it. Second quantization essentially takes the distance dependence out of the wave function: you don't have any of these functions of r in your wave function any more. It takes that and puts it into the operator, as you saw on the previous slide. It means we can treat the basis in the occupation-number formalism, just ones and zeros, and it puts everything on a regular mathematical footing. The determinant-like properties that we saw in first quantization, where you can exchange the rows and columns and so on, are all represented in the same way by these fermionic creation and annihilation operators. So you have the vacuum state, the empty ket; you act with a creation operator and you get spin orbital p occupied. In the same way, you can destroy p and get back the empty ket, and you can apply them in succession. And then you can see, this is the really important one here: what happens when you apply them in a given order, p then q.
If you exchange p and q via the fermionic operators, rather than exchanging rows of the Slater determinant, you still get the same antisymmetry properties. But this is now done at the operator level, by these creation and annihilation operators, rather than at the wave function level by shuffling the determinant. And obviously there are all these famous anticommutation relations here. The main idea is that these operators preserve the antisymmetry. Okay, so finally: what we just spoke about, configuration interaction, is what's known as a post-Hartree-Fock method. And as I said, these post-Hartree-Fock methods require Hartree-Fock to be done beforehand. That's why you've downloaded PySCF before you do your quantum computing calculation. You run your Hartree-Fock calculation: you give it your nuclear coordinates, you choose your basis set, and you get an output of optimized orbitals and electronic integrals. You then give these electronic integrals to your configuration interaction, or you could use coupled cluster or whatever method you prefer. But the point here is that quantum computing for chemistry is exactly the same as configuration interaction in terms of what it requires to be run beforehand: you need these optimized integrals in the second-quantized form. I'll stop there, I think. Any questions? (Thanks, Nathan. No questions? Okay, let's take a break; the next session starts in 20 minutes. Please take a seat. Okay, let's start the final talk of the morning session.) I'm going to get started, because I realized I've spent two hours talking about classical quantum chemistry rather than quantum computing for quantum chemistry.
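The operator-level antisymmetry above can be sketched in a few lines of plain Python: a creation operator acting on an occupation bitstring, with the (-1)^(occupied orbitals to the left) phase doing the work that row swaps do in the determinant picture. This is a toy bookkeeping model, not any library's API:

```python
def create(p, occ, sign=1):
    """Apply a_p^dagger to (sign, occupations); None means the state vanished."""
    if occ[p] == 1:
        return None                       # Pauli exclusion: (a_p^dagger)^2 = 0
    phase = (-1) ** sum(occ[:p])          # parity of occupied orbitals before p
    new = list(occ)
    new[p] = 1
    return sign * phase, tuple(new)

vac = (0, 0, 0)
s1, st1 = create(1, vac)                  # a_1^dagger |vac>
s1, st1 = create(0, st1, s1)              # a_0^dagger a_1^dagger |vac>
s2, st2 = create(0, vac)                  # a_0^dagger |vac>
s2, st2 = create(1, st2, s2)              # a_1^dagger a_0^dagger |vac>
assert st1 == st2 == (1, 1, 0)
assert s1 == -s2                          # swapping the operator order flips the sign
```

Same final occupations, opposite sign: exactly the anticommutation relation, done entirely at the operator level.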
So we're gonna start talking about that now. Someone asked me a very good question in the break: where do the qubits come into configuration interaction? So basically, qubits can represent occupation numbers. If you look at these configurations here, this would be a 12-qubit problem: the first six qubits would be one, one, one, one, one, one, and the final six qubits would be zero, zero, zero, zero, zero, zero. So you can see that all the possible combinations of ones and zeros, the 2^n possible qubit configurations, can represent all the electronic configurations this way. It's actually interesting, because the configuration interaction stays within the six-particle sector, but the qubits can obviously have all zeros as well, and every other particle number. So the configuration space scales less severely for a given problem than the full qubit state space. Okay. And the matrix elements of this operator are just the typical second-quantized ones that you've seen before. Okay, so let's start talking about some quantum computing stuff. So, because the matrix scales combinatorially, we have to truncate it for real problems. And if you notice, we can actually form our basis using the second-quantized representation. We can form the excitations by starting from the Hartree-Fock state, which is the one, one, one, one, zero, zero, zero state, and then form an excited configuration on that by annihilating one of the occupied orbitals and creating a particle in one of the virtual orbitals, a_a dagger a_i, where i is occupied and a is virtual. So this is called an excitation operator; we're generating the basis here, and we call these T's the excitation operators. So we've got our single excitations and our double excitations. Okay.
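The occupation-number encoding described above can be sketched directly: the Hartree-Fock reference is a bitstring with the lowest orbitals filled, and an excitation is just a pair of bit flips. A minimal illustration (orbital indices are arbitrary):

```python
# Each qubit stores whether one spin orbital is occupied.
n_qubits, n_elec = 12, 6
hf_state = [1] * n_elec + [0] * (n_qubits - n_elec)   # |111111 000000>

def excite(state, occupied, virtual):
    """Single excitation: move one electron from `occupied` to `virtual`."""
    assert state[occupied] == 1 and state[virtual] == 0
    new = list(state)
    new[occupied], new[virtual] = 0, 1
    return new

singly = excite(hf_state, occupied=5, virtual=6)
assert sum(singly) == n_elec      # particle number is conserved
# The full 2**12 qubit space also contains every other particle number,
# which is why it is larger than the six-particle CI space.
```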
So this truncation is why you hear names like CISD, CI with singles and doubles. This is precisely because keeping all possible excitations would give a gigantic matrix we can't fit on our computer, so we truncate at the singles and doubles level to make it manageable. And then coupled cluster, very briefly: basically it takes this linear excitation operator and exponentiates it. By doing that, you get more of the wave function back for the same cost, because the expansion of the exponential generates the cross terms between excitations. So basically coupled cluster gives you more bang for your buck compared to the same T operator used linearly. Now, coupled cluster motivated the first quantum chemistry ansatz for quantum computing, and this is unitary coupled cluster. Basically, you take this T, which is non-unitary, and you exponentiate T minus its Hermitian conjugate, and this is now a unitary coupled cluster operator. But the problem is, when you try to solve this with classical quantum chemistry methods, the expansion is non-terminating. But there's a nice trick from quantum computing. So here's what these things look like: the t's are the cluster expansion coefficients, and the unitary form is just this. To get this to work on a quantum computer, we do a trick called Trotterization, which basically takes the exponentiated unitary operator and expands it to some Trotter degree rho. Typically rho equals one is fine, so we just take a single Trotter step, and then we get this, basically. What this is doing is breaking up each excitation in the expansion into its own exponentiated unitary operator, and then you apply them in series as a product. Now this has a form that can be implemented directly on a quantum computer. So psi naught, remember, is the Hartree-Fock wave function.
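The first-order Trotter step used here has a controllable error. A small numerical sketch: for non-commuting A and B, exp(t(A+B)) and exp(tA)exp(tB) differ at O(t^2), so halving the step roughly quarters the error. The matrices below are arbitrary anti-Hermitian stand-ins for the cluster terms, not real UCC operators:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

def anti_hermitian(n):
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return m - m.conj().T                 # like t (a^dagger a - h.c.) terms

A, B = anti_hermitian(4), anti_hermitian(4)

def trotter_error(t):
    exact = expm(t * (A + B))             # the non-terminating exact exponential
    trotter = expm(t * A) @ expm(t * B)   # one first-order Trotter step
    return np.linalg.norm(exact - trotter)

# O(t^2) error: halving t should cut the error by roughly a factor of 4.
assert trotter_error(0.05) < trotter_error(0.1) / 3
```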
This is the initialization state, the reference state if you like: the one, one, one, one, zero, zero, zero, et cetera, the lowest occupied configuration. We act on that with our unitary operator, this product of exponentials, and that's our unitary coupled cluster ansatz. Now, how do we get this onto a quantum computer? Oh, one thing: the operator ordering can be important here, and it can give a better answer. There was some work from Garnet Chan's group showing that you can actually get the exact wave function if you get the ordering right. All right, so how do we get this onto a computer? Here's a nice ion trap (I'm not biased, but they are the best). So, these T's are built from our fermionic creation and annihilation operators. Now, you may have heard of something called the Jordan-Wigner transform: we need to map these fermionic creation and annihilation operators to Pauli operators, which can then be implemented directly on a quantum computer. So you might have seen this before: this is the Jordan-Wigner mapping. For a fermionic operator we get a product of Z's, and the Z string keeps track of the antisymmetry. So if you're acting on orbital four, you have Z acting on qubits zero, one, two and three, and then the actual flip is done by this (X minus iY)/2 here, which is the creation operator; the annihilation operator is (X plus iY)/2. So, applying this to our excitation operators, we get some quite gnarly expressions, but you can see where the (X minus iY) and (X plus iY) factors come in, because we have pairs of creation and annihilation operators.
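The Jordan-Wigner Z strings can be checked directly in matrices: build a_p as Z ⊗ ... ⊗ Z ⊗ (X+iY)/2 ⊗ I ⊗ ... on a small register and verify the fermionic anticommutation relation {a_p, a_q†} = δ_pq. A numpy sketch for three qubits:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def kron_all(ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilate(p, n):
    # Z string on qubits before p keeps track of the antisymmetry.
    return kron_all([Z] * p + [(X + 1j * Y) / 2] + [I] * (n - p - 1))

n = 3
a = [annihilate(p, n) for p in range(n)]
for p in range(n):
    for q in range(n):
        anti = a[p] @ a[q].conj().T + a[q].conj().T @ a[p]
        expected = np.eye(2 ** n) if p == q else np.zeros((2 ** n, 2 ** n))
        assert np.allclose(anti, expected)   # {a_p, a_q^dagger} = delta_pq
```

Without the Z strings the p ≠ q anticommutators would not vanish: that is all the strings are for.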
But basically, for the singles you have the Z strings and this pair of Pauli terms, and for the doubles you have the Z strings and then a set of eight terms. So from the Jordan-Wigner transform of the excitation operators, a single gives you two Pauli words and a double gives you eight; the doubles scale much worse in this picture. And there are lots of packages that can generate these things: OpenFermion, Qiskit, and we have it in InQuanto. Now we have to exponentiate these: we've done the Jordan-Wigner transform, and we have to exponentiate these Pauli words. There's a really famous way to do this, and if you take one thing away from this lecture, this should be it, because the Pauli gadget is, in my opinion, the most powerful primitive in quantum circuits. It shows up everywhere once you know how to spot it, and it makes it really easy to build algorithms. So we've got this exponentiated set of Paulis from our Jordan-Wigner transform. It might not be obvious to you, but this circuit is a multi-qubit RZ rotation. I encourage you, in the break, to set the rotation to zero, send in a one, one, one, one state, and work through it; you'll see it computes the parity for a multi-qubit Pauli operation. If you then add a rotation in the Z basis here, you get this really powerful e to the minus i theta over two, Z Z Z. So now you've got an exponentiated, parameterized Pauli rotation. This is the main thing you need for Hamiltonian simulation and for unitary coupled cluster. And that was just Z Z Z; we can change that by adding a basis rotation on each side, those gates either side of the CNOTs, and then we get any Pauli gadget we want.
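The Pauli gadget claim is easy to verify in matrices on two qubits: a CNOT "ladder" wrapped around a single Rz reproduces exp(-i θ/2 Z⊗Z) exactly. A numpy sketch (first qubit is the control):

```python
import numpy as np

theta = 0.73
I = np.eye(2)
Z = np.diag([1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0],       # control = first qubit, target = second
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
Rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# Ladder in, rotate on the last qubit, ladder out.
gadget = CNOT @ np.kron(I, Rz) @ CNOT

# Direct exponential: Z x Z is diagonal with eigenvalues +/-1 (the parities).
ZZ = np.kron(Z, Z)
exact = np.diag(np.exp(-1j * theta / 2 * np.diag(ZZ)))
assert np.allclose(gadget, exact)
```

The CNOTs accumulate the parity onto the target qubit, the Rz applies the phase conditioned on that parity, and the second CNOT uncomputes it; longer ladders extend this to Z⊗Z⊗Z and beyond, and single-qubit basis changes on each wire turn it into any Pauli word.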
So this is really powerful. Now you can build any exponentiated Pauli word, and the coding exercise is basically building Hamiltonian simulation from this structure. Okay, so it's very powerful because now you have access to exponentiated Hamiltonians and cluster operators. Okay, so now let's talk about VQE. So, because, as you saw at the end, the quantum chemistry Hamiltonian has that quartic interaction term, it's a fully interacting problem, VQE is probably not, in my opinion, going to be very applicable to quantum chemistry at scale. But it's still going to be useful for state preparation and things like that, so it's worth learning about. So VQE is essentially this. You have the state preparation, the ansatz that I showed you before; this is your wave function, and it has some parameters so you can change the wave function. Then you have to measure the Hamiltonian, and you can do that by individual Pauli terms; I'll explain that in a second. So you prepare the state, you measure it, okay? And then you repeat for each Hamiltonian term. Okay, so U can be anything: it can be unitary coupled cluster, it can be hardware-efficient, it can be whatever ansatz you want. It just determines how expressible the circuit is, how accurately it can represent the ground state. But so far this isn't even VQE: this is just a single energy measurement for one set of parameters; we haven't changed anything yet. VQE is the process of updating the parameters. So if we look at the first point (these are quite old slides, I need to update them), you can see the circuits here.
So you can see these CNOT ladders, right? Each corresponds to one of the excitation operators we showed: you've got a set of fermionic creation and annihilation operators, then a set of exponentiated Pauli words, these Pauli gadgets, chained together, with a parameter on the rotation at the bottom of each CNOT ladder. That's a unitary coupled cluster ansatz. So by changing these parameters here, here, here, here, you change the energy, basically. People like the unitary coupled cluster ansatz because it has a physical interpretation: it represents excitations out of the Hartree-Fock state, so you actually get a picture of what the orbitals are doing. If you're just using a hardware-efficient ansatz, some pile of entanglers and rotations, it's very unphysical, okay? It's a bit like tensor network methods versus CI in the classical space: with a generic tensor network you lose the physical interpretation; it's just a model with a load of learnable parameters. Okay. So then, we've prepared our state with our ansatz, and we want to measure our operator. How do we measure it in this setting? You apply the same Jordan-Wigner transform. Our state was prepared from exponentiated fermionic operators, but the Hamiltonian also contains fermionic operators, so you have to Jordan-Wigner that too, into something you can apply on the computer. Some program will probably spit this out for you, but if you do it by hand for H2, you end up with something like this: 15 terms, with some mixing between them, and the weightings on each Pauli string are related to the electronic integrals. Related, not identical.
And the Paulis are likewise related to the fermionic operators, not identical to them; the labels just show what they correspond to. So basically now, to calculate the total energy of the system, you prepare your state, you measure the Pauli strings (in red) applied to these qubits, multiply each result by its weighting, the interaction coefficient, and then loop over all the terms in the Hamiltonian. In the chemistry setting the coefficient is a classically computed integral; in a Fermi-Hubbard model it would be the model parameters. But what I'm trying to say is that the only quantum part of this calculation is the Paulis; the coefficients stay outside and are handled in classical post-processing. So how do we measure a Pauli expectation value? This is a technique known as operator averaging. In your quantum computing toolkit you might just click "measure, calculate expectation value", but what's actually happening? Say you have a Pauli Z Z Z, for example, and you measure the three qubits in the Z basis. If you got the outcome zero, zero, one, that has parity one; zero, zero, zero has parity zero, et cetera. Now, the parity of each shot's outcome on the qubits corresponding to the Pauli gives the eigenvalue of the Pauli: parity zero gives plus one, parity one gives minus one. By measuring many times, you'll get a mixture of parities zero and one in the shot outcomes, which gives you a mixture of eigenvalues plus one and minus one. You average that over the number of shots and multiply by the weighting coefficient, and that gives you the contribution of that Pauli term to the energy. Okay, this is quite a subtle thing.
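The operator-averaging recipe just described fits in a few lines. A sketch with made-up shot counts (the bitstrings, coefficient, and counts are purely illustrative):

```python
# Eigenvalue of a Z-type Pauli word on a measured bitstring is
# (-1)^(parity of the measured qubits); average over shots.
def pauli_z_expectation(counts, qubits):
    shots = sum(counts.values())
    total = 0
    for bitstring, n in counts.items():
        parity = sum(int(bitstring[q]) for q in qubits) % 2
        total += (-1) ** parity * n
    return total / shots

counts = {"000": 600, "001": 400}         # hypothetical 1000 shots, 3 qubits
ev = pauli_z_expectation(counts, qubits=[2])   # measure Z on the last qubit
assert abs(ev - 0.2) < 1e-12              # (600*(+1) + 400*(-1)) / 1000

coefficient = -0.5                        # hypothetical Hamiltonian weighting
energy_term = coefficient * ev            # this term's contribution to the energy
```

Loop this over every Pauli term in the Hamiltonian and sum the weighted contributions: that is the full energy estimate.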
Yeah, so, and then VQE itself: this is from the classic paper, by Romero and colleagues, I believe. What it's showing is the VQE loop, basically. The Rayleigh-Ritz variational principle saves you again: you can just tweak the parameters in your state preparation, and if the energy goes down, you're safe, because the energy can never dip below the true ground state; you have a well-defined optimization problem. Basically, you prepare your state, you measure it and get the energy by the method I just showed, and then you change the parameters so that the total energy goes down, using a gradient optimizer or whatever optimizer you like. And the key point here is that you have this Pauli-by-Pauli measurement at each step, summing over every term in the Hamiltonian. So you just keep changing the parameters in your ansatz until your energy reaches a minimum. But obviously there's a large amount of choice in the ansatz, right? Your ansatz might not be expressible enough: one ansatz might reach a lower energy than another for the same operator. Or, in the other case, you might have the barren plateau problem, or local minima: one ansatz might get stuck in a local minimum from a particular initial set of parameters, which is bad. The reason I say this, and this is my opinion, is not that VQE won't work at all; it'll be useful, but calculating total energies is probably not the right use case, because of the number of measurements you need. We know that on quantum computers we have a finite budget of measurement cost, and these variational algorithms require a huge number of measurements: you're looping over every term in the Hamiltonian, and then changing the parameters, each time.
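The whole loop can be emulated classically in miniature. A toy sketch: a one-parameter real "ansatz" on a single qubit, a made-up 2x2 Hamiltonian, and a classical optimizer tweaking the parameter until the Rayleigh quotient stops going down (this stands in for the quantum measurement step; nothing here is a real device workflow):

```python
import numpy as np
from scipy.optimize import minimize

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                 # arbitrary 2x2 "Hamiltonian"

def energy(params):
    theta = params[0]
    # Ansatz |psi(theta)> = (cos(theta/2), sin(theta/2)): normalized by construction.
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return float(psi @ H @ psi)             # <psi|H|psi>, the measured energy

result = minimize(energy, x0=np.array([0.1]))
exact_ground = np.linalg.eigvalsh(H)[0]

# Rayleigh-Ritz: the variational energy never dips below the exact ground
# state, and for this fully expressible toy ansatz it actually reaches it.
assert result.fun >= exact_ground - 1e-8
assert abs(result.fun - exact_ground) < 1e-6
```

On hardware, each call to `energy` would cost a full sweep of shot measurements over every Hamiltonian term, which is exactly the measurement-budget worry above.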
There are lots of reviews now showing that with plain VQE it would take something like 10 million years to do anything useful for chemistry. So that brings me to quantum Krylov methods, which I think are a genuinely useful application of quantum computing for NISQ devices. We go back to exact diagonalization, where, remember, we were building this huge basis. Quantum Krylov methods are probably the nearest-term application of NISQ machines to chemistry, I would say. And they leverage the classical-quantum balance in a slightly different way, which I'll explain. With VQE, the classical cost lies in the gradient optimizer and the quantum cost lies in the shot measurements, the operator averaging for the energy; here you do something slightly different. So going back to exact diagonalization, or configuration interaction as it's called in chemistry: we have this exponentially scaling basis, and the matrix obviously scales exponentially. Strictly it's combinatorial, but it's within two to the N. So we have this exponentially scaling matrix, which is bad. What Krylov methods do, and it's a very general method, it's really cool, is that you again start with the Hartree-Fock reference, but then rather than expanding your basis via orbital rotations, you expand your basis by applying powers of a function of the Hamiltonian, up to K times. Typically the dimension K is much smaller than the dimension of the full matrix; these are typically of order 100, maybe less, whereas before the basis was scaling exponentially. So your wave function is now a linear combination of your reference state plus the first Krylov vector, which is the first power of the Hamiltonian function applied to the reference, et cetera. Okay.
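A classical sketch of building a Krylov basis from powers of the Hamiltonian applied to a reference state; the 4x4 Hamiltonian here is a random stand-in and the reference plays the role of the Hartree-Fock state:

```python
import numpy as np

def krylov_basis(H, reference, k):
    """Build the (unnormalized) Krylov vectors {H^j |ref>, j = 0..k-1}."""
    vecs = [reference]
    for _ in range(k - 1):
        vecs.append(H @ vecs[-1])
    return np.array(vecs)

# Hypothetical 4x4 Hermitian "Hamiltonian" and a Hartree-Fock-like reference.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2
ref = np.array([1.0, 0.0, 0.0, 0.0])

basis = krylov_basis(H, ref, 3)   # dimension 3, much smaller than the full space
print(basis.shape)  # (3, 4)
```

In realistic problems the full space is exponentially large and the Krylov dimension stays of order 100; the quantum computer's job is to supply the matrix elements in this small basis.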
And you still have a generalized eigenvalue problem, but it's much smaller. Okay. Now obviously the complexity is being hidden inside this function of H; we'll talk about that in a second. If you think about the Krylov generalized eigenvalue problem, the matrix elements H_ij are formed with the function of H to the power i acting on the reference to the left, and the function of H to the power j acting on the reference to the right, with H in between. So the ij matrix element is formed from f(H)^i and f(H)^j. The overlap matrix elements are formed in exactly the same way, except now there's no H in the middle: just the function of H to the i against the function of H to the j. So clearly, even though the basis is smaller and the eigenvalue problem is smaller, these matrix elements are much more complex. We can't just solve this by operator averaging; these functions require a bit more thought to implement. There are a number of different ways to do it, and three famous choices. The first is real time evolution: the function of H that you apply is the time evolution operator, and its powers just correspond to multiples of the time step. So f(H)^j, as a power of the time evolution operator, is just e^{iHjt}, okay? And real time evolution seems to be very popular because there are many proposals for doing time evolution efficiently on quantum computers, albeit in a more fault-tolerant setting. But you can implement time evolution quite cheaply if you use a small t and low powers j. It's obviously more expensive than VQE, but it's definitely cheaper than phase estimation, so yeah.
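The projected generalized eigenvalue problem described above can be emulated classically with f(H) = H and toy numbers; by the variational principle the Krylov ground-state energy upper-bounds the exact one:

```python
import numpy as np
from scipy.linalg import eigh

# Toy 6x6 full Hamiltonian and a reference state (made-up numbers).
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
Hfull = (A + A.T) / 2
ref = np.zeros(6); ref[0] = 1.0

# Krylov vectors v_j = H^j |ref>, j = 0..2, then project:
# H_ij = <v_i|H|v_j>,  S_ij = <v_i|v_j>.
V = np.array([np.linalg.matrix_power(Hfull, j) @ ref for j in range(3)])
Hsub = V @ Hfull @ V.T
S = V @ V.T

# Generalized eigenvalue problem Hsub c = E S c, in the small basis.
evals = eigh(Hsub, S, eigvals_only=True)
exact = np.linalg.eigvalsh(Hfull)
print(evals[0], exact[0])  # Krylov ground energy upper-bounds the exact one
```

On a quantum device the entries of `Hsub` and `S` are exactly the quantities that the circuit has to estimate; the small eigenvalue problem itself is cheap classical post-processing.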
And if anyone's confused about what a function of a matrix is: it can be defined through the singular value decomposition, with the function acting on the singular values. If you think of the SVD as a rotation, a scaling, and a rotation, you just act on the scaling part and the rotations are left untouched. Another famous Krylov function of H is imaginary time evolution, which is basically the same operator with the i missing. This will propagate all the states; you get a linear combination of basis states in the original space. Imaginary time evolution, if you're not familiar with it (and I've done a lot of work in both of these areas), is essentially a way to propagate to the ground state. It's a kind of quantum way of doing optimization. If you propagate in imaginary time far enough, you end up with only the ground state. It's a non-unitary evolution, because if you think of your initial state as a linear combination of all possible eigenvectors, the ground state is just one eigenvector. You're killing all the other eigenvectors apart from one; that's not a rotation, it's a projection. So it's difficult to implement on a quantum computer, but it can be done, and I put a paper up on this just yesterday. And finally, the sexy topic in quantum computing at the moment: Chebyshev polynomials. The way I think about them is via the double angle formula that you learn at A level, or whatever high school diploma you did: apply it recursively, n times, and you get the Chebyshev polynomials, basically. And it has recently been shown that these can be implemented by Grover-like reflections applied recursively, iteratively; I know Calum may have given you a short talk on that.
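A two-level illustration of the imaginary-time projection just described; the propagator e^{-H tau} is non-unitary, so the state has to be renormalized by hand, and for large tau only the ground-state component survives. The Hamiltonian is a made-up 2x2 example:

```python
import numpy as np
from scipy.linalg import expm

# Fixed toy Hamiltonian with eigenvalues (1 +/- sqrt(2))/2.
H = np.array([[0.0, 0.5],
              [0.5, 1.0]])

psi = np.array([1.0, 0.0])          # initial state with nonzero ground overlap
tau = 10.0
psi_tau = expm(-tau * H) @ psi      # non-unitary: e^{-H*tau} damps excited states
psi_tau /= np.linalg.norm(psi_tau)  # renormalize (a projection, not a rotation)

evals, evecs = np.linalg.eigh(H)
overlap = abs(evecs[:, 0] @ psi_tau)  # approaches 1 as tau grows
print(overlap)
```

The rate of convergence is set by the spectral gap, which is why imaginary time works as a ground-state optimizer.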
Finding and implementing these functions on quantum computers is a really open research area; I'll talk about it in a minute. Okay. You can take this further and make the whole thing unitary: you can even change the eigenvalue problem itself to be unitary. So you take real time evolution, and in the eigenvalue problem you replace the Hamiltonian operator with a unitary, rather than the Hamiltonian itself. You still have a generalized eigenvalue problem, just a unitary one rather than the original. The eigenvalues, the lambda_n's here, are the solutions for e^{iHt}, and they're related to the eigenvalues of the original Hamiltonian by this equation. So what you do is this: we're working in the time evolution Krylov space, using time evolution as the Krylov function, but we also use the time evolution operator in place of the Hamiltonian. What was H before has now been replaced with e^{iHt}. And what's really cool is that, because everything is an exponential, the powers just add together. You end up with this much smaller and simpler object here, which is really powerful: you reduce the complexity of your problem massively, because the overlap and Hamiltonian matrix elements are calculated from the same set of objects. I think you go from a quadratic to a linear scaling in the number of distinct matrix elements this way. So this is really cool. And this can be implemented; that's what I'm trying to say here.
The overlap and the unitary Hamiltonian elements, whatever you want to call them, come from the same set, and they're generated from this small set here, okay. These are just transition matrix elements: you take your reference state, act with a power of the time evolution operator to turn it into another state, and then take the overlap. Now these can be implemented quite easily on quantum computers, and cheaply-ish, medium-term cheaply, with the Hadamard test. In the Hadamard test you take the time evolution operator, control it, and run the test; by doing that for different values of k you can generate this whole object. This is a complex value, so we need the real and imaginary parts, and that's quite easy to do: to get the imaginary part you run the same Hadamard test but with an extra S gate on the ancilla. Then you can generate the energy. This is calculated as E0 minus E1 here, so you can get a difference; this object is quite straightforward to calculate. There's a nice proof, which I can show you, that this is an expectation value; you just have to combine the shots in the correct way. There's some post-processing of the shots, but it's straightforward. (I think it's E1 minus E2 on the slide; don't quote me on the indexing.) Okay, so now you can see what I mean by leveraging the quantum-classical balance in a different way: the quantum cost is now thrown into this small set of Hadamard tests, where you have to implement this controlled time evolution operator. So there's a lot of complexity hidden inside that box.
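A statevector sketch of the Hadamard test just described, for a single-qubit phase unitary; the extra phase gate on the ancilla (an S-dagger here, by one common convention) switches the output from the real to the imaginary part of the expectation value:

```python
import numpy as np

def hadamard_test(U, psi, imaginary=False):
    """Exact ancilla <Z> for the Hadamard test on unitary U and state psi.

    P(0) - P(1) equals Re<psi|U|psi>, or Im<psi|U|psi> with the extra
    S-dagger phase on the ancilla.
    """
    n = len(psi)
    # Ancilla |0>, apply H: (|0> + |1>)/sqrt(2) tensor |psi>.
    state = np.concatenate([psi, psi]).astype(complex) / np.sqrt(2)
    if imaginary:
        state[n:] *= -1j             # S-dagger on the ancilla's |1> branch
    state[n:] = U @ state[n:]        # controlled-U acts only on the |1> branch
    # Final H on the ancilla, then <Z> = P(0) - P(1).
    top = (state[:n] + state[n:]) / np.sqrt(2)
    bot = (state[:n] - state[n:]) / np.sqrt(2)
    return np.vdot(top, top).real - np.vdot(bot, bot).real

# Check against the known answer for a pure phase unitary U = e^{i*theta}.
theta = 0.7
U = np.array([[np.exp(1j * theta)]])
psi = np.array([1.0 + 0j])
print(hadamard_test(U, psi), np.cos(theta))
print(hadamard_test(U, psi, imaginary=True), np.sin(theta))
```

On hardware `U` would be the controlled time evolution operator, and the two expectation values would be estimated from shot parities rather than computed exactly.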
Trotterization is the simplest way; I'll speak about that later, in the fourth lecture, but there's a whole field looking at different ways to improve the time evolution operator. Time evolution is really a subroutine for many algorithms, so learning how to do time evolution is crucial, I think, if you want to work in quantum computing. Okay, so the balance: the classical part is really this matrix problem that you're offloading everything to, but that's quite small now, and the quantum cost lies more heavily than in VQE, in these controlled time evolution operators. I've also got a paper out on this as well, if people are interested, under the name of variational phase estimation with variational fast-forwarding, where we try to do approximate compilation of these objects to solve the same problem. Okay, so we've got 15 minutes for phase estimation, which is not enough time to do it justice. So now we're moving on from the hybrid methods, where there's a balance between the classical part and the quantum part, into the fully quantum algorithms, and the famous one of these is called quantum phase estimation, okay. You may have read about this; Nielsen and Chuang is really good, and we can use it in quantum chemistry to great advantage. So what is quantum phase estimation? Whenever we have a unitary acting on an eigenstate of that unitary, a phase is generated by that operation: we get the eigenphase out. Now, we can use that property in quantum chemistry. (That should be a minus sign there, sorry.) If we have an eigenstate of the Hamiltonian and we apply e^{iHt} to it, we get this object, which is a phase, and the phase contains E_j times t, okay.
So the idea of phase estimation is essentially: can we exploit this and extract the phase? Can we implement this and read the eigenphase out? And again, this goes back to what I said before: phase estimation for quantum chemistry uses time evolution as its main primitive. There are a lot of papers on phase estimation that study the complexity of estimating energy values when you change the method of Hamiltonian simulation. A lot of the Google papers like to use qubitization and the Chebyshev polynomial framework, plus trotterization, LCU, et cetera; I'll speak about all of these next session. Okay, so can we get the argument of the phase out? What's actually happening when we do phase estimation? As a physics person, well, a quantum chemistry person I should say, I like to think about physical problems rather than taking a computer science approach. If we do have an eigenstate of the system and we compute this transition matrix element as a function of time (time runs along here, and these are the real and imaginary parts), you get this perfect spiral, where the phase just propagates in time, and it's quite nice. Essentially the phase is related to the distance between the turns of the spiral, which is really cool. Now, if you don't have an eigenstate but some linear combination of eigenstates, which is what we have in practice most of the time (because if we could prepare the ground state exactly, we'd have solved the problem already), things change; this is one of the issues that has caused a lot of debate about whether phase estimation is useful for chemistry. You can see here that with a linear combination of eigenstates, we get weighted contributions from the competing phases of the different eigenstates.
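The spiral picture can be reproduced numerically: the autocorrelation function <psi| e^{-iHt} |psi> has unit modulus for an eigenstate (a perfect spiral in the real-imaginary plane), but beats between competing phases for a superposition. The Hamiltonian and weights below are made up:

```python
import numpy as np

H = np.diag([0.3, 1.1])                 # toy diagonal Hamiltonian (eigenvalues)
times = np.linspace(0.0, 20.0, 200)

def autocorrelation(psi, t):
    # <psi| e^{-iHt} |psi> for a diagonal H.
    return np.sum(np.abs(psi) ** 2 * np.exp(-1j * np.diag(H) * t))

eigstate = np.array([1.0, 0.0])
mixed = np.array([np.sqrt(0.7), np.sqrt(0.3)])

spiral = np.array([autocorrelation(eigstate, t) for t in times])
messy = np.array([autocorrelation(mixed, t) for t in times])
print(np.allclose(np.abs(spiral), 1.0))  # eigenstate: pure phase, |g(t)| = 1
print(np.abs(messy).min())               # superposition: the modulus beats below 1
```

Plotting the real against the imaginary part of `spiral` gives the clean spiral from the slides; `messy` gives the tangled one, with the weights of the eigenstates setting the beat depth.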
And that results in this messy spiral, where you've got all these different phases coming up, weighted by each eigenstate. But this is how I think about phase estimation; it's a really nice physical picture. Okay, so the canonical form of phase estimation, the old-school way that you'll see in Nielsen and Chuang: to really understand the motivation for it, you have to understand the quantum Fourier transform. So I will do my best to explain this. We know the Fourier expansion: any function can be expressed as an infinite weighted sum of cosines and sines, and you can see this here. Obviously on computers we don't have infinite matrices; we have finite-dimensional matrices and finite-dimensional vectors, so we truncate to some realistic size, and you get an approximation. This underpins most of modern signal processing; lots of people say the fast Fourier transform is the most important algorithm of the past decades. Now, you can take this sine-cosine sum (I'm going quite quickly here) and rewrite it as a sum of complex exponentials, where the exponentials have a specific power relation. So you take a vector y: y can be formed by acting with these omega^{nk} factors, where the omegas are complex exponentials with indices n and k, divided by N, okay. It seems like magic that this works, but it does. And I like to think about these things in matrix form: you can think of the discrete Fourier transform as a linear map, a basis transformation.
So you're just rotating the input basis into this new basis, where nothing happens to the first component; the second is multiplied by powers of omega: one, then omega, then omega squared, et cetera. And you get these powers increasing as you go down: omega to the one, to the two, to the three, and so on, okay. The way I think about these problems is as a matrix-vector product, where this linear map is a unitary matrix, and we know unitaries can be implemented on quantum computers. So the game of the quantum Fourier transform is: implement the discrete Fourier transform unitary on a quantum computer. Now, if we take a simple three-qubit example, the classical discrete Fourier transform would be eight-dimensional. You can see here that the first row goes up in powers of one and the second row in powers of two. And notice that when you get to omega to the eighth, you loop back around; you have this equation here, and you always wrap around in the Fourier transform. I say this is a three-qubit thing because it's a two-to-the-three by two-to-the-three unitary matrix. Okay, so now, to really understand the quantum Fourier transform, you have to understand binary notation and how you get integer values from binary strings. This is something I struggled with a lot, because computer scientists are quite native with this kind of stuff, but I was certainly not.
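The three-qubit (eight-dimensional) discrete Fourier transform matrix can be built directly, checking both the wrap-around of the omega powers and the unitarity that makes it a candidate for a quantum circuit:

```python
import numpy as np

N = 8                                   # three qubits: 2**3 amplitudes
omega = np.exp(2j * np.pi / N)
F = np.array([[omega ** (n * k) for k in range(N)]
              for n in range(N)]) / np.sqrt(N)

# Powers wrap around: omega**8 equals omega**0, i.e. 1.
print(np.isclose(omega ** N, 1.0))
# The DFT matrix is unitary, so in principle it can run on a quantum computer.
print(np.allclose(F.conj().T @ F, np.eye(N)))
```

Row n of `F` is exactly the "powers of omega to the n" pattern from the slide, with the wrap-around keeping every entry on the unit circle.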
So, in typical binary notation, x_k represents the bit at position k, and two to the k is the place value of that position, okay. So position two has place value four, position one has place value two, and so on; you see what I mean. We get the integer value by summing our x's times our two-to-the-k's. So this is six: four times one, plus two times one, plus one times zero, okay. Now, the quantum Fourier transform uses fixed-point binary fraction notation. It's the same idea, but we shift the point and work with fractions. It's a bit confusing, but now the positions one, two, three, four sit to the right of the point, and we multiply by negative powers of two: two to the minus k for position k. So you can see here the place values are one half, one quarter, one eighth, one sixteenth, and we multiply each by its bit. One times a half is a half, one times a quarter is a quarter, one eighth times zero is zero, and one sixteenth times one is one sixteenth. We add these together, and that gives us our binary fraction, which represents our decimal. The more bits you have, the higher the resolution you get. Whenever there's a bracketed point in my notes, that's a binary fraction, so be really careful with that; this really did confuse me a lot. I'll dwell on this for a second, because it's really central to the quantum Fourier transform. Okay, the next part is quite heavy, so try to follow the derivation or just get the main idea. The main idea is that you map between two states, and the mapping is given by this equation.
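The two notations can be written out in a few lines; the second example reproduces the half plus quarter plus one-sixteenth sum from above:

```python
def integer_value(bits):
    """Standard binary: bits (x_{n-1}, ..., x_0), bit value 2**k at position k."""
    return sum(b * 2 ** k for k, b in enumerate(reversed(bits)))

def binary_fraction(bits):
    """Fixed-point fraction: (b1, b2, ...) read as 0.b1b2... = sum b_k * 2**-k."""
    return sum(b * 2 ** -(k + 1) for k, b in enumerate(bits))

print(integer_value((1, 1, 0)))       # 4 + 2 + 0 = 6
print(binary_fraction((1, 1, 0, 1)))  # 1/2 + 1/4 + 0 + 1/16 = 0.8125
```

More bits in the fraction means finer resolution, which is exactly the role of extra ancilla qubits in phase estimation.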
So remember that in the three-qubit example we had this exponent, j times k over N, where N in the three-qubit case is two to the power of three. These exponential powers that we have here, in the eight-dimensional three-qubit case with N equals eight (I've changed the indexing from before, I apologize), are basically over N, written in qubit form, meaning in powers of two: two to the power of three, and the normalization is in powers of two as well. When I say qubit form, I just mean powers of two, okay? Now, k over N is a fraction, and we can express it using the fixed-point binary fractions that I showed before: one eighth, two eighths, three eighths, et cetera. Take this binary fraction form over the bits k_l, each of which is one or zero, and represent the integer as a bit string. Now we've got a sum inside the exponential, one term for each k_l, so it's quite natural to break the exponential up into its constituent tensor product, which is what we do now. Okay, don't worry about following every step on these slides; the main idea is that you break up the tensor product, you use the binary fraction notation to decompose the coefficient, you then break it into a tensor product for each bit, and then you use the very useful fact that when a bit is zero you have an exponential of zero, which is always one. So all the terms on the zeros vanish and you just keep the terms on the ones, and the result is a product state, in quantum information terms. There's actually been a lot of work suggesting, or proving, that the quantum Fourier transform is classically simulable, which is well known: it doesn't generate strongly entangled states. Okay.
So basically what you do now, for all these binary fractions of j, is take out the whole number part, since only the fractional part matters in the exponential. The halves can be represented by the simplest one-bit binary fraction, the quarters by a two-bit fraction, and the eighths by a three-bit fraction, like this. I encourage you to work through it. Okay. So then you end up with this binary fraction mapping: e to the binary fraction. Yeah, and I'm rushing through this, but when you expand the binary fraction, which is a sum, into its exponential terms, you can see where the quantum Fourier transform circuit comes from: this factor is just a Hadamard, and these are the controlled phase rotations with two squared and two cubed in the denominators. And you can see it acts qubit by qubit. Again, I encourage you to work through this. Basically, you map from the bit string state into the Fourier basis state, from the non-Fourier coefficients into the Fourier coefficients. And the quite subtle step in Nielsen and Chuang is this mapping back and forth between the integer labelling and the bit labelling, which is very confusing. Okay, but the generalization is this, and you can see you get higher and higher resolution fractions. But the real light-bulb moment is this: if you have an input state in this form for a given phase phi, because this is the Fourier basis, the inverse quantum Fourier transform will map it to a bit string, so you can read out the binary fraction of the phase from the input Fourier-basis state.
The problem is getting the system into the Fourier basis so the phase can be read out. That's the game of quantum phase estimation: how do we get the phase into the structure that the quantum Fourier transform can read out? And that's what phase kickback is really for. You may have heard of this property called phase kickback: when we have a controlled unitary acting on an eigenstate, the unitary is applied on the controlled branch, but because the target is an eigenstate, only a phase is generated and the state stays a product. It's almost as if the eigenstate wasn't even touched; the phase is just kicked back onto the ancilla. This is how we prepare the input state of the quantum Fourier transform in the form we want. This is the game we play to get the input state of the inverse quantum Fourier transform, I should say, so we can read the phase out of the Fourier basis and into the bit string basis. And since, as we noticed, there are powers of the phase in that input state, we generate them by applying successive powers of the controlled unitary. So let's look at this example: we apply a Hadamard to each ancilla, then we apply these powers of controlled unitaries, and we end up with powers of controlled phases. And remember, this is an eigenstate here. So now we've got this Fourier-basis state; we map out of the Fourier basis to get the phase, we read the phase out from that, and the phase can then be extracted via this binary fraction equation here. So how do we apply that to eigenvalues? The idea is the same, but our controlled unitary is now the controlled time evolution operator, and the phase that we're trying to calculate is E_j times t. Okay.
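A small numerical check of phase kickback on a one-dimensional "system" register; the eigenphase value is arbitrary:

```python
import numpy as np

# Phase kickback: a controlled-U on an eigenstate |u> of U leaves |u>
# untouched and kicks the eigenphase back onto the control (ancilla) qubit.
phi = 0.25                                    # eigenphase: U|u> = e^{2*pi*i*phi}|u>
U = np.array([[np.exp(2j * np.pi * phi)]])    # 1-dim "system" for simplicity
u = np.array([1.0 + 0j])

control = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # H|0> on the ancilla
CU = np.block([[np.eye(1), np.zeros((1, 1))],                # controlled-U
               [np.zeros((1, 1)), U]])
after = CU @ np.kron(control, u)

# The amplitudes are unchanged (the system register is untouched) ...
print(np.allclose(np.abs(after), np.abs(np.kron(control, u))))
# ... and the relative phase on the ancilla encodes phi.
print(np.angle(after[1] / after[0]) / (2 * np.pi))
```

Repeating this with U raised to successive powers on successive ancillas is exactly how the Fourier-basis input state of QPE is assembled.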
And we're trying to extract this in the same way with the inverse quantum Fourier transform, via successive powers of controlled time evolution operators. This is, again, an example of why time evolution is so important, and why trotterization and these techniques keep coming up; it's really useful. So again we have this example: we apply these successive powers of controlled time evolution, we then get this phase out, now with the t in it. And we can extract it this way; because the phase is E times t, we can then extract the energy from it. Okay, so the problem with canonical phase estimation is that it's very expensive. You saw that you need these controlled time evolution operators, and the time evolution operators themselves are really expensive things; there's lots of work trying to reduce that cost. And the main point is this: to define the algorithm you'd like the eigenstate, but you don't actually need it, as long as you have a significant overlap, I think over a half, because successive applications of the algorithm will then boost the signal of the correct phase. Yeah, so as I said, controlled time evolution is really important, and there are lots of different ways of doing it, as I explained. One of the other problems is that you obviously need lots of ancillas; those are the ancillas there. And you probably, I mean almost definitely, will need fault-tolerant compilation and error correction for these algorithms to work. Okay, I'm going to stop there. Any questions? So, as you mentioned earlier, VQE doesn't have any useful applications for near-term NISQ computers, right?
So at which points do Krylov methods, or quantum phase estimation, become more advantageous, in terms of space or time complexity? For the near term, quantum Krylov methods are the best, I think. But again, it's how you compile these functions of H. I think the simplest trotterized time evolution operator combined with Krylov is a simple way to go; it's probably the best near-term circuit primitive, the one that will be the most useful. So: a controlled Hadamard test with a trotterized time evolution operator. Does that answer your question? The second part: at which point does quantum phase estimation show an advantage? As you mentioned, QPE has practical applications, right? Which part of the problem encoding enables that? I mean, there are lots of studies in complexity theory of the precision needed in phase estimation. It's quite hard to compare it to Krylov methods because they're not the same problem. I would say that if you have enough ancillas, phase estimation will win every time, and you can get really precise. In the nearer term, I think Krylov methods are probably slightly better. But phase estimation is a long way off, I think. Okay. I had another question as well. Garnet Chan works on tensor network states for classical simulations of quantum chemistry, right? So is there a quantum equivalent, ansätze like tensor network states that work well? Yeah, great question. There are two ways to approach this. The first is that all quantum circuits are tensor networks: if you've got a quantum circuit, you've got a tensor network, okay?
So that's always true. But there has been work with my colleagues, Michael and others, on quantum tensor networks, which is a slightly different approach. As I'm sure you're aware, if you know about tensor networks like matrix product states and MERA, the bond dimension is the limiting factor: the dimension of the matrix multiplication between the tensors needs to be truncated to a point where it will fit on a classical computer, and that has sometimes been shown to scale exponentially for certain problems. So quantum tensor networks are a way of evading the bond dimension limit by putting it on the quantum computer. If you read my colleagues' work (I think the quantum MERA approach is very recent, actually), it's a way of leveraging the exponential scaling of qubits combined with classical tensor networks, which is a neat method. Thank you so much. Thank you very much for the wonderful lecture. I'm not sure if this is basic knowledge, but is there a smart way of getting the ansatz, or is it just a heuristic, trial-and-error approach? So there have been some papers which show that in the exact limit you can get an ansatz that reaches the exact ground state, for example the famous symmetry-preserving ansatz paper, but I think you need exponentially many parameters for that to be sure. I personally think that the internal group symmetries give a good argument for why you can reduce the parameters. The internal symmetries of the wave function often mean that parts of it don't need to talk to each other, because they're in different irreducible representations. That's often represented in matrix language by block diagonalization, where you have lots of zeros.
So I think maybe you could build an ansatz using total spin, for example, to take advantage of that. I don't know if you're familiar with the genealogical coupling of spin eigenfunctions, but you can couple spin eigenfunctions in a kind of tree network, and I always thought it would be cool to map a quantum circuit ansatz to that. But then, if you just want the heuristic route, there's the variational compilation approach, where you successively add gates. There's the ADAPT ansatz, and I've worked on this as well: you make the cost function the overlap with some state that you want, and you keep adding gates until you get closer under that cost function, but that's very heuristic. Making an ansatz is a very difficult problem. It's the same thing in tensor networks, in fact: you're just throwing an MPS at a problem which doesn't necessarily have that inherent structure. Okay, thank you. Thank you for the talk. I wanted to ask first a technical question about the Hadamard test: there was a W operation, and I just wanted to ask which operation that is. That's a phase gate. It gives you the imaginary part of the expectation value. Okay, and this one, yeah. Okay, perfect. And also on the same slide, it says the wave function scales linearly with qubit number. I was wondering if this means an advantage with respect to classical numerical methods? Yeah, so this is probably badly worded, but what I'm trying to say is that you can represent an exponential number of basis states with a linear number of qubits, because you have two to the N possible zero-one combinations, whereas classically you have to store the state explicitly as a vector. So you have to store an exponentially scaling vector: a linear-sized object to store rather than an exponentially scaling one.
But there are a lot of other problems with storing the quantum state; it's not as simple as this one-to-one picture. Okay, and a last question about phase estimation. Since phase estimation gives a bit representation that approximates the real eigenvalue, I was wondering if it's possible to rescale the operator that we're applying so as to get integer eigenvalues, and so a more precise bit representation of the eigenvalue. Like changing the time step in the phase, right? That's a very good idea. I'm not sure whether that would apply to the canonical form, because that algorithm is kind of a recipe that you can't really touch. But I actually have some slides on some modern approaches to phase estimation, which I'll talk about next, that use a similar argument: you use different time step lengths, which can still recover the phase. It's not quite a rescaling, but it's a different way to get the phase. Okay, thank you so much. Hello, Christian. Can we go back to the phase gadget? Yeah, my favorite primitive. You suggested that people put in a bunch of gates and analyze what happens. Shouldn't the students instead use the powerful and convenient stabilizer formalism to analyze these circuits? I like to think about everything in terms of states, because I like wave functions flying around. Yeah. But if anyone's interested, around the break I'll show people how to decompose this circuit with the stabilizer formalism and show that it implements a phase polynomial. Okay, let's thank the speaker again. So, after the session, the lab will be given at the Adriatico Guesthouse, from 2 PM. Yeah. So, yeah, I'll try.