It's a real pleasure to be here. For those of you expecting 23 slides of triangle inequalities, I hate to disappoint you: this presentation is going to be a little different. Okay, cool. Excellent. I'm glad I don't have to tell my jokes twice. So basically, the question for this talk is one that, for me, has historically been a really embarrassing question that you get asked all the time: deep down, why are we building a quantum computer? What real, practical, valuable problem do we hope to solve using one of these devices? Now, I've always said quantum simulation, clearly; that is the application that will provide the most value in the shortest time. But what sort of quantum computer are we going to need to capitalize on this vision? How many qubits are we going to need? Are we going to need error correction or not? And how do I know we're looking at a problem that is both important and hard to simulate using classical devices? That really is the aim of this work, and it should be seen as complementary, I feel, to Andrew's talk earlier. What we're doing here is looking at a much harder problem, one where, to the best of our knowledge, the best classical methods will struggle. And so what I hope you'll take away from this is an idea of what the challenge is right now for quantum computing, and how we should improve things, not just at the hardware level but also at the algorithmic level, so that we can capitalize on this vision of building a quantum computer that can solve the world's hard problems in chemistry. But first we have to solve the world's hard problems in pointers. Excellent, that one was easier than nitrogen fixation. The basic idea behind quantum simulation is really simple. We want to take hard problems, problems involving photosynthesis, high-temperature superconductors, or the one I'm going to be talking about, nitrogen fixation, and map them directly onto the quantum computer. The idea behind it is really primitive in some ways: you want to lie to the quantum computer, to trick it into thinking that it's the molecule you want to simulate. Then, because of universality, you can ask any question that you could ask of the molecule and perform that logical experiment on the device. At a high level, that's basically what we're looking at. So the question for this talk is: what application do we want to look at? The one we chose is biological nitrogen fixation. Now, why this topic? Fertilizer doesn't exactly sound like the sexiest thing in the world, especially once you see what most of it's made out of. But the point is that fertilizer production, and in particular ammonia production, is one of the most costly things on the planet, energetically speaking. It uses up a sizable fraction of our energy budget every year just carrying out this process known as the Haber process. The Haber process was invented by Fritz Haber around the turn of the century, and it is arguably one of the most important industrial chemistry processes; it single-handedly kept Germany in the First World War, and, as I said, it is responsible to some extent for almost everything we eat. Given all this time, you would assume this process has been optimized to the point where we have wonderful catalysts that let us run it, ideally at low temperatures and pressures. But the truth is we don't, and that's one of the reasons it uses up so much energy: to make this reaction go forward in an industrially scalable way, we need high temperatures and pressures, and those cost energy. So given all of this you might think, hey, maybe this is just a fundamental limit in nature, maybe we can't beat it. But compared to the true masters of fertilizer production, we know nothing, and the true masters are bacteria. Bacteria have actually found a way to take nitrogen from the air, split the triple bond that holds it together, and make ammonia out of it at room temperature and pressure. If we could find something like this that we could execute on an industrial scale, it would fundamentally change the world. The problem is what happens when we take a look at the molecule that's responsible for it. This is the active site.
It's a little poetic to say this, but this is sort of the molecular knife that slices the triple bond holding the nitrogen together. In order to understand how this process works, and to build a synthetic catalyst that we could actually use in a scalable way, one thing we'd like to be able to do is simulate this active site here. The problem is that we can't. To give you an idea of how hard this is, even experimentally, until very recently we didn't know what X is. For those of you wondering, I'm glad I can allay this concern: X is not a new element, X is simply an unknown element in this case, and until very recently we didn't know that it was carbon. That's how unknown this was, and how recent the developments are that let us even begin thinking about this. So the point is that despite the fact that bacteria use this knife, essentially, to cut the bond, our best classical methods really struggle with simulating it. One of the reasons they struggle is that if you take a look at this thing, it's chock full of heavy metals: molybdenum, iron, things with d electrons. As a rule of thumb, when you see things like that, a lot of classical methods begin to fail, because you get very correlated electrons as they move around these heavy positive charges. Classical methods tend to fail for this, and that's why it's a good target for a quantum computer: it's important and it's hard. That's why we look at this particular example; it's a fantastic case study of something we would one day want to simulate in order to probe reaction dynamics for an industrial process. But again, the main question is what type of quantum computer we're going to need for it. Is a small quantum computer of about a hundred and eleven qubits going to be sufficient, or are we going to need a bigger one?
And if we want to do this at a sensible speed, how fast is it going to be? To me this isn't just an academic question, because giving these kinds of numbers right now is, I feel, an inspirational thing: it lets the experimentalists know what we're looking at for the hard problems that we'll want to see solved down the line, so that hopefully we can meet in the middle with the hardware required to do this. To give you an idea of what this actually ends up looking like, this is the whole protein that's responsible for what we're looking at. Something I assumed before I started looking at all of this was that with a quantum computer, what you'd naturally do is take this entire mess out here and encode it in qubits, because that's the most direct way of doing it. But it turns out that isn't the information that chemists need. Hybrid methods can be used, where portions of the problem are solved classically and other parts are solved quantum mechanically. The idea, basically, is that this entire protein sheath around the outside of the active site, almost all of it, can be simulated classically very easily. The only part we actually have to simulate quantum mechanically is this portion, which you can see zoomed in from that part of the protein over there. And so that's our goal.
We wish to simulate this part of the molecule and combine that with a classical calculation, to give us all the information we need to figure out reaction rates. Just to give you an idea, this is in some sense a strongly hybrid algorithm, because classical and quantum computation both have to be going on for us to realistically estimate these reaction rates. We have to start by generating particular structures, optimizing them, and computing the Hamiltonian terms, those N to the fourth terms, or N squared depending on how you do it; I guess the N squared terms are a little easier, but I digress. We generate these terms and feed them to the quantum computer; the quantum computer is the only thing that's used over here at the quantum level. We figure out energies from this, and those energies are passed back to the classical algorithm. They're combined with the classical part of the energy that's being computed, and also with entropic calculations, because in order to figure out the reaction rates we need free energy differences, and free energies have an entropic part and an energy part. The entropy is easy to get; the energy is hard to get. So that's what we're doing here, and yes, cool. Just to give you an idea again, Ryan did a great job of explaining this, but I'll recap it to jog the memory a bit. The idea behind what we want to do, in order to simulate the quantum part of it, is to begin by taking some physical molecule over here and map it onto a quantum computer. The way we do this is to have a mathematical model standing in the way. The mathematical model that's going to be used, at least in my talk, for both of the methods I'll be talking about, is a second-quantized representation. We use that, and we just look it up in a book.
Unfortunately, I guess it's a bad example, because they don't really talk about second quantization here, but I like the cover of this book better, so that's how it made it in. Anyway, that's the basic idea: we take the physical system, we mathematically model it, and then we simulate it on a quantum computer. So how do these second-quantized models work? Well, we break up our space into a series of basis functions. These can be chosen in a number of different ways, and actually a lot of the art of chemistry goes into selecting what basis functions are going to be needed for a calculation. From my perspective it seems almost magical when you watch real pros do this, because they have a lot of insight and domain expertise that they use to choose the right functions. These ones over here are just hydrogenic orbitals, so at least they're easy to understand, but in principle you can pick anything you want for this problem. How it works with second quantization is just like you remember from your undergraduate chemistry courses: we have each of these individual orbitals, and each can take two electrons, one spin up and one spin down. So we model this by occupation. If this orbital can have two different configurations in it, we can store that as a pair of qubits: the first qubit stores whether there's a spin-up electron, the second stores whether there's a spin-down electron. In this particular case we've got a full orbital, so the qubit representation would be one-one; this one over here would be zero-one, and these over there would be zero-zero. That's it. That's entirely how the encoding works, and the real art comes in choosing these orbitals in a smart way, both to make the Hamiltonian simple and to make the number of orbitals, i.e. qubits, that we need to represent the problem low. All right, so that's basically the idea. The simulation typically goes as follows. You first begin by preparing the qubits in some sort of state. In our case, to figure out these energy differences, we need to figure out what, essentially, the ground state energy is in different configurations. It's not always ground states; sometimes you're interested in excited states, but for this problem we focus on ground states. So the first thing you do is prepare the system in an approximation to the ground state. Common choices are to begin with a mean-field approximation, or to use adiabatic state preparation; there are a bunch of different options, but you have to start with something that approximates the ground state. Then you compile the dynamics. This is where you do a dynamical evolution, very similar to what Andrew was talking about earlier, and at that point you can use things like Trotter-Suzuki formulas, which is the method I'll be exclusively talking about in this talk. Afterwards we need to estimate the energy, so we use quantum phase estimation to do that. These things together form the cycle we use to solve basically all of these problems. To give you an idea of how this works, you could look at a Hamiltonian very similar to the one Ryan put up. This Hamiltonian is a direct consequence of the Coulomb interaction: when you write out the Coulomb interaction for a general orbital basis, unless there are symmetries like the kind we were seeing with a plane-wave basis, you end up with something that looks like this, with N to the fourth terms in it. These operators over here represent hopping. This one says you annihilate electrons at sites s and q and create them at sites r and p; this one over here says you hop from site q to site p. These are called one-body terms, and these are the two-body terms; the latter describe interactions between electrons, and those are the ones we're usually most vexed by when it comes to quantum simulation. To give you an idea of how many qubits we end up needing for this, a lot of the optimism that came out of this is that to do some of these molecules over here, you need a relatively small number of qubits. Now, it turns out these aren't natural targets for quantum simulation, because these sorts of organic molecules tend to be very easily simulated with existing classical methods, at least to the level of precision we need. But the key point is that even large things like this don't typically require a huge number of qubits to represent them.
So there's some optimism there. Now, basically, as I said before, what we really want out of this is the ground state energy, in order to estimate the reaction rates, because that helps us figure out the free energy. But we need to get a real idea of the cost: how many Trotter steps, how much phase estimation we're going to need, and the like, and that's where the majority of the effort ends up going. From a theoretical level, perhaps the biggest pain with these sorts of cost analyses is that several steps in the process impact the error in the simulation, and we need that error to be sufficiently low that we can accurately predict at least the direction the chemical dynamics will go in. So we need to make sure the error is less than something known as chemical precision, which at least gives you that sort of information at room temperature. The sources of error that come in here are phase estimation errors; errors from using the Trotter formula, in our case; and errors from circuit synthesis, because you've got rotations that naturally show up in these Trotter decompositions, and we've got to convert them to Clifford and T gates. All of these errors combine, and we've got to balance them in such a way that we minimize the total number of operations needed to hit our target. So, for example, we might want to throw the majority of our error budget into phase estimation, because the error scaling for phase estimation is pretty awful; then throw some of it into the Trotter decomposition; and throw basically none of our budget into circuit synthesis, because its cost is logarithmic. Formalizing that balance between all three of these to get an optimal trade-off is what actually took the majority of the work here, along with showing that these errors really do lead, in the worst case, to errors in the estimated eigenvalue. But there's another source of error that experimentalists tell me unfortunately exists. Believe it or not, gates aren't perfect. I know, it came as a shock to me too. The point is that we're likely going to need error correction, and that's something I'll come back to later in the talk: how much error correction are we actually going to need in order to realize these algorithms? I think it's a bit of a review, but to give you an idea of what phase estimation looks like: we have a circuit like this, and, for those of you who haven't dealt with it much, you can really think of it as a kind of interferometer. We begin with this qubit at the top in the state 0 and apply a Hadamard transform, which puts it in a superposition of 0 and 1; we've got a kickback phase over here, which is somewhat optional; and then, conditioned on whether this bit is 0 or 1, we apply a unitary to this arm. Then we recombine both paths and measure. The probabilities we get out depend on the phase accumulated in the path where the top qubit is 1, because that's the only one where the unitary is applied. By measuring the statistics on the top qubit, just like in an interferometer, you can estimate what the eigenvalues are. That's the idea behind it. We apply this a sufficiently large number of times, changing M and theta accordingly, and that lets us estimate the eigenvalues of the Hamiltonian. So that's how that part works. Now for Trotterization, the key point is that the Hamiltonians we get out of this are naturally this monster up here. To give you an idea of how many terms we have: there are roughly ten million terms in the descriptions of these Hamiltonians. These things are just monsters, so there's no way you're likely to have this as a native gate on your computer. The idea behind the Trotter decomposition is really just a prescription for compiling this down to a bunch of rotation gates; once you have those rotations, you can implement them using your favorite operations in your gate library.
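The interferometer picture of phase estimation can be captured in one line of trigonometry. This toy Python sketch is mine, not from the talk; the eigenphase `phi` is made up, and in the real algorithm the controlled unitary would be the Trotterized time evolution.

```python
import math

# Toy model of the single-ancilla phase-estimation circuit: Hadamard,
# controlled-U^M (kicking back phase M*phi), an optional phase theta,
# then a final Hadamard recombining the two interferometer paths.

def p_zero(phi, M=1, theta=0.0):
    """Probability of measuring the ancilla in |0>.

    The two paths recombine with relative phase M*phi + theta, so the
    measurement statistics depend only on the accumulated phase in the
    branch where the unitary was applied.
    """
    return (1.0 + math.cos(M * phi + theta)) / 2.0

phi = 0.3  # the unknown eigenphase we want to estimate
# Varying M and theta and collecting statistics pins down phi, just as
# sampling an interferometer at different path lengths would.
for M in (1, 2, 4):
    print(f"M={M}: P(0) = {p_zero(phi, M):.4f}")
```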
So the way you do it, basically, is to apply the Trotter-Suzuki formula, which says, more or less, take this formula and pretend all the terms actually commute. Technically we symmetrized this over here, although for ground state estimation it turns out, believe it or not, that the basic Trotter formula that isn't symmetric has exactly the same error scaling as the one that is symmetrized. It's surprising; it doesn't hold for dynamics, but for ground state estimation of these guys it does. So that's what we do, and the idea behind it is kind of like a light switch: if you flick a light switch on and off over and over again really fast, it seems like it's on the entire time. That's what's happening with the Trotter decomposition. We want to simulate all of these individual Hamiltonian terms being on during the entire evolution, so we rapidly switch between each of the individual terms, and the effect is as if they were all on the entire time. To make the error small, we have to pick this t to be appropriately small; in case you're curious, the value of t you end up having to pick basically goes like one over the square root of epsilon in this case. But then we have to take each of these individual terms and convert them into fundamental gates, and to do this we have to choose a representation for our qubits.
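The light-switch picture can be made concrete with a two-term toy Hamiltonian. This sketch is mine, not from the talk: I take H = X + Z (two non-commuting Paulis), alternate short evolutions under each term, and watch the error shrink as the number of switches r grows.

```python
import math

# First-order Trotterization of H = X + Z on one qubit. For a Pauli P
# (with P^2 = I), exp(-i*theta*P) = cos(theta) I - i sin(theta) P,
# so everything here has a closed form.

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_pauli(p, theta):
    """exp(-i * theta * p) for a Pauli matrix p."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c * I2[i][j] - 1j * s * p[i][j] for j in range(2)]
            for i in range(2)]

def trotter(t, r):
    """(e^{-iXt/r} e^{-iZt/r})^r: rapidly switching between the terms."""
    step = matmul(exp_pauli(X, t / r), exp_pauli(Z, t / r))
    out = I2
    for _ in range(r):
        out = matmul(out, step)
    return out

def exact(t):
    """exp(-i(X+Z)t): since ((X+Z)/sqrt(2))^2 = I, the same formula applies."""
    n = math.sqrt(2.0)
    h = [[(X[i][j] + Z[i][j]) / n for j in range(2)] for i in range(2)]
    c, s = math.cos(n * t), math.sin(n * t)
    return [[c * I2[i][j] - 1j * s * h[i][j] for j in range(2)]
            for i in range(2)]

def error(t, r):
    a, b = trotter(t, r), exact(t)
    return max(abs(a[i][j] - b[i][j]) for i in range(2) for j in range(2))

# Doubling the number of switches roughly halves the error.
for r in (1, 2, 4, 8):
    print(f"r={r}: error = {error(1.0, r):.4f}")
```

For this first-order formula the error at fixed total time falls off roughly like 1/r, which is the talk's point: shrinking the step size controls the error, at the price of more gates.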
There are many choices; you can just go to the literature and find one that fits your needs. The three most popular right now are Jordan-Wigner, Bravyi-Kitaev, and also this really slick encoding that I really like, which Bravyi and some other people at IBM came up with, that can also be used. But we focus on the Jordan-Wigner transform, because it worked a little better than Bravyi-Kitaev for our stuff. The idea, basically, is that the Jordan-Wigner transformation takes these creation and annihilation operators and expresses everything as a series of exponentials of Paulis. Once we have these exponentials of Paulis from the Jordan-Wigner representation, we just flip to the appropriate chapter in Nielsen and Chuang and look at the methods they propose for exponentiating Pauli operators, and then you can execute it. It turns out there are better methods, which we also discuss in the paper, but nonetheless the actual numerics were generated with this approach. So basically, what we have to do is execute a series of these templates. These circuit templates depend on the type of term we're looking at. The one-body terms, the h_pp terms, are very easy: they're just a phase you apply to the system. For the hopping terms, things aren't so bad either: you've got a pair of controlled rotations, along with these things, which, by the way, are often called Jordan-Wigner strings. They're used to make sure that the creation and annihilation operators anticommute, rather than commute, when they occur on different sites. That's basically the idea, and incidentally, Ryan's fermionic swap trick that he mentioned during his talk is a fantastic way to remove the need for those strings. Now, for the terms over here with two mixed indices, where they're diagonal, the circuit ends up looking pretty similar. Okay, so things are looking pretty okay; hopefully nothing gets really gnarly as we go up. Once we get to three indices things start getting a little more complicated, but four looks like this. Isn't that fun? Oh yeah, and the best part is that these guys are actually the most numerous of all the terms; almost all the terms look like this. So when we're doing a quantum simulation, we're going through a whole bunch of circuits, over and over and over again, that look like this. There are a few optimizations you can make, but basically this is it; this is the bulk of what our quantum simulation is doing. Okay. So, in order to figure out how all of these errors come together: one of the hardest parts of all of this, as Andrew rightly pointed out, is accurately estimating the Trotter-Suzuki errors. The approach we take for ground state estimation is to use perturbation theory to figure out the shift in the effective Hamiltonian caused by the commutators you've neglected from your Hamiltonian, and you can use that to generate an upper bound on the error. This upper bound over here is our commutator bound, in the terms that Andrew used. We take into account that the error depends on how these double commutators between the individual terms appear, we throw that in, and then we use a triangle inequality to upper bound it. When we start taking a look at that, this is what we see. A Trotter number, basically, is how much I have to reduce the time step by, what fraction of it, in order to make the error some fixed constant. And for this plot, the empirical data is down at the bottom. You can see from that empirical data that things are actually much, much better than what the worst-case bound says. So for most of the numbers we provide in the paper, we give two things. First, we give upper bounds where everything has been upper bounded; we use the most paranoid possible calculation. Then we also couple this with empirical bounds. Because it's hard to extrapolate what the scaling of the Trotter error actually is from the small samples out here, especially because these molecules in many cases have nothing to do with nitrogenase, which is out here, we have many choices about how to extrapolate. So what we ended up doing is a couple of things. This blue line down here is really just a least-squares fit over the part that we could simulate, and it gives a very, very optimistic scaling; I'll refer to this as the optimistic estimate. These other two lines over here are a more pessimistic rescaling: we take the upper bound that we can prove, and we make an assumption.
Let's assume the scaling from the upper bound is correct, but the constant isn't. For this top line, we take the scaling extrapolated from the upper bound and shift it down so that it intersects the largest point that we saw, and then we extrapolate up from that. For this one over here, we scale it down so that the average of this meets the average of that, and based on that, these values up here, that's nitrogenase, rescaled down according to the average rescaling. Those are the values we use for most of this. So that explains how we estimated what we think the Trotter error is going to be for this large molecule, where we really can't fully know what the Trotter error will be, given the size of the simulations we can actually do. To give you an idea of where this lands, and unfortunately Andrew stole my thunder a little bit here and gave away the order of magnitude of the gate counts, one of the things you'll notice is that the number of gates is kind of daunting: they're at ten to the fourteen.
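As an aside, the fit-and-rescale extrapolation just described can be sketched in a few lines. The data points below are invented purely for illustration; only the procedure (a least-squares power-law fit in log-log space, then evaluation at a larger size) reflects what the talk describes.

```python
import math

# Toy version of the extrapolation strategy: fit a power law (a line
# in log-log space) to Trotter-number data from small, simulable
# molecules, then extrapolate to a larger problem size.

def fit_power_law(sizes, trotter_numbers):
    """Least-squares fit of log(r) = alpha * log(N) + c."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(r) for r in trotter_numbers]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    c = my - alpha * mx
    return alpha, c

def extrapolate(alpha, c, n_target):
    return math.exp(c) * n_target ** alpha

sizes = [10, 14, 18, 22]      # spin-orbital counts (illustrative only)
trotters = [30, 55, 90, 130]  # empirical Trotter numbers (illustrative only)
alpha, c = fit_power_law(sizes, trotters)
print(f"fitted scaling: r ~ N^{alpha:.2f}")
print(f"extrapolated r at N=108: {extrapolate(alpha, c, 108):.0f}")
```

The talk's pessimistic variants keep the provable upper-bound exponent and refit only the constant, which in this sketch would mean fixing `alpha` and solving for `c` alone.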
If you actually assume, though, that you have a machine that can do logical T gates at a hundred megahertz, which is extremely fast, don't get me wrong, but let's assume it because it's comparable-ish to what you can do with classical hardware, then you would actually be able to get through this in a relatively short period of time, depending on the amount of parallelism you use. With the different schemes we provide, it's between twelve days and eleven hours. So a quantum computer, in principle, if it had the ability to stay coherent for that length of time and could operate that quickly, could actually solve these problems within a sensible amount of time. That's kind of great. The number of qubits we require also depends on the amount of parallelism, because we play this trade-off: more memory, less time. Even in the case that's most memory efficient, we only need a little over a hundred qubits, logically, to carry it out. So that part at least seems pretty good. But the key word I have to emphasize here is "logically". If we have ten to the fourteen gates being carried out in sequence in this Trotterized phase estimation circuit, then those error rates had best be pretty small, probably on the order of ten to the minus fifteen or so, in order to make the final error probability small. So we're going to need error correction if this is right, and the question is: does error correction sink this ship? Or, I guess more positively, what sorts of error rates would we like to see in our physical hardware in order to deliver the sort of machine that could do this reasonably? Basically, we've got a choice of different codes we could use for this, and the one we chose to look at is the surface code.
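Two of the numbers above are easy to sanity-check with arithmetic. The 100 MHz rate and the 10% failure budget below are assumptions consistent with the round figures quoted in the talk.

```python
# Back-of-the-envelope checks: the wall-clock time for ~1e14 serial
# T gates at an assumed 100 MHz logical gate rate, and the per-gate
# logical error rate needed to keep the whole computation reliable.

gates = 1e14                 # approximate serial T-gate count
rate_hz = 1e8                # assumed 100 MHz logical clock
seconds = gates / rate_hz
print(f"runtime: {seconds:.0f} s = {seconds / 86400:.1f} days")  # ~11.6 days

# Union bound: total failure probability <= gates * per-gate error, so
# a ~10% total failure budget needs per-gate errors near 1e-15.
per_gate_error = 0.1 / gates
print(f"required logical error rate per gate ~ {per_gate_error:.0e}")
```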
The surface code has a very high threshold and is arguably one of the best-studied codes around, so it was very easy for us to gadgetize and replace all of the components in our circuit with their fault-tolerant analogs, given the breadth of literature there. So that's what we look at, and here are the results that come out. The key point is down here at the bottom. This column, I think, is probably the most interesting one: these are the serial numbers, just the serial algorithm, where we don't try to parallelize whatsoever, and these are the different physical error rates we assumed. Now, for the error correction aficionados out there: if we assume we're just an order of magnitude below threshold, we need a pretty serious code distance for this, something like distance 35, and two rounds of magic state distillation to carry it out. Those are some pretty serious overheads. If we get down three orders of magnitude below that, though, where some more exotic quantum computing architectures might one day be able to operate, things become much more reasonable.
We only need a distance-nine code. And if we look down at the total number of physical qubits required, the numbers we see are these: if we're on the order of ten to the minus six physical error rate, we only need about 1.2 million qubits to hit that; if we're up at around a tenth of threshold, on the other hand, we're going to need about a hundred million physical qubits. Now, if we parallelize to make this run faster, things start getting much more absurd, the most absurd being that we could be at a hundred billion physical qubits if we want to run this at maximum speed. So that is the challenge we have here. If this is a task we want to do, and we want to execute it within those times I gave on the other table, then the scale of the quantum computer we need to build is pretty much unlike anything we've seen in our labs at present. We're going to need to think about how to control millions of qubits; we're really going to need to think about how to reduce our errors substantially below threshold, to get to the place where this seems like a favorable trade-off; and we're also going to need relatively fast gates to make all of this even make sense, before a bunch of other things start becoming prohibitive. But that's at the hardware level. Now that we've seen numbers at this level, for people like myself there's also work to do at the algorithmic level.
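As a rough, heavily hedged aside on those physical-qubit numbers: a common rule of thumb puts one surface-code logical qubit at around 2·d² physical qubits at code distance d. The factor of 2 and the 111-qubit register below are my own illustrative assumptions, and these counts exclude magic-state distillation factories and routing space, which dominate the much larger totals quoted in the talk.

```python
# Order-of-magnitude surface-code overhead: roughly overhead_factor*d^2
# physical qubits (data plus measurement) per logical qubit at code
# distance d. Treat this as scaling intuition only; real layouts add
# large constant factors for distillation and routing.

def physical_qubits(n_logical, distance, overhead_factor=2):
    """Crude physical-qubit estimate for n_logical logical qubits."""
    return n_logical * overhead_factor * distance ** 2

for d in (9, 35):
    print(f"d={d}: ~{physical_qubits(111, d):,} physical qubits "
          f"for 111 logical qubits (before distillation factories)")
```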
we need to start thinking differently. We need to start looking at new algorithms that can potentially cut this down, and, going back to Andrew's discussion, we need to start looking at some of these post-Trotter methods to understand how they work in these contexts, to see whether in practice they can give us numbers closer to where we want to be. But the other thing we want to do is perhaps reconsider how we represent our problems on the quantum computer. What happened with this problem is that we chose a representation that is optimal for chemists: when chemists want to compute energies, they use, in this case, a triple-zeta basis set built from Gaussians. But it wasn't designed for our needs in quantum computing. It wasn't designed to trade off between accuracy and gate counts, and that is fundamentally the tweaking that went into all of these numbers up here; our basis set design did not hit that mark. So one of the things I feel needs to happen going forward, if we really want to solve these problems, is that we need to make chemists an integral part of this loop. If we're not talking to them and figuring out the best ways to represent their problems on the quantum computer, I feel we're going to have a hard time making the big leaps we need in order to make solving hard problems like this possible on the first generation of quantum computers, which ideally is something I would like to see. To that end, I'd like to talk about work that Ryan, Ian, and I did, pushing towards this goal in one way.
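Since Trotter error keeps coming up in this talk, a minimal sketch of the standard first-order commutator bound may help orient readers. This illustrates the general technique, not the specific estimator used for the numbers quoted here:

```python
# Illustrative only: for a first-order Trotter step of H = sum_j H_j, the
# leading-order error operator is (t^2/2) * sum_{i<j} [H_i, H_j], so summing
# spectral norms of commutators upper-bounds the error of one Trotter step.
import numpy as np
from itertools import combinations

def trotter_error_bound(terms, t):
    """Return (t^2/2) * sum_{i<j} ||[H_i, H_j]||, an upper bound on the
    first-order Trotter step error for the given Hamiltonian terms."""
    total = 0.0
    for a, b in combinations(terms, 2):
        comm = a @ b - b @ a
        total += np.linalg.norm(comm, ord=2)  # spectral (largest singular value) norm
    return 0.5 * t ** 2 * total

# Two non-commuting Pauli terms give a nonzero bound; commuting terms give 0.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
print(trotter_error_bound([X, Z], t=0.01))  # ~1e-4, since ||[X, Z]|| = 2
```

With order n² Hamiltonian terms there are order n⁴ commutator pairs, which is why evaluating a bound like this explicitly is feasible in the plane wave case but was prohibitive for the n⁴-term Gaussian-basis Hamiltonian.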
So what we did is start looking at a different problem: simulating the electronic structure problem in the plane wave basis, and in particular the plane wave dual basis, just as Ryan said. This case is very nice because the Hamiltonian is actually expressible in closed form. That's cool, because it means you don't have to talk to a chemist in order to express the Hamiltonian; this is everything you need to know. Whereas the triple-zeta basis set used to represent the FeMoco problem previously required a domain expert to tweak everything in order to give us something we could represent in our machine, or our simulated machine, as the case may be. In any case, we focus on this because it has only n squared, rather than n to the fourth, terms. We want to use these plane waves because of their orthogonality property, and we want to find a problem that is really well suited to them and also requires very little in the way of resources. The problem we ended up looking at, as Ryan alluded to, is jellium, which has the best name. The best name, I swear. It looks nothing like this; I've always kind of wondered what color it is, whether it's more lime green or red, but perhaps I'll never know. The idea behind jellium is that you just take a bunch of free electrons and let them roam around in a box. There are no charges, nothing; they're just interacting with each other. And, as Ryan said, it's an important benchmark problem, and it's actually very important for understanding the foundations of DFT.
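For concreteness, the closed form being referred to looks schematically like the following. This is my reconstruction from the plane-wave dual-basis literature, so prefactors and sign conventions should be checked against the paper; for jellium there is no nuclear term, so the Hamiltonian is just kinetic plus electron-electron interaction:

```latex
% Schematic plane-wave dual-basis Hamiltonian for jellium (no nuclei):
% N orbitals, cell volume \Omega, plane-wave frequencies k_\nu,
% dual-basis grid points r_p, and number operators n_p = a_p^\dagger a_p.
H \;=\; \frac{1}{2N}\sum_{\nu,p,q} k_\nu^2 \cos\!\left[k_\nu \cdot r_{q-p}\right]
        a_p^\dagger a_q
   \;+\; \frac{2\pi}{\Omega}\sum_{p \neq q}\,\sum_{\nu \neq 0}
        \frac{\cos\!\left[k_\nu \cdot r_{p-q}\right]}{k_\nu^2}\, n_p n_q
```

The interaction term involves only pairs of number operators, which is why the term count scales as n squared rather than n to the fourth.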
So that's what we ended up looking at, and the numbers we see from the simulation of jellium are as follows. This is the total T count. One thing I should mention that's a little bit different is that these are not empirical values for the Trotter error; we used an upper bound on it. The reason we did this is that there are only order n squared terms, so we could actually explicitly construct the operator that, at leading order, describes the error in the Trotterized decomposition, and come up with a good estimate of what that was. In the case of the n-to-the-fourth terms we couldn't do that, because the computational complexity of the calculation was order n to the tenth. Here it's no problem; it was only order n cubed or something, or actually n to the fifth, I think, but maybe it's lower. The point is that when we go through and do this calculation, for either 2D or 3D jellium, one of the things we see as we increase the number of plane waves is that we can get to pretty large systems in under a billion T gates. The key point behind all of this is that these are upper bounds; if we put empirical numbers in, my suspicion is they'd probably go down by a factor of a hundred. So this suggests that once we change the basis to reflect the problem, we can actually get some pretty substantial savings, and my hope is that, going forward, by investigating what we can do with plane waves for chemical systems and other problems of interest, we may see similar benefits, with substantial reductions for the chemistry problems we're interested in, just like we see here. And so, to conclude, one of the things that
I really want to impress upon you is that there really have been some dramatic improvements in quantum chemistry simulation. We've gone from n-to-the-tenth algorithms down, in this case, to something like an n-to-the-eleven-thirds algorithm, and this is really important, because we've started realizing that chemistry is much more plausible than it has ever seemed. But the numbers I'm showing are still a long way away from being something we could realistically do on a near-term quantum computer. I really want us to start thinking more as a community, and working closer together, to ask what the hard problems are that we want to solve on a quantum computer, and how we get all of the experts we need, who may not necessarily even speak the same language, together at a table to solve these problems. Thank you very much.

Okay, thanks, Nathan, for that talk. We have time for some questions.

Do I go first? Well, this cost analysis is with a somewhat older algorithm, isn't it? It's not with Ryan's best-scaling one, the things in the table that had the best scaling.

This data over here? Oh, this, okay, is actually for that particular problem itself. We have not tried to work out what would be required to do FeMoco using plane waves. One of the questions, and it's not clear to people like Marcus, is how many plane waves we would actually need in order to represent it.
So there's this debate about how large a basis set is needed, and until we can settle that, it's hard for me to figure out what a realistic count is.

I see; so that was a bit of an easy question. Maybe I could ask: if I gave you a quantum computer with a trillion qubits and a gigahertz clock rate, and you calculated these energies for the whole potential energy surface, and then you did the reaction dynamics and you now knew the reaction rate for FeMoco, what are you going to do with that?

I'd begin by publishing a paper. But after that, the key point really, and this is one of the big unknowns, is that we can get reaction rates out of this, but that doesn't help us do what we really want to do, which is design. The problem of design is something I really don't know how to address, and I've been talking to Marcus and people like that about how we could use a simulator like this in practice to do ab initio design. But I think there's a lot more work to be done by our community to understand how these tools would really be used in practice by the chemists who wish to solve these problems. So I think it's a great question, and an open one.

My question is: how important is the initial guess? If you improve that, will you then be able to cut down the size? Because for some of these calculations you can do a really pretty good job on a classical computer, and then use the quantum computer to refine it.

Excellent question, thank you very much. Just to rephrase it: classical computations are really powerful for many things, and often they can give decent guesses or hints about what the answer actually is, so how can we use that?
Well, there are a couple of ways we can use it. The first is that if we're doing phase estimation, we can use a prior distribution over the phase, which gives us an estimate that takes advantage of all the understanding we have from classical methods, rather than starting from scratch. The second is that if we have a good understanding of the gap and other properties of the system, then we can use techniques like amplitude amplification to boost the probability of success for the phase estimation step. But the most important way, I feel, and this is the big secret that people often don't state, and that I didn't talk about here, is this: I talked about ground state preparation, but this is actually just the cost of sampling from the eigenspectrum. I can't promise that a sample will be the ground state energy. It'll be an eigenvalue, but it won't necessarily be the ground state. So by using our classical knowledge, we can direct the quantum system to give us the ground state with higher probability, and for highly correlated systems, having good chemical intuition about what that state is may be essential for us to really make this work.

Charlie Bennett had a question, and then I think Earl, and then we should move to the next speaker.

Yes. Well, the Haber process would be thermodynamically efficient, except that you have to conduct it at high temperature and pressure, so there are trade-offs to get it to go fast enough. But as I understand it, nitrogenase is also pretty inefficient, in terms of its usage of ATP and things like that.
So I guess this is a good thing to look at, but there may not actually be a big thermodynamic reward in learning how to do it the way nitrogenase does it, because it's pretty expensive for those little nodules under the ground on the clover, too.

Okay, excellent point. Now, one of the things I want to be most clear about is why we selected this problem. I don't think we selected it, or at least I didn't advocate for it, because I thought we'd solve world hunger by doing this. It was selected because it's typical of a hard problem in catalysis. It's something that's been studied for a long time, and we know we don't have good classical approaches. That was the real reason we ended up looking at it. Whether understanding this mechanism would have an impact or not is something we can only tell after the fact, but nonetheless, I think this is probably one of the better benchmark problems for understanding catalysis as an application.

Okay, and one last quick question from Earl.

Oh, I had two questions. Will I only get one? Okay, so one of them is a quick yes-or-no; the second one's rambling. The quick one is about the distillation factory. There's this one trick that people often forget about, which is that for the first round of distillation you use a smaller surface code, and for the second round you use a bigger one. Did you do that?

Yeah, we did.

Brilliant. That makes a huge difference.

Yeah, it does.

And the slightly rambling one is about the Trotter-Suzuki decomposition, which, if I understand it, is kind of the bottleneck in terms of the error. Is there any hope?
I mean, I'm not an expert in these Trotter-Suzuki methods, but is there any hope that if you don't break it up into just individual terms, but into groups of terms, and then try to synthesize a whole e-to-the-i times some collection of terms, that might cut the gate count down a bit?

I'll give you the annoying answer, because we've run out of time: sometimes yes, sometimes no. But I'd be happy to talk about that later.

Let's thank Nathan again.