It's great to be here, and it's quite interesting for me to be here. If none of you have seen me before, that's because I'm not really a quantum computing theorist. I primarily work on classical algorithms for simulating quantum problems. But I think we all agree that if we're to develop improved quantum algorithms, or understand the quantum-classical crossover, then we need to understand what is the best we can do with current classical methods. And so that's really what I'm going to try and tell you about today. I'm going to tell you where the frontiers are when we're trying to simulate quantum problems on a classical computer, and where some opportunities are for penetrating that frontier with near-term quantum hardware.

OK. But first, let me just say a few things about chemistry, since I'm probably one of the very few chemists in the audience, and most of you haven't looked at chemistry since high school. In the modern day, what most chemists are interested in is understanding complex systems of atoms and molecules. As an example of a complex system, we might take a biological problem such as an enzyme. And of course, enzymes are really central to life. They participate in many processes, such as the reactions which power the planet, photosynthesis, or the reactions which feed the planet, nitrogen fixation. What the enzymes are doing in these complex chemical reactions is helping to shuttle electrons back and forth between the reagents and the products. And these are quantum mechanical events, because electrons, being very light particles, are quantum particles. So what experimental chemists, or theoretical chemists like myself, are trying to understand is how the enzyme is able to coordinate these different types of quantum events, and, in my case, to try and understand that through simulation.

As another example of a complex system that we're interested in in chemistry, on the materials science side, we might take materials with exotic properties. We all know that you can make materials with different kinds of interesting electronic properties, and perhaps the most interesting and most exotic of these is high-temperature superconductivity. Actually, this was a phenomenon which was discovered at IBM, and that's partly why I'm going to talk about it here. And even though this is something which has now been around for, I think, more than 30 years, maybe it's 31 years this year, we still don't really know why the particular choice and arrangement of atoms that you have in high-temperature superconducting materials gives rise to this phenomenon. So this is again a question of modern-day chemistry and modern-day materials science.

Okay, so in principle we all know that the answers to these questions come just by simulating quantum mechanics. This viewpoint was already espoused by Dirac many, many years ago when he recognized that the fundamental laws for most of physics, and indeed the whole of chemistry, are already known. But he also recognized that solving the quantum mechanical equations, namely solving the Schrödinger equation, is in general pretty difficult for any problem of interest. And indeed there are many pessimistic statements you can find in the literature about the difficulty of solving the Schrödinger equation.
I decided to choose the most pessimistic statement I could find, which was made not so long ago by two famous physicists, David Pines and Bob Laughlin, who argued that the Schrödinger equation probably will never be solved when you have more than about ten interacting quantum particles. Okay, so this presents a little bit of a quandary, because on the one hand we say we have the fundamental theory of nature, and on the other hand we say we can extract no useful information from it. So the question is, what do you then do? Well, of course, as you all know, Feynman said that the solution is just to try and solve these equations on a quantum computer. That's a very natural thing to do, and I guess that's part of the reason why we're here today. But the fact remains that, at least for any problem of real modern-day chemical interest, we don't yet have quantum computers that are powerful enough, in terms of the number of qubits available or the coherence time in which they can run simulations. So my talk is really about what we've been doing in the meantime.

And that brings me to the main part of my talk: how it is that we are able to simulate quantum chemistry problems on the types of computers we have today, which are basically classical computers. I'll outline the different strategies that we use to try and bypass the complexity of the Schrödinger equation, and show a few case studies of real-life problems. This will take up about two thirds of my talk. Then in the last part of my talk, I'll try and give a personal perspective on where quantum computing comes in. This is an opinion, so you should take it with a grain of salt, but I'll try and say where I see some useful developments occurring.

Okay, so let's first define the quantum chemistry problem. Essentially, quantum chemistry is just the quantum mechanics of a set of electrons and nuclei. The nuclei are very, very heavy, so in almost all quantum chemistry simulations you assume that they behave like classical particles. So you can write down a Hamiltonian which contains a part for the quantum electrons, while the nuclei just exert an external classical potential on the system. And the primary goal, the simplest task in quantum chemistry, is just to find the low-lying eigenstates of this Hamiltonian: for example, the ground state or a few excited states.

Okay, so that defines the problem. If we write it down in this form, however, it's a continuous eigenvalue problem, and that's not very suitable for computation. So the first thing people do in quantum chemistry is discretize the problem by introducing a single-particle basis. In chemistry, those single-particle basis functions are known as orbitals. So we introduce orbitals into the problem, and once you've discretized it, the Hamiltonian assumes its familiar second-quantized form containing up-to-quartic interactions. From a computational perspective, the size of the problem is then determined by the number of orbitals you have to simulate. And as it turns out, the complexity of finding eigenstates of this Hamiltonian by classical methods differs quite widely depending on the form of the coefficients in the Hamiltonian. In some cases you can simulate many, many orbitals; in other cases you can only simulate a few. But roughly speaking, the problem sizes that people are interested in today in chemistry start, on the lower end, at around 40 orbitals.
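(For reference, here are the standard textbook forms of the Hamiltonians being described; the notation is mine, not something shown in the talk. In atomic units, with the nuclei treated as fixed classical point charges:)

```latex
% Electronic Hamiltonian: kinetic energy, electron-nucleus attraction,
% electron-electron repulsion (atomic units, fixed nuclei).
H = -\frac{1}{2}\sum_i \nabla_i^2
    \;-\; \sum_{i,A} \frac{Z_A}{|\mathbf{r}_i - \mathbf{R}_A|}
    \;+\; \sum_{i<j} \frac{1}{|\mathbf{r}_i - \mathbf{r}_j|}

% Discretized in a basis of N orbitals, it assumes the second-quantized
% form with up-to-quartic interactions mentioned above:
H = \sum_{pq} h_{pq}\, a_p^\dagger a_q
  \;+\; \frac{1}{2}\sum_{pqrs} v_{pqrs}\, a_p^\dagger a_q^\dagger a_s a_r
```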
Forty orbitals is for the cases where you have very, very difficult Hamiltonians to simulate, and the sizes go all the way up to, say, 10,000 orbitals or even more. Now, if you look at these problem sizes, they clearly involve much more than 10 particles. They involve enormous Hilbert spaces: if you think about 10,000 orbitals, the Hilbert space has dimension 4^10,000. So you can ask, how is it even possible that we can do these calculations on a classical computer? And the answer is quite simple. It's basically because the types of quantum states that you typically see in chemical systems, the natural quantum states, are in fact quite boring. There's an enormous Hilbert space of possibilities that you can engineer a quantum system to travel into, but the types of states that we see occurring naturally, like the state of this lecture hall or the state of this pointer, are actually quite simple. So what you really have to do to be able to do a classical simulation is identify what characterizes the simple structure of the states you see in nature. That's essentially the intellectual content of quantum chemistry: identifying the structure and then using it to perform classical simulations.

Roughly speaking, the structure of physical quantum states is imbued on them by the Hamiltonian itself. And the Hamiltonian, as we wrote it down, has two pieces: a first piece, the kinetic energy piece, and a second piece, the interaction piece. Roughly speaking, these two pieces of the Hamiltonian create the two main kinds of structure that you see in physical quantum states.

So you can imagine a scenario where the first term in the Hamiltonian is the dominant term: the kinetic energy dominates. In that case, the basic physics of the electrons is that they just spread out. They mainly want to delocalize to minimize the kinetic energy, if you're interested in the low-energy eigenstates. In this setting, the corresponding eigenstates are qualitatively very well described by traditional mean-field techniques, and then you can improve on the mean-field answers essentially by carrying out perturbation theory. So that's the basic classical approach in this regime.

Then there's the other regime, where the interactions dominate. You might think that sounds very complicated, it's strongly interacting physics, but actually it's not really. If interactions are all that occurs in the system and they're very, very strong, then what happens is that your particles, because they all want to avoid each other as much as possible, just crystallize out and localize. So the main qualitative physics in the strongly interacting regime is that of localized quantum objects. In this case, we can also write down structures to represent the eigenstates based on this local character, the so-called local approximations or low-entanglement approximations.

So what I'm going to do in the next 10 or 15 minutes is walk you through the simulation methods for both of these regimes, and show you some real-world chemical examples where these techniques are applied. First, let's start with mean-field theory and perturbative expansions. The kind of chemistry and materials systems where this type of theory is appropriate is the chemistry of elements which have so-called s and p valence shells. And if that doesn't mean anything to you, it's basically the chemistry of carbon, so organic chemistry.
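(To put a number on that enormous Hilbert space, here is a back-of-the-envelope calculation of my own, not from the talk:)

```python
# Each spatial orbital has 4 occupation states: empty, spin-up, spin-down,
# and doubly occupied, so N orbitals span a Fock space of dimension 4**N.
n_orbitals = 10_000
dim = 4 ** n_orbitals
print(len(str(dim)))  # 6021: the dimension has over six thousand decimal digits
```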
This regime also covers the chemistry of simple semiconductors, you know, silicon, gallium arsenide, and so on and so forth. In these systems, it's generally a good simulation strategy to take your Hamiltonian, break it down into a mean-field part, which is very, very simple to solve, and a leftover perturbative part. Then you can obtain the eigenstates and the energies essentially by a Taylor series in the perturbation: you just do a power series in V. And as Feynman showed a very long time ago, this is all done very nicely in a diagrammatic language, where you just sum over all the so-called linked diagrams. Now the key thing is, when you're doing perturbation theory, if you stop at any finite order, you have a polynomial-cost algorithm for your problem. But if you increase the order of perturbation theory, for example to get higher accuracy, then the polynomial order itself goes up. (Roughly speaking, second-order theory costs about N^5 in the number of orbitals N, while coupled cluster with perturbative triples costs about N^7.) So the key to success in this regime is winning the race between the perturbation theory converging quickly enough and having enough classical computing power to go to high enough order in the perturbation series.

Okay, now this is a very simple-minded approach, and you might think it's sort of crazy that just doing a Taylor series, adding in the effects of the interactions to high order, should work. But it's in fact the approach that is the basis of the most accurate theories of nature that we know. If we go to a slightly different domain, QED, then we're all familiar with the idea that in quantum electrodynamics you can measure something, for example the fine structure constant, say from the plateaus of the quantum Hall effect. But you can also calculate, or infer from calculations, the same quantity by summing up diagrams in perturbation theory to very high order, in this case up to tenth order, and you get many, many decimal places of agreement. This is one of the triumphs of modern theoretical physics. And essentially, the approach of simulating chemistry and materials by doing mean-field theory and adding up diagrams in perturbation theory is just the same philosophy. There's a very systematic way to include the diagrams, the so-called coupled cluster theory, which we've heard about in some contexts already in the quantum computing area.

Okay, so let's see this at play in a real-world example. I'm going to show you some work we did some years ago computing lattice energies of crystals. The lattice energy of a crystal is the amount of energy that holds the crystal together; it determines the stability of the crystal. And this problem is interesting because if you take typical chemical compounds, which are molecules with a single chemical formula, then when they turn into a solid, when they crystallize, they can crystallize into many different, similar forms. They're not the same crystal structure, they're only slightly different, and so we call these different crystal structures, where the molecules pack differently, polymorphs. But even though the packing is only very slightly different, in many cases the polymorphs have very different properties.

I can show you some examples you might be familiar with. Chocolate is a material most of us are familiar with. There are several polymorphs of chocolate, and the chocolate you buy in the store is in this polymorph, which looks nice.
The tempering process in chocolate produces a particular packing of the chocolate molecules into this structure. But you all know that if you leave chocolate out under the wrong conditions, it will start looking like this, and this doesn't look so nice. It looks like mould growing on it, but it's not actually mould. It's just that the molecules have rearranged their packing, the lipid chains have moved a little bit, into a slightly different crystal structure.

Okay, so of course this is kind of a facetious example, but there are also important commercial implications of polymorphism in the pharmaceutical industry. In the pharmaceutical industry, you make drugs, which are small molecules, and then you crystallize them, you turn them into solids so people can take them. And these drugs can crystallize, again, into different polymorphs, and the polymorphs can have different bioavailability. There are famous examples of cases where, during the testing and development phase of a drug, people always used the freshly crystallized compound, which had crystallized into one form, and it worked very well. But in the famous case of Ritonavir, when it was finally released and sat on shelves and so on, that crystal structure slowly converted itself to a different form, a different polymorph, which was no longer bioavailable. You can imagine that caused a lot of problems: the drug had to be recalled, there were all sorts of lawsuits and so on and so forth. So determining the correct, most stable crystal structure is an important thing in this field.

Okay, so the main challenge is that because these crystal structures differ only by very small changes in packing, they have very, very similar energies. So it's a problem of obtaining very high accuracy: we need to compute the lattice energies to very high precision, which on the chemical scale is about a kilojoule per mole. But this is something that we can do these days for simple molecular crystals. So I'll show you an example that we did some years ago on a very simple molecular crystal, crystalline benzene. In this case, we can carry out the mean-field theory and then add on diagrams, carrying out the perturbative expansion to very high order. And this is what it looks like: you calculate the mean-field energy, then you add on some diagrams, and add on some more diagrams, and then add on some more diagrams, and you can see the perturbative corrections starting to converge down to very small numbers. You add this all up, just like with calculating the fine structure constant in QED, and you obtain some energy. This is the lattice energy for the molecular crystal. And just like in QED, you can compare the theoretical number with the experimentally measured number, and there's a widely quoted experimental number for this system, which is 51.

Now, if you look at these two numbers, they don't actually agree very well, right? The theory gives 56 plus or minus 1 kilojoules per mole; the experiment, 51 plus or minus 1. They're kind of similar, but they don't quite agree. So there has to be a problem somewhere, and you can ask where the problem is. On the theory side, we're solving quantum mechanics very, very accurately, doing perturbation theory until it converges.
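(As an aside for readers: here is a minimal sketch of this mean-field-plus-perturbation-theory workflow, using the open-source PySCF package on a single small molecule. This is my own illustration; the actual crystal study used far more elaborate machinery and extrapolations.)

```python
# pip install pyscf
from pyscf import gto, scf, mp, cc

# A small closed-shell molecule (N2) in a modest basis, for brevity.
mol = gto.M(atom="N 0 0 0; N 0 0 1.0977", basis="cc-pvdz", unit="angstrom")

mf = scf.RHF(mol).run()     # mean-field (Hartree-Fock) starting point
pt2 = mp.MP2(mf).run()      # second-order perturbative correction
ccsd = cc.CCSD(mf).run()    # coupled cluster: systematic diagram resummation
e_t = ccsd.ccsd_t()         # perturbative triples, (T): higher-order diagrams

print("HF      ", mf.e_tot)
print("MP2     ", pt2.e_tot)
print("CCSD    ", ccsd.e_tot)
print("CCSD(T) ", ccsd.e_tot + e_t)
```

Each successive line folds in higher-order diagrams, and in this well-behaved weak-coupling regime the successive corrections shrink, just like the convergence pattern described above.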
Now, it sounds like there shouldn't be any problem with that. So one would infer that the problem was with the experiment. Blaming the experiment is not the usual way one thinks, but for theorists it's a natural way to think. And indeed that's the case, and that's what happened in this system. After we calculated the results, it was clear on going back and reexamining the experimental data that the experimentalists had somewhat misinterpreted their results: they had missed some thermal corrections. Once those thermal corrections were put in, the experiment came back into agreement with theory. So, very similar to the case in QED, we can obtain, not 10 digits of accuracy, but maybe three or four significant figures of agreement. The point of this story is just to show the good agreement that can be achieved between simply simulating the Schrödinger equation and the real world, at least in this case of very simple perturbative quantum states.

Okay, so let me now move on to the second class of quantum states, the ones where the repulsions dominate and the electrons localize. The kinds of chemistry or materials where this is important are really the elements right in the middle of the periodic table. You might remember the periodic table has a big block of elements in the middle which are known as the transition metals. In the transition metal atoms, the electrons live in orbitals known as d orbitals, where they're very sluggish. They just don't have a lot of kinetic energy, so they feel the effect of the Coulomb interaction very strongly. They can't whiz past each other; they just sit and interact with each other all day long. These strong interactions give rise to very rich behavior, and it's certainly no coincidence that essentially all of catalysis relies on these transition metals and their strongly interacting physics. By the same token, all the really exotic material behavior also involves materials with transition metals inside them.

Now the qualitative physics, as I've mentioned, is essentially the physics of localized particles. Under strong repulsion the electrons localize, and if they're completely localized, then their positions just get frozen out, and all they're left with is their spin degree of freedom, which can of course flip back and forth. So in the extremely strongly interacting limit, the physics really just becomes a physics of spins, and that is of course the physics of magnetism. So the strongly interacting limit is very closely related to understanding magnetism. Now, if the electrons are not completely localized, so their positions are not completely frozen out, then you can have competition between many different effects. They might be on the verge between localizing and delocalizing, all against some background of magnetism, and this is why you can expect very rich and complicated behavior in these kinds of physical quantum states. But from a computational perspective, it's much more natural in these settings to start from a localized description of the quantum state rather than a delocalized one, and that's the strategy we take in classical simulations: we try to construct so-called local parameterizations. Okay, so what is a local parameterization, or local approximation?
I don't have time to go into all the details, but very roughly speaking, we want to characterize the quantum state in some kind of local variables. They might be local coefficients in some local Hilbert space; they might be local expectation values. The advantage of doing that is that instead of having to deal with, say, the wave function in the global space, in which case you have some exponential dependence on the system size, you're only dealing with wave functions in little bits of space, so you basically lose the exponential scaling and you get some kind of polynomial-cost parameterization of your problem.

Now, probably the most well-known example of this, certainly in this community, is the so-called tensor network formalism. Very roughly speaking, what you do here is write down wave functions in your local Hilbert spaces. They're not quite wave functions: they have some additional mathematical indices which make them tensors, and by gluing together these indices you can recover the global state (a toy numerical sketch of this appears below). But this is not the only way to construct a local approximation. Another way is to characterize your system in terms of local observables rather than local wave functions, so local expectation values. This is the basis of the so-called quantum embeddings, where you use local observables and then glue them together in some way to get a global description.

Okay, but let's now go on to a case study and see how some of this works in practice. I thought about whether to talk about biology or materials, but, as I said, high-temperature superconductivity was discovered at IBM, so I think it's a good thing to talk about here. The main class of these materials are the so-called cuprates. There's another class, but the cuprates are perhaps the most famous. And there are many different cuprates; they contain different atoms in different arrangements and so on and so forth. But if you look at the phase diagrams, what they tell you is that the superconductivity has some sort of generic, universal quality, so it should be describable in terms of a simpler model. The most famous and perhaps most fundamental model with which to understand high-temperature superconductivity is the so-called 2D Hubbard model, what people here would call the 2D Fermi-Hubbard model. It's described by a Hamiltonian which again has these two pieces, but the kinetic energy piece and the interaction piece have both been greatly simplified (the Hamiltonian is written out below).

So one of the first tasks, as a theorist trying to understand high-temperature superconductivity, is to ask: can we compute all the properties of the 2D Hubbard model, in order to obtain the accurate phase diagram? Now, many people have worked on the 2D Hubbard model over the years; that's a slight understatement. Historically it's been a bit of a graveyard for classical numerical methods, as well as for the inventors of those methods. But recently a lot of progress has been made. To illustrate what I mean by this progress: recently, in a collaboration between many different numerical theorists, including myself, we compared different state-of-the-art classical approximations for the energy of the Hubbard model at a particular point in the phase diagram (it's not important which point). And all these different methods, many of which use the type of local approximations I introduced a couple of slides ago, give almost exactly the same energy.
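(For reference, the 2D Hubbard Hamiltonian just mentioned, in standard notation: electrons hop between neighboring sites with amplitude t and pay an energy penalty U when two occupy the same site.)

```latex
H = -t \sum_{\langle i,j \rangle, \sigma}
      \left( c^\dagger_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
    \;+\; U \sum_i n_{i\uparrow} n_{i\downarrow}
```

(And here is the toy numerical sketch of the tensor network idea promised above, again my own illustration: a matrix product state stores one small tensor per site, and contracting, i.e. gluing, the extra bond indices recovers the exponentially large global state.)

```python
import numpy as np

# A matrix product state (MPS): one 3-index tensor per site, with "bond"
# indices of dimension D gluing neighboring sites together.
L, d, D = 12, 2, 8   # sites, physical dimension per site, bond dimension
rng = np.random.default_rng(0)
tensors = [rng.standard_normal((1 if i == 0 else D, d, 1 if i == L - 1 else D))
           for i in range(L)]

# Contract all bond indices to recover the global state (the expensive object).
psi = tensors[0]
for A in tensors[1:]:
    psi = np.tensordot(psi, A, axes=([-1], [0]))  # glue adjacent bond index
psi = psi.reshape(-1)

print(psi.size)                      # 2**12 = 4096 global amplitudes
print(sum(A.size for A in tensors))  # ~ L*d*D**2 = 1312 local parameters
```

The point is the parameter count: the local tensors scale polynomially with system size, and a state is classically easy exactly when a modest bond dimension D suffices to represent it.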
That agreement between methods is telling us that we can now compute the energy to many significant figures, say three or even four. This is to be compared with what you could do about 10 years ago, when you could only get about one significant figure or so. So there have been two orders of magnitude of improvement in accuracy in the last decade, and that's what's finally allowing us to obtain definitive solutions of this basic model of high-temperature superconductivity. And to show you one of the results that I think we're most proud of: very recently it's been possible to see exactly how the high-temperature superconductivity develops inside the Hubbard model itself. You can study this by asking what the material (this is a theoretical material, the Hubbard material) looks like just before it's superconducting. What you find is that the charges in the material spontaneously crystallize out into one-dimensional structures called stripes. And then, even more intriguingly, you find that if you allow these stripes to wiggle, if you squeeze them together and allow them to fluctuate, then you see pairing develop between the electrons themselves. So this is a very evocative picture of how high-temperature superconductivity emerges, and it's a sort of numerical confirmation of the long-standing hypothesis that superconductivity arises from fluctuating stripes. But in any case, this is just an illustration of the kinds of things we can do with classical methods these days in the strong-coupling regime of quantum problems.

So let me now move on to the next part of my talk, which is about quantum algorithms. So far I've told you about what we can do with classical methods, and the key point is that they're often very effective, in some sense unreasonably effective, simply because the states we're studying are not so complicated. So the important question to ask is: where do we then look for quantum advantage in simulations of real quantum materials, and what's the practical boundary between quantum and classical algorithms? A related question one might also be interested in is: given all the domain knowledge we've developed in the world of classical simulations, can we use it to improve quantum algorithms? That's something I've recently become interested in, but I won't talk about it here, because the work that I've done in this area is going to be described by Ryan in the next talk.

Okay, so let me focus on these questions. One way to ask what the practical boundary between quantum and classical algorithms is, is to say: if we just froze all progress in classical computing and the development of classical algorithms today, what exactly could we do? Can we put some numbers on that?

First let's look at weak-coupling problems, which are mainly the kinds of problems you study in quantum chemistry. Suppose we're interested in getting the ground-state energy of a typical weak-coupling molecule to an accuracy of about 1 kT per bond (I choose this accuracy because if you know energies to this accuracy, then you know the reaction rates to an order of magnitude). Then with current classical techniques you can treat really quite large systems, you know, thousands of orbitals. And that tells you that this isn't quite the right frontier at which to see immediate crossover with quantum algorithms.
However, let me change the question a little bit. Remember, these calculations are being done, in some sense, within an underlying perturbative framework, and if you want higher accuracy in the perturbation theory, you have to go to higher order, which is more expensive. So what if I want the ground-state energy to a higher accuracy, let's say 0.1 kT per bond? This is still a reasonable accuracy; everyone would like to have energies to 0.1 kT per bond, it's not a ridiculous accuracy. Then with current classical techniques you can treat much smaller systems, maybe a hundred orbitals or so. And that's now not looking so far away from where you could get a near-term crossover. And then finally, if we move from looking at just the lowest eigenstate of the Hamiltonian to, say, the second or third eigenstate, it's empirically observed that in most cases these perturbative methods don't work as well, and so you have to go to higher order. So if we look at the excitations of the system, and ask even for a more modest accuracy, which is 0.1 eV if people are thinking in chemical units, then the classical frontier is smaller still. So that tells you that if we're trying to find the best place to see this type of quantum-classical crossover, we should ask the right question. In particular, if we insist on high accuracy and look beyond ground-state simulations, then even in small molecules it may be possible to beat current classical implementations.

Okay, now moving to the strong-coupling regime, which is typically of more interest in condensed matter physics, we can think about moving around in the phase diagram. At certain points in the phase diagram, much like on the previous slide, classical methods do really, really well. In the case of the Hubbard model, near half-filling, you can obtain extremely high accuracy; there's really no point in trying to do that with a quantum computer. But if you move away from half-filling, for example into the regime where these stripes are seen to appear, then the accuracy you can obtain is much less, only about 1%. And that's enough to answer some physical questions if you're very, very careful, but it's not always enough to have an unambiguous answer. So in the case of looking at the origin of superconductivity and stripes in the 2D Hubbard model, we had to use many, many different classical techniques and cross-check them to make sure they gave a consistent answer, because a single method by itself was not sufficiently reliable. It would be great if a quantum computer could obtain better than 1% accuracy, because then we could have a one-shot calculation to understand the physics. And then finally, if we move away from the ground state, for example to low but finite temperatures, there are currently no good classical methods. So again, by tuning the problem in the right way, we may be able to beat classical techniques.

A point I'd like to make, however, is that although a lot of the quantum simulation I see today seems to be focused on the ground state, it's not really clear that that's the best thing to focus on. In particular, I think quantum dynamics is perhaps a better target for seeing near-term crossover between quantum and classical methods.
And that's because we know quantum dynamics is generically hard on a classical computer, whereas it's what a quantum computer is doing all day long. From a classical perspective, we understand the complexity of exact quantum dynamics by noting that, generically, the entanglement in a quantum system, if you just allow it to evolve, grows linearly with time. So if you construct some kind of classical approximation to the dynamics, such as a tensor network simulation, then you'll find that you start to need an exponentially growing set of parameters to capture the quantum state as a function of time. A point I want to make is that when we're dealing with quantum dynamics under these reasonable, physical Hamiltonians, we don't think of the classical algorithm as really having an exponential dependence on the number of qubits; it only has an exponential dependence on the amount of time you're simulating. So it's very different from the kind of Google problem where you're doing randomized circuits and the classical complexity itself depends on the number of qubits. In most physical simulations, we really care more about the time you can simulate than about the number of qubits.

Okay. So if we really take quantum dynamics seriously as the testing ground, the first place to see a crossover between quantum and classical techniques, it's important to ask if there are any classical loopholes in the complexity. The question which I think needs to be answered is: for the kind of questions that we actually ask, namely the value of a typical averaged observable, do we really need a faithful representation of all the entanglement? This is something we don't know the answer to today, but there are some hints that in some cases you don't actually need to model all the entanglement in the system to answer the typical questions you want to answer.

To give you an analogy, it's sometimes forgotten that even if you do classical dynamics, that is, solve Newton's equations, the cost of solving Newton's equations faithfully increases exponentially with time because of chaos: because of chaotic trajectories, you need to solve to higher and higher precision to keep the trajectories correct. But the fact that we can't simulate classical dynamics precisely at very long times doesn't actually matter, because the quantities we're interested in are always averaged, either over some time or over a large number of atoms (a tiny classical illustration of this point follows below). The analogue in quantum systems is that if you take the quantum dynamics of a large set of particles over long enough times, the presence of chaos should lead to an emergent hydrodynamics, and the hydrodynamics may be something you can simulate without too much entanglement. There's certainly evidence for this in some numerics: for example, if one calculates diffusion coefficients as a function of the entanglement kept in the simulation, one finds that you can get them right even in simulations with very low entanglement. So these are questions which still remain to be characterized, but I think they're important to think about.

But if we just say, let's imagine that we make no progress in classical algorithms, and ask what the current classical limit to simulating quantum systems in real time is, we can make some estimates. So let's take a physical Hamiltonian, say a nearest-neighbor Hamiltonian, and let's think about carrying out the best possible classical simulation.
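(Here is the tiny classical illustration promised above; it's my own toy example, not from the talk. Two trajectories of a chaotic map diverge exponentially, so pointwise prediction fails after a short time, yet a time-averaged observable comes out the same.)

```python
import numpy as np

# Chaotic classical dynamics: the logistic map x -> 4x(1-x).
def trajectory(x0, n):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        xs[i] = x
        x = 4.0 * x * (1.0 - x)
    return xs

a = trajectory(0.2, 100_000)
b = trajectory(0.2 + 1e-12, 100_000)   # perturb the 12th decimal place

# Pointwise, the trajectories separate to O(1) within a few dozen steps...
print(np.max(np.abs(a[:60] - b[:60])))
# ...but the time-averaged observable is insensitive to that divergence.
print(a.mean(), b.mean())              # both close to 0.5
```

The hope in the quantum case is analogous: chaos may destroy any faithful low-entanglement representation of the state while leaving the averaged, hydrodynamic observables classically computable.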
So, to continue the estimate: take for example a tensor network algorithm, and evolve it forwards in time, building a very complicated structure with a lot of depth, until eventually you run out of steam. You can compare that to the same simulation on a quantum processor, run for a long period of time until it too runs out of steam, and just ask which loses steam first. We can estimate the maximum depth you can simulate in a classical tensor network from the largest tensor network calculations that have been carried out for ground states. And this comes out to an effective depth of about 256; this is just a number characterizing the depth of the simulation. If we think about what this translates into in real-time units, it's roughly a classically accessible time of four units of Hamiltonian time. This is all back-of-the-envelope, but it tells you roughly how far out in time we can currently simulate on a classical computer. And if one can go further than this with a near-term quantum device, then essentially you'll have beaten all the classical algorithms for dynamics.

Okay, so that really brings me to the end and to my conclusions. The take-home messages are that classical algorithms are quite capable for many different problems, but if you tune your problems correctly, you should be able to see this quantum computing crossover, and it's clearly time to push on that frontier. Thank you.

Thanks, Garnet, for the talk. We have time for some questions.

So thank you very much, that was really interesting. I like the idea that dynamics is potentially the best application for quantum computers simulating quantum systems. What questions about dynamics are really urgent and interesting? I mean, if I wanted a $10 million dynamics question, what would it be?

Yeah, so I think it depends a lot on the discipline you're working in. For a long time it's been hard to say what we want to study with dynamics, because it's also a question of what experiments can really characterize in dynamics. But nowadays we have free-electron lasers which can pump materials into out-of-equilibrium states, and so I think one of the very interesting things people are trying to understand theoretically is: if you pump a material, excite the entire material and produce a number of excitations, can you change its phase? Can you induce new kinds of, for example, transient superconductivity and so on and so forth? Is there a dynamical phase transition? I think these are questions we can all agree are interesting. But it's true that for a long time it wasn't quite clear what one should study in the dynamical area.

Thank you for the talk. I know that what you call quantum chemistry has undergone its own quite rapid development, just as quantum computing has, and you talked about that order-of-magnitude improvement over the past decade in one example. And one question I've had for a while is: has this development of classical simulation of quantum chemistry come about from classical algorithms being developed that better tackle the quantum approximation problem, or has it taken a different direction, say, is it just all about parallelizability and that kind of thing?

Yeah, you know, some comes from both, but I think the main things have come about from having a better understanding of the problem.
In other words, having a better understanding of what typical ground states look like gives us better types of wave function ansatz and approximations, and that's probably what's led to the biggest improvement. But there has of course been a lot of technical improvement as well, ranging from parallelization to just better ways of computing numerical integrals, lots of things like that.

So, Garnet, can you hear me? Yes. Another chemist here. You focused largely on the minimum-energy structure, but of course we're interested in mapping out potential energy surfaces, and there are a lot of difficulties with transition states and strongly mixed states where the Born-Oppenheimer separation breaks down. Could you say something about what you think the current limits of those are?

On the classical side? On the classical side, and where you might get some bang for your buck on a quantum computer.

Yeah, so I focused, mainly because that's what my research is about, on calculations where the nuclei are stuck, you know, basically fixed. But there's of course a whole layer, many layers, of the field where you're concerned with how the nuclei themselves move around, move into different structures, and sample different pieces of phase space as well. I think there are two parts to that problem. One is that you need to be able to calculate the right energy for a given nuclear configuration, and you need to be able to calculate it very quickly. The other is that you need to solve the classical sampling problem: you need the nuclei to explore the classical phase space as quickly as possible. There have been many developments in both of these areas, as you know. I think the most difficult part of the sampling problem is the fact that although the atoms move on a time scale of, say, picoseconds, there are many events, for example protein folding, which take place on very long time scales, like a second. So there are on the order of 10^12 atomic motions to consider before you see one protein-folding event. How you bridge that time scale is a big issue, and there have been advances from developing specialized classical hardware for that problem. How a quantum computer might deal with that, how it could solve this time-scale problem, I don't know.

Okay. If there are no further questions, let's thank Garnet again.