Thank you very much, and let me also start by thanking the organizers, in particular David, for putting together this really interesting workshop. It's been great so far and I'm sure there are lots more interesting talks to come. The title that I put on the program was about quantum thermalization. I actually then decided to split my talk in two: I'll spend some time talking about an older work on simulating materials and then maybe the last ten minutes or so talking about thermalization, which many other people are also talking about today. So this is generally in the context of quantum simulation and the problem that we're all facing: what do we do with a relatively small quantum computer? In the simulation context there are a lot of different application areas that people have looked at. Quantum chemistry is maybe the original one. Lattice models are something that, if you ask a condensed matter physicist, comes to mind right away. But we were interested in adding another one, which is studying correlated materials. Just to give you a flavor of the kind of problems that we would like to be able to study, a typical one would be high-Tc superconductivity, which is of course one of the oldest riddles of condensed matter theory. If you could solve this problem, it would have enormous economic impact. This is a plot of how much electricity is lost in power lines in different US states; I think the average for all of the US is around 7%. If you could replace all these power lines by superconducting power lines, you could immediately save a sizable chunk of the energy costs in the US. Why is that not possible? There's just no practical way to do that right now. Superconductivity is a very old phenomenon; it's been known for a long time. But the progress on finding superconductors with higher and higher critical temperature has been pretty slow. This is a plot of most of the currently known superconductors. It starts in the early 1900s, and all the red dots are what's called conventional superconductors. Then in the 80s you find this jump, which is actually an achievement of IBM Research in Zurich. That's when you find these new compounds, the blue dots here, which are the cuprate superconductors. The key line on the vertical scale here is the temperature of liquid nitrogen. This is where it becomes a practical application, at least in some cases, because you can use liquid nitrogen to cool. But to build a power line, you would have to get much cheaper materials that are easier to grow and so on and so forth. So this hasn't become a technology yet because we just haven't been able to improve superconductors to that point. Why haven't we been able to do that? Well, the reason is that conventional theory largely fails here. Just to explain what the computation is that we would like to do: we would like to start from just the structure of the material (my pointer is giving up, unfortunately). This is a picture of the crystal structure of one of these cuprates, the so-called perovskite structure. And we would like to connect that to the phase diagram that is measured for this kind of material. For these cuprate superconductors the phase diagram turns out to be incredibly complicated, exhibiting many different phases. And establishing this connection, despite enormous amounts of effort, hasn't been successful over the last 30 years. There are two different ways you could go about this.
One way, which is in a sense the more popular one in condensed matter theory, is to look at this complicated system and say: maybe it's only these copper oxide layers that matter, and there are of course arguments for that. So you reduce this complicated system to something much simpler, like the Hubbard model in two dimensions, and then you try to solve that. But here's what we were hoping to get at: the Hubbard model could maybe explain the structure of the problem and maybe the pairing mechanism, but it doesn't tell you a lot about the details of a particular material, and it's not going to tell you how to improve your material parameters and things like that. To study that kind of problem you really need a quantitative description of the specific material compound. And so that's the problem that we would like to tackle. What's the state-of-the-art method for that right now? It's something called density functional theory, which goes back to a famous theorem by Hohenberg and Kohn in the 1960s, who showed that the lowest energy of an electronic system can be computed knowing only the density of electrons at each point in space. This is extremely powerful, right? It tells you that instead of having to find the full wave function and basically optimize over an exponential number of parameters, you really only need to optimize over the density, which is a polynomial number of parameters. So that's great, right? That sounds like we've just solved quantum mechanics right there; we've gone from an exponential problem to a polynomial problem. Of course there's a catch. To compute the energy from this density you need a functional, and computing this functional exactly is NP-hard. So while we know that such a functional exists and you can in principle compute the energy of the system, at least the ground state energy, as a function only of the density, in practice we don't know how to do that computation. So why does anyone care about this? The reason people care is that there are many approximations that work reasonably well in practice. Things like the local density approximation or the generalized gradient approximation turn out to work reasonably well for large classes of systems. But you should think of this as basically mapping the many-electron problem to an effective non-interacting problem in some more complicated potential. And so it fails if correlation effects, that is, entanglement between electrons, are really crucial for the physics of the underlying system. So while it works in weakly interacting systems, it does fail for many interesting systems, and for example it fails completely to predict high-temperature superconductivity, or actually superconductivity in general. Okay, so the natural question to ask is: can quantum computers bail us out here? Can we solve this problem on a quantum computer? So let's think about that for a moment. Let's think about just doing a direct simulation, taking, say, quantum chemistry type methods and applying them directly to this problem. There's a large unit cell, so you have in this case, I don't know, a total of 10 atoms per unit cell. Then every atom has a couple of relevant orbitals that you need to take into account. Then spatial correlations might be very important, so you're going to need to take into account maybe a few dozen, in 3D probably more like hundreds or thousands, of unit cells.
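Just to make the idea of an approximate functional a bit more concrete, here is a minimal sketch (my own illustration, not something from the talk) of the simplest such approximation: the LDA exchange term takes the exchange energy of the homogeneous electron gas, E_x = -(3/4)(3/pi)^(1/3) * integral of n(r)^(4/3) d^3r, and evaluates it point by point on the actual density. The function name and grid are just placeholders.

```python
import numpy as np

# Minimal sketch of the LDA exchange functional (Dirac/Slater exchange, atomic
# units): the exact exchange-correlation functional is unknown, so LDA evaluates
# the homogeneous-electron-gas expression locally on the density n(r).
def lda_exchange_energy(density, volume_per_grid_point):
    prefactor = -0.75 * (3.0 / np.pi) ** (1.0 / 3.0)
    return prefactor * np.sum(density ** (4.0 / 3.0)) * volume_per_grid_point

# Toy usage: a uniform density of 0.1 electrons per cubic Bohr on a small grid.
n = np.full((16, 16, 16), 0.1)
print(lda_exchange_energy(n, volume_per_grid_point=0.5 ** 3))
```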
To represent such a material directly on your quantum computer you need one qubit per orbital. You multiply it all together and you're going to need several thousand qubits to store the wave function, even in a reasonably simple case. So that's not a practical application for the kind of near-term quantum computer that we have in mind, one that is just beyond what a classical supercomputer can do. This would be a long way down the road. So the real challenge is that we need a better approach to tackle these complex materials with a small quantum computer. What we proposed is to look at this problem through a hierarchy of methods. We're not going to throw away DFT; we'll actually use DFT, but what we can do with DFT is start from this very complex multiband correlated material and reduce it to a simpler lattice model. So we reduce it to a 2D or 3D Hubbard-type model, which may still have several orbitals, so we still carry along a lot of the physics, and we try to quantitatively compute the matrix elements, such as the interaction matrix, of this Hubbard-type model using the DFT calculation. We use that as an intermediate step. In the next step we use a method called dynamical mean-field theory to take this Hubbard-type model and reduce it further to something called an impurity model, which is just a few interacting degrees of freedom coupled to a non-interacting bath. And then ultimately the idea is to solve that on a quantum computer. So we use a hierarchy of methods to reduce a complex problem to increasingly simpler problems until we find one that we can realistically study on a near-term quantum computer. And then the idea is to self-consistently solve all these equations: you can actually feed back what you learn from your DMFT calculation into your DFT calculation, improve the functional that you use, and try to self-consistently improve on this. So let me talk a little bit about this dynamical mean-field theory. As I've mentioned, what it does is take an interacting lattice system, an extended system, and reduce it to a zero-dimensional system, which is just a few degrees of freedom coupled to this non-interacting medium. Of course this doesn't come for free. It's an approximate method. It's exact in certain limits, in particular on the Bethe lattice, the Cayley tree with infinite coordination number. In all other cases it's approximate, and you should think of the approximation as disregarding the spatial dependence of the self-energy of the system. So in a sense you're throwing away spatial correlations but you're keeping dynamical correlations. And there's a way of systematically improving it: you can include more and more degrees of freedom in your impurity and in principle systematically approach an exact solution. A priori, of course, these two models seem completely disconnected. But there's something called the self-consistency condition, which ties the parameters of this non-interacting medium that your impurity lives in to the original lattice model. So there is a connection between the two which needs to be solved self-consistently. I'm not going to explain the self-consistency condition in detail; it's just important to know that there is such an equation that relates the two models to each other. In practice, the way this is solved is that you start from some guess for the parameters of this non-interacting bath. You then solve this impurity problem.
Solving it means that you have to extract the Green's function, preferably in imaginary frequencies, though you could also work in real frequencies. You take this impurity Green's function and feed it into the self-consistency equation, which gives you a new estimate for the parameters of the bath, and you iterate this process until you reach convergence. The computationally hard part of all of this is solving the impurity problem: going from this Hamiltonian describing a few interacting degrees of freedom coupled to a non-interacting bath to, say, the impurity Green's function in imaginary frequencies. Classically there's a variety of methods that do this. You could do it exactly using exact diagonalization methods, which get you to a total system size of maybe 30 orbitals and bath sites, so pretty small systems. There are all kinds of largely uncontrolled perturbative methods that people use. The main workhorse is quantum Monte Carlo, which happens to be sign-free for some problems but in general has a sign problem, and the interesting, challenging multi-orbital problems, where we're keeping a lot of the physics of the original system, tend to have a sign problem and tend to be extremely difficult for Monte Carlo. There's been some progress using tensor networks in the last few years. Also, Sergey Bravyi and David Gosset have a very interesting result on the structure of these kinds of impurity problems: they showed that there's a polynomial-time algorithm for the ground state energy and the low-energy states of such a system, which is not quite enough to solve the self-consistency, because you need the Green's function, so you need dynamical information, but it provides at least part of the picture for getting there. And so the idea was to use a quantum computer to solve this particular problem. I'm not actually going to discuss the quantum algorithm in great detail; it's kind of what you would expect from the quantum chemistry methods that are probably familiar to many people. You need to prepare the ground state of this impurity problem. In the paper we discussed doing that with adiabatic state preparation, but one could just as well use a variational algorithm. Then you need to measure the Green's function, which on a quantum computer you naturally do in real time. There are a lot of tricks you can play to improve the measurement of the Green's function, so that instead of getting just one bit every time you prepare the ground state, you try to extract as much as you can from each preparation. So there's a lot of tricks you can play, but it's basically what you would expect. And then, once you have the Green's function in real time, you need to get to imaginary frequencies. That's just an integral; it's analytic continuation, but in the easy direction, so that's not hard to do. And so the final scheme that we end up with is this. You start from some complex material. You first solve DFT. You select the relevant orbitals. You compute the Hamiltonian that you want to solve in DMFT. Then you do the self-consistency loop in DMFT, and all of that happens on a classical computer. All of this is pretty easy preprocessing, and only the final step of solving the impurity problem happens on a quantum computer. So this is an example of a hybrid method that tries to use the quantum computer only for what it's best at and use the classical computer for everything else.
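Schematically, the loop just described looks something like the following, a minimal single-band sketch on the Bethe lattice, where the self-consistency condition takes the simple closed form G_0^(-1)(iw) = iw - t^2 G_imp(iw). The impurity solver is left as a placeholder that would be replaced by ED, QMC, or the quantum algorithm discussed in the talk, and all function names are illustrative only.

```python
import numpy as np

def dmft_loop(solve_impurity, t=0.5, U=4.0, beta=50.0, n_freq=256,
              max_iter=100, tol=1e-6):
    """Single-band DMFT self-consistency on the Bethe lattice (sketch only).

    solve_impurity(g0, U) -> impurity Green's function G(iw); this is the
    computationally hard step that would run on the quantum computer.
    """
    iw = 1j * np.pi * (2 * np.arange(n_freq) + 1) / beta  # Matsubara frequencies
    g0 = 1.0 / iw                                         # initial guess for the Weiss field (bath)
    for _ in range(max_iter):
        g_imp = solve_impurity(g0, U)                     # impurity Green's function
        sigma = 1.0 / g0 - 1.0 / g_imp                    # self-energy from Dyson's equation
        g0_new = 1.0 / (iw - t**2 * g_imp)                # Bethe-lattice self-consistency condition
        if np.max(np.abs(g0_new - g0)) < tol:
            break
        g0 = 0.5 * g0 + 0.5 * g0_new                      # damped update for stability
    return g_imp, sigma
```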
And then you self-consistently solve the DMFT equations, and you can also self-consistently feed that back into the DFT solution. Okay, now of course the question is: is this practical? How expensive is this? The number of qubits required is pretty easy to figure out. It's basically the number of orbitals you want to keep, so the number of degrees of freedom in your interacting system, plus the number of bath sites you want to keep. And as usual we expect an advantage once that number goes above 50 or so; that's when exact methods like exact diagonalization give up. The real question is the time requirement: how many gates do you need to run to do this? As usual we can estimate the worst-case scaling, but the prefactors are generally unknown, so the worst-case scaling may not be very useful. So what we do is the usual trick: we classically simulate a quantum computer. We take a big classical computer and run the quantum algorithm on it, which of course takes really long, so this is a terrible way to solve the impurity model, but it tells us something about the gate counts and what's required to do this. What we do is a very simple, bare-bones example. We have one spinful degree of freedom on the impurity and five spinful bath sites, which ends up being a 12-qubit system, and then we measure the Green's function on a grid of a thousand time points ranging from 10^-5 to 50, logarithmically spaced of course. At each time point we take a certain number of measurements, and all of this is done at pretty low temperature. This is the self-consistent solution of the DMFT equations that we get by emulating this quantum algorithm. What is shown is the spectral function in the two phases that this model has, the Mott insulating phase and the Fermi liquid phase, and you see what you expect to see: the Mott bands at frequencies plus and minus U/2, and in the Fermi liquid you see that you have spectral weight at zero energy. All of this is exactly what you would expect. Also as you would expect, it gets better and better as you take more measurements, and we can get away with maybe a few hundred measurements per time point. So what does it take to do this calculation on a quantum computer? We have a thousand time points and 400 measurements per point. We need to measure the two spin components, the particle and hole components, and the real and imaginary parts of the Green's function. You multiply it all together and you need to do about 3 million measurements to produce that plot. If you play all the tricks that you can come up with and enumerate this, you do about 200,000 separate runs where you prepare the ground state from scratch, you need to do quantum phase estimation about 7 million times, and then you need to run the measurement circuits, and if you add it all up you end up with about 10^12 gates. I should emphasize that this is not one run of 10^12 gates; it's 200,000 separate runs that add up to 10^12 gates together. So you only need to be coherent for about 10^7 gates. Only, yes. So it's in line with, I think, what's required for quantum chemistry and a lot of other applications that people have looked at.
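For the record, the measurement bookkeeping above multiplies out like this (numbers as quoted in the talk; the grouping into factors is my own paraphrase):

```python
time_points = 1000       # logarithmically spaced grid for G(t)
shots_per_point = 400    # measurements per time point
components = 2 * 2 * 2   # two spins x particle/hole x real/imaginary part

total_measurements = time_points * shots_per_point * components
print(total_measurements)  # 3_200_000, i.e. about 3 million measurements
```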
And then you can think a little bit about the scaling. It's roughly cubic in the number of orbitals you want to keep; that's the number of Hamiltonian terms. The number of measurements is quadratic in the number of orbitals you keep, and then of course there's the question of how much it takes to prepare the states, which depends on the physics of your system. So I think the bottom line is that the gate counts are definitely very challenging, but they're not a fantasy; it's not 10^20, I mean it could be worse. But it's definitely true of this application, and I think this is also true for lattice models and for quantum chemistry, that we find very relevant and interesting problems for small quantum computers, in the range of 50 to 100 logical qubits, but generally all of these problems seem to have very large gate counts. I don't think there's anything we found that we can do with, say, less than a million gates. And so what I would like to spend the remainder of my talk on is the question of whether quantum dynamics is a different area, where useful applications might come sooner. Of course quantum dynamics is naturally the first thing that should come to mind: what a quantum computer does is quantum dynamics, so you would think that they're very good at this. Let me be a little more specific about what I have in mind. When you say quantum dynamics or dynamical probes to a condensed matter theorist, you might think of something like this, which is low-energy dynamical properties, say a dynamical structure factor, which is probed in a neutron scattering experiment. This is neutron scattering on herbertsmithite, one of the most interesting frustrated magnets. This is not the kind of thing I have in mind. Another thing that I don't have in mind is non-equilibrium dynamics, like, say, pump-probe spectroscopy. What these problems have in common is that you prepare the ground state and then you have some dynamical probe: you hit it with neutrons or you hit it with a strong laser pulse and then you watch the relaxation of the system. But that still requires you to prepare the ground state, or at least the low-energy states, and I want to get rid of that entirely. The class of problems where we can get rid of that is one that's maybe of more academic interest than some of these problems, but has definitely received a lot of attention in condensed matter recently, which is the question of how statistical mechanics emerges in a closed quantum system. So basically the question of quantum thermalization. There are questions such as: how does an ensemble of microstates emerge if your system is globally in a pure state? Or how do the irreversible dynamics that we expect in statistical systems emerge if the evolution of your system is just unitary, just Hamiltonian? For a lot of these questions an answer was conjectured in the form of the eigenstate thermalization hypothesis in the 90s, which posits, and I'll explain this in more detail, that the system can essentially serve as its own bath: its local properties look thermal, even if the entire system is very far away from being thermal. And just to make that point, the reason why I think this is a particularly interesting application is that this is a high-temperature phenomenon, or actually, in the appropriate sense, an infinite-temperature phenomenon. So you don't need to worry at all about state preparation. You don't need to prepare a low-energy state or an eigenstate of your system.
You can do it in some sense for arbitrary initial states, in what we call the infinite-temperature ensemble. So let me explain this eigenstate thermalization hypothesis in a little more detail. The setup is: consider just a local Hamiltonian, whatever, 1D, 2D, the details really shouldn't matter. And there are a couple of different ways to look at it. First let's look at it from the perspective of eigenstates. Prepare an eigenstate of this Hamiltonian at finite energy density. This is a really important point: not just at finite energy, but I want the energy density to be finite. I want the energy to be extensive; there should be a large number of excitations. And then look at only a patch of your system, some small subsystem of your entire system. What ETH states is that this small subsystem looks thermal. So the reduced density matrix on this small subsystem essentially looks like a Gibbs state for some other Hamiltonian, and this other Hamiltonian should be closely related to the Hamiltonian you started from: your original Hamiltonian restricted to that subregion, and then you need to deal with the boundaries in some smart way. Another way to look at this is in the dynamical context. Here you initialize the system far from equilibrium. You could prepare some eigenstate and then rapidly change the parameters of your system, or you could just take, say, a random product state, something that is very far away from an eigenstate of the system. You evolve under the Hamiltonian to reasonably long times, and if you look at time-averaged observables, you should find again that these time-averaged observables are very close to the thermal expectation values for that same system. What do we mean by very close? One thing is that expectation values of local operators should match both the microcanonical and the canonical ensemble. A lot of other interesting signatures are in the entanglement of the system. What ETH implies is that, as opposed to ground states, which have an area law, these systems should have a volume law: the entanglement entropy of some patch of the system with the rest of the system should scale as the volume of that patch and not just as the size of its boundary. What is the evidence that this is actually true? Starting in about 2005, 2006, 2007, people started to look at this numerically in a lot of detail. This is a Nature paper from Marcos Rigol, which I think was the first convincing numerical evidence that ETH actually holds. The top plot is showing an observable, in this case the momentum distribution function, following a quantum quench. The calculation is a small Hubbard model: you divide the system in two, prepare the ground state of that, and then you open the barrier between the two halves and let particles go across. You wait long enough, and what you find is that something like the momentum distribution function matches very closely between the diagonal ensemble, which is the long-time average, the microcanonical ensemble, and individual eigenstates; all three of these match very closely. You can also see that the eigenstate expectation values are a smooth function of energy, which of course you also need if you want to match the canonical ensemble. Some more recent evidence: this paper focused on the entanglement properties and showed that not only do you have a volume law, but the coefficient of the volume law is thermal.
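To make the eigenstate version of this statement concrete, here is a minimal toy check (my own illustration, not the calculation from the papers just mentioned): in a small non-integrable Ising chain, the expectation value of a local operator in a single mid-spectrum eigenstate lands close to the microcanonical average over nearby eigenstates. The model parameters are a commonly used non-integrable choice; everything else is a placeholder.

```python
import numpy as np

def ising_chain(L, J=1.0, hx=0.9045, hz=0.8090):
    """Transverse+longitudinal field Ising chain (non-integrable, ETH-obeying),
    open boundaries. Returns the dense Hamiltonian and sigma^z on the middle site."""
    sx = np.array([[0, 1], [1, 0]], dtype=float)
    sz = np.array([[1, 0], [0, -1]], dtype=float)

    def op(single, site):
        out = np.eye(1)
        for j in range(L):
            out = np.kron(out, single if j == site else np.eye(2))
        return out

    H = np.zeros((2**L, 2**L))
    for i in range(L):
        H += hx * op(sx, i) + hz * op(sz, i)
    for i in range(L - 1):
        H += J * op(sz, i) @ op(sz, i + 1)
    return H, op(sz, L // 2)

L = 10
H, sz_mid = ising_chain(L)
energies, states = np.linalg.eigh(H)

k = len(energies) // 2                       # a single eigenstate in the middle of the spectrum
eev = states[:, k] @ sz_mid @ states[:, k]   # eigenstate expectation value of a local operator

# Microcanonical average: all eigenstates within a narrow energy window around E_k.
window = np.abs(energies - energies[k]) < 0.05 * (energies[-1] - energies[0])
micro = np.mean([states[:, j] @ sz_mid @ states[:, j] for j in np.where(window)[0]])

print(f"single eigenstate: {eev:.4f}   microcanonical window: {micro:.4f}")
```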
So the entanglement entropy of some small region of your system is very close to the thermal entropy that the system would have in equilibrium in the canonical ensemble. And you can also look carefully at how it converges to that as you increase both the system size and the size of the subregion, and you see pretty rapid convergence. So ETH seems to be satisfied to very good accuracy. What are the limitations of this? It's generally believed that ETH is extremely generic. There are very few ways that we know to get out of this behavior, and all of them are related to having the dynamics restricted in some way by conserved quantities. The simplest way is to have a quadratic system, say non-interacting fermions. Their dynamics is just too constrained, so they equilibrate only in a very particular sense: they go not to the usual canonical ensemble but to a particular generalized ensemble, and so here we would say that ETH is violated. Another example is what we call integrable systems in condensed matter, systems that are solvable by the Bethe ansatz. These also have a very large number of conserved quantities, and so their relaxation is too restricted. Both of these are fine-tuned systems, though. If you take a free-fermion system and you add weak interactions, it's generally expected that it behaves according to ETH. Similarly for these integrable systems: if you perturb them away from the fine-tuned integrable points, this property goes away. So the only robust violation of eigenstate thermalization that we know of is called many-body localization, and it occurs in disordered systems. You imagine electrons moving in a very strong disorder potential, and this disorder potential is too strong to allow particles or energy to diffuse through the system. So if you have, say, some initial inhomogeneity in the energy distribution in your system, that can never relax; basically the walls that your disorder potential builds are too high to tunnel across, and you can never relax to a thermal state. This is actually a very old question in condensed matter. It was first studied by Anderson in the 50s, who solved the case of non-interacting electrons but already mentioned the interacting case. Then in the early 2000s two groups, a group in Karlsruhe and a group at Columbia, looked at this problem and did a perturbative calculation to all orders showing that localization can persist against weak interactions. And then just a few months later, really, this was picked up by David Huse and his collaborators, who brought this down to a numerically tractable level. In particular they looked at models with a limited bandwidth, so a bounded spectrum, and argued that in that case maybe all eigenstates can localize. And that makes it very easy now: you just need to look at the ensemble of all eigenstates, which we call the infinite-temperature ensemble, and you can check whether all of them look localized. This really opened up the floodgates to numerical studies. Starting in about 2010 or so, there's been a huge proliferation of numerical studies, and at this point we have overwhelming numerical evidence for this. We've characterized many-body localization in many different ways, from entanglement to other properties. There is in fact a rigorous proof of the existence of such a many-body localized phase in a particular one-dimensional system. And there is experimental evidence in cold atom systems.
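One of the standard diagnostics used in those numerical studies is the ratio of adjacent many-body level spacings, which tends toward the random-matrix value of roughly 0.53 in an ergodic system and toward the Poisson value of roughly 0.39 deep in the localized phase. Below is a minimal toy version for a random-field Heisenberg chain; it's a sketch of the technique under illustrative parameter choices, not production numerics.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def op(single, site, L):
    """Embed a single-site spin operator at `site` in an L-site chain."""
    out = np.eye(1)
    for j in range(L):
        out = np.kron(out, single if j == site else np.eye(2))
    return out

def random_field_heisenberg(L, W, rng):
    """Heisenberg chain with random z-fields drawn from [-W, W], restricted to
    the zero-magnetization sector (total S^z is conserved)."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L - 1):
        for s in (sx, sy, sz):
            H += op(s, i, L) @ op(s, i + 1, L)
    h = rng.uniform(-W, W, size=L)
    for i in range(L):
        H += h[i] * op(sz, i, L)
    sector = [b for b in range(2**L) if bin(b).count("1") == L // 2]
    return H[np.ix_(sector, sector)]

def mean_gap_ratio(H):
    """Ratio of adjacent level spacings, averaged over the middle of the spectrum."""
    E = np.linalg.eigvalsh(H)
    E = E[len(E) // 4 : 3 * len(E) // 4]
    gaps = np.diff(E)
    r = np.minimum(gaps[1:], gaps[:-1]) / np.maximum(gaps[1:], gaps[:-1])
    return r.mean()

rng = np.random.default_rng(0)
for W in (1.0, 8.0):                          # weak (ergodic) vs strong (MBL-like) disorder
    rs = [mean_gap_ratio(random_field_heisenberg(10, W, rng)) for _ in range(20)]
    print(f"W = {W}:  <r> ~ {np.mean(rs):.3f}")  # roughly 0.53 (GOE) vs 0.39 (Poisson)
```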
So just very briefly, what do we understand and what do we not understand about MBL? As I said, the characteristics of the eigenstates are really well understood in the MBL case. I think the dynamical behavior is also well understood, how entanglement spreads and so on. There's a lot of understanding of the novel phenomena that you can get from many-body localization. For example, you can get interesting things such as topological phases at finite temperature, and you can get new non-equilibrium phenomena such as time crystals, systems that spontaneously break time translation symmetry. So there's a lot of interesting new physics that has come from studying these MBL systems. But at the same time there are also a lot of things we don't understand. The most important one may be the nature of the phase transition. We know that there has to be a phase transition between the many-body localized phase and the ergodic phase, but we really know very little about that transition. There are essentially no controlled numerics for large enough systems to tell us what the nature of that phase transition is. What's even driving the transition? How should we describe it? Is there a field theory? All of these things we don't know. In higher dimensions there are a lot of questions. There are some experiments that suggest the existence of MBL in higher dimensions, but I think this is still pretty controversial. And then there are a lot of questions surrounding stability: what happens if you couple an MBL system to an external bath? What happens if you have noise in your system? I think all of this is very poorly understood at this point. So as a computational problem, how do we go about studying these systems? If you're interested in excited eigenstates, and the key point here is really excited, the fact that we're looking not just at low-energy states but at states somewhere in the middle of the many-body spectrum, states that have an extensive number of excitations, means that most of our numerical methods, which are tailored to low-energy states, basically go out the window. We have to start from scratch thinking about this problem. The first thing you do is just fully diagonalize your Hamiltonian. Just use LAPACK or your favorite linear algebra routine. That gets you to 16 to 18 qubits roughly; that's where we generally run out of steam. If that doesn't quite do it, you can try sparse diagonalization methods, Krylov methods. This is a little bit difficult, because Krylov methods generally solve for exterior eigenvalues, the bottom or the top of your spectrum, but there are tricks to move your spectrum around. It's called the shift-and-invert method, which turns interior eigenvalues into extremal eigenvalues. You can push this, and I think the record for practical calculations is somewhere around 22 qubits. So these are exact methods. Then you can look at approximate methods, for example adapting the DMRG method to MBL systems. DMRG is of course originally intended for low-energy states, but it turns out that you can adapt it, and that gets you to maybe 50 to 100 qubits, but it really only works very deep in the localized phase and breaks down as you approach the transition. And then, of course, there are uncontrolled methods that guess the structure of the problem and try to make progress with that.
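The shift-and-invert trick mentioned above is worth spelling out: to reach eigenstates near a target energy E0 in the middle of the spectrum, one looks for the extremal eigenvalues of (H - E0)^(-1), which a sparse iterative solver can handle. Here is a minimal sketch using SciPy's sparse eigensolver on a random sparse stand-in Hamiltonian (not an actual many-body H; the dimensions and density are arbitrary illustrative choices).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

dim = 2**12                                  # stand-in Hilbert space dimension (~12 qubits)
A = sp.random(dim, dim, density=1e-3, random_state=42, format="csr")
H = (A + A.T) * 0.5                          # sparse symmetric stand-in for a many-body Hamiltonian

E0 = 0.0                                     # target energy in the middle of the spectrum
# sigma triggers shift-invert mode: eigsh returns the eigenvalues closest to E0,
# by finding the largest-magnitude eigenvalues of (H - E0)^(-1).
vals, vecs = eigsh(H, k=6, sigma=E0, which="LM")
print(np.sort(vals))
```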
These uncontrolled methods can get to, of course, hundreds of thousands of sites. Now, if we think about the dynamics of the system, I should explain the axes here: the methods are plotted as a function of the time they can reach and of the system size they can treat, and there's obviously a trade-off here too in the case of classical methods. If you look at full diagonalization, that gets you basically to arbitrarily long times, limited only by the numerical precision of extracting the eigenvalues, but it is limited to very small volumes. You can again look at sparse exact methods, where you exactly represent your state but then do a Krylov-type time evolution, and that maybe gets you to 40 to 50 qubits; 50 is pushing it, but 40 seems possible. Then you can do time evolution using matrix product states. There the system size is really not your constraint, you can get to hundreds or even millions of sites, but you're usually very limited in the time scale you can reach. And then, again, there are uncontrolled methods: you can do time-dependent DFT, you can do time-dependent dynamical mean-field theory, which work in some cases but don't work in the general case. Just to spend a little bit more time on this. For full diagonalization, the memory is basically 2^(2L), because you have to store a dense matrix of size 2^L by 2^L, and the time is cubic in 2^L, so that gets very expensive very quickly. For the sparse methods, the memory is only linear in the size of the Hilbert space, and the time, well, I'm not a computer scientist, so I'm sure these bounds aren't strictly right, but the intuition is that it's roughly linear in the time you need to reach. So you should be limited only by memory and by your patience. And then there are the matrix product state methods. Here the cost is linear in the system size; the other parameter is the accuracy of your representation, which we call the bond dimension, and the cost is really driven by how large you have to choose this bond dimension. The algorithm is cubic in the bond dimension, and the bond dimension has to be chosen roughly exponentially large in the entanglement of the system. So you should think of this as being exponentially hard in the amount of bipartite entanglement in your system. In practice, we can get to something like 4 to 6 for the entanglement; that's where you are with something like 5,000 to 10,000 states in the bond dimension. So if you now look at the physics: if we have no growth of the entanglement in time, say we're doing an adiabatic evolution near the ground state, we can go to very long times. If we have logarithmic growth of entanglement, which is what we have in an MBL system, then we only have to grow the bond dimension polynomially in time. And if you have linear growth of entanglement, which is what you have in an ETH system, the cost grows exponentially in time. So, to summarize how this plays out in the different phases: on the ergodic side, for a system that obeys ETH, classical simulation is generally very hard, since the entropy grows linearly in time; these MPS-based simulations work for relatively large systems, but only for very short times. But there's an interesting catch to this. Because the system thermalizes, the long-time dynamics are just described by thermal equilibrium.
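To spell out the cost argument just made, here is a toy back-of-envelope version (the prefactors are arbitrary illustrative choices, not fits to any particular model):

```python
import numpy as np

def mps_cost(entropy):
    chi = np.exp(entropy)          # bond dimension needed to capture entanglement S
    return chi ** 3                # leading cost of an MPS/TEBD-style sweep

for t in (1, 10, 100):
    s_eth = 0.1 * t                # ETH: linear-in-time entanglement growth
    s_mbl = 0.5 * np.log(1 + t)    # MBL: logarithmic entanglement growth
    print(f"t={t:>4}   ETH cost ~ {mps_cost(s_eth):.2e}   MBL cost ~ {mps_cost(s_mbl):.2e}")
```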
So the long-time behavior you can compute not with dynamical methods but with some other method, a mean-field method, whatever. For example, in the example that Jens showed, the long-time behavior of this density just goes to one-half, so that you could get just from mean field. So there's an interesting catch there. Also, non-local properties of course don't thermalize; thermalization is only for things that are local. So some complicated correlation function would not thermalize. This is, of course, from a physicist's point of view, the trick that these quantum supremacy proposals are playing. They're doing a time evolution of some Hamiltonian with very rapidly varying parameters, so it's very strongly ergodic and it very quickly thermalizes, but you carefully engineer some correlation function that doesn't thermalize, for example just measuring all qubits in the computational basis, which is a very complicated correlation function. From the point of view of quantum simulation, I think it's important to keep in mind that ergodicity is the most generic case in a quantum system. If you just build a bad quantum computer, it'll be ergodic; it'll show quantum chaos. That in itself is not a surprise. So that's something to keep in mind: if you just build a poor quantum computer, it'll basically behave like an ETH system. On the localized side, the entropy in an MBL system grows only logarithmically in time, so in principle you should be able to go to very long times, at least in one dimension where we have MPS methods. The catch here is that a lot of the relevant physics questions are about very long-time dynamics. For example, this question of whether MBL exists in two dimensions is really a question about the long-time behavior: is there an exponentially diverging time scale, or is there only a parametrically large time scale? So even if we have fairly efficient methods, that's still hard to reach in practice, because we can't push to very long times. And here, in the case of quantum simulation, because MBL is generally expected to be unstable to noise, I think that seeing MBL on a quantum computer actually demonstrates excellent levels of control: you have to be able to simulate the evolution with very little noise and very little coupling to the environment, or the behavior goes away, as opposed to the ergodic side, which is kind of the generic case on a quantum computer. So I think that's kind of interesting. The MBL transition you can think of as a transition in the classical simulability of the system, but it's sort of reversed: in a sense, MBL is classically easy but difficult to achieve on a quantum computer, whereas ergodicity is classically hard, but on a quantum computer it's in a sense easier, at least if you're looking at local properties; of course predicting specific instances and complicated correlation functions is still hard. So basically, to wrap up, what does that leave us with in terms of interesting problems for a quantum computer? I think an interesting challenge is to test the limits of ETH. For example, identify operators that show deviations from this thermal behavior, or look at the crossover from integrability to ergodicity. This is actually something we're working on at Station Q, and I was hoping I could show some results, but I don't have anything to show quite yet; it is something we're actively working on.
The MBL transition is a natural application, because this is really in between the cases that we can classically simulate; here we really have no good classical methods at all. But just to add a caveat, there's of course competition from analog quantum simulation. In particular, this is the kind of problem that's ideally suited to cold atoms, and there have been a bunch of interesting experiments there. Ian already showed a bunch of this. But, and this is hard to see here, I was going to point at this particular plot, they are severely limited in the time they can go to, because they have coupling to the environment; the main source is that they're just losing particles from their traps, and so they cannot really access very long-time dynamics. So I think this is an opportunity where maybe a quantum computer could be better. And with that, let me just fast-forward to my conclusions and thank you for your attention. Questions? Yeah, so, is this working? Yeah, so a long time ago, well, I was interested in this question of dynamical localization as some demonstration of control and quantumness. In fact, we have a paper from, I don't know, 2005 maybe, back when four-qubit NMR experiments in liquid state were exciting, looking at dynamical localization in a quantum chaotic map. And as you know, there's this kind of quasi-heuristic proof that dynamical localization in quantum chaos and the kicked rotor maps onto Anderson localization, I think in 1D. Here's the problem. I was interested in this as a supremacy test as well, but the problem is that the time scale on which localization occurs is polynomial in the Hilbert space dimension of the system you're simulating, at least for these kicked rotor type maps. And so the gate complexity might be just through the roof. But it would be interesting to see if looking at it at the eigenstate level is more achievable. Yeah, I think looking at it at the level of eigenstates is actually a bit challenging, because of course preparing eigenstates in the middle of the spectrum is difficult even on a quantum computer. I have a few slides on that which I skipped. We thought about that a bit, but that's hard. But in the dynamics there should be things you can see on time scales that are maybe linear in system size, not Hilbert space dimension, just physical system size. So it shouldn't be too bad, I hope. I think that, yeah, that's probably an artifact of the mapping of this sort of non-interacting problem, I would guess; I'd have to look at the details. Okay, let's take one more question. Yeah. That's actually an interesting question. I think we don't really quantitatively understand what level of noise destroys MBL, so that's actually an interesting physics question in itself. Okay, due to time, maybe we'll postpone further questions to the tea break, so let's thank our speaker again. Thank you.