Thank you, Antonello. Thank you all for being here. Thanks to the organizers for putting together this wonderful school and for inviting me. So the title of my lecture is Ergodic and Non-Ergodic Quantum Dynamics, or a little more technically, Thermalization and Localization in Many Body Quantum Systems. And over the course of these lectures, I'll explain to you what I mean by thermalization in the particular setting I'll be talking about, and what I mean by localization. And I'll also spend some time talking about the transition between a thermalizing phase and a localized phase, OK? So all right, it's the last week. It's the end of the day. You all want to be at the beach. So what are you going to get out of this? Why is this important? Over these last two weeks, you've already heard many, many wonderful lectures on many topics ranging from frustrated magnetism to superconductivity to topological phases and the quantum Hall effect. And indeed, a central enterprise in condensed matter physics over the last century has been thinking about phases of matter and the transitions between them. And we've made tremendous progress in understanding how to think about different phases. We have lots of frameworks for understanding this. For example, we know that temperature, symmetry, and conserved quantities play a big role in how we think about phases of matter. So if you have a magnet at high temperatures, it could just be disordered and paramagnetic. But as you lower your temperature, it orders into a nice ferromagnetic state. So this has really been the bread and butter of condensed matter physics for decades. But all of this progress has been built on the framework of equilibrium statistical mechanics. So equilibrium statistical mechanics allows us to say useful things about macroscopic quantum systems with many, many degrees of freedom.
As you may recall from your statistical mechanics course, a phase of matter is something that's only well-defined in the thermodynamic limit. So you want to think of a system with 10 to the 23 degrees of freedom. And how do you describe such a system? Individually tracking the position or the momentum or the charge or the spin of every one of these particles is a prohibitively expensive task. So instead, what you do is coarse-grain: you talk about some statistically averaged properties of the system. So in a usual textbook formulation of statistical mechanics, you have some system. It may have a few conservation laws; maybe energy is conserved, maybe the total particle number is conserved. And then what you assume is that here is your system of interest, and it's in contact with some very large bath or reservoir, which is just something that's hanging out there for your convenience. You don't really talk about the bath or reservoir in any great detail. But the bath serves as a source with which the system can exchange energy and particles. So this system is interacting with the bath, exchanging energy and particles. And over some time, the system reaches a state of thermal equilibrium. And this equilibrium state is now characterized by only a few average parameters, for instance temperature, which is coupled to the energy, or chemical potential, which is coupled to the particle number. And then we can take this equilibrium state, and we can evaluate various observables in it, like correlation functions. If you want to know, is the equilibrium state at a given temperature magnetically ordered, then you go in and you measure some magnetic correlation function in this equilibrium ensemble and see what answer you get. So this is the basis on which everything has been established. However, you can ask, what about isolated many body quantum systems? What if there is no bath?
This is really a topic which goes to the fundamentals of quantum statistical mechanics, right? It's a very, very foundational question. Why do we need this extra bath hanging around? If you have the system and the bath, you can just combine them into a bigger system and say, well, that's my whole system. Is that OK? How does that behave? So you can successfully keep doing this till your entire universe is your system of interest. And you can ask, what is the statistical mechanics of this isolated quantum system? So the focus of these lectures is going to be describing the dynamics of isolated, strongly interacting many body quantum systems. There's no external bath; it's just the system as a whole. So these could be systems of spins, atoms, qubits, whichever degrees of freedom you want to imagine. And we're going to be in this very, very strongly interacting limit. For the majority of these lectures, I'll actually be at very high energy densities or temperatures. I'll actually be at infinite temperature. So we're not going to have any nice descriptions of quasi-particles or any nice analytical handles that we often have at low temperatures. So this is this very messy, strongly interacting many body quantum system. And we're going to ask, what are the dynamics of the system in time? And really, this is a topic that is not only important from this foundational point of view. Here's quantum statistical mechanics. Why is there this big hole? And why are we only now realizing that there's this big hole that we haven't really addressed? This question could have been asked many, many decades in the past, really, when quantum mechanics itself was being formulated. So it's interesting from just a fundamental point of view. But it's also a topic that's really at the junction of many topical areas in condensed matter and quantum information and quantum gravity and atomic, molecular, and optical physics.
So to just, again, give you one motivational example, think about quantum gravity, for instance. There's the famous black hole information paradox, which says that if you treat the black hole as a quantum mechanical system, so if it's a closed quantum system and it's undergoing what we call unitary time evolution, then in principle, there can be no loss of quantum information about this black hole. Everything in this black hole is preserved forever. However, if you look at the black hole radiation, then Hawking said that this radiation can only depend on a few small parameters like the temperature or charge of the black hole. This is very much like thermal equilibrium. I can take this bottle of water. I can prepare it in some very non-generic initial state. But after a while, the equilibrium state is only defined by a few small parameters. But all that information is still there. Where did it go? So that's the black hole information paradox. And it really comes from thinking about this tension between unitary time evolution on one hand, which preserves all information forever, and some kind of effective equilibrium state or some radiation, which is characterized only by a few parameters like temperature. So I'll talk a lot more about what I mean by reaching equilibrium in this setting of closed quantum systems. And to address this question of why now, really a large part of that answer connects with what you may have heard from Immanuel Bloch during his lectures last week, which is that in the last decade or so there has been tremendous progress in engineering very, very well-isolated fundamental building blocks with which to explore physics in a laboratory setting. So these could be systems of ultra-cold atoms, nitrogen vacancy centers, Rydberg atoms, trapped ions; these are just a few examples of designer quantum systems that are, to a very, very good approximation, isolated from their environment.
So in these systems, they have a great degree of tunability. You can engineer some kind of Hamiltonian. And for long periods of time, the system acts effectively isolated. And you can ask, what does this closed quantum system do? So good. So that's our question. What are the dynamics in this setting of highly excited, strongly interacting, many body systems? And what is the standard assumption in statistical mechanics? So if you took this system and just put it in the box that you studied in your textbooks, well, then you'd say, OK, this is this huge, messy, interacting system. Surely it has to go to thermal equilibrium at late times. Things just settle down in this messy system. It goes to equilibrium. And then I can use all the tools that I had in the past. I can treat my equilibrium state. I can measure correlation functions in it. I can measure observables in it. And I can bootstrap everything that I had done in the past to this setting. But must this always be true? Does this system always go to thermal equilibrium? And of course, the point of these lectures is that the answer is no. And you get all sorts of new possibilities when this fails. But before we get there, we have to even ask, what does thermal equilibrium mean in this context? And I'll get to how we define equilibration in this context. And then you can ask, OK, when it reaches thermal equilibrium, how is it reached? What's microscopically going on in this big system that enables it to get to equilibrium at late times? So to start with, must this always be true? I just want to flag this really visionary paper by Phil Anderson, the founder of modern condensed matter physics, from 1958. Anderson really provided the first example of a system which is many body localized. And by localized, we mean this is a system which fails to go to thermal equilibrium. So Anderson had in mind a system of spins in a semiconductor, and the spins were randomly placed.
And he was asking about the state of these spins at late times. And look at this. I don't know if you can read it, so I'll read it out. But he says, this is interesting as an example of a real physical system with an infinite number of degrees of freedom, having no obvious oversimplification, in which the approach to equilibrium is simply impossible. So in this system, which Anderson was thinking about in 1958, even though it was this large, messy system, it never reached local equilibrium. It was localized or stuck at all times. And then he says, and third, in particular, it re-emphasizes the caution with which we must treat ideas such as the thermodynamics of spin systems when there is no obvious contact with a real external heat bath. So most of the recent progress in this topic is actually over the last decade or so. Anderson posed this question back in 1958, and it's just been simmering. And in the last decade, this confluence of numerical progress, experimental techniques, and theoretical advances has come together to put this back at the forefront of inquiry. So this is Anderson's question: can an isolated, strongly interacting many body system act as its own bath, if you will? There's no external bath. But can the system act as its own bath, in some way that I'll make precise in a bit, and bring its subsystems to thermal equilibrium? And from what we know now, there are two generic possibilities with a sharp dynamical distinction at late times and for large sizes. So we have two answers to this question at a very coarse-grained level. And the distinction between these is only sharp once you take these limits of long times and large sizes. And the two answers are: yes, the system is thermalizing, so it does eventually reach thermal equilibrium at late times; and no, the system is many body localized.
And further, what you'll find is that by tuning some parameters in your problem, or by tuning the temperature or energy, you can drive a kind of phase transition between a thermalizing phase and a many body localized phase. And by its very nature, this is an absolutely new kind of quantum phase transition. So all the transitions that you studied so far were between two systems in equilibrium. Here is a ferromagnet in equilibrium and a paramagnet in equilibrium. Statistical mechanics holds on both sides in the usual way. And how does the system go from one phase to another? So then you can look at some free energy, study how the free energy behaves, look at its derivatives, and so on. But this is a transition between a phase in which quantum statistical mechanics breaks down on one side, while on the other side your usual description of statistical mechanics holds. So it's a completely new kind of quantum phase transition which involves a restoration of statistical mechanics. And how to understand this and the properties of this transition are things that we're actively working on, and we really don't have a full solution to this problem. So we can add a few extra layers here, too. I said the transition between these two will be a new kind of phase transition. But these may not be the only two options, thermalizing and MBL. And indeed, there is the question of figuring out the full scope of all possible dynamical universality classes, if you will: what are all the dynamical possibilities that the system could reach? Maybe a system is not many body localized, but it takes exponentially long in system size to reach thermal equilibrium. So for all practical purposes, that could look localized. So there are various nuances you can add to this to ask about the full range of possibilities.
And then what's even more interesting is that in this many body localized setting, which evades the usual rules of quantum statistical mechanics, you can get brand new types of possibilities for what's allowed. So we have all sorts of principles about what should happen in thermal equilibrium at some finite temperature. But once you've stepped out of this regime of thermal equilibrium, once you're out of equilibrium, suddenly you've got this whole new space in which you can play, and you can get all sorts of new possibilities. So one example that I won't have time to get into, but I'll briefly touch on, is a topic that's received a lot of attention recently, which is time crystals. Time crystals are a dynamical phase of matter which has been proven not to exist in equilibrium. But in this out of equilibrium, many body localized setting, suddenly they can show up. So this is going to be interesting from these two points of view. First is just to get this new kind of transition that's not been studied before. And secondly, to see what kinds of new physics you can get in this out of equilibrium setting. So good. So now let's turn to what exactly we mean by thermal equilibrium and out of equilibrium. How do we define this? So this is our question. Can unitary time evolution bring a system to equilibrium at late times? Yes, so I'll come to that in a bit. But there's no notion of a gap here because you're at infinite temperature. A gap is relevant if you're thinking of zero temperature phase transitions. So there are zero temperature metal insulator transitions that people have studied and that there are experiments on. That's not the regime I'm going to be thinking in here. You're in the middle of some many body spectrum. Your spectrum has 2 to the L energy levels and you've parked yourself right in the middle of it. So that's like infinite temperature. So there's no gap.
There's nothing like that. So it's going to be the singular rearrangement in the properties of your eigenstates, as I'll discuss. And what you'll find is that this transition is completely invisible to thermodynamic ensembles. You can look at a Gibbs ensemble as you tune some parameters, and it'll see nothing. But if you look at the dynamics of the system, the time dynamics or the eigenstates which encode the dynamics, then you will see this transition. So we'll get to that. But that's exactly why this is so hard to study. Because we can't use any of these usual intuitions about a gap closing or states being continuously connected and so on. All right, so what does thermal equilibrium mean in this setting? So again, usually when you think of thermal equilibrium, you think that you prepared the system. You could have prepared it in a horribly non-generic state. For instance, in this room, I could have started with all the gas molecules in that one corner. But over time, the gas molecules spread out everywhere. And the system loses memory of this non-generic initial state and is only described by a few small parameters like temperature or chemical potential. I shake this up, but then very soon it has forgotten the state in which it started. But in this system, which is undergoing unitary time evolution, which is this closed quantum system, you can see that you can start with some initial state psi, and it's just going to evolve under a unitary operator. And I can consider, oh, I should have raised the boards. Let me do that quickly. Actually, while I do this, before I forget: tomorrow I'll be doing some numerics too. So you guys learned some Python from Chris Laumann. And I'll be building on that and actually having you do some numerics on MBL systems. So please, for those of you who have Jupyter installed on your laptops, please bring your laptops to reduce the load on the servers.
Actually, everyone bring your laptop and either log in to the server or use your own local copy. But if you can use your own local copy, that would be great. OK. Yeah, so there are various cases for U of t that we can consider. The most common one is if you have a time independent Hamiltonian, in which case U of t is just e to the minus iHt. You can also consider Floquet Hamiltonians, which are time periodic models. In this case, you have the time evolution operator for one period T. And the point is that this evolution operator is itself periodic. So we can consider various different cases of how the unitary time dynamics is generated. But because we have this closed quantum system, everything is just described by this unitary evolution in time. So if I start with a pure state psi naught, then we know that psi of t is just given by that expression there. This is simple, I know. But if you have a Hamiltonian system, H of t equals H, which is time independent, then we can expand our psi naught in the eigenbasis of the Hamiltonian. And then in time, psi of t just picks up phases. So really, all information about the initial condition is always in the state. All that's happening is some mixing up of phases under this unitary evolution. So, as a matter of principle, there is no information loss under unitary time evolution. So what does equilibrium mean? What equilibrium means is that even though the system as a whole preserves all your quantum information for all times, this information can get mixed up or scrambled into very, very non-local degrees of freedom. So that if you're an experimentalist and you have a system with 10 to the 23 degrees of freedom, you can't go in and make a measurement which actually probes all these different observables. You will make a local measurement which involves these five little degrees of freedom here, or another local measurement which involves 10 sites over there.
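As a minimal numerical sketch of this point (the model here is a toy random Hamiltonian of my choosing, standing in for the many body H of the lecture), you can check that unitary evolution only attaches phases to the eigenbasis coefficients, so globally no information about the initial state is ever lost:

```python
import numpy as np

# Toy model: a small random Hermitian matrix stands in for the many body H.
rng = np.random.default_rng(0)
dim = 8
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                     # Hermitian, so real spectrum

E, V = np.linalg.eigh(H)                     # eigenbasis |n>, energies E_n

psi0 = np.zeros(dim, complex)
psi0[0] = 1.0                                # some non-generic initial state
c0 = V.conj().T @ psi0                       # coefficients c_n = <n|psi0>

t = 3.7
psi_t = V @ (np.exp(-1j * E * t) * c0)       # |psi(t)> = sum_n e^{-i E_n t} c_n |n>
ct = V.conj().T @ psi_t                      # coefficients at time t

# Only the phases of the c_n evolve: |c_n(t)| = |c_n(0)|, and the norm
# is preserved, so no information is lost under unitary evolution.
assert np.allclose(np.abs(ct), np.abs(c0))
assert np.isclose(np.linalg.norm(psi_t), 1.0)
print("max change in |c_n|:", np.abs(np.abs(ct) - np.abs(c0)).max())
```

The assertions are the content: the magnitudes of the eigenbasis coefficients are exactly time independent.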
So as a matter of physical relevance, you only have access to small subsystems of your giant system, which are observable. And we say that the system reaches thermal equilibrium if the rest of the system can act as a bath for the small system and bring it to equilibrium at late times. So what does that mean? We have our state rho of t, where rho is the density matrix. We didn't have to start with a pure state; we could have started with a density matrix. In either case, we have rho of t, which is the state of our system at late times. And we can construct the reduced density matrix rho A of t: we trace out everything that's in B, and we only keep the state of the system in this region A. And if this system reaches thermal equilibrium, then what that means is that the late time limit of rho A of t is the same as what you would get if you started with some equilibrium ensemble on your whole system. This could be the Gibbs ensemble, for instance, or the grand canonical ensemble, whichever your favorite equilibrium ensemble is. Say your entire system in your initial state is at some temperature and at some chemical potential. You can construct the equilibrium ensemble; you can just write it down. And then what you're saying is that even though you started in this particular initial state, as time goes on, as far as local observables are concerned, they look just as they would if you had an equilibrium ensemble at that temperature and chemical potential. Yeah, no, it should be small. Actually, there's a very recent paper which is saying that it can even be almost as large as half the system. So how small is small is still an area of debate. But in general, it doesn't have to be a contiguous region. You could have a patch here and a patch here and a patch there. It should just be a small subset of the whole number of degrees of freedom of the system.
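For concreteness, here is a sketch of the partial trace that defines rho A. The two-qubit Bell-pair example is my own illustration, not from the lecture, but it shows the key point: a small subsystem of a globally pure state can look like a featureless mixed state.

```python
import numpy as np

def reduced_density_matrix(psi, dA, dB):
    """rho_A = Tr_B |psi><psi| for a pure state on H_A tensor H_B."""
    M = psi.reshape(dA, dB)          # psi_{ab} viewed as a dA x dB matrix
    return M @ M.conj().T            # (rho_A)_{aa'} = sum_b psi_{ab} psi*_{a'b}

# Illustration: one qubit of a Bell pair is maximally mixed, even though
# the global two-qubit state is pure; locally, all detail is scrambled.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rhoA = reduced_density_matrix(bell, 2, 2)
print(rhoA)                          # 0.5 * identity: no local memory
```

The same `reduced_density_matrix` works for any bipartition with dimensions `dA * dB` matching the state.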
I'm treating the rest of the system as going to infinity in some way. And you have to take these limits properly. But again, how small is small is not settled. And it actually looks like it can be much bigger than you might have expected. So any questions on what we mean by thermalization? Because this is something that we'll keep coming back to. So it's really, as far as local observables are concerned, the subsystem reaches some equilibrium. Very good. So I'm going to come to that in a couple of slides. I'm not talking about eigenstates right now. If you used the ground state, the temperature would be zero. The ground state is a zero temperature state. What I'm thinking about now is that you just have some generic initial state, which is expanded in your eigenbasis like that. So suppose you have a system of spins and you decide to start in a polarized state. That's a state that experimentalists can prepare. Why not? Let's start in it. But that's not going to be an eigenstate of your system in general. So you can expand it in terms of your eigenstates. So what I mean by temperature in this state is: let's measure the energy in the state. That's going to be psi naught H psi naught. And I'm going to define my temperature, define T such that this energy is equal to the trace of e to the minus beta H times H, normalized by the partition function. So the same energy that exists in the initial state corresponds to some inverse temperature beta. And that sets the temperature in the appropriate equilibrium ensemble that you choose. So in a strong form, the statement that I just made about systems reaching thermal equilibrium at late times holds for all local subsystems, up to what we just discussed, and all reasonable initial states. So what are reasonable initial states? These should be states with some sub-extensive uncertainty in conserved quantities like energy, number, and so on, so that you can actually define a temperature or a chemical potential.
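This temperature assignment is easy to sketch numerically. The transverse-field Ising chain and couplings below are illustrative choices of mine, not from the lecture; the point is the matching condition, psi0 H psi0 = Tr(e^{-beta H} H) / Tr(e^{-beta H}), solved for beta by bisection:

```python
import numpy as np

def ising_chain(L, g=0.9):
    """Toy transverse-field Ising chain: H = -sum sz_i sz_{i+1} - g sum sx_i."""
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    def op(o, i):                    # operator o acting on site i
        mats = [np.eye(2)] * L
        mats[i] = o
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out
    H = sum(-op(sz, i) @ op(sz, i + 1) for i in range(L - 1))
    H = H + sum(-g * op(sx, i) for i in range(L))
    return H

L = 6
H = ising_chain(L)
E = np.linalg.eigvalsh(H)

# A fully polarized product state |up ... up> as the initial state.
psi0 = np.zeros(2 ** L)
psi0[0] = 1.0
E0 = psi0 @ H @ psi0                 # equals -(L-1) for this state and model

def thermal_energy(beta):
    """Tr(e^{-beta H} H) / Tr(e^{-beta H}), spectrum-shifted for stability."""
    w = np.exp(-beta * (E - E.min()))
    return (w @ E) / w.sum()

# thermal_energy decreases monotonically from the spectral mean toward
# the ground state energy, so bisect for the beta that matches E0.
lo, hi = 1e-6, 50.0
for _ in range(200):
    b = 0.5 * (lo + hi)
    if thermal_energy(b) > E0:
        lo = b                       # still too hot: increase beta
    else:
        hi = b
beta = 0.5 * (lo + hi)
print("effective inverse temperature beta =", beta)
```

The bisection works because the Gibbs energy is a monotone function of beta, so any energy between the spectral mean and the ground state energy picks out a unique temperature.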
So in usual thermodynamic ensembles, you have some expectation value of the energy, which is extensive in your system size, and then fluctuations in that energy which scale as the square root of the volume. So those are sub-extensive; they're less than scaling linearly with the volume. In such a case, you can define a temperature. Likewise, if you have a state with well-defined particle number, the number fluctuations again scale as the square root of the volume, so you can define a chemical potential. So this form works for reasonable states like this. And in fact, you can show that... do you know what a short range correlated initial state is, like a product state? Maybe it came up in one of your tensor network lectures. Just imagine you had a state which was a direct product on every site, like up, down, up, down, any randomly chosen pattern. So these are states with low entanglement. You can actually show that any short range correlated initial state, if you have a local Hamiltonian, has sub-extensive uncertainty in energy, number, and so on. So any reasonable state that you might think of preparing actually satisfies this. Things that don't satisfy this are these global Schrödinger cat states, like a state which is all up plus all down; then you start running into issues. But this is just a side comment. And this leads us to the question that someone asked about, what if I just start with the ground state? Which is: if all reasonable initial states reach thermal equilibrium, then the eigenstates of H themselves must be thermal. So this is the eigenstate thermalization hypothesis. And really early work on this goes back to the 80s and 90s; it has precursors in quantum chaos. But the point is that if you have eigenstates of your Hamiltonian, then you know that these are time independent.
So if you look at psi of t when you start with a given eigenstate, the eigenstate is at some energy density, which corresponds to some temperature. But the eigenstate is time independent. So at late times, when you look at the expectation value of an observable evaluated in this eigenstate, that needs to agree with what you would expect in equilibrium. So every eigenstate itself encodes the full thermal distribution. To put it a different way, in equilibrium statistical mechanics, we have different ensembles: the canonical ensemble, the micro-canonical ensemble. This is the micro-canonical ensemble at the level of a single eigenstate in your system. And when a system is thermalizing, this holds for each and every eigenstate, and that's the strong form of this. And I should mention that this is just a hypothesis. It has been tested numerically in lots of reasonable examples, but there's no proof of this. Proving this in full generality is extremely hard. So the point is, every single eigenstate at every energy density of a thermalizing system acts like the thermal ensemble. And local properties of eigenstates then vary smoothly with energy. So here is a picture. You might ask: but it's delocalized in what? We'll come to localized eigenstates. But when a system is thermalizing, then its eigenstates are not localized, in a sense, by definition. Well, classical systems don't have eigenstates; the spectrum is not discrete, right? No, so far, if you're just talking about thermalization, and if it all works, then you're right that there's nothing really that new. Because at the end of the day, I'm looking at this Gibbs ensemble, and this can certainly be defined for a classical system. The new thing, which only exists in a quantum system, will happen when systems don't reach thermal equilibrium. As for classical systems, I'll come to that.
But in this quantum setting, when things don't reach thermal equilibrium under unitary time evolution, that's the new piece of physics. And to understand what the new piece of physics is, you want to reconcile how the old physics works in this quantum setting, which requires you to think properly about eigenstates and so on. OK, so this is the eigenstate thermalization hypothesis, and this is just a picture of what I was saying about local properties. This is a kind of numerical test that you might do. You have some observable A; alpha labels your eigenstate; and you're measuring your observable in your eigenstate. And this is being plotted, so each dot is one measurement of A in a given eigenstate. As you increase your system size, you just have many, many more eigenstates that you can play with. That's why you have more and more dots; the dots get denser. And it's being plotted against the energy of that eigenstate. The red line is what you would expect from an equilibrium distribution at a temperature corresponding to this eigenstate energy. And what you find is that as your system size increases, every single one of these dots is very well converged onto this equilibrium value. So at the level of individual eigenstates, they agree with what you would expect from this equilibrium ensemble. And this variation, how wide this distribution is about the equilibrium value, narrows exponentially with system size. Great, so that's another one of these open questions. In many examples that we have studied, it is true that all eigenstates look thermal. There are a few cases that we're now discovering where this may not be the case. And what we're trying to understand is why. Is there some easy explanation for it, like it's being inherited? There are these classes of systems which are integrable; that's more connected with classical systems that violate ergodicity.
So in these integrable systems, you have extensively many conservation laws. So all your eigenstates don't look thermal. And there's a sense in which, if you're very, very close to an integrable point, do your eigenstates inherit the properties of that integrable point? And do they continue to inherit it even in the infinite size limit? So you have lots of parameters to play with. So this is an absolutely open question. And again, so far we only have a few known examples where we know it works for every eigenstate. But it's not known in full generality. OK, good. So this was ETH. And another thing that ETH gives us, as you've probably discovered over the last few weeks, is that thinking about quantum information and quantum entanglement has really given us a very, very useful lens into many body physics. And in fact, it'll play a very important role in trying to understand the properties of this MBL transition. What the eigenstate thermalization hypothesis tells you is that if you look at a given eigenstate, so rho is psi psi, where this is an eigenstate at a given energy, and rho A is the trace over B of rho, and now if you look at the von Neumann entropy of that reduced density matrix, then if ETH holds, this must agree with the thermodynamic entropy of the system. And in a thermal system, we all know that the entropy scales extensively with the system size. So in this setting where ETH holds, in every single eigenstate, if you go in and you take out a little chunk A, and you ask about its entanglement entropy, what you find is that it's a volume law, which means it scales as the volume of that subsystem. And the coefficient of that volume law is exactly the thermal entropy density at that given temperature. So what this means is that B, the rest of the system, acts as a reservoir.
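The entanglement entropy computation is easy to sketch via the Schmidt decomposition. Here a Haar-random state stands in for a mid-spectrum eigenstate of a thermalizing system; that random states are near-maximally (volume-law) entangled is an assumption of this illustration, not something from the slide:

```python
import numpy as np

def entanglement_entropy(psi, dA, dB):
    """Von Neumann entropy S = -Tr rho_A log rho_A of a pure state,
    computed from the Schmidt (singular) values across the A|B cut."""
    s = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False)
    p = s ** 2                       # Schmidt probabilities, summing to 1
    p = p[p > 1e-12]                 # drop numerical zeros before the log
    return float(-(p * np.log(p)).sum())

# A normalized random state as a stand-in for a mid-spectrum eigenstate.
rng = np.random.default_rng(1)
L = 10
psi = rng.normal(size=2 ** L) + 1j * rng.normal(size=2 ** L)
psi /= np.linalg.norm(psi)

S = entanglement_entropy(psi, 2 ** (L // 2), 2 ** (L // 2))
S_max = (L // 2) * np.log(2)         # maximal possible half-chain entropy
print(f"S = {S:.3f} vs maximal {S_max:.3f}")
```

The half-chain entropy of such a state comes within an order-one constant of the maximal value, i.e. it is volume law, which is the ETH expectation at infinite temperature.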
And really the essential function of a reservoir now is to serve as something that your subsystem can get entangled with. You reach thermal equilibrium by getting entangled with your reservoir. And this view works even if you don't have any conserved quantities. So initially, when I set it up, I said, oh, there was a temperature because energy is conserved, and there is a chemical potential because particle number is conserved. And you have this bath, and you're reaching some T and a mu by exchanging energy and particles with that bath. But really, I could consider a periodically driven Floquet system where nothing is conserved. There's no energy conservation. There's no number conservation. And I can still meaningfully talk about thermal equilibrium, because the only thing that really matters is that you have a bath, the rest of the system, for your subsystem to get entangled with. And entanglement is a well-defined concept even when there are no conserved quantities. Good. So now, when does this fail? So localized systems; in particular, I'll talk about many body localized systems. These are interacting systems that are localized. They're the only generic exception to thermalization that we're aware of. By generic, I mean that these are systems that are not fine-tuned in any way. We also have integrable systems that are very, very well studied. There's a huge literature on them. But they're usually very, very fine-tuned models with very specific coupling constants. And if you generically perturb such a model weakly, by adding whatever the hell you want to it, then the system eventually loses its integrability. So these are systems that have many, many conservation laws. As a result of these conservation laws, your dynamics are highly constrained. So for example, if I told you, start your system off somewhere and it can evolve however it wants, then it evolves in this very unconstrained way. But now you say, OK, no, energy has to be conserved.
So then it can only evolve along trajectories, if you will, that preserve its energy. Then you add particle number conservation to it. That constrains its dynamics further. And if you keep doing this, and in the end you've added in extensively many local conservation laws, so a number that scales with system size, then they end up constraining the dynamics so much that your system really does not explore all of the available Hilbert space. So that's why it's non-ergodic. So the case in which we know this happens is integrable systems, but they're very fine-tuned. They're not generic. Many body localization is going to be a generic exception to thermalization. And this occurs in systems that are not translationally invariant. So usually, some ingredient like disorder, or some quasi-periodicity in your on-site couplings or fields, is going to be important to get these kinds of systems. And by localization, what we now mean is that the system retains local memory of its initial conditions out to infinitely late times. So we defined thermalization as this idea that if you probe local observables, those relax to thermal equilibrium and forget about any non-genericity in the initial state at late times. Whereas in these localized systems, local memory about initial conditions persists forever. So that's really special. It acts like a kind of quantum memory. You can just prepare this crazy state and locally it will remember what you prepared. Let me just show you an example of it. This is from an actual experiment in Immanuel Bloch's group, who you heard from. Did he show this slide already? All right, but OK. So fine, let me tell you what's going on here. So they have two different parameters here for your left and right columns. In one set of parameters, the system is supposed to be thermalizing, and in the other one it's supposed to be localized. And in the initial state, they have this atom trap.
So it's circular because you have this harmonic trap and all your atoms lie within this region of space. And in the initial condition, they prepare all the atoms in the left half of the trap. So they're like, OK, we're going to start there and that's your initial state. And now you're going to release this and let it go. And of course, if your system thermalizes, like if you start with all the gas in this room on one side of the room, it's going to eventually mix everywhere and you're going to forget that you started with this imbalance, right? And indeed, in this set of parameters which is supposed to be thermalizing, as time goes on, if you look at the density of particles in your system, you see that it's completely filled up over this entire circle. So you lose memory of the fact that you started there. On the other hand, in this set of parameters, which is localizing, if you go to late times, so this is only going out to the time scales that are accessible in the experiment, you see that even after 250 of what they call tunneling times, you can still tell that you preferentially started in the left half in this localized setting. So you can go in and you can make this local measurement, which is just to go and measure the density right there, and that will tell you that your system still remembers where it started. So what persists in the initial state? Right, so the way to say it is that in localized systems, the system is unable to act as a bath for itself. Okay, so in a thermal system, you imagine cutting this little chunk out and then you say this little chunk is interacting with everything else, and as a result it reaches thermal equilibrium, so everything else acts as a bath. But in the localized system, that process fails. So the rest of the system does not act as a bath; the bath isn't there anymore, basically. There's no internal bath, yeah. Does that answer it or are you asking something else? Yeah, yeah.
Right, but subsystem, yeah, so one way of saying, so you're saying that maybe A is at a different temperature than B, is that right? So this is sort of the equilibrium state for a different temperature. So one of the things is, actually, temperature is only a concept that's well-defined in equilibrium, so what you're saying is getting at the fact that when you have this system that's out of equilibrium, you really can't define a temperature. Like I can formally write this e to the minus beta H, but it doesn't mean anything, because there's no temperature to which the system is relaxing. Right, so it's always empty here, solid there. Locally it's not relaxing to any given temperature and acting like the equilibrium states would at that temperature, yeah. So temperature ceases to be well-defined. You can formally just compute the reduced density matrix, but it's not gonna correctly capture the physics in your system, yeah. Non-Markovian? We're not talking about Markovian. There's no bath, I mean, it's just a closed quantum system with unitary evolution. So it's unitary, there's no non-unitary anything in it. Okay, so this panel is a single disorder realization and this panel is averaged over different realizations, yeah. Okay, so let's get a bit more concrete and think about what models show this kind of behavior, right? So this is kind of the standard model of MBL, okay. So I have a system of spins, okay. So I have some onsite field, okay, which has some detuning. By detuning I mean that the value of this field changes a lot from site to site, okay. So it could be completely random, and the scale of that randomness could be set by some parameter W, or it could be what's called quasi-periodic, okay. So if you have a lattice system and you put in a cosine potential with a period that's incommensurate with the lattice, that also varies from site to site without ever coming back to itself, okay.
So you have some random onsite fields, okay. And then I have this Heisenberg-type interaction, which is XX, YY and ZZ. And actually, as you may know, there is a mapping from this spin problem to a problem of spinless fermions, where the up state is occupied and the down state is empty. So this is really a problem of fermions on a lattice, okay. And in that problem, this ZZ term is like a potential energy for the fermions, and these two terms, the XX and the YY, are like the hopping terms, the nearest neighbor hopping which causes a fermion to go from one site to the other. So if I had no ZZ, this would just be single particle fermions hopping on a lattice, so there would be no interactions, and the interactions are added by this ZZ term, okay. So that's the system. And what we find is that as a function of J over W, this system will have a transition from a many body localized state to a thermalizing one, okay. And let's slowly build up to that. Yeah. Good, I'll come to that right at the end, okay. But to zeroth order, for what I'm saying right now, no, in that you get a transition for both. The difference will come in when you start thinking about rare events and fluctuations and Griffiths effects, but that's much more advanced, okay. So okay, so first let's do the single particle case, which is no interactions. Okay, so this is what Anderson solved in '58. Okay, so Anderson was really thinking about a many body problem. You could tell from that introduction that I read out earlier that he had this many body system in mind, but this is what he solved. So here we just have this problem of fermions hopping on this lattice. There's this onsite potential that varies widely from site to site. And you can do what's called this locator expansion. And the picture is just that you start with a fermion on this one site, a particle on this one site, which has a given energy, and it's gonna try to hop to the site next door, okay.
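One standard numerical diagnostic of that transition, which I'll sketch here under illustrative choices of system size, seed, and the two disorder strengths (none of these numbers are from the lecture), is the mean level-spacing ratio in the total-S^z = 0 sector: it tends toward the GOE random-matrix value of roughly 0.53 in the thermal phase and the Poisson value of roughly 0.39 in the localized phase.

```python
import numpy as np

L, J = 10, 1.0
rng = np.random.default_rng(0)

def build_H(W, rng):
    """Random-field Heisenberg chain restricted to the total-Sz = 0 sector."""
    h = rng.uniform(-W, W, L)
    states = [s for s in range(1 << L) if bin(s).count("1") == L // 2]
    idx = {s: n for n, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for n, s in enumerate(states):
        sz = np.array([(s >> i) & 1 for i in range(L)]) - 0.5
        H[n, n] = np.dot(h, sz) + J * np.dot(sz[:-1], sz[1:])
        for i in range(L - 1):                       # flip-flop terms J/2 (S+S- + S-S+)
            if ((s >> i) & 1) != ((s >> (i + 1)) & 1):
                t = s ^ (0b11 << i)
                H[idx[t], n] = J / 2
    return H

def gap_ratio(H):
    """Mean ratio of adjacent level spacings in the middle of the spectrum."""
    E = np.linalg.eigvalsh(H)
    E = E[len(E) // 3 : 2 * len(E) // 3]
    g = np.diff(E)
    r = np.minimum(g[:-1], g[1:]) / np.maximum(g[:-1], g[1:])
    return r.mean()

r_th = np.mean([gap_ratio(build_H(0.5, rng)) for _ in range(5)])   # weak disorder
r_loc = np.mean([gap_ratio(build_H(8.0, rng)) for _ in range(5)])  # strong disorder
```

Restricting to one symmetry sector matters here: mixing S^z sectors would give Poisson-looking statistics even on the thermal side.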
But if the energy on that site is way up there, right, and your hopping matrix elements are weak, then it really doesn't hybridize too much with the site next door, okay. And then again with the one after that, and so on. So because of this failure to hybridize, because these energy differences, which are set by the disorder scale, are so much larger than your tunneling elements, what you find is that your single particle eigenstates, which tell you where your particle is located, end up being exponentially localized in real space, okay. So your different eigen-orbitals just look exponentially localized with some localization length, and they can be localized on your different sites, okay. So this single particle Anderson localization, there's a proof for it; it's been worked out in various dimensions and so on, okay. But what, yeah, yes, yes, yeah, that's right. You get a different universality class of localization: if you only disorder your J's and not your h's, then you get a slightly different problem with what's called particle-hole symmetry, and there every state at energy E has a partner state at minus E. So if you didn't have this onsite field and you just had this disordered hopping, that model still has localization, but it has very special properties as the band center is approached, and it has some special symmetries. But as long as you also have the onsite field, the problem looks exactly like this. So it's like a relevant symmetry breaking direction, if you will, okay. Good, so now we are doing MBL, okay. So this was worked out by Basko, Aleiner, and Altshuler in 2006. And their starting point was, let's take our single particle eigen-orbitals, okay. So these were, whoa, yeah. So I have these orbitals, right.
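A quick numerical sketch of Anderson's picture: diagonalize a 1D tight-binding chain with random onsite energies and check that every eigen-orbital has a small participation ratio, i.e. each one occupies only a handful of sites out of the whole chain. The chain length and disorder strength are illustrative choices.

```python
import numpy as np

N, t, W = 200, 1.0, 3.0                   # chain length, hopping, disorder strength
rng = np.random.default_rng(1)
eps = rng.uniform(-W, W, N)               # random onsite energies

# 1D Anderson model: H = sum_i eps_i |i><i| - t sum_i (|i><i+1| + h.c.)
H = np.diag(eps) - t * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
E, V = np.linalg.eigh(H)

# participation ratio: roughly how many sites each eigen-orbital occupies
pr = 1.0 / (V ** 4).sum(axis=0)
```

For a delocalized (plane-wave-like) orbital the participation ratio would be of order N; here every orbital stays pinned to a few sites.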
And the creation operator for an eigen-orbital is this a dagger alpha here, which creates a single particle eigenstate of energy E alpha. So this is the non-interacting problem, right, with all these orbitals. And now I'm gonna add in this interaction term, which destroys two of these single particle orbitals and creates two of them somewhere else, okay. So this is gonna cause these inelastic processes. So essentially you have your levels alpha, beta, gamma, delta: you're destroying in gamma and delta, and you're creating in alpha and beta, okay. So the energy difference for this process is E alpha plus E beta minus E gamma minus E delta, right, where these are the single particle energies. I guess I call them little e, okay. So this is the energy difference. So heuristically, it's just like before: when a single particle was trying to hop from here to here, there was some energy penalty delta, which was set by the disorder. This is the energy difference for this many body process. And then you have a matrix element V alpha beta gamma delta, which is trying to couple these states, okay. And as long as this matrix element is much, much smaller than delta, then once again this coupling is very ineffectual and you end up with a many body localized state, okay. So this is the heuristic answer. These guys did it to all orders in perturbation theory. Their paper is hundreds of pages, and it was really a tour de force in perturbative analysis to actually show this. And their perturbative proof works to all orders in perturbation theory, all dimensions, all of that stuff, okay. So this was a major breakthrough result which really got this field started again in 2006.
And I'd say the two major events that happened were, one, this proof by these gentlemen. But this proof was so intimidating, in a way, that it's like, okay, you have this demonstration and it's great, but what do you do with it, okay? And really, thinking about these spin models like the ones I was presenting, trying to do numerical simulations on the spin models, exploring those, starting to play with those models: these are things that people started doing a few years after this work, and this really brought the subject to the masses, if you will. It made it much more accessible, and we started thinking about it from a very different lens than what was being considered in this paper. So, okay, so let's think a little bit in modern language, okay. So I should also say, and I'll come back to this point, that for the Russians, right, for this treatment, a localized system is something with zero DC conductivity, okay, so they're really thinking in terms of transport, right. So a localized system is an insulator and a thermalizing system is a metal, okay. So they were thinking of zero DC conductivity, if you will. And our modern way of thinking about MBL is really not focused on conductivity. We're really thinking about thermalization of local observables, which means: at late times, does this local observable retain a memory of its initial conditions? And if I have time, I'll discuss how, when you have rare Griffiths effects or rare fluctuations in your disorder, you can have cases where you have zero DC conductivity but the system is still thermalizing, okay. So that's one major difference between a more modern approach of thermalization versus localization, and older approaches of metal insulator transitions, if you will. Good. So okay, so how do we build this up, right?
So the simplest example is, here's my spin chain, and I just decide that I don't want any hopping and I don't want any interactions, okay. So this is a problem I can exactly solve, right. All my eigenstates are product states of sigma z, so they're just different patterns of up, down, up, down, up, down. And the energy of any one such pattern is set by these local fields. And if you go from some given many body eigenstate which looks like that to the very next many body eigenstate in energy, which has a very, very small energy difference from the one you were considering, then the pattern of spins is completely changed, right. Because imagine that you start with some reference state which is all up, okay. If you flip one spin in it, the energy cost that you would pay is set by this local field, which is of order W, okay. So a single local rearrangement typically has a finite energy cost set by this local field; that's its gap, the mobility gap, if you will. But if you make a multi-particle rearrangement, so you go in and you change everything everywhere, then you can sort of cancel all these different energies against each other and get something that's very, very close in energy to your starting state, okay. And these many body energy spacings are exponentially small in system size. So the pattern of ups and downs looks very different from one state to the state right next to it in energy, okay. And in particular, what that means is that what I told you about the eigenstate thermalization hypothesis, which is that local observables vary smoothly with energy, right (they only care about some global energy scale which is not changing from one state to the one right next to it), that's violated, right.
Because if I have some pattern of spins, and the state right next to it in energy has a very different pattern of spins, and I go in and I measure the spin on some one site, it'll fluctuate like crazy between plus and minus as you go from state to state, right. So in this system, the eigenstate thermalization hypothesis is violated, and you can see that very, very clearly in just a simple example, okay. And notice why the system is localized, right. I said for integrable systems you have extensively many constants of motion, and that's exactly what you have here, right. Every single spin operator sigma z commutes with the Hamiltonian, okay, and they all commute with each other, okay. So of course this is a very, very trivial example, but the point is that this is localized because you have extensively many local integrals of motion that are all conserved by the system, okay. So give me a second. Let me see what's a good stopping point for a break soon. Okay, so give me two more slides and then we can take a break, okay. So, right. So this was this very simple case with no coupling. But now what a lot of work has shown is that MBL has a form of emergent integrability. By emergent we mean that the integrability is not fine-tuned in any way; as you smoothly change the parameters in your system you smoothly change your integrals of motion and you keep getting them, okay. So if you start with this Hamiltonian, where now I've added in a J, but a J that's really small compared to the disorder scale, okay, then what this work has shown is that there is a local unitary transformation, so this is like a local basis change if you will, okay, that relates this original Hamiltonian to another one that looks purely diagonal in these new variables, which are called tau, okay. So this is now like a classical Ising-like model, okay.
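Going back to the trivial J = 0 example for a second, both of its features are easy to check numerically: energy-adjacent eigenstates are product states whose spin patterns differ on about half the sites, and the many-body level spacings are exponentially small in L. A minimal sketch, with illustrative size and seed:

```python
import numpy as np

L = 14
rng = np.random.default_rng(3)
h = rng.uniform(-1, 1, L)                  # random fields: H = sum_i (h_i / 2) sigma^z_i

s = np.arange(1 << L)
bits = (s[:, None] >> np.arange(L)) & 1    # spin pattern of each product-state eigenstate
E = ((0.5 - bits) * h).sum(axis=1)         # eigenenergies (bit 0 means spin up, sz = +1/2)

order = np.argsort(E)
gaps = np.diff(E[order])                   # many-body level spacings, ~ range / 2^L
# Hamming distance between the spin patterns of energy-adjacent eigenstates
ham = (bits[order[1:]] != bits[order[:-1]]).sum(axis=1)
```

The mean Hamming distance comes out near L/2, which is exactly the ETH-violating statement above: the local spin pattern changes completely between neighboring states in energy.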
It's just all the terms that are appearing in it; it's a local model. So you have all these higher order terms, but they're all exponentially decaying in distance, and these taus are the l-bits. So let me do the next slide, which will pictorially illustrate what I'm talking about. So there's a local basis change that you can do to convert your original model, when this extra perturbation is small, to this purely diagonal model written in terms of the taus. And this means that there are again extensively many local integrals of motion, which are these taus, because each tau commutes with H and the different taus also commute with each other, okay. And the way you should think of the taus is just as a dressed version of the sigmas, okay. So you had your physical spin operators, which were the sigmas, and now in operator space you create some dressing, right: so instead of having just sigma on some one site, maybe you have sigma X and Y on the sites next door, and then you have sigma Z two sites over, but as you go further and further out, the weight on these other operators in your dressed operator falls off exponentially with distance, okay. So these are these dressed observables that you can define, and in terms of these dressed observables, if you now look at a given eigenstate (this is a many body spectrum for this problem), each eigenstate looks like a product state of these l-bits, or tau bits, okay. L for localized. And every eigenstate looks like a product state of the l-bits, which are dressed versions of your physical bits, or p-bits, okay. So this is the essential phenomenology of MBL, and really, when we come back, most of the properties of MBL we can understand by thinking in this phenomenological language. So this is the last slide before the break, which is: what do we know about the existence of the l-bits, right?
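In equations, the diagonal model being described here is usually written in the following form (this is the standard presentation from the l-bit literature; the particular symbols for the coefficients are conventional choices, not the lecture's notation):

```latex
H = E_0 + \sum_i \tilde{h}_i \,\tau_i^z
        + \sum_{i<j} J_{ij} \,\tau_i^z \tau_j^z
        + \sum_{i<j<k} K_{ijk} \,\tau_i^z \tau_j^z \tau_k^z + \cdots,
\qquad |J_{ij}| \sim J_0 \, e^{-|i-j|/\xi},
```

with the higher-order couplings likewise falling off exponentially in the distance spanned by the sites they touch. Every tau_i^z commutes with H and with all the others, which is exactly the statement that there are extensively many local integrals of motion.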
This is a good story that I told you, and as I said, the fact that these l-bits would exist was proved to all orders in perturbation theory, even though it was not in this language at all, by Basko, Aleiner, and Altshuler. However, their proof was only in perturbation theory; it did not take into account non-perturbative effects. So what's a common example of a non-perturbative effect? This is this Griffiths effect that I've mentioned a few times already, okay. So imagine that you have this disordered system which, for some strength of the disorder, is going to be localized, and for a much weaker strength it's thermal, okay. But now locally you're just drawing your fields from this disorder distribution, right. So there could be some rare event, it has a small probability, but with some probability you could have a rare event where you have this patch of 10 sites here where locally the disorder on these 10 sites is very small, right. So all these 10 sites could be very similar in their onsite potentials. That could just happen; it's a probabilistic event, right. So now you've locally created a thermal bubble in your system, because locally these things all hybridize and can become a thermal bubble. And now you can ask, once you have this thermal bubble, does it take over your entire system and destroy the localization? Okay, so you have this thermal bubble, you have some localized degrees of freedom that are gonna interact with it, and does that take over the entire system, or is MBL stable? That's the kind of non-perturbative effect I mean. And what we know, and it's basically a proof as far as any physicist is concerned, but John Imbrie, who's a mathematical physicist, calls it an almost-proof because you have to make one assumption, a very, very reasonable assumption, but basically there's a proof including all these non-perturbative effects in one dimension with exponentially decaying interactions, okay.
And in this limit, if your disorder is strong enough, there is a proof by John Imbrie which says that you will have an MBL phase, which means that this description in terms of these locally dressed l-bits, or this finite depth unitary transformation that converts you from your physical spins to these localized spins, exists, okay. There are lots and lots of open questions about non-perturbative instabilities in higher dimensions, with long range interactions, intermediate phases between MBL and thermal; so all of these regimes, some of which are stable in BAA's treatment, could actually be non-perturbatively unstable, okay. And this is all an active area of research, so I just wanted to flag that, okay. What's actually very interesting is, we've spent so much time worrying about the existence of the MBL phase and trying to prove its existence, but there's actually no proof of the existence of the thermal phase; there's just lots of numerical evidence, and it's apparently very, very hard to prove that this thermal phase exists, okay. But by now I don't think anyone seriously doubts that you have this regime, okay. So there's numerical evidence of the thermal phase, and in certain regimes there's a proper proof of the existence of the MBL phase, so there's a dynamical phase transition between these two as some parameter is tuned, okay. So, okay, so maybe we should stop there and then come back in five minutes. Okay, all right, didn't lose too many people, okay.
All right, so using this picture. One question that came up quite a bit during the discussion, that some people asked me, is: how do we know this unitary exists, and how do we construct it? Really, how we know it exists is only in this one setting, by Imbrie again, and the best way to construct the most local unitary is still an open question, but there are some constructions you can do which show that you'll get these exponentially localized bits, okay. But finding the best unitary out there is a problem that scales as two-to-the-L factorial, so it's super-exponentially hard, and that's very hard to do, okay. So in the early days of MBL, one game that was played a lot is: let's just start with this phenomenological model for the l-bits, so you have these l-bits, you have these exponentially decaying interactions between them, and try to infer what that model implies about the properties of the MBL phase, okay. So one property that I already mentioned to you is this persistence of local memory, okay, and it's actually very easy to see that from this l-bit picture, because what you know is that every l-bit commutes with your Hamiltonian, so if you were able to exactly measure an l-bit, then you know that it would be time invariant, right. It would look exactly like what it did in the initial state at t equals zero.
Of course, experimentalists don't have access to this exponentially decaying dressed operator which is an l-bit; they measure a physical bit, which is like a physical spin observable. But your physical spin observable will generically have some finite overlap with the l-bit, and that part of it doesn't decay, okay. And the rest of the physical observable, when you expand it in this l-bit basis, is going to have some tau x's and some tau y's, and those are the parts that are going to dephase and show some dynamics. But the part of it which is on tau z, and it's going to be some order-one finite part on tau z because tau z is itself local, that part doesn't decay, so you end up with some local memory of the initial condition at late times. So this is just some numerics of this imbalance that was probed in Immanuel Bloch's experiment in some one-dimensional systems. You prepare some initial state and you measure some imbalance between, say, the left and the right half; these are two many body localized systems, and you can see that even at late times there's a finite imbalance left, and this value is not going to zero, okay. Another thing that you can show, and I'll work through what I mean by dephasing dynamics in the next slide, is that because of these exponentially decaying J_ij's in your l-bit Hamiltonian, you actually end up with dynamics that correspond to slow dephasing. If you had no interactions there would be no dephasing, and I'll explain what I mean by dephasing, but the point is that because of those dephasing dynamics, the approach to equilibrium can be a slow power law in time, and that was shown by these people, and it's in this plot as well, okay. One of the most interesting consequences of MBL is the dynamics of quantum entanglement, okay. And in thermalizing systems, entanglement generally grows ballistically in time, okay.
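A minimal sketch of that persistent-imbalance statement: evolve a Néel state under the random-field Heisenberg chain and compare the late-time staggered imbalance at weak and strong disorder. This is a toy version of the measurement, not the experiment's protocol; the system size, disorder strengths, seed, and time window are all illustrative choices.

```python
import numpy as np
from functools import reduce

L, J = 8, 1.0
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def chain_op(ops):
    mats = [I2] * L
    for i, o in ops:
        mats[i] = o
    return reduce(np.kron, mats)

def late_imbalance(W, rng, times):
    """Time-averaged staggered imbalance of a Neel initial state; equals 1 at t = 0."""
    H = sum(rng.uniform(-W, W) * chain_op([(i, sz)]) for i in range(L))
    for i in range(L - 1):
        for o in (sx, sy, sz):
            H = H + J * chain_op([(i, o), (i + 1, o)])
    E, V = np.linalg.eigh(H)
    neel = sum(1 << (L - 1 - i) for i in range(L) if i % 2 == 1)  # |up dn up dn ...>
    psi0 = np.zeros(1 << L)
    psi0[neel] = 1.0
    c = V.conj().T @ psi0
    s = np.arange(1 << L)
    szdiag = 1 - 2 * ((s[:, None] >> np.arange(L - 1, -1, -1)) & 1)  # sigma^z_i per state
    sign = szdiag[neel]                                              # the initial pattern
    vals = []
    for t in times:
        psi = V @ (np.exp(-1j * E * t) * c)
        vals.append((np.abs(psi) ** 2 @ (szdiag * sign)).mean())
    return np.mean(vals)

rng = np.random.default_rng(4)
times = np.linspace(40.0, 60.0, 5)
I_th = np.mean([late_imbalance(0.5, rng, times) for _ in range(3)])   # thermal side
I_loc = np.mean([late_imbalance(8.0, rng, times) for _ in range(3)])  # MBL side
```

On the weakly disordered side the imbalance relaxes toward zero (up to finite-size fluctuations), while at strong disorder a sizable fraction of the initial pattern survives.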
So you have this light cone, if you will. You prepare some initial state and things start getting entangled with each other across this light cone, and if you make an entanglement cut (you've already had one series of lectures on entanglement, right, so you know what the general setup is, okay), then you end up with this ballistically growing entanglement. But in the MBL phase, even though we don't have any transport and things are stuck, because things talk to each other through these dephasing dynamics, you actually end up with slow logarithmic growth, okay. And I'll just work through that on the board to show you how that may come about. But this is numerics from this paper here, Bardarson, Pollmann, and Moore, which is measuring the entanglement entropy as a function of time, and you can see this is for different interaction strengths, okay. So you see that for these different interaction strengths, when interactions are present, which is all these different curves, your entanglement entropy actually always just keeps growing in this logarithmic fashion. But when you have no interactions present, so you don't have those terms that cause your dephasing, then it just saturates to a constant value without any growth in time; this is the Anderson localized case, okay. So just as a poll, should I work through why you get log growth, or have most of you seen that already? Work through it, okay. So the point is that I'm preparing my system in some initial state, okay. I'm going to compute psi of t, which is e to the minus i H t acting on psi naught, and then I'm going to compute rho A, and for my purposes A can just be half the system, so I'm gonna divide the system into two halves. Rho A is the trace over B of psi of t, psi of t, okay. This is rho A of t, and then S of t. Tell me when I should, can you see all the way to the bottom of the board?
Okay, so S of t is minus the trace of rho A of t times log rho A of t, and this is the quantity that's being plotted in those graphs, okay. So that's what we want to compute. So let's do a very simple case. So my H is sum over i of h_i tau_i^z, plus, I'm just gonna consider the leading piece, sum over i and j of J_ij tau_i^z tau_j^z, okay. So this is the Hamiltonian written in the l-bit basis, and to understand this property you can actually ignore all the higher order terms, okay. And in fact, because entanglement is concerned with how one part of the system is talking to the other part, I can even drop the field piece, okay; it's not gonna matter for the entanglement dynamics. By the way, this is just the Anderson localized case: if all you had was an Anderson insulator without interactions, all these higher order terms would be absent, and in that case, as you can see in the graph above, your entanglement entropy just saturates without giving you the log growth, okay. So all the log growth is coming from this J_ij piece right here, okay. And this is exponentially decaying in the distance between your l-bits, okay. So now let's prepare an initial state, and let's just do a simple case where in the left half here is a given l-bit, okay. So this is an l-bit on site i, that's an l-bit on site j. I have a bunch of things, and suppose I could just prepare my initial state in this basis, where these arrows are all l-bits, okay. Of course, if we prepared an exact l-bit eigenstate, that's an eigenstate of the Hamiltonian, so it would have no time evolution, okay, and its entanglement entropy would not change in time. So to create time evolution, on site i let me make a superposition of these two l-bit states, and let me do the same on site j, okay. So my initial state is a superposition of this and that, okay. And I'm just going to focus on this part; the rest just doesn't change under the dynamics, so I'm not going to need to deal with it at all, actually.
So, focusing on just these two sites, psi naught is one half times (up up plus up down plus down up plus down down), okay. This is in the l-bit basis. And now I'm going to define J_ij to be J, okay. So psi of t is going to be one half times e to the minus i J t (up up plus down down) plus e to the plus i J t (up down plus down up), okay, because the aligned and anti-aligned configurations just pick up opposite phases under this diagonal Hamiltonian. Now, to trace over B on psi psi, which gives your reduced density matrix, what you're going to do is take the state, make the bra of it, put those two together, and then trace over B, which means you sandwich with up on side B and with down on side B, because B is just one single site in the way I'm doing it. So that's the trace over B, okay. So when you take psi psi, you're looking for cross terms for which the B part is the same, okay. So up up can meet up up, and up up can meet down up; likewise down down can meet down down, and up down can meet down down. So when you do this, what you get is that rho A of t ends up looking like one half times the matrix with ones on the diagonal and cosine of 2 J t on the off diagonal. And the way to see that is, when up up meets up up, you have an e to the minus i J t that cancels against an e to the plus i J t from the bra, but when up up meets up down, which gives the off diagonal part, you end up with e to the minus 2 i J t, and together with its conjugate partner that gives you the cosine, okay. So this is the reduced density matrix for spin A, in the up, down basis. Okay, and now the point is, we're going to look at the eigenvalues of this reduced density matrix and use that to compute the entanglement entropy. But you know that the maximum entropy you can get, which is log two, you get when the reduced density matrix looks purely diagonal, okay.
So when these off-diagonal terms go away, when they dephase away, what you find is that the eigenvalues are just one half and one half, and that corresponds to a log two entanglement entropy, okay. But when are they gone? So S of t is maximized for t equals two n plus one times pi over four J, right, the times where the cosine vanishes. Good. So now, this was only two spins talking to each other, okay. And this is what I mean by dephasing. So by dephasing, what's happening here? This off-diagonal element is a cosine, right. It's something that's oscillating, and when t becomes very large, this just oscillates a lot, right. And essentially you can say that it gets averaged out, and you're left with the one half and one half, okay. Now that's not quite right, because if you literally had this two-site problem, you would just see an oscillating S of t, okay. But what ends up happening in a many body system is that you have this pair of spins oscillating with some frequency set by their J, and then you have some other pair of spins oscillating at a frequency set by a slightly different J, and then something else with a slightly different J again. So you have a bunch of different incommensurate frequencies that are being added up together, okay. As a result of that, you don't actually get any revivals or periodicity, okay. But the point is that to get these guys to dephase against each other, right, to get them to mix up, I need J t to be order one, right. That's this expression. Until that's order one, your entanglement entropy is small. So again, when you have an exponential like e to the i J t, you say that it has completely dephased once J t is order two pi, or order one, right. And now you know that J scales exponentially with distance, J goes as e to the minus r over xi, okay. So you need e to the minus r over xi times t to be order one, that is, t goes as e to the r over xi, okay.
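(A crude numerical caricature of this counting argument: put one l-bit pair at each distance r from the cut, with coupling J of r equal to J0 times e to the minus r over xi, and say a pair contributes about log 2 once J t is order one and essentially nothing before. The threshold J t greater than or equal to one, the values xi = J0 = 1, and the one-pair-per-distance bookkeeping are all illustrative assumptions, not the real many body calculation.)

```python
import numpy as np

xi, J0 = 1.0, 1.0                  # localization length and bare coupling (illustrative)
rs = np.arange(1, 30)              # one l-bit pair at each distance r from the cut
Js = J0 * np.exp(-rs / xi)         # J(r) ~ e^{-r/xi}: exponentially weak couplings

def S_estimate(t):
    # a pair has dephased, and contributes ~log 2, once J(r) * t is order one;
    # that happens out to r ~ xi log(J0 t), so the count grows logarithmically
    dephased = Js * t >= 1.0
    return np.log(2) * np.count_nonzero(dephased)

for t in (1e2, 1e4, 1e6):
    print(f"t = {t:.0e}:  S ~ {S_estimate(t):.2f}")
```

Each factor of one hundred in t adds roughly xi times log of one hundred, about 4.6, more dephased pairs, so the estimate grows linearly in log t, matching the log growth in the plots.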
Which means that in some time t, spins which are within some distance r going as xi log t are getting dephased against each other. And that's why your entanglement entropy is only growing as a log in time, okay. It's coming from these exponentially weak interactions, which cause dephasing but no transport, okay. Notice that at no point did anything in my initial state actually move anywhere. It was still this up-down superposition on this side and this up-down superposition on that side, but the two were kind of talking to each other through their energies, okay. Is that clear? So the same kind of game, when you play it to show how something approaches equilibrium, will give you relaxation dynamics as well. Good, yep. How do I, what? Current? What do you mean by current? Where's the, what current? Oh, right, so this is not a current, sorry. This is imbalance, yeah. No, these are just two different strengths of the disorder. So the more disordered you are, the deeper you are in the MBL phase and the longer the imbalance persists, and by tuning the interactions you tune how deep you are in the MBL phase, okay. So the other thing that follows from this l-bit picture is that all your many body localized eigenstates, right, they can be right in the middle of the spectrum, where in a thermalizing system you'd have volume law, but all your many body localized eigenstates actually only have area law entanglement, okay. And the picture is, again, that your MBL eigenstates just look like product states of these l-bits. So if you make a cut somewhere, the l-bits that straddle the boundary contribute strongly and give you some entanglement, but the contributions of the ones far away decay exponentially with distance from the boundary, so they don't contribute to the entanglement, okay.
So every single eigenstate looks like an area law in an MBL system, okay. And in particular, I won't go into this, but what it tells you is that because you have low entanglement, you can in principle try to find more efficient representations of these states, ones that are not exponentially large in system size. So you don't need to keep track of all two to the L numbers; you can try to use matrix product state or matrix product operator approaches for these eigenstates, and try DMRG-style methods to compute them, which could give you some speedup, okay. In practice these work reasonably well deep in the MBL phase, but as we approach the transition, you end up creating all sorts of mixing between some l-bits here and some l-bits there, making superpositions between them, and trying to get that entire delicate structure correct is currently beyond the capability of these algorithms. Okay, good. So one of the upshots is this: we were looking for an order parameter for this transition, right. There's one in dynamics, right, you have some imbalance that either persists or doesn't persist, but what about individual eigenstate properties? Can we find some order parameter in terms of individual eigenstates? It turns out that entanglement entropy is currently the best understood order parameter that we have, okay. Because it scales as a volume law in the thermal phase, as we had discussed, and then it transitions into this area law, and this is something that's happening at the level of individual eigenstates, okay. So another way to think of this is as an efficient-to-inefficient transition, right.
So on one side you have an efficient representation of your states, because they have low entanglement and you can use tensor network approaches to try to characterize them, because even highly excited eigenstates in some way look like ground states of local quantum systems. Whereas on the thermal side, only ground states have area law or logarithmic entanglement, depending on whether they're gapped or not, while highly excited states have volume law entanglement, okay. So one way of characterizing the MBL transition is that it's a phase transition involving a singular rearrangement in the entanglement structure of individual many body excited eigenstates, okay. As you tune to the transition, each and every one of the system's many body eigenstates has this rearrangement in its entanglement properties, in some very special way, okay. Good. So I want to talk more about the properties of this transition, but we can get into various rabbit holes when I start doing that. So before I get there, I just want to very briefly flag one of the interesting things that we get when we have MBL, okay, which is this idea of localization protected quantum order, okay. So far I've been telling you about two different phases, localized and thermalizing, okay. And I was like, okay, those are our two phases, we're talking about them. But you can ask: can you have different types of MBL phases, okay? In the thermalizing category, of course, we know we can have lots and lots of different types of thermalizing phases: topologically ordered phases, magnets, this, that, everything you've been hearing about, right. What about MBL? Can we have different kinds of MBL phases, and how do we define them? Because now this requires thinking about phase structure in this out-of-equilibrium context, right. We no longer have access to the Gibbs density matrix. But what you can do is use this single eigenstate equilibrium ensemble that we have talked about, right. Individual eigenstates.
And you can ask whether individual eigenstates show patterns of order, okay. And indeed, what you find is that because these highly excited MBL eigenstates only have area law entanglement, right, they're not the wildly fluctuating thermal states that you'd usually associate with infinite temperature. Because they have this special structure, individual highly excited eigenstates can display patterns of frozen order that would otherwise be disallowed in equilibrium, okay. And these actually have measurable dynamical signatures. So you've gone from equilibrium phases of matter to eigenstate phases of matter, okay. And in a system that obeys ETH there's no distinction, because in a system that obeys ETH you can work with a single eigenstate or you can work with a Gibbs ensemble; you measure some correlation function, you get the same answer. But in these MBL systems, as we saw, individual eigenstates look very, very different. So there is a distinction here, okay. And all the usual thermal fluctuations that would destroy your order can be made ineffectual. Okay. So as a very simple example, here is an Ising system, right. It's a 1D transverse field Ising model, kind of the workhorse of phase transitions. It has this spin-spin coupling, sigma z sigma z, and it has this transverse field. The equilibrium phase diagram of this model is well known in one dimension: at zero temperature, when J is bigger than h, when the ferromagnetic coupling is bigger than the transverse field, you have a ferromagnet, and when h is bigger than J, you have a paramagnet, okay. So you can go in and measure some spin-spin correlation function, and the ferromagnet is ordered and the paramagnet is disordered. That's this quantity here. But at any finite temperature, right, you only have a paramagnet. There's no ferromagnet. And the way to understand that is, at any finite temperature, imagine a ferromagnet, some ordered state of spins which are all up.
And at any finite temperature, you have some finite density of domain walls, because you make spin flips with some finite probability. And these domain walls are delocalized, so they're fluctuating all over your system. So if you go in and measure some spin-spin correlation function, then as a result of all these fluctuating domain walls you actually get something that's exponentially decaying in the distance between the spins, okay. So this is the Peierls argument, the one-dimensional cousin of Mermin-Wagner: in a 1D system there's no finite temperature long range order, okay. But now consider the same problem with interactions, okay, or not interactions, with disorder. So I've added disorder in the couplings, and actually, just for those of you who know, this can be mapped to a system of Majorana fermions, and it's actually a free problem, okay. And if it's a free problem, a non-interacting problem with disorder, then Anderson already showed it's going to be localized, okay. And what that means is that now, at any given energy density (note that temperature is no longer well defined, but I can pick eigenstates at any energy density) the degrees of freedom in my system, which are the domain walls in the spin glass phase and the spin flips in the sigma x direction in the paramagnet, are going to be frozen, or localized, okay. So you still have a finite density of domain walls, but now, instead of fluctuating all over the place, they get pinned because of the disorder; they get localized. So if I now measure some spin-spin correlation function in a given eigenstate, between a spin here and a spin there, I have some fixed number of domain walls between them, but they're not fluctuating back and forth, okay. So I could get a plus one or a minus one, because your cartoon eigenstates just look like some frozen up-down pattern. So I'll get either plus one or minus one, some random sign, but I get something that's order one, okay.
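(This conclusion can be checked by brute force: for an open 1D Ising chain in zero field, the thermal correlator is exactly tanh of beta J raised to the power r, which decays exponentially at any finite temperature. In the sketch below the chain length, J, and beta are arbitrary illustrative values.)

```python
import numpy as np
from itertools import product

L, J, beta = 10, 1.0, 0.7          # illustrative chain length, coupling, inverse T

Z = 0.0
corr = np.zeros(L)                 # accumulates <s_0 s_r> numerators
for cfg in product((-1, 1), repeat=L):
    s = np.array(cfg)
    E = -J * np.sum(s[:-1] * s[1:])         # nearest-neighbor ferromagnetic energy
    w = np.exp(-beta * E)                   # Boltzmann weight
    Z += w
    corr += w * s[0] * s
corr /= Z

# exact result for the open chain in zero field: pure exponential decay
exact = np.tanh(beta * J) ** np.arange(L)
print(np.allclose(corr, exact))            # True: no long range order at T > 0
```

Lowering the temperature pushes tanh beta J toward one and stretches the decay length, but never makes it infinite, which is exactly the statement of the argument above.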
I don't get zero, because I'm not averaging over all these different fluctuating configurations. And likewise, in the paramagnet you have some local sigma x, a disorder parameter, that's always order one, okay. And this is true at every single energy density. So what this means is that in this problem, if I go in and compute an equilibrium Gibbs ensemble just like before, I'll do some trace of e to the minus beta H, adding together eigenstates with very, very different frozen patterns, okay. And when I do that averaging, then once again I've washed out my order, okay. So the same physics, if you try to probe it using your usual Gibbs ensemble, you won't see it. But if you go in and look at the level of individual eigenstates, then you see this frozen order, okay. And, yeah, so it's a new kind of order: long range order that was previously forbidden in equilibrium can now show up in individual eigenstates, yeah. So you can probe signatures of it. You can't directly probe eigenstates, but it shows up in the dynamics. There's something called the Edwards-Anderson order parameter, which is essentially looking at autocorrelations in time. So if you prepare some initial state, then if this spin starts up, it mostly remains up for a long period of time, which is this local memory business. And you can measure some two-point, two-time, unequal space-time correlation function from some initial state and see signatures of this. So that, yeah. So the reason we care about this is that properties of the eigenstates translate into properties of the dynamics, because in any quench experiment you take your initial state, you write it in terms of the eigenstates, and you see what happens, right. No, this is not equilibrium. In equilibrium, well, individual eigenstates look like this, but equilibrium is a trace of e to the minus beta H.
At beta equals zero, that's a trace over all eigenstates, infinite temperature. But I can pick a single eigenstate. So I have my full many body spectrum. Remember, if you diagonalize your system, you have some density of states, rho of E versus E, which looks like this, and the middle of it is your infinite temperature. I can randomly sample eigenstates at infinite temperature, and each of those will show this pattern of frozen order, because each of those looks like some up down up down frozen configuration. But if I average over all of them, then I've averaged over all possible up-down patterns, and the order is gone. So if you probed it using your standard equilibrium ensemble, you wouldn't see it; you have to look within the single eigenstate ensemble. What's the difference? So both of these sides are many body localized. A spin glass has long range order in the Ising order parameter, so if you measure the spin-spin correlation function, it's not zero. And here, in the paramagnet, if you measure a spin-spin correlation function, it is zero. But if you measure some diagnostic of MBL, like whether these states have area law entanglement, or whether local memory persists, both of these sides would show that. There's just an additional order parameter here, which couples to the Ising symmetry in this problem. So the spin glass is just a fancy way of saying a disordered ferromagnet, where the order is glassy in the sense that the sign of this correlation function fluctuates between plus one and minus one. Okay, and when you add interactions, the fate of these MBL-to-MBL transitions is currently an open question, which I just wanted to flag: can you go directly from one localized phase to the other, or is there an intervening thermal regime in the middle? This is a phase diagram that you get for a periodically driven localized system.
And MBL alone already gives you possibilities that don't exist in equilibrium, like this finite temperature long range order. When you combine many body localization and periodic driving, you're now marrying two different kinds of non-equilibrium, and that gives you an even richer structure. So this is an Ising model again, but now it's a periodically driven Ising model. The couplings J and h that I had are not only disordered, they're also periodically varying in time. And now, instead of getting just the two phases I told you about, the spin glass and the paramagnet, you actually get four phases. One of these is the Floquet SPT phase, which is a symmetry protected topological phase, a paramagnet. And another is a phase again with long range order, like the spin glass, but now it's time crystalline; it has interesting time dynamics. So this is an exact phase diagram in the non-interacting limit, but once you add in interactions, all of these lines get blurry. So you can have disorder in the non-interacting limit and then it's exact. In this problem you can actually show that there's a mapping to a model of Majorana fermions, which are non-interacting, but I can add in other terms, like next nearest neighbor sigma z sigma z, or sigma x sigma x, all sorts of other couplings, which after this transformation look like interactions between your fermions. Okay, all right. So that's all I wanted to say about eigenstate order. In the last few minutes, let me tell you a bit more about the MBL phase transition. Okay? Yeah, sorry, can you speak louder? So what I'm looking at here is the eigenstate properties of the time evolution operator over one period.
So instantaneously it's going to do its thing, but then I construct U of T, which I've written in that board corner over there, the evolution operator over one full period, and I can look at the eigenvalues and eigenstates of that, and within those I can look at eigenstate order once again. Okay. So, all right. So now let's step back to the MBL phase transition. This could be two hours on its own, so I'm just going to flash some results, okay? Most of this is still very much work in progress and yet to be understood, okay? So this is one kind of schematic phase diagram, with a thermalizing phase and an MBL phase. We've talked about some of the properties here: on the thermalizing side of the MBL transition, the transport and entanglement dynamics are actually governed by rare Griffiths effects, okay? And then between these there's this phase transition, which we really don't understand much about. One of the questions that was raised is: is this a direct phase transition? That's also not clear, because what could happen is, as you tune this parameter W, you first see a crossover into some region with very slow dynamics, with the actual MBL and thermal phases on either side and some intermediate regime in between, right? Could that be a possibility? We really have very few tools at our disposal to solve this problem. It's mostly theorizing, coming up with theories that sound plausible but may or may not be correct, and then finite size numerics, okay? So there are various theories we can come up with, some with a direct transition, some with something intermediate. If it's something intermediate with very slow dynamics, we don't really know how best to characterize it, and all the numerics that we've seen could be consistent with a slow crossover into this slow regime, okay?
So right, just to tell you what we mean by Griffiths phases or Griffiths effects. Suppose you have a system with some phase A and some phase B as a function of some parameter, okay? Then globally, maybe your disorder strength is such that you're thermalizing, okay? But locally you can have patches which look like the other phase, okay? If your system is disordered, with couplings drawn from some distribution, then there's always some probability that this happens, okay? So what could happen is, in real space, if that's your system, you could have a little patch here which looks thermal, and a little patch there which looks thermal, but most of it is MBL. And the question is, how do these patches affect your system, okay? And obviously there's the converse, which is what I was talking about with slow transport: little bottlenecks which are MBL in an otherwise thermalizing system. These bottlenecks are exponentially rare, okay? To get a bottleneck of a given size L has a probability P of L which goes as e to the minus L over zeta, let's call it that, okay? Because you need some rare event, your local disorder strength exceeding the critical disorder strength, to happen L times. So the probability of this is exponentially small, but look at its effect on your dynamics. For instance, if you're measuring conductivity, then the time to tunnel across this barrier, the time to get entangled across this barrier, is exponentially large, okay? So some kind of tau across this barrier goes as e to the L over xi, say, okay? So these are exponentially rare events with exponentially large impact on transport, or on entanglement dynamics, on the things you might be trying to measure, okay? And in the end, these two exponentials compete with each other and give you some kind of power-law slow dynamics.
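(That competition is easy to demonstrate by sampling. In the sketch below I draw bottleneck sizes from P of L greater than l equals e to the minus l over zeta, and assign each a crossing time tau equals e to the L over xi; eliminating L gives a power-law tail, P of tau greater than T equals T to the minus xi over zeta. The numbers zeta = 2 and xi = 1 are illustrative; note that for them xi over zeta is less than one, so the mean crossing time actually diverges, which is the subdiffusive regime.)

```python
import numpy as np

rng = np.random.default_rng(0)
zeta, xi = 2.0, 1.0          # rarity scale and tunneling scale (illustrative values)

# bottleneck sizes are exponentially rare: P(L > l) = exp(-l / zeta) ...
L = rng.exponential(scale=zeta, size=500_000)
# ... but crossing one takes an exponentially long time: tau = exp(L / xi)
tau = np.exp(L / xi)

# the two exponentials combine into a power law: P(tau > T) = T**(-xi/zeta)
for T in (10.0, 100.0):
    print(T, np.mean(tau > T), T ** (-xi / zeta))
```

The empirical survival probability tracks the predicted power law, so exponentially rare regions with exponentially long delays really do add up to a fat power-law tail in the waiting times.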
So usually you would have diffusion in a system, but when you have these bottlenecks, your diffusion can become subdiffusion, okay? So instead of x squared going as t, x grows as some smaller power of t, okay? Good. And maybe it's actually worth quickly showing how the power law comes about, because it's one line. So imagine that you're thinking about entanglement, yeah? Yeah, good, good. So it depends on which side of the transition you're on. If you're on the thermal side of the transition and you're looking at the effect of bottlenecks, the effect is most severe in 1D, right? Because you can't go around them; you always encounter them. In 2D and higher you have pathways around them. So when you look at transport on the thermal side of the MBL transition, these Griffiths effects are less severe in higher dimensions. But on the MBL side, and these are those non-perturbative instabilities I was referring to earlier, right, where Imbrie's proof fails, on the MBL side, if you have a mostly localized system with a little thermal bubble in it, you want to ask: is this thermal bubble going to eat up your system or not, okay? So here's a very, very simple argument. Imagine you have some l-bit at some distance, let's call it big R, away from the bubble, and the bubble has some size, little r, okay? The bubble has some local level spacing, which is e to the minus r to the d, okay? Because your level spacing scales exponentially in the volume. But the matrix element coupling the spin to that bubble is exponentially small in the linear distance, okay? So the two are going to hybridize if the matrix element beats the level spacing.
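(The hybridization criterion can be played with numerically. In this sketch the matrix element decays as e to the minus R over zeta in the linear distance, and the bubble's level spacing decays as e to the minus s R to the d in its volume once the bubble has grown out to radius R; the constants s = 1 and zeta = 0.5 are illustrative. For this zeta, in 1D the element stays below the spacing forever, while for d = 2 the element inevitably wins at large R, which is the avalanche.)

```python
import numpy as np

s, zeta = 1.0, 0.5                   # spacing constant and decay length (illustrative)
R = np.arange(1, 30, dtype=float)    # radius the bubble has grown out to

element = np.exp(-R / zeta)          # matrix element: exponential in linear distance
for d in (1, 2):
    spacing = np.exp(-s * R**d)      # level spacing: exponential in volume R^d
    avalanche = element > spacing    # bubble keeps eating l-bits where this is True
    print(f"d = {d}: avalanche ever starts: {bool(avalanche.any())}")
```

With strong enough disorder (small zeta) the 1D comparison can be kept safe at every R, but in d = 2 the volume in the exponent of the spacing always loses to the linear distance in the exponent of the matrix element once R is large.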
And in one dimension, you can have a genuine competition, because both scale the same way with distance, right? So in one dimension, if your disorder strength is large enough, you can make the matrix element always smaller than your level spacing, and you're fine. But in any higher dimension, your matrix elements are going down exponentially in the linear distance, while your level spacings are going down exponentially in the volume. So the matrix element always wins, and these non-perturbative instabilities take over the entire system. This is called an avalanche effect, because you start with this little bubble and it just grows and takes everything over, okay? So in higher dimensions, yeah, in higher dimensions these Griffiths effects are very dangerous, and they're the source of these instabilities. Okay, all right. So I have a bunch of stuff on finite size scaling, but let me skip over that and just tell you that the properties of this transition can actually be understood through some features of the quantum entanglement, okay. And it's really weird, because the transition actually looks like a strange hybrid between continuous and discontinuous transitions, okay? There is a sort of diverging length scale, which makes it look continuous, and we can do some kind of finite size scaling theory for it. But there's also a discontinuity. In the usual picture of phase transitions, you have some correlation length that's diverging, and on length scales smaller than the correlation length things can look like they belong to the other phase, but as long as you look on length scales larger than the correlation length, you're fine. In this picture, though, what you find is that even if you look at the scale of one site, the entanglement entropy can vary discontinuously across the transition, even though other properties look continuous.
So let me not try to explain that in the two minutes that I have. But what I will say is, there was this question that came up a few times about different kinds of disorder, right? What if you had quasi periodic disorder versus random disorder, and is there a difference between them? Here is a possible finite size scaling diagram, with some evidence for it in the data that we've seen, for what happens with random versus quasi periodic disorder, okay? So what's the picture? The first point is that in everything we've seen, quasi periodic disorder actually appears to be more stable than random disorder. And the reason is these rare events I was talking about, right? With randomness you can have a rare fluctuation, a patch of L sites where your disorder locally looks thermal, and that destabilizes your entire system. That doesn't exist for quasi periodic disorder. Well, it's not really quasi periodic disorder but quasi periodic couplings, right? They're entirely deterministic. So these rare region effects, which tend to destabilize the MBL phase in higher dimensions and with longer range interactions, are absent. So there's a sense in which your quasi periodic systems could be more stable, okay? And what we find is this. You measure some detuning, okay? Detuning is just some measure of the local energy separations; it could come from some quasi periodic potential or from some random potential, okay? If you have a detuning and no randomness, so purely quasi periodic, then you have some non-random fixed point here, okay? And these fixed points are characterized by the scaling exponent nu, okay, which is how the correlation length diverges at the transition. And there are bounds on this nu which say that for systems with randomness, nu has to be greater than or equal to two in one dimension, okay?
And indeed, all renormalization group treatments of this transition do find nu approximately three, consistent with this bound. But almost every exact diagonalization study of the transition that's been done so far has seen nu approximately one, okay, in violation of the bound. And it's not that the numerics aren't seeing any scaling, right? You could imagine that in such small systems you just don't see any scaling at all. They do see hints of scaling, but with a completely different nu, okay? What's also surprising is that at these small sizes, they see similar scaling for both random and quasi periodic models, though the random models are beginning to show a crossover into this disorder dominated regime, okay? So these are generally called Harris bounds on the correlation length exponent, and disorder is a Harris-relevant perturbation: if you have disorder, your nu has to get pushed up to obey this bound, okay? So one picture for this transition is that you have this non-random fixed point, which is quasi periodic, and as you add disorder there's a flow towards an infinite randomness fixed point, which obeys this Harris scaling, okay? And the point is, because the random case is more unstable, that fixed point has to sit at a larger value of detuning, okay? So the flow is in that direction. And in most numerical studies so far, because we're really very, very far from this infinite randomness fixed point, we're hanging out near the non-random fixed point and seeing the scaling that's appropriate to the quasi periodic model rather than the random one. So we need to find ways of getting closer to up there, okay?
So anyway, there are lots of open questions; this is very much an active area of research, both on what the possibilities are within the MBL phase, on the nature of the transition, and on the instabilities. So all of this is fertile ground, and I'll just leave up some recent reviews on this, okay? Yeah.