Thank you all for braving a 9 a.m. lecture in week three. Today I'll switch squarely onto the ergodic side of this title: I'm going to talk about quantum dynamics in systems that are very well thermalizing, so they're not going to be localized. In a couple of places I'll mention localization just to illustrate a contrast, but I'll stay on the ergodic side today. The central question we're asking is the one I raised on the first day: what is the dynamics of isolated many-body systems undergoing unitary time evolution? These are strongly interacting, highly excited systems, and as I said, they can be spins, cold atoms, black holes, whatever you want: just some strongly interacting many-body system. We spent most of our time discussing the case of a time-independent Hamiltonian, where the unitary operator is generated by the Hamiltonian, U(t) = e^{-iHt}. I briefly mentioned periodically driven, or Floquet, systems, in which the unitary operator for an integer number of periods is obtained just by composing the operator for a single period. Today I'm going to spend a lot of time on time evolution generated by random unitary circuits. In this problem, we make the time evolution random in space and in time, so it's extremely unstructured; the only features I want to retain are unitarity and locality.
Sometimes, if you're trying to solve a problem that has a lot of structure, you can't go very far as far as exact results are concerned; you can do numerics and so on. But one way to make progress is to get rid of all the features that are non-essential and keep a minimal model. Yesterday we looked at random matrices as a kind of minimal model that captures some of the dynamics, and today we look at random unitary circuits. So again: there's no periodicity in time, no periodicity in space, just local unitary gates acting on some spin system. The questions are similar in spirit to what we were asking before. The first question, which we already addressed, is: can reversible time evolution bring a system to thermal equilibrium at late times? We said the answer can be either yes or no, and when it's yes, we should think of thermal equilibrium in terms of subsystems reaching equilibrium at late times. So now, for a system that does reach thermal equilibrium, we can ask how that happens: can we understand this process of thermalization in some detailed sense? For example, if you have some local operator O, I told you that for a system that thermalizes, the late-time expectation value of O agrees with what you would compute in an appropriately chosen thermodynamic Gibbs ensemble. But how does the information about the expectation value of O at the initial time get hidden? This local operator started off with some value at t = 0, we did time evolution, and at late times it has forgotten all memory of that initial condition. How does that happen under unitary time evolution, in some concrete sense?
We can also ask about the dynamics of quantum entanglement, and about transport, how conserved charges relax over time. Transport is a subject that has been studied for a very long time, but we're only recently starting to think seriously about the dynamics of quantum entanglement, both in localized systems and in thermalizing ones. Then I'll briefly touch on how hydrodynamics can emerge from reversible unitary dynamics. Hydrodynamics is usually thought of as the relaxation of slow modes in your problem. Suppose energy or particle number is conserved: at late times your system has a slow hydrodynamic mode associated with the diffusive relaxation of that conserved energy or charge. But this late-time diffusion is a slow, dissipative process; it increases the entropy, it's irreversible. So how does unitary time evolution give rise to this slow, dissipative hydrodynamic mode in its late-time dynamics? That will be the bulk of my talk, but towards the end I also want to touch on some notions of what's meant by many-body quantum chaos. This is a word that's currently very fashionable, due to some very interesting connections with black holes. Everyone's been talking about many-body quantum chaos, but what exactly does it mean, and is there a useful definition that's distinct from thermalization? There have been many discussions of quantum chaos in various semi-classical limits: you start with a classical system, like the stadium billiard I mentioned yesterday, you quantize it, look in the limit where h-bar goes to zero, and ask about properties of quantum chaos.
That kind of single-particle, semi-classical quantum chaos has a long history, but today I want to ask about a regular spin-1/2 chain: away from any semi-classical limit, away from any large-N limit, away from anything you can use to control the problem. I just have a generic thermalizing spin-1/2 chain that I know reaches thermal equilibrium at late times. Is this model chaotic, and if it's chaotic, how should we think of that chaos? In particular, chaos or its absence will show up as different time scales in the dynamics of the problem. There are late time scales associated with thermalization, when the system reaches thermal equilibrium. There are time scales associated with the level statistics I told you about yesterday: energies correspond to inverse times, so level spacings probe time scales, and the nearest-neighbor energy level spacings we looked at are exponentially small in system size, which corresponds to times exponentially large in system size. Those probe properties of thermalization and chaos at the longest time scales, and there our notion was that if the system looks like a random matrix at those longest times, then it's chaotic in some sense. But what about early times? What about intermediate times? Usually when people think about chaos, they have in mind some kind of butterfly effect: you make a tiny perturbation somewhere, and very quickly that perturbation gets mixed up over the entire system.
A small perturbation has an exponentially large effect on trajectories in a classically chaotic system, but that's a very early-time property, and we'll ask whether such early-time signatures can exist in these genuinely many-body spin-1/2 quantum systems. So, let me start on these questions. In this talk, what we'll find very useful is to look at the dynamics of operators, in the Heisenberg picture. We start with a local operator A0 that lives on site zero, by which we mean that it acts as the identity on all sites other than site zero. Then we ask about A0(t) = U†(t) A0 U(t): what does this time-evolved operator look like in the space of operators? So I'm not doing time evolution on states anymore. What I'll show you concretely is that if this operator starts at the origin in x, it spreads out. This is for a clean chaotic system, for instance; actually, let me not use the word chaotic: for a clean non-integrable system, this operator, which started at the origin, develops weight over a region of space that grows ballistically in time with some speed, often called the butterfly speed. I'll explain why "butterfly," which has to do with the connection to chaos, or you can really think of it as a Lieb-Robinson speed. So this operator that started out here is, in operator language, developing weight and getting entangled within a Lieb-Robinson cone, and the speed of the cone is set by the Lieb-Robinson velocity.
As a reminder, what Lieb and Robinson showed way back when is that if you look at the commutator between two operators and take its norm (we could worry about which norm, but let's treat this heuristically), then it is bounded: roughly, ||[O_0(t), O_x]|| <= c exp(-a(|x| - v_LR t)). What this says is: take the operator O_0(t), which started at position zero, was evolved in time, and has weight in some region, and look at its commutator with a different local operator at position x. Outside the Lieb-Robinson cone, that commutator is exponentially small. This Lieb-Robinson velocity is an intrinsic velocity of local quantum systems, which can depend on the couplings in your problem and so on. In this picture, if you start your operator at this position and look very far away, at some x at the other corner of the board, and ask whether the time-evolved operator at time zero, or time epsilon, commutes with an operator very far away, the answer is that it does, because this operator lives here, that operator lives there, and the two commute with each other. So this is like a version of causality in local condensed matter systems. But as you evolve the operator in time, once site x enters its light cone, the commutator between them grows. [Question.] Yeah, this is just a statement about locality, so you can do it in any dimension, that's right; then you'll get some vector x. [Question: what determines the Lieb-Robinson velocity?] It depends on the particular microscopic couplings in the problem and so on. The existence of the Lieb-Robinson velocity follows from locality, but its actual value depends on microscopic details.
So the actual value is not universal. Good. Just as a contrast, since we spent so much of lecture one talking about disordered systems: this ballistic operator spreading happens in clean systems, so what about disorder? With disorder, what you find is that if you're many-body localized, you actually get a logarithmic light cone: your operator spreads over a region that only grows as log t. This is connected with the logarithmic growth of entanglement, and it comes from the same slow dephasing effects, due to the exponentially weak coupling between l-bits, that I discussed last time. The other case is a thermalizing but disordered system: on the phase diagram as a function of disorder strength W, there's an MBL phase above some critical W and a thermal phase below it, so suppose you're somewhere in the thermal phase, but still disordered. Then what you find is that because of Griffiths bottlenecks, rare regions, there is some small probability in the thermal phase of having inclusions of the many-body localized phase. Those act as rare bottlenecks that impede the spread of information and the growth of entanglement. As a result, the light cone in these systems, for both entanglement growth and operator spreading, goes as a power law, t^alpha with alpha less than one. [Question.] For some regions near the MBL transition; it doesn't have to happen everywhere. [Question.] Sorry, in one dimension, yeah. Good. If there's time at the end of this lecture, maybe I'll explain very briefly where this power law comes from, from these rare events. All right, so now let's try to actually derive this kind of spreading of operators.
So: operator spreading in random circuits. [Question about integrable systems.] Yeah, very good question; I just wrote a paper on this last week. In integrable systems you have the quasi-particle description that you heard about in Pasquale's talk. Let's distinguish between two types of integrable systems: interacting integrable and non-interacting integrable, like a free-fermion system. In an integrable system you have a quasi-particle description, so when you act with a local operator, you produce pairs of quasi-particles, and because the operator is local, those quasi-particles are spread out in momentum space. In a free-fermion system, the quasi-particles are momentum modes with some dispersion relation. Those quasi-particles spread ballistically, and the butterfly speed is set by the speed of the fastest quasi-particle in your system. So the zeroth-order picture, this ballistic spreading of your operator, continues to hold even in an integrable system. That's actually something I'll come back to when I talk about early-time signatures of chaos, because as we saw yesterday, when you look at the level statistics, which cares about very late-time dynamics, you can clearly tell the difference between an integrable and a non-integrable system. But if you look at these coarse-grained features of operator spreading, for many purposes clean non-integrable and clean integrable systems actually look very similar. But this is last week's paper, so let me come back to that later. [Question.] Oh, you shouldn't expect it to fail for an integrable system; I just wanted to focus on ergodic or chaotic dynamics, which is why I didn't bring it up. The Lieb-Robinson bound only cares about locality.
So this bound is completely general; it does not care about thermalization. That's right, it's an upper bound. Everything we've seen is consistent with it: in MBL, the logarithmic growth is slower than ballistic, and the sub-ballistic growth, t^alpha with alpha less than one, is again slower than ballistic. So in these other cases, which are disordered (not necessarily non-thermalizing, because they can be thermalizing, and I don't know what chaos means, so I don't want to use that word), you can get sub-ballistic spreading of the light cone, but that still obeys Lieb-Robinson. And I don't want to get into integrable systems right now. All right, so let's see. There are heuristics for how these things happen, but ultimately we want solvable models. Numerically you can compute this and see that it looks linear in a large class of systems, but it would be very nice to have a toy model in which the calculation can be done exactly. The toy model we'll consider is a spin-1/2 chain on which you act, in time, with local unitary gates. I've just chosen this particular brickwork geometry; in general you can even make the geometry random if you want. But each gate is drawn independently from the Haar measure on unitary gates. So this is a local unitary gate that acts on two spins. And in particular, instead of a system of spin-1/2s, I can take each little spin to be a q-state qudit; I don't need to work with spin-1/2, there's nothing special about it. Now, just as you have a complete basis of states (in spin-1/2 you can expand every state in the up-down basis of sigma-z),
we can write a complete basis of operators. For spin-1/2 the Pauli matrices are a complete orthonormal operator basis: on every site I can pick the identity, sigma-x, sigma-y, or sigma-z, and I get what I call Pauli strings, which are just elements of this basis, for example 1, 1, 1, X, Y, 1, Z, and so on. In total there are 4^L basis elements, because on each site I can pick any one of these four operators, and these 4^L elements form a complete orthonormal set, so any operator in this spin-1/2 space can be expanded in this basis of operators. By orthonormal I mean orthonormal with respect to the trace norm. Calling these strings S, we have (1/2^L) Tr(S S') = delta_{S,S'}. For a single Pauli operator you can convince yourself: suppose S was sigma-x on some site and S' was sigma-y on that site; then the trace of sigma-x sigma-y, which is proportional to the trace of sigma-z, is zero. But if both were sigma-x, then sigma-x squared is the identity, and the trace of the identity divided by 2^L is one. So the only way to get a non-zero trace is for the two strings to be the same; that's why they're orthonormal. [Question: what does random mean here?] Suppose these are spin-1/2s. Because a gate acts on two spin-1/2 degrees of freedom, it's a 4-by-4 unitary matrix. Yesterday we were talking about drawing random matrices from the Gaussian ensembles; for unitary matrices there's the Haar measure, and what it does is sample uniformly over the space of unitary matrices.
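As a concrete aside (not from the lecture), here is a minimal sketch of how one commonly samples such Haar-random gates numerically, using the standard QR trick: take a complex Gaussian (Ginibre) matrix, QR-factorize it, and fix the phases so the distribution is exactly Haar rather than merely unitary. The function name `haar_unitary` is my own choice.

```python
import numpy as np

def haar_unitary(dim, rng):
    """Sample a dim x dim unitary uniformly from the Haar measure.

    QR trick: QR-factorize a complex Ginibre matrix, then rescale each
    column of Q by the phase of the corresponding diagonal entry of R.
    """
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))   # multiplies column j of q by phase of r[j, j]

rng = np.random.default_rng(0)
u = haar_unitary(4, rng)         # a two-qubit (4 x 4) random gate
assert np.allclose(u.conj().T @ u, np.eye(4))
```

Without the phase fix, the raw Q from `np.linalg.qr` is unitary but not uniformly distributed, which would bias the circuit ensemble.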
[Question about the gate geometry.] Yeah, everything I want to say right now can be understood with this geometry, but in general you can vary it. I do want to keep locality, because I want to know what operator evolution looks like when you have locality and unitarity: that's the general setting of the Hamiltonians I'll be considering, and the setting bounded by these Lieb-Robinson-type bounds. So I insist on locality: sure, you can make your gates act on two, three, or four sites, but I don't want to make them long-range. There are various works now trying to extend versions of the Lieb-Robinson bound to power-law interacting systems and so on, but all of that is still being worked out. [Question.] Yes, it's the Haar measure on the unitary group, that's right. All I mean is that each gate is independent and you sample them uniformly from the space of unitary matrices. Good, so that's my operator norm, and (I wish there were a copy-paste for the board) okay. Now, given that this is an orthonormal basis, you can take your time-evolving operator, A0(t), and expand it in this basis of Pauli strings: A0(t) = sum over S of a_S(t) S. You can always do this. The description of operator spreading is going to turn on describing how these coefficients a_S(t) evolve in time. Of course, trying to describe how every one of these coefficients evolves is an exponentially hard problem: there are 4^L of them. Even to describe how a given state evolves in time, you could expand it in a basis of 2^L state vectors, but you're never actually going to track every one of those coefficients.
So we want some coarse-grained measures of operator spreading that don't require knowledge of every one of those coefficients. Let's think about what constraints unitarity imposes on the time evolution. What you know is that the norm of your operator, Tr(A† A), is conserved. Actually, I should say S† here: the norm is defined with a dagger, and for Pauli strings S† = S, but for qudits I could choose a basis in which the strings are not Hermitian, so the orthonormality condition should really read (1/2^L) Tr(S† S') = delta_{S,S'}. So Tr(A0†(t) A0(t)) under unitary time evolution is the same as Tr(A0† A0): inserting the U's, the U† U pairs in the middle are the identity, and by cyclicity of the trace you can move the remaining U around and cancel it too. That's just the statement that under unitary time evolution, norms of vectors don't change and norms of operators don't change. Now, if you expand this quantity in the string basis, it means sum over all S of |a_S(t)|² equals one, where the one comes from choosing a normalized operator, and there's a factor of 1/2^L in the norm. How do I get that? Plug the expansion into the trace and you get a double sum over S and S' with the trace out front; by orthonormality of the strings, only the diagonal terms in that sum contribute, and you pick up a_S(t) squared.
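This norm-conservation statement is easy to check numerically in a tiny example (my own illustration, not from the lecture): expand a two-site operator in the 16 Pauli strings, conjugate it by a Haar-random gate, and verify that the total weight, the sum of |a_S|², stays equal to one.

```python
import numpy as np
from itertools import product

# Single-site Pauli operators.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# All 4^2 = 16 two-site Pauli strings, orthonormal under Tr(S† S')/2^L.
strings = [np.kron(a, b) for a, b in product([I, X, Y, Z], repeat=2)]

def pauli_weights(A, L=2):
    """Coefficients a_S = Tr(S† A) / 2^L of A in the Pauli-string basis."""
    return np.array([np.trace(s.conj().T @ A) / 2**L for s in strings])

# A Haar-random two-site gate via the QR trick.
rng = np.random.default_rng(1)
z = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
q, r = np.linalg.qr(z)
U = q * (np.diagonal(r) / np.abs(np.diagonal(r)))

A0 = np.kron(X, I)                 # normalized: Tr(A0† A0) / 2^L = 1
At = U.conj().T @ A0 @ U           # one Heisenberg-picture step

# Unitarity implies sum_S |a_S|^2 is conserved (and equals 1 here).
print(np.sum(np.abs(pauli_weights(A0))**2))
print(np.sum(np.abs(pauli_weights(At))**2))
```

The initial operator has all its weight on a single string; after the gate the weight is scattered over many strings, but the total is unchanged.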
So what this tells you is that the conservation of the operator norm in time just means that the weights of this operator on all possible Pauli strings, the weights being amplitudes squared, sum to a constant in time. That's like a normalization condition. Now I want to convert this normalization, which follows purely from unitarity, into a more coarse-grained picture that tells me where this operator is, physically, in space. To do that, let me define two quantities, the right and the left weight of the operator at position x and time t. Let me just do the right weight; the left is analogous: rho_R(x, t) is the sum of |a_S(t)|² over all strings S whose rightmost endpoint is at position x. What is this doing? Every Pauli string has a left end and a right end: it's the identity everywhere outside those ends, and at each end it's a non-identity, X, Y, or Z. So you can take your full space of 4^L Pauli strings and, because we want to know the spatial extent of this operator, classify them by where their right edges are and where their left edges are; every Pauli string is uniquely assigned such endpoints. Then, to figure out the right weight of the operator at some position x, I ask: what's the weight of this operator on all Pauli strings that end at position x? I only consider that part of it.
And because every Pauli string ends somewhere, the normalization condition on the a's tells you that the sum over x from 1 to L of rho_R(x, t) is one. Notice what we've done. Oh, sorry, before all this I should have mentioned some references: all of this was started by papers of Adam Nahum and collaborators, and this operator-spreading story was also worked out by von Keyserlingk et al., all around 2016 to 2018. So, what have we done? We've taken an expression that's not very enlightening, with 4^L coefficients, and reduced it to a conservation law for something that looks like a spatial density, because rho_R has a spatial location associated with it. You think of it as a lump living somewhere in space, and it has the interpretation of a conserved density because its sum over all positions is one at all times. Usually when you think of a charge, a charge density lives at various locations in space, and conservation of charge means the sum of the charge densities everywhere is fixed. This is like that, but it's an emergent operator density: operator density in different parts of space, with unitarity telling you the sum is one. [Question.] Yeah, I said choose a normalized operator. [Question.] No, no, the sum over x is one. Do you understand this expression? It's the sum of |a_S|², and each a_S is uniquely associated with one endpoint x: I've taken my set of 4^L strings and grouped them into L sets according to where they end.
So there's some number of strings that end on site one, others that end on site two, others on site three, and by the time I've looked at every endpoint, I've counted every string exactly once. This just counts the weight on those endpoints, so if the full sum adds up to one, this is simply a reorganization of that sum that builds in the spatial structure. Is that clear? [Inaudible question.] Can you speak up? Sorry, I can't hear you. That's right, I said choose a normalized operator: whatever your operator norm was at time zero, if it was five, divide by root five to make it one; that's just an overall scaling. [Question: the coefficients are exponentially small, but can they actually be zero?] No, your coefficients can definitely be zero. If you start with an operator that's a Pauli matrix on exactly one site, say sigma-x there and identity elsewhere, then in a random circuit coefficients can be exactly zero because you have a strict light cone. And even when they're arbitrarily small, exponentially small numbers can still add up to one; that's why I'm saying you have to add them all up. We can come back to this, but with a light cone, and it doesn't even matter whether you have a random circuit or not: you start with an operator that's a single Pauli string, so the right weight is a delta function at the origin, because the operator has weight on only one Pauli string, which lives at the origin. All the a_S vanish except the one for the Pauli operator that lives exactly there, so the right weight gets a contribution from that one point. It starts out at the origin as a delta function and then evolves in some way that I'll tell you about. And the fact that it adds up to one is just a restatement of the normalization. This is just an identity; there are no assumptions here.
It doesn't matter whether you're local or non-local; it's just completeness of the operator basis and preservation of the norm, nothing else. [Question: the board is hard to read.] Okay: the sum is over S such that the right endpoint of S is x, and the right endpoint is defined as the rightmost location where your Pauli string has a non-identity. The x here is the argument of rho_R, there. Okay, script x, yep; I'll start using that symbol when I mean Pauli. All right, so the point is that we know intuitively that as you start your operator off at this location, it's going to develop weight on Pauli strings that get longer and longer: the right endpoints of these Pauli strings end up shifting right, and the left endpoints shift left. We've interpreted rho_R as an emergent density, like a particle, and what we hope to see is that this right endpoint moves out ballistically in some way, because that captures how the weight of the operator grows on longer and longer strings with further and further right endpoints. So, how do we see that? In this random circuit model, you can actually work this out exactly. The picture is that you want to ask how a given Pauli string gets updated by your random circuit. You start with some Pauli string, and I actually have some nice pictures for this, so let me show you the slides rather than try to draw it. I have a Pauli string S, and it meets this gate, with everything to the left and to the right of the shown region being the identity. And the front of the operator (I'm going to talk about a left-moving operator front and a right-moving operator front) is the location of the last gate that sees something non-identity.
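The whole setup so far can be demonstrated by brute force in a very small circuit (my own sketch, not from the lecture): build a four-qubit brickwork circuit of Haar gates, Heisenberg-evolve a Pauli Z initially on site 0, and compute the right-weight profile rho_R(x, t), which starts as a delta function at the origin, stays normalized, and marches to the right.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
L = 4                                    # sites; there are 4**L = 256 Pauli strings
P = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def kron_all(ops):
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# Every Pauli string, labelled by (i_1, ..., i_L) with i = 0 meaning identity,
# together with its rightmost non-identity site (its right endpoint).  The
# all-identity string is lumped at x = 0; a traceless operator never has
# weight on it, so this choice is harmless here.
labels = list(product(range(4), repeat=L))
strings = [kron_all([P[i] for i in lab]) for lab in labels]
r_end = [max((x for x, i in enumerate(lab) if i != 0), default=0) for lab in labels]

def haar(d):
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def brick_layer(pairs):
    """Independent Haar gates on the given neighbouring site pairs."""
    U = np.eye(2**L, dtype=complex)
    for a, b in pairs:
        g = np.kron(np.kron(np.eye(2**a), haar(4)), np.eye(2**(L - b - 1)))
        U = g @ U
    return U

def rho_R(A):
    """Right weight: total |a_S|^2 over strings ending at each site."""
    w = np.zeros(L)
    for s, x in zip(strings, r_end):
        w[x] += abs(np.trace(s.conj().T @ A) / 2**L)**2
    return w

A = kron_all([P[3]] + [P[0]] * (L - 1))  # Z on site 0: rho_R is a delta at x = 0
for t in range(3):
    U = brick_layer([(0, 1), (2, 3)] if t % 2 == 0 else [(1, 2)])
    A = U.conj().T @ A @ U               # Heisenberg-picture evolution
    print(t, np.round(rho_R(A), 3))      # weight spreads right; total stays 1
```

Each layer, the support of the operator can grow by at most one site, which is the strict light cone of the circuit; the printed profiles show the right edge advancing with the brickwork.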
So the rules of the game are these. When you have a completely Haar-random gate, the gate locally sees only the piece of the operator that arrives on its two sites: the action of the gate on the two-site operator living on that little patch. And locally, on average, every operator goes to every other operator with equal probability, or rather, every non-identity goes to every other non-identity with equal probability. So now we're just doing a two-site problem. You can forget about the rest of the string; you just want to know how the endpoint evolves. You have some two-site operator, maybe X on one site and Z on the other, arriving at this gate, and under conjugation by the gate, U† (X⊗Z) U, you ask what it goes to, just for this two-site problem. And if these gates are picked uniformly at random, then every non-identity operator goes to every other non-identity operator with equal probability. So let's just look at which outcomes move the front in which direction. Let's do backwards first: we say the front moves backwards if, under the action of the gate, the string didn't grow at all. Because of this even-odd staggered brickwork structure, if the string did not grow under the action of this gate, then in the next step your front has moved one step back. So what's the probability that it didn't grow? On this two-site gate with spin-1/2s, I have 15 non-identity operators. I would have had 16, four choices on site one times four choices on site two, but I removed the identity-identity case, because the identity is left invariant.
And among these 15 non-identity operators, for the configuration where the string does not grow, what you need is for the right site to be the identity, and then the left site can be X or Y or Z, okay. So there are only three out of the 15 choices which are consistent with your front taking a step back, okay. In all other cases, you end up with a non-identity on the right site, either X or Y or Z there, and then the left site can be anything: three choices versus 12 choices, okay. So it is much more likely for your operator to grow under the action of this gate than for it to be left alone, okay. Probabilistically, you can just see why this operator spreads out and grows in time: when you act on it with these gates, there are many more ways for it to grow than to shrink, okay. So what's happened now? We created a fictitious particle, if you will: this density looks like the density of some particle, normalized to live somewhere in our space. And we're seeing that the dynamics of this particle is that of a biased random walk, okay. The particle sits on some site, and at the next time step it can either move left or move right, but it's going to move right with a higher probability, okay. So that's the statement right here, that the front dynamics is given by a biased diffusion equation, okay. The gate gives some amplitude for making S longer and some for leaving it the same length or making it shorter, but it's biased towards making it longer, which means the front moves preferentially towards the right, okay. So this means, and I'll write these equations just to go slowly, that the front dynamics is that of a biased random walk, right.
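This 3-versus-12 counting is easy to verify by brute force. A minimal Python sketch, not from the lecture, where the labels 'I', 'X', 'Y', 'Z' just stand for the single-site Paulis:

```python
# Enumerate the 15 non-identity two-site Pauli operators and count how many
# have the identity on the right site (front steps back) versus a
# non-identity there (front steps forward).
from itertools import product

paulis = ['I', 'X', 'Y', 'Z']
pairs = [(a, b) for a, b in product(paulis, repeat=2) if (a, b) != ('I', 'I')]

back = [p for p in pairs if p[1] == 'I']     # right site identity: retreat
forward = [p for p in pairs if p[1] != 'I']  # right site non-identity: advance

print(len(pairs), len(back), len(forward))   # 15 3 12
```

Since the Haar average sends each non-identity pair to any of the 15 with equal probability, the two counts directly give the step probabilities 3/15 and 12/15 quoted below.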
And hydrodynamically, what you've done is map this whole problem onto a set of fictitious random walkers, okay, and this ρ is encoding their density. If you had to write down what the hydrodynamics of that looks like, you'd have this piece telling you how the density changes in time; a ballistic component, because dx/dt gives you this v_B; and then a diffusive component, which comes from the randomness of the walk. If you didn't have v_B, this would just be the usual diffusion equation, where you move left or right with equal probability, and all that happens is that a particle you start somewhere diffuses. But if you're more likely to move right than left, then on average you actually drift to the right. So this v_B is given by the probability of moving right minus the probability of moving left. Good. Any questions on why the front ends up looking like a biased random walk, and what that means? [Question: could the string be cut somewhere in the middle?] I'm not sure I understand; I'm just looking at the endpoint of this string. Oh, that's fine: in this picture, I'm not saying that all of these sites have to be non-identity. The middle could be anything, as long as there's a non-identity at the endpoint. So the string could be X, Z, Y, followed by identities, right. That's why every Pauli string uniquely has one right endpoint: it's just wherever the last non-identity sits. And under the action of the circuit, all of the stuff in the middle can do whatever it wants; I just want to know what the last endpoint is going to do, right.
And to understand that last endpoint, I only have to look locally at the two-site gate sitting on the endpoint. And then I see that at the endpoint, it is more likely for this front to take one step right than one step left, okay. That's the biased random walk. And if you have a biased random walk, the expression for the density of walkers is exactly that, right: it's a profile propagating at some butterfly speed, a function of x minus v_B t, where v_B is just the probability of moving right minus the probability of moving left, okay. So, that's a great question. Starting with these papers that looked at random circuit models, what is pretty amazing is that you're considering a very quantum mechanical object, this superposition of amplitudes in the space of operator strings. But some coarse-grained dynamics of it are described by classical differential equations. Here it's a problem of biased diffusion; in higher dimensions, it ends up looking like a stochastic surface growth problem, so you get these KPZ equations. So it's really interesting that you get this emergent classical description for a very quantum mechanical object. Right, it can be mapped to a classical surface growth problem. Good, any more questions? Good. So, about the way I defined my front: you're right that at the level of a string, a gate can make it shorter, leave it the same, or make it longer. But the front bookkeeping is microscopic, because of the even-odd structure of the particular circuit, and you can choose to group things in whichever way to get a coarse-grained description that looks like that. The way I did it is I defined the front as living on the bond, okay.
So when it lives on the bond, notice that if the string grows by one, the front takes a step forward, and if the string is left invariant, the front takes a step backward. And because of this even-odd bond structure, it always has to go either forward or backward; there's no other option. How does the butterfly velocity appear? Well, the drift velocity that appears is what I'm calling the butterfly velocity. The fact that there is a net velocity comes from the bias. In a regular diffusion problem, what does that mean? You have some drunken man who is as likely to stumble left as right, so if you ask for his position after time t, he's just diffused around where he started; he hasn't really gotten anywhere. But if you have a drunken man with some sense of direction, who takes steps left and right but is preferentially more likely to step right, then you'll find that his profile drifts to the right on net, while keeping this diffusive form. And this is the expression for the butterfly speed: the probability to move right minus the probability to move left. So in our problem, that's 12 over 15 minus 3 over 15, which gives v_B = 9/15 = 3/5, right. So, rather than doing general q in full, let me say one thing about the large-q limit, okay. For spin one-half, we saw that there were four operators on each site. For local Hilbert space dimension q, there are q² operators on each site, okay. So on the two sites that my gate acts on, I have q⁴ minus one non-identities.
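The drunkard picture can be simulated directly. A hedged sketch, not from the lecture, assuming the spin-1/2 step probabilities 12/15 and 3/15 quoted above:

```python
# Monte Carlo of the biased front walk: each walker steps +1 with
# probability p_right = 12/15 and -1 otherwise. The mean position drifts
# at v_B = p_right - p_left while the variance grows linearly in time.
import random

def front_positions(n_walkers=5000, t=400, p_right=12/15, seed=0):
    rng = random.Random(seed)
    pos = [0] * n_walkers
    for _ in range(t):
        for i in range(n_walkers):
            pos[i] += 1 if rng.random() < p_right else -1
    return pos

t = 400
pos = front_positions(t=t)
mean = sum(pos) / len(pos)
var = sum((x - mean) ** 2 for x in pos) / len(pos)
print(mean / t)   # drift per step, close to v_B = 12/15 - 3/15 = 0.6
print(var / t)    # variance per step, close to 4*p*(1-p) = 0.64
```

The linear-in-t variance is exactly the diffusive broadening of the front discussed below: ballistic drift of the peak, width growing like the square root of t.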
So the probability of moving backwards, which is this case, is the probability that you stick the identity on the rightmost site and any non-identity on the other one. So P_back = (q² − 1)/(q⁴ − 1) = 1/(1 + q²), and P_front is one minus that, which is q²(q² − 1)/(q⁴ − 1) = q²/(1 + q²), okay. So you can see that as you make q large, so instead of spin one-half you take spin one, three-halves, and so on, it's overwhelmingly more and more likely that you move forwards rather than backwards: P_front goes to one as q increases, just because at large q there are far, far fewer possibilities for the string not to grow, okay. So the profile of this operator front: you started somewhere here, and the front is moving out within a light cone defined by this butterfly speed. But because of the diffusion process, this randomness in moving left and right, your front is not only propagating ballistically in time, it's also broadening diffusively, okay. Ballistic propagation and diffusive broadening: that's what the picture of your operator looks like in one dimension for these random circuit models. And in the circuit case, there's a strict light cone speed, right. Lieb-Robinson tells you that if you have locality, the weight of your operator outside some velocity, the Lieb-Robinson velocity, is exponentially decaying, right. That's certainly true here, but random circuits actually go one step further, because there's a strict light cone velocity with exactly zero weight outside it. (I'm running out of batteries; I'm going to fix that in a second.) So if you look at this picture right here, right.
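These general-q formulas are simple enough to tabulate exactly. A small sketch, using exact fractions, where q is the local Hilbert space dimension (q = 2 for spin one-half):

```python
# Front-step probabilities and butterfly speed as a function of q,
# following the counting of non-identity generalized Pauli operators.
from fractions import Fraction

def p_back(q):
    # identity on the right site, any non-identity on the left site
    return Fraction(q**2 - 1, q**4 - 1)    # simplifies to 1 / (q**2 + 1)

def p_front(q):
    return 1 - p_back(q)                   # = q**2 / (q**2 + 1)

def v_butterfly(q):
    return p_front(q) - p_back(q)          # = (q**2 - 1) / (q**2 + 1)

print(p_back(2), p_front(2), v_butterfly(2))   # 1/5 4/5 3/5
for q in (2, 3, 5, 10, 100):
    print(q, float(v_butterfly(q)))            # approaches 1 as q grows
```

For q = 2 this reproduces the 3/15 and 12/15 counting, and the last loop shows v_B approaching the strict light cone speed of one as q goes to infinity.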
If you act with an operator which started out on that site, then after some applications of the gates it has to lie strictly within that region of space, right. Because everything is discrete in space and time, there is nothing to connect this operator with anything that lives out here, not even with exponentially weak amplitude: something here mixes with these two sites, then these two, then these two, and you end up with a strict light cone speed, which is one. The butterfly speed is less than this light cone speed, but as q goes to infinity, v_B approaches one in that way, and the butterfly speed and the light cone speed end up coinciding, okay. Yes, I'm thinking of spin one-half, spin one, whatever you want; you can always define these generalized Pauli operators for any q. [Question: what exactly is q?] Oh, sorry, no, you're right: q is not the spin, it's the local Hilbert space dimension, so q = 2S + 1. For a spin one-half system, q is two, so q⁴ is 16. Yes, q is 2S + 1, that's right. So, let me come back to that question: the question is whether q to infinity is related to the classical limit, and I'll come back to that, yeah. Very good. The next question is what happens if you change the Haar measure to something else, okay? Of course, we have to change it in ways such that we can still solve the problem, right? So I'll describe to you, when we come back from the break, what happens if you change it in a way that imposes a conservation law, right?
So, right now, I said that everything mixes in a completely random way. But what if the circuit is designed to conserve spin in some way, right? Then you have some conserved quantity which could have its own hydrodynamic behavior, and how does that couple to this emergent hydrodynamics of the operator? So I'll describe that story to you. Another thing that was done is to model this case of a disordered thermalizing system. There was a paper by Nahum and Huse, and there may have been one more author, where what they did was vary the rates at which the gates are applied. The Haar measure is very useful for being able to do averages, but if you want one particular block of sites to not be as completely random as it can be, you can say: what if I put down my unitaries at variable rates? So on this site, I'll put down a unitary at every time step, but on some other sites, I'll put one down only probabilistically, with some rate. Then you artificially create bottlenecks, because if there's some site where it's very unlikely for you to put down a unitary, then nothing really gets entangled across that site, right? So that looks like a very strongly disordered region, in this analogy to MBL problems. When they did that calculation, they were able to get different exponents for the butterfly speed and how the front broadens and all of that. But this is all very new, so more creative ways to make this less completely Haar-random are definitely interesting to pursue, yeah. This is the entanglement velocity, which I'll mention briefly, but I won't get time to talk much about it.
So, this is the speed at which entanglement propagates ballistically, and in these circuits you can show that v_E ≤ v_B. As q goes to infinity, all three speeds (v_E, v_B, and the light cone speed) just become one, but generically that's the separation. [Question about the size of operator space.] Well, the question is what the dimension of your space is, right? You have infinitely many operators, but they act on a 2^L-dimensional Hilbert space, so the operator space has dimension 4^L. That's the sense in which, on two sites, there are 16 basis elements, 15 of them non-identity. There's an infinite set of possible U(t)'s, but every U(t) can be expanded in this basis, that's right. Okay, so maybe we should take a break. You guys don't want a break? I can keep going. [Question: is there a Hamiltonian that would generate this unitary process? Would it correspond to a time-dependent one?] So again, it's a discrete time evolution, right? It doesn't really have a well-defined Hamiltonian. No, it isn't a badly phrased question: for Floquet systems, periodically driven systems, you can also have discrete time evolution, but what that means is that you have a Hamiltonian that's a function of time and repeats itself every period, and you look at the evolution stroboscopically, once per period. Here, the evolution at every step is completely random; it's like a stochastic evolution that's totally random in time, and also discrete in space, so that aspect is a bit hard to build into a Hamiltonian. So you do need to have your Hamiltonian be random in time.
You can't get the randomness from having disorder in space; the disorder in space actually kills you, right? Because that gives you MBL, right? So the randomness in time is really important to get this chaotic behavior. But the point I wanted to make before finishing was that even though all of this has been worked out for random circuits, just like the random matrix theory story, it seems to transfer to Hamiltonian systems: the ingredients of locality and unitarity carry over. Okay, very cool, thank you. [Question: in the Pauli matrix basis, we have identities like XY = iZ, so the identity and the three Pauli matrices are not independent; why do you call it 4^L basis elements?] No, they are independent: they are orthogonal to each other under the trace inner product. A relation like XY = iZ is a multiplicative relation, not a linear one. What you want for a basis is linear independence: you want to be able to expand any operator as a linear combination. So suppose you just have some σ^z evolving in time, and you want to write it as a sum over basis elements; I want a complete, orthonormal set to expand it in linear form, in the usual linear algebra sense. I'm not multiplying basis elements, I just want a linear expansion.
So, I want linear independence, and I want to be able to write my operator as a sum over the basis, in a vector space sense. [But Z squared is the identity...] Again, that's a multiplicative relation, so it doesn't matter here. Think about it for a second with just a single site: your states are up and down, and you can write any operator in the basis of outer products: up-up, up-down, down-up, down-down. All I'm saying is that the space of operators on one site is a vector space with four linearly independent things that you need to specify. And you can choose your basis however you like: the identity and the three Paulis is a perfectly good basis, and once you start taking products across sites, they have nice properties. But this is also a very good basis: the projector on up, the projector on down, and the raising and lowering operators. In fact, in this paper that I wrote, which I will talk about next, where you have an extra conservation law, we find it much more convenient to work in this second basis. It's just a different basis; you need all four elements to specify an operator. Go ahead. [Comment from the audience: we have also developed measures that might be connected to your talk. We define an orthonormal basis for operator space, solve for the coefficients a_S, and study their properties. But I'm not quite sure of the difference between chaotic and integrable systems here, because we also studied a chaotic system. My name is Pei Wan.]
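The distinction being made here, that XY = iZ is a multiplicative relation while a basis only needs linear independence, can be checked explicitly with the trace inner product. A minimal sketch, not from the lecture:

```python
# The four single-site operators {I, X, Y, Z} are mutually orthogonal under
# <A, B> = tr(A^dag B), hence linearly independent, even though they satisfy
# multiplicative relations like X Y = i Z.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

ops = [I, X, Y, Z]

# Gram matrix of trace inner products: tr(A^dag B) = 2 * delta_{AB}
gram = np.array([[np.trace(A.conj().T @ B) for B in ops] for A in ops])
assert np.allclose(gram, 2 * np.eye(4))   # orthogonal basis of operator space

assert np.allclose(X @ Y, 1j * Z)         # a product relation, not a linear one
```

A diagonal Gram matrix is exactly the statement that no Pauli can be written as a linear combination of the others, which is why the 4^L product strings form a complete basis for L sites.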
[Question: you say that the length of operators grows linearly for chaotic systems; how do you define that linear growth using this ρ(x,t)?] Right, ρ(x,t) is propagating ballistically; that's the linear growth. ρ(x,t) satisfies the biased diffusion equation, so if you look in time, ρ(x,t) is a lump that lives here, then a lump that lives there, and the peak of that lump is moving ballistically at the velocity v_B while its width is increasing. [So it's the center of ρ(x,t) that increases linearly?] That's correct. [However, if you consider an integrable system like free fermions and you choose fermionic field operators, you find the same behavior.] I know, and that's why I said I'll come back to integrable systems: this measure does not really distinguish integrable from non-integrable. Someone asked that at the beginning of the talk, and I said that's true: for integrable systems you also see this cone, because of the ballistic quasiparticles of an integrable system. The butterfly speed is then set by the speed of the fastest quasiparticle. In the MBL phase, on the other hand, the spreading is not ballistic. So this measure can tell the difference between MBL and thermalizing systems, but it cannot tell the difference between integrable and non-integrable ones. Okay. [Question: what are independently Haar-random unitaries? I've never heard that before.] Just imagine you have a complex sphere, and you're sampling points on that sphere uniformly at random; that's the idea. [Question: I didn't get the point of this definition, this curly ρ definition.]
Ah, so this is ρ. It's a density, yes. It looks like the density of some probability; that's right, and that's why at the end you end up with that type of formula for its evolution. [But the sum here is restricted to a specific set.] Yes: if you summed |a_S|² over all strings without any constraint, it would be one; here you're restricting to one part of the sum, the strings whose right endpoint sits at a given position. That's the interpretation: ρ is the emergent density which tells you where the right endpoint is located. You're not looking at the weight of a site itself, but at the coefficients of the Pauli strings that end on that site. [Question: so if ρ is peaked at site one of a chain, does that mean site one carries the identity?] It doesn't matter what sits in the interior of the string; I'm only looking at the position of the rightmost non-identity. And to be clear about the symbols: x here is real space. It's the position at which the Pauli string ends, the endpoint of it. Did you guys have any questions? No? Okay, go ahead then. [Question: for a chaotic system, isn't it practically impossible to track all these coefficients, since their number grows exponentially?] That's why I'm talking about these coarse-grained densities rather than the full set of exponentially many coefficients. And for this, I do have to discretize time.
And discretize the space, yes. At any instant of time, because space is discrete, you can take all your Pauli strings and reorganize them; it's nothing but a reorganization of the basis. [Question: how do you derive the diffusion equation for this ρ?] I derived the biased random walk up here. That ρ tracks the right endpoint of a string, and that endpoint moves either left or right with some probability, which means that ρ, the density of where this particle is, moves either left or right with some probability. It's like a fictitious random walker; that's the language these papers use. [Question: for long-range interactions, could one see experimental evidence from some groups?] That's right, and I think some groups are already working on how to generalize the problem to long-range couplings and so on. This is nice because you can just do it analytically; the point of doing this random circuit calculation is that you don't have to do a numerical simulation. As soon as you're doing a direct simulation, you're back to 14 sites or something, and then you don't even need the random circuit. Okay. Good. So I just really wanted to emphasize that this x is a position, okay. I have a discrete spin chain and this x is being summed from 1 to L. It's a position, not a Pauli X operator. And I'm just classifying all my different Pauli strings by where they end, okay. I can take this group of 4^L strings and separately bag them into these L bags, okay.
The bags are labeled by where the string ends, one bag for each of the L possible endpoints, okay. And then this ends up looking like a density of where your operator is living, and that density executes a biased random walk. So your operator grows on net, and the front of the operator is not only propagating ballistically but also spreading diffusively. Okay. Good. Right. So, we're going slower than I had thought, so I wanted to tell you about entanglement dynamics, but I'm going to skip over that. Essentially the message of entanglement dynamics is that you can play similar games: you can take this kind of random circuit model, and various related models, and map the dynamics of entanglement growth to similar kinds of classical differential equations, okay. And you get ballistically propagating entanglement as well. But let me go to the next part, which is: how do I add some structure to these random circuits? Okay. The other thing that I want to emphasize is that even though this is all being worked out for these random circuit models, if you numerically look at just a regular Hamiltonian spin chain, you see this ballistically propagating operator front, which is diffusively spreading. Okay. These are small-system-size numerics, so you can't nail it down, but the fact that there is a ballistic front which is spreading in time is very, very visible. Okay. So just like yesterday, where we were able to work with a completely unstructured N-by-N random matrix and extrapolate its properties to a clean spin chain, whether integrable or not, likewise we can take this model of random circuits and use it to say something constructive about one-dimensional spin chains away from any of these limits. Okay. So maybe I'll actually switch to slides now because it'll go quicker. Okay.
So for the next part, I want to ask what happens if you build in a conservation law. Okay. In a Hamiltonian system, this could be energy: if you have a time-independent Hamiltonian, energy is conserved. Or it could be charge, or particle number. And what you want to ask is: suppose you have a chaotic many-body system with this ballistic spreading of information, but in addition you have one or a few locally conserved, diffusing densities, say energy or charge. How does the ballistic spreading of information interface with the diffusive relaxation of conserved densities, right? In a regular system, you know that if you create a lump of charge somewhere, it's just going to diffuse away from there. Okay. So how does that work out? And perhaps more fundamentally, we want to ask how unitary quantum dynamics, which is reversible, gives rise to diffusive hydrodynamics, which is dissipative, okay? So the setup is what we already described, but now I have my spin-one-half qubits, and I want the z component of the total spin to be conserved, okay? These circuits that I'm considering are random in space and time, and because they're random in time, I don't have energy conservation. The conservation law that shows up most frequently when you're thinking of a time-independent Hamiltonian is energy; I no longer have energy, but I want to move towards simulating what would happen if I had a conserved quantity like it. So in these circuits, what I can build in is conservation of the total spin, S^z_total, okay? And how do I do that? Well, instead of taking every gate to be completely random, I'm going to impose some additional structure on each gate, okay, to conserve spin.
So what that means is, again: if you have a gate that acts on two sites in a spin-one-half problem, then your basis elements are up-up, up-down, down-up, and down-down. If you want your gates to conserve particle number, then within the two-dimensional block spanned by up-down and down-up, you can stick in a two-by-two random unitary. But in the up-up block and the down-down block, you just stick in a one-by-one random unitary, which is basically just a random phase, okay? So you have a random phase, a two-by-two Haar-random block, and another random phase. And each block in each gate is going to be drawn independently and randomly, and we're going to try to solve the problem using that, okay? So now, once again, we can write our spreading operator in terms of these Pauli strings and their coefficients. But now, instead of one constraint from unitarity, we actually get two constraints on what these coefficients can do, okay? There's the constraint from unitarity, which we already discussed, and then there's the constraint from the presence of the conservation law that I added in, okay? Unitarity, which is basically preservation of the trace norm, tells you that the sum of these weights is one, okay? That's what we discussed there. To understand what the conservation law is going to give you, we should separate our operator into conserved and non-conserved pieces, okay? We have these 4^L Pauli strings, which form a full basis set. But these σ^z operators which live on a particular site i, the Pauli Z operators, are a subset of this whole set of 4^L strings, okay? There are L of these local Pauli operators, σ^z_i, one living on each site.
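The block structure just described can be written down explicitly. A hedged sketch, where the basis ordering |uu>, |ud>, |du>, |dd> and the QR construction of a Haar-random block are choices of this illustration:

```python
# Build one S_z-conserving two-site gate: random phases on |uu> and |dd>,
# and a Haar-random 2x2 unitary on the {|ud>, |du>} block.
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    # QR of a complex Gaussian matrix, with column phases fixed, is Haar-random
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

U = np.zeros((4, 4), dtype=complex)
U[0, 0] = np.exp(1j * rng.uniform(0, 2 * np.pi))  # random phase on |uu>
U[3, 3] = np.exp(1j * rng.uniform(0, 2 * np.pi))  # random phase on |dd>
U[1:3, 1:3] = haar_unitary(2)                     # mixes |ud> and |du>

# total S_z on the two sites (in units of hbar/2): diag(2, 0, 0, -2)
Sz = np.diag([2.0, 0.0, 0.0, -2.0])

assert np.allclose(U.conj().T @ U, np.eye(4))     # the gate is unitary
assert np.allclose(U @ Sz, Sz @ U)                # and it conserves S_z
```

The gate commutes with S_z precisely because it is block diagonal in the S_z eigensectors, which is the whole content of the construction.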
And I can separate the spreading operator into the part that lives on those L local strings and then everything else, okay? These L local operator strings are your conserved densities, and everything else is hidden, in the sense that most of it is very, very non-local. Of course, the sum of them will include local pieces, but most of them are extremely non-local. So hidden is in the sense of whether an experimentalist can go in and measure it: most of those 4^L Pauli strings are very non-local, okay? Now, once you've separated your operator in this way, the fact that S^z_total commutes with the dynamics tells you that the sum of amplitudes on these L conserved strings is a constant in time, okay? So maybe I can go through that a bit more slowly. We have O₀(t) = Σ_S a_S(t) S, okay? And I know that tr[O₀(t) S^z_total] is constant in time, right? That's just because S^z_total is conserved by my dynamics: this trace is the same as tr[O₀ S^z_total(−t)], moving the U's around, and since this operator commutes with the dynamics, it's just tr[O₀ S^z_total], okay? So it's constant in time, okay? Now plug in the expansion: this equals Σ_S a_S(t) tr[S Σ_i σ^z_i]. Sorry, I wrote a commutator there for a moment; it's a trace, no commutator, okay? And because of the orthonormality under the trace, only when S is one of the Z's do you get a non-zero trace, which means you pick out exactly those components in your full expansion which correspond to the weight of your operator on these local strings, okay?
So this ends up being the sum over i from 1 to L of a-i-c of t, and that's constant. So what we have now is operator dynamics governed by the interplay between two different constraints on your coefficients. The first is unitarity, which is a sum rule on the norms, the squared weights, of 4 to the L different operator coefficients. The second is this conservation law, which is a sum rule on the amplitudes themselves, not the amplitudes squared, on just L coefficients, the local conserved part of your operator. So how does this behave in time, okay? To see that, let's consider the spreading of some conserved density. Suppose the operator O starts off being one of the sigma-Z's, okay? In these a-i-c's, the c stands for conserved and the i is a positional label. They start off being one on site i, because that's where the operator started, and so the sum of these conserved amplitudes is one, okay? Now, you know that conserved charges diffuse in time, and in fact in this random circuit model you can explicitly derive that your conserved charges will diffuse. So what that means is: look at the trace of Z-naught of t with Z-naught, at position zero, okay? Before we were doing any fancy operator dynamics or any of this stuff, if I told you that you have a system with charge conservation, and you create a lump of charge at the origin and watch what it looks like in time, that's just measuring this autocorrelation function; this is transport. You'd say, okay, if it's diffusive, it just goes as one over square root of t, right? That's what you would have said.
But now in this operator language, what this means is that if you expand Z-naught of t in the operator basis like that, then when you take the trace with Z-naught, you're picking out exactly the part of that operator which has overlap on that one basis string, because of the orthonormality of the trace inner product. So this is telling you that a-c-zero of t goes as one over square root of t, because that amplitude is what the trace picks up. So just by looking at this autocorrelation function, if you create a lump of charge on one site and look at how it evolves in time, intuitively you know it's just going to spread diffusively. This is the physical diffusion of charge; it is not the emergent biased-diffusion process of the operator front that we were talking about before. And the physical diffusion of charge tells you that those L conserved amplitudes obey the diffusion equation and spread out in that way. This is of course normalized, so it's consistent with the sum of all those amplitudes adding up to one. And indeed, in that random circuit problem I showed you, where you added in this extra conservation law, you can just work it out explicitly; the point of the random circuit model is that you can solve everything. So I can solve it, and I get a very explicit answer in terms of binomial coefficients for what these conserved amplitudes do. And if you coarse-grain and take a scaling limit, you exactly recover this diffusion form, and you get a diffusion constant for your physical charge. So, alright. What does this mean? Think of the unconstrained circuit. To zeroth order, what's happening is that in some time t, your operator has weight on a linearly, ballistically growing region which scales with t, and there are 4 to the t operator strings in that light cone, right?
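The binomial-to-diffusion statement is easy to verify numerically (a sketch of my own, using the unbiased random walk as a stand-in for the explicit binomial answer): the amplitude remaining at the origin is the normalized central binomial coefficient, and it converges to the Gaussian diffusion value, which decays as one over square root of t.

```python
from math import comb, sqrt, pi

def a0(t):
    # conserved amplitude remaining at the origin after 2t steps of an
    # unbiased random walk: the central binomial coefficient, normalized
    return comb(2 * t, t) / 4 ** t

for t in (10, 100, 1000):
    print(t, a0(t), 1 / sqrt(pi * t))   # the last two columns converge
```

So the exact circuit answer (binomials) and the coarse-grained hydrodynamic answer (the 1/sqrt(t) diffusion form) agree already at modest times.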
So the weight on any one local operator will usually go down exponentially in time, just because you're developing weight over exponentially many operator strings as time goes on, okay? But what this is saying is that in this problem with the conservation law, the weight on your conserved densities only goes down as a power law in time. So it's much slower; it's not going down exponentially like it does in the completely random circuit. What that means is that you can look at not just the amplitude of a particular conserved charge, but the total weight in all the conserved charges. You have L conserved charges, the sigma-Z's on every site, and you can ask what fraction of your operator weight lives on those L conserved charges, right? That's just adding these L numbers up: take this diffusion form that I derived for you, square it, integrate it. You find that in this diffusive cone near the origin (there's also a ballistic cone going out, but this diffusive cone near the origin is what corresponds to your conserved charges) you end up with significant weight, which goes down only as a power law in time, as opposed to an exponential, okay? So some part of your operator continues to remain observable, if you will, because these conserved densities are local and observable, right? Normally they would have immediately been converted into these very, very non-local operators, but now that conversion process has been slowed down, okay? And because your total operator weight is conserved in time, the fact that the weight on these local conserved densities is going down means that your conserved parts are getting converted into non-conserved stuff, right?
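The "square it, integrate it" step can be checked in two lines (my own sketch, assuming the Gaussian diffusion profile for the amplitudes): summing the squared amplitudes gives a total conserved weight that falls as t to the minus one-half, a power law rather than an exponential.

```python
import numpy as np

def conserved_weight(t, D=1.0):
    # total operator weight on the conserved densities, assuming the
    # amplitudes a_i(t) follow a normalized Gaussian diffusion profile
    x = np.arange(-10000, 10001)
    a = np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
    return np.sum(a ** 2)

w1, w4 = conserved_weight(100), conserved_weight(400)
print(w1 / w4)   # ~2.0: quadrupling t halves the weight, i.e. weight ~ t^(-1/2)
```

Compare the unconstrained circuit, where the weight on any local string would instead be exponentially small in t.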
So the conserved stuff is only these L sigma-Z's, and those get converted into non-conserved stuff. And the non-conserved stuff just spreads as in the old problem that we worked out, the plain random circuit problem, and it becomes very, very non-local, okay? So, setting aside the reference to Ohm's law for a second, the point is this: I had asked what dissipation means under unitary dynamics, and the answer is that the dissipative process is this conversion of operator weight from locally observable conserved parts to non-conserved, non-local parts at a slow hydrodynamic rate, okay? The rate is slow and hydrodynamic because this conversion only happens as a power law in time. Usually it's super fast, so it doesn't show up as a slow hydrodynamic mode, okay? So what we see is that the observable entropy increases, while under unitary dynamics the total von Neumann entropy of the system is a constant, okay? The observable entropy refers to the part of the operator that's accessible to local measurements, and the weight in that part goes down in time, but only slowly, as a power law, okay? So let's put all the pieces of this operator shape together. You have diffusion of conserved densities near the origin. The weight on those goes down as a power law, and because the total weight is conserved, you convert weight from conserved to non-conserved stuff; that's an emission of operators. Once the non-conserved operators have been created, they spread ballistically, because that's the problem we already worked out on the board. So this non-conserved stuff spreads ballistically and quickly becomes super non-local within the light cone, okay? And what happens is that you now get these diffusive tails behind the ballistic front, because you have this lump near the origin.
It emits some operators, and those spread and form your leading ballistic front. Then at some later time you emit some new operators, and those spread at the same butterfly speed, but because they were emitted at a later time they can't catch up with the main front, okay? So you get a tail that comes from these lagging fronts emitted at later times, and your net operator profile actually ends up looking like that, okay? In the problem we worked out before, there was just the ballistically propagating front at position v-B t, spreading diffusively in time; in the non-conserved case, that's all there was. But now the operator weight profile picks up this power-law tail behind the front, and you also have this diffusive lump near the origin, okay? So just adding one single conservation law, with its diffusion, has a pretty significant effect on the shape of the operator, okay? All sorts of slow, power-law diffusive processes show up: this slow power-law tail, the actual physical diffusion, and the two get coupled together in an emergent hydrodynamic description. The physical charges diffuse; that's just the diffusion equation for the actual conserved charge. But the emergent front operator is no longer conserved, and in fact it propagates with some velocity, it spreads, and it also has a source term, because the conserved charges serve as a source for this non-conserved part, okay? Don't worry if you didn't absorb all the details; the punchline is that by thinking in terms of operator dynamics, you can give a very concrete answer to the question of how a system with a conservation law reaches equilibrium, or what the slow, dissipative hydrodynamic process is within this overall unitary time evolution, okay?
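Schematically, the coupled hydrodynamic description just mentioned can be written as follows (the notation here is my own sketch, not the lecturer's blackboard equations):

```latex
\partial_t \rho = D\,\partial_x^2 \rho,
\qquad
\partial_t \rho_{\mathrm{nc}} + v_B\,\partial_x \rho_{\mathrm{nc}} = \Gamma[\rho](x,t),
```

where \(\rho\) is the conserved charge density, \(\rho_{\mathrm{nc}}\) is the operator weight on non-conserved strings, \(v_B\) is the butterfly speed, and \(\Gamma[\rho]\) is a source term proportional to the local rate at which conserved weight is lost; a diffusive broadening term for the front can be added on the right-hand side as well.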
Good, so now in the last ten minutes I want to switch briefly to the ideas we had discussed about what chaos is, and to the questions that were raised about integrable versus non-integrable systems, okay? So what do we mean by chaos? One definition that people are using a lot these days is this out-of-time-ordered commutator, okay? And what the out-of-time-ordered commutator does is exactly what we discussed with the Lieb-Robinson velocity: it measures the commutator between the spreading operator and some local operator at a different position x, okay? At early times, if x is far away from the origin, this is small, but when the front of the operator gets to position x, you see a sharp increase in this commutator, okay? And the reason people think of this as a diagnostic of chaos is that if you take the classical analog of this object, with the commutator turned into a Poisson bracket, then what it probes is precisely how an initial perturbation at one location affects the trajectory at future times, okay? In a classically chaotic system, your mental picture of chaos is some object exploring all of its phase space, and if the system is chaotic, then starting the object off slightly differently completely changes its trajectory in phase space, okay? That's the intuitive picture of classical chaos. And indeed, you probe it by asking: you had some trajectory, and if you go in and make an infinitesimal change somewhere at the origin, how does that change the trajectory in the future, okay? That's this quantity, and you square it because the information about this change is going to propagate out, but it can in general grow or shrink, so you square it to get rid of sign fluctuations, okay?
And then, using classical Heisenberg equations of motion, you can relate this derivative to a Poisson bracket, and we know that once you quantize something, Poisson brackets become commutators, okay? So that's why this is treated as a diagnostic of chaos: the connection with this classical behavior. In a classical system, this quantity shows exponential growth when your system is chaotic, and this exponential growth defines an exponent lambda, a Lyapunov exponent, okay, which is supposed to be the signature of chaos. Recently there have been lots of papers written on trying to compute a quantity like this for a black hole, okay? I don't know if you've heard of the Sachdev-Ye-Kitaev model; there are various condensed matter models whose physics is supposed to let you understand scrambling dynamics in black holes in some way. This commutator has been computed in various such models, an exponent lambda has been derived, and there's a bound on lambda which says lambda can be no greater than 2 pi T, the temperature, in units with hbar and Boltzmann's constant set to one, in quantum models, okay? But all of those computations work in some special limits, right? They all work in some large-N limit: the SYK model with N fermions, or some large-N gauge theories. In those settings, they are able to derive this Lyapunov exponent lambda. But a question is: if you just had some non-integrable spin chain which was thermalizing, does that have a Lyapunov regime, right? So, I think I'm losing most of you here, so please stop me and ask questions. Give me one second.
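As a purely classical warm-up (my own illustrative sketch, not from the lecture), the Lyapunov exponent being referred to can be computed for the chaotic logistic map, where it comes out to ln 2:

```python
from math import log

def lyapunov_logistic(r=4.0, n=100_000, x0=0.3):
    # Lyapunov exponent of the logistic map x -> r x (1 - x): the orbit
    # average of log|f'(x)|, i.e. the mean local stretching rate of
    # infinitesimally nearby trajectories
    x, s = x0, 0.0
    for _ in range(n):
        s += log(abs(r * (1 - 2 * x)))  # |f'(x)| = |r (1 - 2x)|
        x = r * x * (1 - x)
    return s / n

print(lyapunov_logistic())   # ~0.693 = ln 2: exponential sensitivity, chaos
```

A positive value means nearby initial conditions separate exponentially, which is exactly the classical behavior the quantum OTOC is trying to mimic.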
So, classically, what was found is the following. You can again look at just some classical spin system, make some small perturbation at some location, say the origin, and ask how it spreads in time. What you can define is a velocity-dependent Lyapunov exponent in a classical system, okay? What this means is that you ask how the fact that you made a perturbation here affects trajectories along this velocity, or that velocity, and so on, right? And what you find is that there is again some butterfly speed. So for speed v equals x over t, if you look at the effect of this perturbation at speeds within the butterfly cone, then you get a positive Lyapunov exponent lambda of v, which means that whatever perturbation you made is going to have an exponentially large impact, and everything is going to get mixed up within that cone. But outside that cone, the effect of making that perturbation will decay, okay? So classically this was seen, and there are lots of works on this going back many, many years, okay? And what I was saying is that in these large-N or semi-classical or holographic regimes, once again you see some positive lambda of v inside the cone and some negative lambda of v outside the cone, okay? But in strongly quantum systems, like a spin one-half chain, the point is that you don't really have that freedom. Classically, you can go in and infinitesimally perturb the system at some initial location, right? Quantum mechanically, in a spin one-half system, there's no way to really make an infinitesimal perturbation, because it's just a discrete spin one-half chain. How do you make a tiny infinitesimal perturbation?
And moreover, if you look at this out-of-time-ordered commutator, it does grow in time, but it has to saturate in a quantum mechanical system, because things can't grow without bound, because of unitarity, okay? So in these strongly quantum spin one-half systems, what you actually find is that your operator gets scrambled, by which I just mean that it develops weight on longer and longer Pauli strings, but you never see the regime of exponential growth in this out-of-time-ordered commutator that's seen in classical models, okay? So I started by asking whether there is some early-time diagnostic of chaos that is distinct from thermalization. I don't know if the answer is no, but at least this out-of-time-ordered commutator doesn't really do it for you, okay? The OTOC in these strongly quantum systems does not show you a period of exponential growth. And the fact that lambda of v is negative outside the light cone, where negative just means the commutator decays, because it goes as e to the lambda t, so negative lambda corresponds to exponential decay outside the light cone, that follows just from locality and Lieb-Robinson. It has nothing to do with chaos or no chaos; it follows just from locality and Lieb-Robinson. But within the cone, we don't really see any period of exponential growth, okay? And further, what happens within the cone actually looks very, very similar for integrable and non-integrable systems. So these measures of operator spreading are very useful for understanding how this information gets delocalized in space-time, what the features of the operator are, how hydrodynamics emerges, and what the dynamics of entanglement are.
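To see the growth-then-saturation concretely, here is a small exact-diagonalization sketch (my own illustration; the Ising chain with couplings g = 1.05, h = 0.5 and the system size are assumptions, chosen as a standard non-integrable model): the squared commutator starts at zero, grows once the operator front arrives, and then saturates at an order-one value bounded by unitarity, with no extended exponential regime.

```python
import numpy as np
from functools import reduce

L = 6
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(single, i):
    # embed a single-site operator at site i of the L-site chain
    mats = [I2] * L
    mats[i] = single
    return reduce(np.kron, mats)

# a standard non-integrable Ising chain: H = sum Z Z + 1.05 X + 0.5 Z
H = sum(op(Z, i) @ op(Z, i + 1) for i in range(L - 1))
H = H + sum(1.05 * op(X, i) + 0.5 * op(Z, i) for i in range(L))

evals, evecs = np.linalg.eigh(H)
W0, V = op(Z, 0), op(Z, L - 1)

def otoc(t):
    # squared commutator C(t) = tr(c^dag c) / 2^L with c = [W(t), V];
    # unitarity bounds this by ||c||^2 <= 4
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    Wt = U.conj().T @ W0 @ U            # Heisenberg evolution of Z_0
    c = Wt @ V - V @ Wt
    return np.trace(c.conj().T @ c).real / 2 ** L

for t in (0.0, 2.0, 10.0, 20.0):
    print(t, otoc(t))   # grows from 0, then saturates below the bound of 4
```

Because the operator norms of W and V are fixed, the commutator is bounded, which is the saturation-by-unitarity statement in the text.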
So there are many uses for looking at the dynamics of operators, and they really have helped us better understand quantum dynamics in this chaotic setting. But at least at a coarse-grained level, in these strongly quantum spin one-half settings, the dynamics of operators doesn't diagnose a short-time exponential-growth regime, and it doesn't distinguish between integrable and non-integrable systems, okay? Because even in integrable systems there are quasi-particles, and those propagate ballistically and give you this ballistic operator growth. And what we showed recently, in the paper I was referring to from last week, is that even in an integrable system, if your integrable system is interacting, then not only do you get this ballistic operator growth, but you also get the diffusive front-spreading that I mentioned, okay? So you get both of the ingredients we discussed for these random circuit models, even in an interacting integrable system. So using this as a measure of telling those apart is not good, okay? All right, so let me stop there.