All right, so let's start. It's a great pleasure to be invited here; thanks to all the organizers, especially to Roman for inviting me to speak about this work. So this is about integrability breaking. This is a topic that's been around for a very long time, obviously, but it's enjoyed a bit of a comeback in the past few years for a bunch of reasons. One major reason is that experiments have gotten much better at precisely probing how a system that's approximately integrable relaxes when you break integrability. So, for instance, you can plot things like momentum distributions, and you can also plot things like transport and other such quantities. You saw some of this already in Frederick's talk this morning, and there are a lot of experimental efforts along these lines. At the same time, from a more conceptual perspective, there's been a realization that the relaxation of a slightly broken conservation law can be very interesting and very non-trivial. A lot of that was stimulated by papers by Abanin, De Roeck, Huveneers, and others, who showed that, contrary to some naive expectation you might have, you can get, for instance, exponentially long lifetimes for approximate conservation laws. And the third development, of course, is GHD itself. I'm not going to say anything about GHD because you've already heard all about it. But the key point is that the GHD equation looks like a collisionless Boltzmann equation, and that was realized quite early on in this game. So you might say, well, OK, if you want to describe an approximately integrable system, just put in a collision integral and you're done, right? The problem is that that's difficult, because computing the collision integral itself, as we heard, for instance, from Jérôme's talk this morning, is really complicated: it involves all these form factors, these matrix elements between many-body states, which are separated by large rearrangements of the quasi-particle occupation factors.
And a lot of these rearrangements involve some quasi-particles changing their momentum by a lot. Umklapp scattering in lattice models, for instance, is an example where you scatter all the way across the Brillouin zone, and that's not really a hydrodynamic-looking thing at all. So that's the hard problem. But in addition to the hard problem, there's a relatively easy problem, which says: suppose I managed to get the collision integral somehow; what would the qualitative features of the dynamics look like in the absence of exact integrability? So before starting on the hard problem, let's spend a few minutes talking about the easy problem. Sorry, a very quick question: why does the right-hand side depend only on n? This is very schematic. When I say it depends only on n, it means it depends only on the full distribution function of all the quasi-particles everywhere in the system, which is all there is within hydrodynamics. So it's a full specification: it depends on the distribution function, that's it. The notation here with the curly brackets means it's the set of all occupation factors. OK, thanks. OK. So let's talk about the easy problem first. To start thinking about the easy problem, let's talk about what happens to a conserved charge. It's a capital Q because it's the charge integrated over the entire system. If you didn't have integrability breaking, this would just be conserved, because it's a conserved charge. When you add integrability breaking, you get a collision integral on the right. In general, this collision integral depends, again, on the full spatial distribution function. But on the other hand, we're thinking about slowly varying hydrodynamic fluctuations, so you can say it depends on all of those things through a gradient expansion. At leading order, it depends on the set of all the other charges in the system, because you're locally looking at a homogeneous state.
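Schematically, the two levels of description being contrasted here can be written as follows. This is only a sketch in the talk's notation; the precise form of the collision term is exactly the "hard problem":

```latex
% Schematic only: the GHD (Euler) equation with a collision term added.
\[
  \partial_t n_\theta(x,t)
    + v^{\mathrm{eff}}_\theta[\{n\}] \, \partial_x n_\theta(x,t)
  \;=\; \mathcal{I}_\theta[\{n\}],
\qquad
  \frac{dQ_i}{dt} \;=\; F_i(\{Q_j\}),
\]
% where the second equation is the leading order of the gradient expansion
% just described: for a charge integrated over the whole system, the
% transport term drops out, and the integrated collision term depends on
% the state only through the other conserved charges.
```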
OK, in general this is some non-linear function, which is kind of useless. But you can always linearize about an equilibrium state. If you linearize, you get a matrix equation with a matrix Gamma_ij that sets the rates at which the charges relax. This matrix has some number of zero modes that correspond to the residual conserved charges. So once again: you break integrability, you destroy almost all the conserved charges, but you might still have a few left over. For instance, if you break it with a Hamiltonian perturbation, you still have energy conservation. You might also have momentum conservation if your perturbation is Galilean-invariant, you might have number-conserving perturbations, et cetera. Whatever these residual conservation laws are, they manifest as zero modes of the Gamma matrix, and that's going to be important for what follows. OK, so what we want to do now is talk about the decay of currents. So remember, say you're interested in the transport of some charge alpha, which for argument's sake let's say is the energy. There's an energy current that shows up in this continuity equation. If you want to figure out, in general, what this current is within GHD, what you do is say, well, you only care about the slow part of the current, and the slow part of the current corresponds to its projection onto the set of conserved charges from the previous slide, with some coefficients. The current is a vector in some space, and it's written with some weight on the charges, plus some other stuff that decays fast, which Euler-scale GHD just throws out. This is, of course, familiar, because if you stick this expression into the Kubo formula in the integrable case, what you get is that the DC conductivity is the product of these coefficients times this charge-charge correlator.
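As a toy illustration of the linearized structure just described (all numbers here are hypothetical, not from any actual model): the residual conserved charge is a zero mode of the rate matrix, and every other combination of charges decays exponentially under the linearized dynamics.

```python
import numpy as np

# Toy sketch of d(dQ_i)/dt = -sum_j Gamma_ij dQ_j with one residual
# conservation law. All rates and directions here are made up.
n = 4
e = np.ones(n) / np.sqrt(n)                  # direction of the conserved charge
rng = np.random.default_rng(0)

# Orthonormal basis whose first vector is e; QR keeps e (up to sign).
basis, _ = np.linalg.qr(np.column_stack([e, rng.normal(size=(n, n - 1))]))
rates = np.array([0.0, 0.5, 1.0, 2.0])       # eigenvalue 0 = residual charge
Gamma = basis @ np.diag(rates) @ basis.T

assert np.allclose(Gamma @ e, 0.0)           # zero mode: e does not relax

# delta Q(t) = exp(-Gamma t) delta Q(0); at late times only the
# component along the zero mode survives.
t = 40.0
propagator = basis @ np.diag(np.exp(-rates * t)) @ basis.T
dQ0 = rng.normal(size=n)
dQt = propagator @ dQ0

print(np.allclose(dQt, e * (e @ dQ0), atol=1e-6))
```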
And of course, in the integrable limit this is a conserved charge, so the charge-charge correlator doesn't depend on time at all; it's just some number, and its time integral diverges. That divergence precisely gives you, upon Fourier transform, the delta function in omega that marks the Drude peak in the conductivity. So that's what happens in the integrable system. If you break integrability, what happens instead is that you've got delta q_i of t, the thing that's been time-evolved, and you've got to propagate it back to the initial time. You do that using the fact that it propagates via this linearized matrix of decay rates, so it gives you an exponential decay. What you find is that in general the current is going to decay as a sum of Lorentzians, and that gives you this nice compact expression for the conductivity. The physics of this expression is quite simple. It just says that you had a delta-function Drude peak, and you broke integrability, but you didn't change the thermodynamics of the model: the weight under the peak is a thermodynamic quantity, so it's still there, it's not changed. All you can do is broaden it out by a lifetime, or some family of lifetimes. When you do that, you get this expression that basically says that the DC conductivity is some matrix of Drude peaks broadened by finite lifetimes. It takes a bit of massaging, but you can also convert this: you get the diffusion matrix out of this using the Einstein relation. And again, I don't want to run over, so I'll skip this bit, but you can also derive this in a somewhat different way by starting from hydrodynamics, linearizing, making a gradient expansion, and keeping the leading term in the gradient expansion. The nice thing about this hydrodynamic way of thinking is that it also tells you how to put in the noise, if you want to do fluctuating hydrodynamics of the remaining charges.
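A numerical sketch of the statement above, with made-up numbers: the Drude weight is a thermodynamic quantity and is unchanged by the integrability breaking; the delta function at omega = 0 is just spread into a Lorentzian of width 1/tau, and the Einstein relation then gives a diffusion constant.

```python
import numpy as np

# Hypothetical numbers throughout; this only illustrates the structure.
drude_weight = 2.0     # thermodynamic weight under the peak (unchanged)
tau = 5.0              # relaxation time from the linearized rate matrix
chi = 0.8              # static susceptibility, for the Einstein relation

omega = np.linspace(-200.0, 200.0, 400001)
d_omega = omega[1] - omega[0]

# Broadened Drude peak: a Lorentzian carrying the same total weight.
sigma = (drude_weight / np.pi) * tau / (1.0 + (omega * tau) ** 2)

total_weight = np.sum(sigma) * d_omega       # ~ drude_weight (sum rule)
sigma_dc = (drude_weight / np.pi) * tau      # DC value, sigma(omega = 0)
D = sigma_dc / chi                           # Einstein relation (schematic)

print(abs(total_weight - drude_weight) < 0.01)
```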
So all of this stuff is kind of formal and simple; it's fairly tractable once you deal with the one difficult thing in the entire program, which is figuring out what's happening with the matrix of relaxation rates. OK. Jérôme already talked about this, so I can maybe be pretty concise. The point is that if you write down Hamiltonians that break integrability, of course they're going to have all kinds of operators in them. You might say, well, let's make life simpler by only allowing operators that couple to the local charge densities and their products in some way. Then you might say, OK, you have some hope of handling these using GHD. But even with these, the problem is that in general you're allowed to transfer large amounts of momentum. The thing that GHD gives you, or the thing that's deeply tied to GHD, is the low-momentum-transfer limit of the matrix elements of these charge-like operators. And that only helps you if you're transferring small amounts of momentum. There's a nice plot from an old paper of Jacopo's and Miłosz Panfil's: if you're looking at small momentum transfer, you pretty much get the entire spectral weight just from one-particle-hole excitations. But once you're talking about regions of large momentum transfer, you can't stop there. In fact, the one-particle-hole excitations contain a relatively small amount of the spectral weight, and there's a lot of spectral weight left over that comes from other stuff. And the other stuff is, in principle, not really accessible within GHD, at least at this stage of development. OK, so on the other hand, you do have this nice expression for the hydrodynamic-limit form factors. So I'm going to take two attitudes to this in the next five or ten minutes of the talk. The first attitude is to say: OK, this is all that we have access to, let's say.
So what can we do with this? Immediately, you can see that one thing you can do is talk about what happens when you have slowly spatially varying noise acting on the system. Because in that case, if the noise is varying slowly in space and time, then you can only get small-momentum-transfer events, and those are accurately described by this nice form-factor formula. The formula just says that the matrix element is proportional to the dressed charge of the quasi-particle under the corresponding charge. So you can write down explicit collision integrals in this limit. The physical content of this collision integral is that the noise couples to every quasi-particle with a strength proportional to the corresponding dressed charge of that quasi-particle, which kind of makes sense, because if you have charge noise, it's not going to do much to your neutral quasi-particles at leading order. That's the intuition being formalized here. So you can write down this collision integral, and what it corresponds to, if you're used to cold-atom jargon, is momentum diffusion, or in this case, more generally, rapidity diffusion. You can play this game with a lot of interesting models, and it actually also gives you some insight into things that happen beyond Euler-scale hydro, because all those things also depend in somewhat similar ways on these dressed charges. OK, so that's one road you can take. But the problem, of course, is that it restricts you to one very small family of integrability-breaking perturbations. In general, many other kinds of things can happen, and this approach doesn't really get you any traction on those. So one thing you can do is try to compute the form factors. But you can also try doing something much more brutal. You can say: OK, this is a very complicated problem.
So instead of trying to attack it directly, you can do what Wigner did. Wigner said: you have complicated nuclei; well, let me not try to solve the complicated nucleus, let me just treat the problem as somehow random, make a maximally brutal approximation, and see how good it is. That's the thing we most recently tried. This just uses the idea that GHD takes Gibbs ensembles, specifically inhomogeneous, local Gibbs ensembles, to other Gibbs ensembles under time evolution. So what you do is time-evolve your ensemble under the integrable dynamics, and then, with some rate, you replace your GGE density matrix with the fully thermal Gibbs density matrix. But that thermal density matrix is constrained to match the residual conserved quantities. For instance, if you had a GGE with total energy E in some cell, you don't want to replace it with any old thermal state: you want to replace it with the thermal state in that cell that matches the same energy. And if you have both energy and a charge that are conserved, you've got to match both of them, et cetera. So that's the algorithm. The good news is that it keeps the nonlinearities: you're doing a very brutal thing to the collision integral, but you're treating the integrable dynamics exactly and not making approximations about it. And of course, you can just pre-compute these Gibbs states on a fine grid, which allows you to perform this time evolution pretty efficiently, at least if you don't have too many residual conserved charges. OK, and this is sensible in various ways. So this is the program. Does it work? We find that it works really well, for reasons that, I must say, we don't fully conceptually understand yet. What we do here is take, say, XXZ with a staggered sigma-x field. This is a Hamiltonian that preserves energy but breaks everything else we can think of.
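The algorithm just described can be sketched as follows. This is a cartoon, assuming a made-up toy "GGE" of free-mode occupations instead of a real GHD solver, with the energy as the single residual conserved quantity; the names and the spectrum are invented.

```python
import numpy as np

# Toy relaxation-time dynamics: evolve, then relax at rate 1/tau toward the
# thermal state constrained to carry the same conserved energy.
eps = np.linspace(0.5, 3.0, 64)              # invented mode energies

def thermal_state(target_energy):
    """Fermi factors n_k = 1/(exp(beta*eps_k)+1), with beta found by
    bisection so that sum_k eps_k n_k matches the conserved energy."""
    lo, hi = 1e-4, 100.0                     # E(beta) decreases with beta
    for _ in range(200):
        beta = 0.5 * (lo + hi)
        E = np.sum(eps / (np.exp(beta * eps) + 1.0))
        if E < target_energy:
            hi = beta                        # need smaller beta (more energy)
        else:
            lo = beta
    return 1.0 / (np.exp(beta * eps) + 1.0)

rng = np.random.default_rng(1)
n = 0.3 * rng.random(eps.size)               # some non-thermal "GGE"
E0 = float(np.sum(eps * n))                  # residual conserved charge

tau, dt = 2.0, 0.05
n_th = thermal_state(E0)                     # matched (constrained) Gibbs state
for _ in range(400):                         # total time 20 >> tau
    # ... integrable (GHD) evolution of n would go here; it preserves E0 ...
    n += (dt / tau) * (n_th - n)             # brutal collision step

print(abs(np.sum(eps * n) - E0) < 1e-6)
```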
OK, so this program doesn't tell you what rate to use. The way we go about this is to say: let's take some reference non-equilibrium state, time-evolve it, and fit it with some specific rate. That allows you to extract a rate. This is already a bit of a non-trivial check, because the rate shouldn't depend strongly on time; if it does, you're obviously doing it wrong. But not only does the rate not depend strongly on time, it also pretty much follows the Fermi's-golden-rule scaling that you'd expect in this model on general grounds. And this golden-rule scaling works pretty well even for quite large integrability-breaking perturbations. So this seems to be working, but it's still not a very rigorous test. A more rigorous test is to say: I pulled out this thermalization rate from one dynamics problem; can I now take this rate and try to match the dynamics from arbitrary other initial states? That's what we did on the right. We took the rate from our first calculation and then did the second calculation with no leftover free parameters. And as you can see, it works really well. So the lesson seems to be that, at least in this model and a couple of other things we've looked at, there is one dominant rate, and once you extract it, you can just brutally approximate the relaxation dynamics as dominated by that one relaxation time. This, by the way, isn't a new idea: the relaxation time approximation has been around in classical kinetics for an incredibly long time. The general rule is that it works a lot of the time, and sometimes it fails horrendously. But in most cases, when there isn't some reason to expect multiple well-separated relaxation times, it's going to describe the basic dynamics of relaxation.
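The golden-rule check mentioned here amounts to fitting the extracted rates against the perturbation strength on a log-log plot and reading off a slope near two. A sketch with synthetic stand-in numbers, not the actual data:

```python
import numpy as np

# Synthetic "extracted rates" with near-quadratic dependence on the
# integrability-breaking strength g; the prefactor and wiggle are invented.
g = np.array([0.05, 0.1, 0.2, 0.4])
gamma = 3.0 * g ** 2 * (1.0 + 0.02 * np.sin(17.0 * g))

# Fermi's golden rule predicts gamma ~ g^2, i.e. a log-log slope of 2.
slope, log_prefactor = np.polyfit(np.log(g), np.log(gamma), 1)

print(round(slope, 1))   # close to 2.0 if golden-rule scaling holds
```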
Okay, so in addition to that, we also applied it to other initial states, and this gives you a couple of other interesting points. The first is that these states, as you can see by eye, look very non-Gaussian, but of course they're still being captured by GHD plus the relaxation time approximation. The GHD evolution is sufficiently non-trivial that it gives you all this nice structure, and it interfaces in a non-trivial way with the relaxation dynamics. One final comment: even though the relaxation time itself might be universal in this approximation, the diffusion constants still form a non-trivial matrix. Once again, that's because a diffusion constant has dimensions of velocity squared times time, and the velocities and dressed charges are obviously different for different charges, so you can still get a non-trivial diffusion matrix in the generalized RTA. All right, so that's about it for this talk. The basic points: this is a hard problem, because at some level it involves ingredients that are fundamentally outside of GHD, and we tried two things. One was to consider the case of slowly varying noise; slowly varying static potentials and other long-range interactions can be done the same way. In all those cases, the point is that there isn't much momentum transfer, and when there isn't much momentum transfer you can use these nice GHD-type expressions for the form factors, so in some sense all of those cases sit fully within GHD. The general case is not fully within GHD, but we found that you can get a pretty good approximation for it just by taking GHD and adding a relaxation time by hand.
So once again, the generalized relaxation time approximation is predictive, because you only use one time-evolution run to extract the relaxation time, and after that you have no more fitting parameters: you can apply that one relaxation time to any other dynamics problem you like. Those are, I guess, the main messages so far. As for what would be nice to do next, a lot of things would be nice. The first is to understand the structure of the form factors away from the hydrodynamic limit in some more organized, principled way than actually computing them each time. The second thing that would be nice is to generalize this beyond the Euler scale, and that would be particularly interesting because of the result that Jacopo presented a couple of days ago, I guess now over a week ago, where you see that even if you break integrability in the Heisenberg model, you seem to have anomalous diffusion. So things like that may be interesting questions going forward. All right, thank you for your time. That's it. Thank you. Thank you so much for this very nice talk. Are there any questions? Yeah, I'd like to ask a question. Go ahead. So if I understood you correctly, in the example you looked at, this relaxation time approximation worked at short times. It worked at the times we were able to access, right? What I find easy to believe is that at late times it would be good. But why does it work at short times? Well, you can ask the same thing about GHD, right? It's an asymptotic late-time theory, but empirically it works well at short times. The other answer is that there's a bit of cheating here: we took a model where GHD works well at short times. You could always look at problems where the GHD-versus-TEBD agreement at short times is terrible, and in that case, adding a relaxation time is not going to make it any less terrible.
But the lesson is that if you pick a model where GHD works pretty well at short times, then the relaxation time approximation also captures the non-integrability at pretty short times. Thanks. There are a lot of questions. Can I take a question? How do you prepare the state on the left of this slide? This is like a local thermal state, right? Yeah. But what does it mean that you have a tensor product of... I mean, what is the tensor? Yeah, that's right. From the point of view of the TEBD, it's an MPO describing a state with some local temperature profile. Sorry, some local...? Some temperature profile. Right, but then it means that you have a product state, and you block sites, and then you... Yeah, that's right. You can think about it as some kind of low-rank MPO. Right. Okay, so I think this is also related to what Fabian asked. Another question is: can you predict something for pre-thermalization in homogeneous settings, more traditional to other studies of pre-thermalization? Like here, it's not surprising that you eventually end up with a thermal state, but if you take, for example, the example by Jérôme, where he has some kind of driving there, even though there is some going beyond integrability, you don't end up with a thermal state. So maybe here it also has to do with the initial state. Yeah, sure. But I think the point is that if you did a quench, a quench that created a bunch of quasi-particles, you would have to find the GGE first, and you'd have to allow some time for the system. You can do that, yeah. Once you have the GGE, you can run this just fine on that GGE and then compute correlation functions, whatever you like. But of course, the initial evolution to the GGE you're not going to capture.
No, of course, of course, but my claim is that the subsequent dynamics might hide some surprises. People have looked at these kinds of pre-thermalization in a sense: you first wait for the system to equilibrate, and then you drift towards something which is not the GGE, but then it's more complicated; it's very complicated to see what happens. Yeah, there might be surprises; I agree with that in principle. Thanks. Other questions? Okay, maybe me. So of course, part of the reason why this works here, even from zero times, is because your initial state is already hydrodynamic, right? It's varying slowly over the whole system. Yeah, I started from a thermal state. But you're saying that there seems to be one dominant gamma. So should this suggest that maybe there is a dominant process, or, if you want, a dominant form factor? Yeah, I think that's probably correct: somehow, in cases where this works, you have some dominant process. If I wrote the equation for the time evolution exactly, with the sum over form factors, could I resolve everything into the gamma? I mean, is gamma given by a sum over processes, or would different processes contribute different terms to the equation? Well, the way you'd write it, obviously, is as a Boltzmann equation for the full distribution function, right? The distribution function in general is going to evolve in some non-trivial way; it's not going to get replaced by a thermal distribution function at some rate. But essentially, what you'd be doing if you made this approximation is somehow taking all of that stuff out of the sum and evaluating some average over all rapidities, something like that.
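The approximation being discussed in this exchange can be written schematically. This is a hedged cartoon, not the exact collision integral:

```latex
% Exact structure: processes weighted by the distribution function,
\[
  \mathcal{I}_\theta[\{n\}] \;=\; \int d\theta' \; W(\theta,\theta')\,
      F\big(n_\theta, n_{\theta'}\big),
\]
% versus the relaxation-time approximation, which pulls one averaged rate
% \bar\gamma out of the integral:
\[
  \mathcal{I}_\theta[\{n\}] \;\approx\; -\,\bar{\gamma}\,
      \big( n_\theta - n^{\mathrm{th}}_\theta \big),
\]
% which can only be good when the rate depends weakly on the rapidity.
```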
Because you have an integral of some process times the distribution function at the corresponding quasi-particle rapidity, right? So in order to pull a single rate out of this, you have to somehow break up that integral and say: I'm going to pretend that I can do the integral without the distribution function. So I don't think this is ever exactly true. I think it's only a useful approximation in cases where the relaxation time doesn't depend strongly on where you are in the distribution function. I mean, the surprise to me is that it works at all. Other questions? If not, then let's thank Sarang again for this very...