This is Sunil Wanjoy. Welcome back to the second day of the summer school. I will continue the lectures by Anatoli Polkovnikov. OK. Welcome, everyone, again. I hope I didn't scare you too much yesterday. So that's where we stopped last time. I remind you, I was discussing this adiabatic gauge potential as a generator of adiabatic transformations in quantum and classical systems, and how we can find it. So just to give you a heads-up: today I will mostly discuss how we can use this machinery to detect chaos, ergodicity, integrability, how we can distinguish these regimes. And I will show some very recent results; actually, some just appeared today on the arXiv, just to bring you to what, at least, we are interested in right now. It's actually a good point to start this lecture. I will try to connect adiabatic transformations and what is known as quantum information geometry. Some people say quantum geometry, some people say information geometry. I'm not going to discuss the measurement aspect of this, but if you read about Fisher information, people discuss quantum Fisher information; it's actually a part of this story. And I will take another sort of detour: I will now consider a situation where we are interested in one family of states. For example, ground states of some Hamiltonian. I will later go to excited states, because we know ground states usually don't have any chaos. So now I can think about these states as vectors, and there is a natural distance between two vectors. If you think about this, it's like you have a unit vector in space and then you slightly rotate it. A natural distance, of course there are some choices, but this is a very common choice: you just take one minus the absolute value squared of the scalar product of these two vectors. So if the vector didn't rotate, this distance is zero. If you rotate a little bit, this distance will tell you how it's rotating.
And then you can see that this distance is strictly positive, because the overlap of two normalized functions is less than one. So it means if I do a Taylor expansion, I should get something quadratic, where chi should be a positive, or at least positive semi-definite, tensor. So if you think physically about this distance, it's actually the probability to end up in an excited state if you do a small quench. This is actually one of the standard problems you see in quantum mechanics. You have, I don't know, a square-well potential, and you slightly change its width. What's the probability that you remain in the ground state? What you do: you say the initial wave function is the ground state of the old Hamiltonian, then you compute the overlap with the ground state of the new Hamiltonian, square it, and one minus this is the probability to be in some excited state, right? So this is the physical meaning of this distance. And this object is called the quantum geometric tensor, and it was introduced by Provost and Vallee (sorry, there's a typo in "Vallee"; I think autocorrect changed it, I apologize) in 1980. It's interesting that if you read their paper, they basically say that this is an abstract object which is unrelated to measurements and so on, but it's still interesting to study. And then I'll mention that this object now appears everywhere. Anyway, so what is this object? Again, I'm skipping some steps, but you can easily convince yourself that because this distance is quadratic in the lambdas, it should involve two derivatives of psi. And then it's easy to see that this is the connected part of the overlap of derivatives. Why connected? Because you can imagine that I just do a phase rotation of my psi, so technically it's a different state, but of course, you know that a global phase rotation doesn't change anything, so the distance will not be affected.
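As a minimal sketch of this quench logic (my own toy example, not from the lecture): for a hypothetical single-spin Hamiltonian H(lambda) = sigma_z + lambda sigma_x, the distance 1 - |<psi(0)|psi(d_lambda)>|^2 between nearby ground states is quadratic in d_lambda, and first-order perturbation theory predicts chi = |<up|sigma_x|down>|^2 / (E_up - E_down)^2 = 1/4.

```python
import numpy as np

def ground_state(lmbda):
    # Toy Hamiltonian H(lambda) = sigma_z + lambda * sigma_x for one spin-1/2
    H = np.array([[1.0, lmbda], [lmbda, -1.0]])
    vals, vecs = np.linalg.eigh(H)   # eigh sorts eigenvalues ascending
    return vecs[:, 0]                # ground state

dl = 1e-4
psi0 = ground_state(0.0)
psi1 = ground_state(dl)
# distance = 1 - |overlap|^2 = probability to end up excited after the quench
dist2 = 1.0 - abs(np.vdot(psi0, psi1))**2
chi = dist2 / dl**2   # should approach 1/4 for this Hamiltonian
```

Dividing the quadratic distance by d_lambda squared recovers the susceptibility, matching the perturbative value 1/4.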
And "connected" everywhere, I just remind you, probably many of you know: if I have a connected correlator, say of A alpha and A beta, or the same thing with derivatives, connected means that I subtract the product of averages, right? So in mathematical language, it's the same as covariance. That's what connected means. And then remember we said that the adiabatic gauge potential is i h-bar times the derivative operator. And then it's immediately obvious that this tensor is nothing but the covariance of this adiabatic gauge potential. And remember we wrote that the matrix elements of this gauge potential are given by first-order perturbation theory. If you forgot, just have a look at the previous notes. And if you combine these, you get this very nice expression. And here, by d by d alpha I mean d by d lambda alpha; I'm assuming that my parameter could in general be a vector, so I can have different couplings in my Hamiltonian, okay? So basically, the bottom line: this geometric tensor, sorry, which measures distance, defines how quickly my wave function changes if I change the parameter. That's what this distance and this metric tell me. And this is nothing but the covariance of this adiabatic gauge potential. Now, there is a part of this geometric tensor which I am not going to talk about, but which probably most of you saw one way or another. This is the imaginary part, and it is nothing but the Berry curvature. It's interesting that the Provost and Vallee paper appeared in 1980, where they said this is probably not a physically relevant object, and in 1984 there was the paper by Berry who introduced this phase. And if you heard about the quantum Hall effect, topological insulators, and so on and so forth, this object plays the key role. It's like a magnetic field in this coupling space, right? But as I said, this is not relevant to what I'm going to discuss.
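This identity is easy to check numerically (my own sketch, h-bar = 1): for a random Hermitian H and a perturbation V playing the role of dH/d-lambda, the first-order-perturbation sum over |<m|V|n>|^2 / (E_m - E_n)^2 equals the connected variance of the adiabatic gauge potential in state n.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 12

def rand_herm(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

H, V = rand_herm(D), rand_herm(D)   # V plays the role of dH/d-lambda
E, U = np.linalg.eigh(H)
Vnm = U.conj().T @ V @ U            # matrix elements <n|dH/d-lambda|m>

# AGP in the eigenbasis: <m|A|n> = -i <m|V|n> / (E_m - E_n); diagonal gauged to 0
off = ~np.eye(D, dtype=bool)
dE = E[:, None] - E[None, :]
A = np.zeros((D, D), complex)
A[off] = -1j * Vnm[off] / dE[off]

n = 0                               # pick the ground state
m = off[:, n]
chi_pert = np.sum(np.abs(Vnm[m, n])**2 / dE[m, n]**2)    # perturbative sum
chi_cov = (A.conj().T @ A)[n, n].real - abs(A[n, n])**2  # connected <A^dag A>
```

The two numbers agree to machine precision, which is exactly the statement that the metric is the covariance of the gauge potential.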
But there is also the real part of this tensor, which was actually studied much less until recently. The real part is called the Fubini-Study metric tensor. It defines a Riemannian metric structure, basically a distance structure. It defines the quantum Fisher information; as I mentioned, this is also something I'm not going to discuss, probably more relevant to another workshop, but basically it's how much you can learn about the system by measuring it. That's also contained in this object. It also defines speed limits, and it appears in many, many other contexts. The diagonal components of the metric tensor, if you have a single parameter lambda, are called the fidelity susceptibility. This name appeared independently, but it makes sense, because, you know, the overlap of wave functions is a fidelity, and this is a sort of susceptibility, right? So that's the reason for the name. And as I said, these susceptibilities tell us how sensitive the ground state is to deformations. You can imagine that if you have a small gap in a quantum system, then your ground state changes a lot. Just think about a spin in a magnetic field. A small gap means that you have a small magnetic field, right? And if you start changing the magnetic field, your state changes a lot. But on the other hand, if you have a big gap, a big magnetic field, and you change it a little bit, nothing changes, so this tensor is small. And that's actually the reason why in the previous expression there were energy denominators: the sensitivity of states is sensitive to gaps, right? To small denominators. So there was an interesting line of research: there was a very nice paper by Venuti and Zanardi, already more than 15 years ago, who suggested that you can use the fidelity as a sort of observable-independent way to characterize quantum phase transitions. They derived scaling relations and so on.
And I was involved in work where we defined the geometry of phases and phase transitions, and there is some interesting stuff there, but again, that's beyond this lecture. So now we are interested in chaos, ergodicity, and we are interested in excited states. And there is a natural way to generalize this measure: we just take an average of this metric tensor over some ensemble. Before, I was saying let's consider the ground state, but now we can consider excited states. You can ask: which excited state? Of course we can consider just one state, but usually you lose a lot of information when you consider one state. So let's consider state-averaged objects. And you can average with an arbitrary weight. This weight could be constant; then you weigh all the states equally. You can choose some thermal ensemble, a Gibbs ensemble, which biases between ground state and excited states. You can use some microcanonical ensemble, or any other ensemble. In most of the numerical results I will show, we use a uniform average, just to get more information about the system: we treat all the states equally. So this is a kind of Fisher information, and people often emphasize that this is quantum because you take the covariance in a given eigenstate: for a given eigenstate, you subtract this product of averages, and that's supposedly quantum, and so on. But this terminology is not completely true. There is again a very good classical analog of this object. Remember, we discussed that this adiabatic gauge potential has a nice classical interpretation as a generator of canonical transformations which preserve trajectories. And if you want to take an average over some ensemble, then in classical physics we need to average over phase space, so the trace corresponds to an integral over phase-space variables.
Now, if we use weights which are energy dependent, this will correspond to a probability distribution which is energy dependent. If we have the square of some operator, that will correspond to the square of a classical object. Now, what is the connected part? The connected part basically means that you subtract the infinite-time average for a given point. And this has a direct interpretation. Again, I kind of alluded to it a bit, but it's a subtle point and you have to think about it: the eigenstate average, if you try to say what this object is without eigenstates, is nothing but the infinite-time average of A. So actually, this notion of quantum Fisher information is defined in classical systems as well. Of course its value could be different than in quantum systems, but there is nothing quantum about the definition. And now I'm coming back to this picture, which I show for the third time: we will try to use this object, which measures distance between states, to differentiate between integrable, ergodic, and maybe some other regimes. The intuition, as I said, is very simple. If our state is messy, then it should be super sensitive, right? You can imagine, again, thinking about this eigenstate as a time-averaged classical trajectory. A chaotic trajectory is super sensitive to anything, right? So if you change your potential a little bit, the trajectory will become a mess, a totally different trajectory. On the other hand, if you have a square-well potential and you move back and forth, and you change it a little bit, yes, you will get a slightly different trajectory, but only slightly. So let's try to use this logic and see how it works. And now I will compare integrable systems and systems which satisfy ETH, systems which are ergodic, which are strongly chaotic. Sometimes people call it strongly mixing.
And we discussed yesterday the implications of ETH for matrix elements and so on. So now I'm going back to this representation of the fidelity susceptibility. For now I assume that I have one parameter, so let's just look at the fidelity susceptibility. What did we discuss yesterday? Remember, we discussed this ETH ansatz for matrix elements. Now I need the square of matrix elements. They have this function f squared of omega, which corresponds to the non-equal-time response; remember, we discussed that it's like the Fourier transform of the correlation function. And you have suppression of the matrix elements like one over the square root of the Hilbert space size, which is e to the minus S over two, but when you square it, it's e to the minus S. So the numerator is exponentially small: e to the minus S, where S is the entropy of the system, and e to the S is roughly the density of states, which is very, very big, right, if I have a big system. But think about what's the minimal energy denominator I have. Well, it's the level spacing, and that's exactly one over the density of states, right? Because the density of states is the number of states per unit energy, and this number of states is roughly the inverse level spacing: the more states per unit energy, the smaller the energy spacing. So the minimal energy difference is roughly e to the minus S. Of course these are fluctuating numbers; I'm just doing estimates of scales. But now we see immediately that the intuition which I was trying to argue for works, because the fidelity susceptibility will be exponentially big, right? I have a small numerator, but I have a much smaller denominator: the energy difference squared is e to the minus two S. So if I pick just one term, corresponding to the smallest energy difference, I will already get e to the plus S. So we see that the fidelity susceptibility scales like crazy in the thermodynamic limit.
It also scales like crazy in the classical limit, because, remember, this involves the absolute value of the entropy; this is not like usual thermodynamics, where only entropy differences enter. Here we have the absolute entropy, which diverges in the classical limit. So we see that this object is ill defined in chaotic classical systems. And remember I mentioned that in 1995 Jarzynski proved that the adiabatic gauge potential does not exist in classical chaotic systems. So this is sort of an ETH proof of the same statement. He had more elaborate arguments, but it was before ETH anyway. So it looks like this object is physically relevant, but wait a second: if something doesn't make sense, you need to fix this object and make sense of it. On the other hand, if you consider free models, like free fermions (many of you know the transverse-field Ising model, which maps to free fermions, a free superconductor), or just any band model, any model which is non-interacting, then it's very easy to show that roughly your wave function is a product of wave functions in some space, could be momentum space, real space, some other space, and because of that, this derivative operator A is a local operator, right? The derivative of a product is a sum of derivatives acting on the individual factors. So this is a local operator, and if it's a local operator and lambda is an extensive perturbation, then this norm will be extensive. It's like the variance of energy, which is extensive; for the same reason the variance of any other extensive local quantity is extensive. So we already see the difference between ETH models and free models.
Then there are more complicated integrable models. Maybe many of you heard about them; if not, I don't know if there are any lectures on this, but basically there are interacting integrable models, and actually Tomaž, present here, is one of the experts in understanding them. There are many examples of those, and they can still be solved, but they are not simple free-particle systems; you need to solve self-consistent, complicated equations. And there you can expect that maybe the fidelity susceptibility is bigger, because the states are more complicated, they're not product states, but probably still not exponential. And here let me show the first numerical result. This work was done with Mohit Pandey and Dries Sels, and basically this logic works very well. I show the scaling of the fidelity susceptibility divided by L, just to remove the trivial extensive dependence, for three models. I'm not even writing the Hamiltonians, because these results are sort of generic, but let me just say one is fully chaotic (again, by chaotic I mean satisfying ETH, ergodic). The vertical axis is a log scale, so a basically straight line means exponential scaling as a function of system size, and you see that, yes, the fidelity susceptibility just blows up: it grows by three orders of magnitude if I double the system size. If I have the free model, then actually there is a simple analytic result, and it saturates; I mean, it changes a little bit, but this is a boundary effect which is very easy to capture. And if you have this complicated interacting integrable model, there is no analytic calculation as far as I know, maybe it can be done, but numerically you just see very good agreement with a somewhat higher power law. Here this is a log scale and this is a linear scale, so you see it's still polynomial in L; it's almost like L squared, though we're not even sure it's an integer power, but it's probably L squared. So it's a bit faster than the free model, but still not exponential.
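A minimal exact-diagonalization sketch in the spirit of this comparison (my own toy version, not the models from the slide; system size, couplings, and the cutoff mu below are all my hypothetical choices): a state-averaged, mu-regularized fidelity susceptibility for a small open Ising chain, once at a standard chaotic point deformed by a longitudinal field, and once for the free transverse-field chain deformed along its integrability-preserving transverse-field direction.

```python
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def embed(ops, L):
    """Tensor product placing single-site operators ops[i] at site i."""
    return reduce(np.kron, [ops.get(i, I2) for i in range(L)])

def ising(L, g, h):
    """Open chain H = sum sz.sz + g * sum sx + h * sum sz."""
    H = sum(embed({i: sz, i + 1: sz}, L) for i in range(L - 1))
    H += g * sum(embed({i: sx}, L) for i in range(L))
    H += h * sum(embed({i: sz}, L) for i in range(L))
    return H

def mean_chi(H, V, mu):
    """State-averaged regularized fidelity susceptibility for H -> H + lambda*V."""
    E, U = np.linalg.eigh(H)
    Vnm = U.T @ V @ U                       # H, V real symmetric here
    w = E[:, None] - E[None, :]
    K = Vnm**2 * w**2 / (w**2 + mu**2)**2   # regularized kernel, 0 at w = 0
    np.fill_diagonal(K, 0.0)
    return K.sum(axis=1).mean()

L = 8
Vz = sum(embed({i: sz}, L) for i in range(L))
Vx = sum(embed({i: sx}, L) for i in range(L))
chi_chaotic = mean_chi(ising(L, 1.05, 0.5), Vz, mu=0.02)  # chaotic point
chi_free = mean_chi(ising(L, 2.0, 0.0), Vx, mu=0.02)      # free, integrable deformation
```

At these (hypothetical) parameters the chaotic chain already shows a much larger averaged susceptibility than the free one, and repeating this for several L would reproduce the exponential-versus-saturating trend described above.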
Yes, this is a good point, and I'll come to this later, but yes: this lambda is a perturbation which doesn't break integrability, so I basically deform the Hamiltonian keeping it integrable, and this is actually very important. Yes, you're absolutely right, but I'll come to that. It's a bit clear why how you deform matters: if I take an integrable model and keep it integrable, then the orbits stay simple; if I take an integrable model and make it non-integrable, I immediately start destroying my orbits, right? So of course I will make the fidelity susceptibility much worse. But I'll come to this later. Okay, so now I want to spend the last piece of analytics, to load your heads completely. I will try to connect this sensitivity with real physics, because right now I kind of implied that you can define, or at least try to define, this adiabatic gauge potential in classical systems as a generator of canonical transformations, and then I told you it diverges, right? So it doesn't exist for chaotic systems, well, for extensive perturbations, right; and it will not diverge if the system is integrable. Well, here these spin chains don't have a classical limit at all. Actually, we have some results for classical systems, but it's too much, I won't show them. So these are spin-one-half chains here. Maybe I didn't emphasize it well enough: if the system has a classical limit, then you can use at least the standard approaches to chaos, like Lyapunov exponents, out-of-time-order correlation functions, and so on. And there's something very important which I forgot to say: classically, you can say that a system with one or two degrees of freedom is chaotic or not chaotic; quantum mechanically, you cannot do it. If you consider a spin one-half, it's a two-by-two matrix. How do you say it's chaotic or not? I mean, maybe you can consider an ensemble of matrices, but that's an ensemble of systems. So in order to say that a quantum system is integrable or ergodic
or whatever, you need to take a scaling limit where the Hilbert space dimension goes to infinity. So chaos in quantum mechanics is an asymptotic statement. In classical physics you don't need to do this, because in some sense the Hilbert space dimension is already infinite even for one degree of freedom. But quantum mechanically, you have, say, a two-by-two matrix, and there are two standard ways to take the limit (of course you can come up with some intermediate ways). One: you keep the number of degrees of freedom fixed, say one or two, but increase the local Hilbert space dimension; that typically means you go to a classical limit, large n, large s, small h-bar, whatever. But there is another possibility: you increase the number of quantum degrees of freedom, you add more and more spin one-halves, or more and more fermions. In this case there is no obvious classical limit, at least if you map spin one-halves to classical spins you will not correctly describe the system, but the Hilbert space dimension still increases. And ergodicity you can define as an asymptotic statement: as you increase the Hilbert space dimension, you approach, for example, Wigner-Dyson statistics, right? And usually this approach is very quick, because the Hilbert space dimension grows exponentially with the number of degrees of freedom. That's why numerically you can say the system is, say, integrable or non-integrable for a fixed number of spins, say 16, 18, 20, whatever people can do; but in reality you can never say it until you study the asymptotic statement. It's the same as thermodynamics: when we talk about thermodynamics, we make statements about infinite volume, how we approach infinite volume, right? But in practice, of course, we never need infinity. So here I showed examples of spin one-half chains which do not have classical limits, so if you apply OTOCs or whatever, you will not be able to say whether the system is chaotic or not, but at least here you can probe this ergodicity. Okay, so now I will
try to make the final, I would say, analytic connection between these adiabatic transformations and real-time response. There is a little bit more math involved, but it's very simple. If you remember, this was the result of first-order perturbation theory; I showed this slide before, right? These are the matrix elements of the adiabatic gauge potential, the derivative operator, and they're given by this. But when you see a denominator, as a rule of thumb you should immediately think about a time integral. If you didn't get used to this trick, you will see it in all Kubo derivations and so on; it's related to the Lehmann representation. Why? Because if you use the Heisenberg representation, you put an exponent e to the i H t over h-bar here and the negative exponent there, right, and it will give you an oscillating phase in E n minus E m; when you integrate over time, you will get the denominator. The only thing you have to keep in mind is whether you take a principal value or you take a delta function. You all remember: if you integrate e to the i epsilon t over time, you will get a delta function of epsilon plus a principal value of one over epsilon; one piece is the principal part, the other is the delta function. Here we want the principal part of the integral, and this means that we need to take an odd integral over time. If you're not familiar with this, it's a very simple exercise: you just take an oscillating exponent, e to the minus mu times the absolute value of t, times e to the i x t, where x is the energy difference, and... oh wait, wait, wait, so there should be a sign of t, sgn(t), I apologize. I'm talking so much that I forgot the most important thing here: this has to be an odd integral to get the principal value, so there is a sgn(t). I don't know, shall I show it on the blackboard? It's basically saying that if you integrate e to the minus mu t times e to the i omega t, minus e to the minus mu t times e to the minus i omega t, over positive times, you will get one over omega. Mu here is a small cutoff, so in principle I need to send it to zero, but we are
doing physics, not mathematics, so since we have a cutoff, let's actually make use of it later. Okay. And here d lambda H of t is the Heisenberg representation of the operator, and the moment you have a Heisenberg representation, something should click in your head: there is immediately a classical limit for this, it's just a function. If I think about what d lambda H is as a function of time, classically it corresponds to this function evaluated on a time-dependent trajectory. The moment I wrote this expression, you immediately see it has a classical limit. This was not obvious, right? A matrix element between energy eigenstates, who knows what it is; but this has a well-defined classical limit: I just integrate my observable conjugate to lambda (it could be position, momentum, or whatever I want) along the trajectory, and I basically do time averaging of it, with the only difference that I take the difference between positive times and negative times. And now I can immediately use mu to my advantage, because I told you that if mu is zero, basically if you take that limit, this is the classically exact generator of the adiabatic, trajectory-preserving transformation. But now if I keep mu finite, mu has the meaning of an inverse time, right? It basically cuts off my time. So then this becomes the generator of trajectory-preserving canonical transformations up to times of order one over mu. And it should be immediately clear that this is well defined, right? Because the problem with chaos is that my trajectory is crazy and I cannot undo any change over infinite time; trajectories do crazy things in chaotic systems. But if I only ask to do it up to a finite time, then it should be doable. And now, instead of taking the mu equals zero limit, I can study the asymptotics of what happens when mu goes to zero, and suddenly mu turns from a mere mathematical regularizer into an actual physical parameter. So if you look at the matrix elements of the regularized adiabatic gauge potential, I'm
going back to this expression, so basically I'm doing the same thing but now keeping mu finite. If you do this integral, you will find that instead of one over omega nm you get omega nm over omega nm squared plus mu squared, where I introduce the short notation omega nm for the energy difference over h-bar. So clearly, when mu goes to zero they become equivalent, but when mu is finite it's something else. So what is then the conserved operator? Remember, we said that this adiabatic gauge potential comes with a conservation law, which is basically the derivative of my Hamiltonian plus this rotation. And then it's another simple exercise to show that this is the same time average of this object, but now without the sign function. And if you think about it, the time-averaged part of d lambda H is exactly what is conserved: it's the part of d lambda H which didn't decay. It's interesting that this trick was used numerically by Marcin Mierzejewski, Peter Prelovšek, and Tomaž Prosen in 2014, and they actually found some missing integrals of motion in the XXZ model. For some years people couldn't really make sense of the numerics, because something was missing between analytics and numerics, and it was clear that some conservation laws were missing. They used this trick to find them, and then they found them analytically. Again, mathematically it's kind of clear why it works: if you take a Heisenberg operator and do the time average, only the diagonal part of the operator remains, and the diagonal part is nothing but conserved, right? Again, the question is how local it is, and so on. Okay, so suddenly this object makes sense: this G is, as I said, the time-averaged part of d lambda H, and A is the odd integral of it. Now let me try to connect the dots together; as I said, you probably by now feel a bit overwhelmed.
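Before moving on, the time-integral trick above is easy to check numerically (my own sketch, h-bar = 1): the odd, exponentially damped time integral of an oscillating phase gives exactly the regularized factor, since the integral of e^(-mu t) sin(omega t) from 0 to infinity equals omega / (omega^2 + mu^2), which reduces to the principal value 1/omega as mu goes to 0.

```python
import numpy as np

mu, w = 0.5, 2.0                       # cutoff and energy difference (h-bar = 1)
t = np.linspace(0.0, 60.0, 600001)
f = np.exp(-mu * t) * np.sin(w * t)    # odd part of sgn(t) e^{-mu|t|} e^{i w t}
dt = t[1] - t[0]
integral = dt * (f[:-1] + f[1:]).sum() / 2.0   # trapezoid rule
exact = w / (w**2 + mu**2)             # analytic value; -> 1/w as mu -> 0
```

The numerical integral reproduces the analytic omega / (omega^2 + mu^2) factor, i.e. exactly the matrix-element structure of the regularized gauge potential.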
And this is a summary of some of the things I mentioned: adiabatic transformations connect with conservation laws, connect with sensitivity of eigenstates, connect with long-time response (this I will emphasize in a second), they connect with ETH, they connect with integrability, and so on. So this is actually a good object which connects many, many different things together. And as I also mentioned, this is what appears in quantum annealing and counterdiabatic driving; there are many, many other applications, actually: dissipation, mass renormalization, and so on, I can continue this list. Once you go in that direction, you realize that you can do many things. So now let me again write the expression for chi lambda, but now with the cutoff mu. I'm basically showing the same formula as before, but instead of the energy denominator squared, I have the energy difference squared over (omega squared plus mu squared) squared. Again, when mu goes to zero, it goes back to what you had, but clearly, if mu is finite, I removed the problem of small denominators, so mathematically it's clear that I regularized things. And then, as we already discussed briefly, even though I didn't prove it to you, I gave you a sketch of how you get this expression: this is nothing but the spectral function. It turns out, if you are more careful, only the symmetric part survives, but you can see it has to be symmetric. So these off-diagonal matrix elements squared, I called it before f lambda squared, but the more standard name is the spectral function, and it is nothing but the Fourier transform of the symmetric correlation function. The symmetric correlation function is basically a memory, right? It just tells you how much your observable at time t remembers of what it was at time zero. It's a two-point function; there are many names for this.
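A toy numerical illustration of how this regularized integral behaves (my own sketch, with a hypothetical flat spectral function Phi = 1, optionally vanishing below a gap): without a gap, the integral of Phi(omega) * omega^2 / (omega^2 + mu^2)^2 grows like 1/mu (in fact pi * Phi(0) / (4 mu)) as the cutoff shrinks, while with a low-frequency gap Delta it saturates at roughly Phi(Delta)/Delta, independent of mu.

```python
import numpy as np

def chi_of_mu(mu, gap=0.0, lam=10.0):
    """Integral of Phi(w) w^2/(w^2+mu^2)^2 for a toy flat Phi = 1 above `gap`."""
    w = np.linspace(1e-6, lam, 400001)
    Phi = (w >= gap).astype(float)
    f = Phi * w**2 / (w**2 + mu**2)**2
    dw = w[1] - w[0]
    return dw * (f[:-1] + f[1:]).sum() / 2.0   # trapezoid rule

# No spectral gap: halving mu doubles chi (chi ~ pi*Phi(0)/(4*mu))
gapless_ratio = chi_of_mu(0.01) / chi_of_mu(0.02)
# Spectral gap Delta = 1: chi is essentially mu-independent (~ Phi(Delta)/Delta)
gapped_ratio = chi_of_mu(0.01, gap=1.0) / chi_of_mu(0.02, gap=1.0)
```

This is the whole mechanism in two numbers: a gapless (ETH-like) spectral function makes the susceptibility blow up as the regularizer is removed, a gapped (integrable-like) one does not.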
Again, I'm maybe assuming too much, but if you took any solid-state or many-body courses, then you might know that this is the object which appears in dissipation: when you write the imaginary part of the electric susceptibility, that's exactly what enters; when you compute a Fermi golden rule rate, which is basically the same, it also enters, and so on. So this is fluctuation and dissipation. And then we see what happens when mu is small: essentially, you're integrating the spectral function divided by the frequency squared, down to the cutoff mu. And you see that because you divide by omega squared, you amplify low frequencies, that is, long times. So we see that adiabatic transformations know about long times, even though I defined them initially as a static object: you take an eigenstate, you see how this eigenstate changes as you change the Hamiltonian, and in the end what you learn is what the system would do at very long times. So now we can estimate what's going to happen. Imagine that my spectral function Phi of omega has some shape. You take this spectral function, divide by frequency squared, and integrate from frequencies of roughly mu upwards. If Phi doesn't go to zero as mu goes to zero, then this is a divergent integral, right? The integral of one over omega squared is divergent at small frequencies. So we can estimate this integral as Phi of mu over mu. If instead Phi vanishes at small frequencies, you call it a spectral gap. It's very similar to an ordinary gap: if you have a ground state, you know there are no transitions with energy less than Delta, and for this reason the spectral function, which, remember, is just off-diagonal matrix elements squared, is always zero there, just because there are no states with this energy difference. So if my
function vanishes and mu is small, then I can estimate this as Phi of Delta over Delta, so it's mu independent, where Delta is the spectral gap. The spectral gap is a generalization of the normal gap, because we talk about all states, not just the particular ground state; it's kind of an average gap. And now I wanted to say something else. From the plots I just showed you: if you have ETH systems, there is no spectral gap, and this is also a result of random matrix theory; remember, all the off-diagonal matrix elements are statistically the same. This is actually a statement equivalent to saying that when the frequency goes to zero, the spectral function goes to a constant. On the other hand, in integrable systems, because the adiabatic gauge potential... oh sorry, because the fidelity susceptibility does not diverge (and this on the slide should read Phi of Delta over Delta), I know that I must have a spectral gap, because without a spectral gap this is a divergent integral. So it actually means that as my frequency goes to zero, I shouldn't pick up any spectral weight. And again, this makes sense, because integrable systems kind of have quasi-particles or whatever; there are no transitions between exponentially close energy levels, because the system just doesn't know about the exponentially dense spectrum. This intuition also works for classical motion. Remember, we discussed that classical integrable motion is motion along tori: you have one frequency, you have another, and if you have more degrees of freedom, you have one more. And usually you have some finite frequency of motion on these tori; in order to get zero frequency, you really need to fine-tune to a special point. So this frequency generically depends on
So when we do the phase-space average, you can imagine that to get zero frequency the effective potential would have to be quartic, and then you also need to fine-tune to zero energy: not only do you need a quartic potential, you also have to fine-tune the energy. Or the potential can have an inflection point, but then you need to fine-tune to that inflection point; then again you get zero frequency, an infinite period. Either way you have to fine-tune both the potential and the energy, and that's too much fine-tuning. So it's intuitively clear that if I look at this classical motion, I will never see zero frequency at finite energy; the intuition works here too. And then there is another interesting fact, extremely interesting in my opinion, which is probably still not explained, at least not to my knowledge. You can forget about all this quantum mechanics, eigenstates, the adiabatic gauge potential and so on, and just say: wait, I know what the spectral function is, it's a long-time response, and I know that if I have a good thermalizing system, say a local Hamiltonian (but you can generalize to other situations), then it must satisfy a diffusion equation. I look at an observable, suppose it's the density, which corresponds to lambda being a local potential; then d_lambda H will be the local density, and I know that this density at long times should satisfy the diffusion equation. And the diffusion equation is something we can all solve; it's like the Schrödinger equation in imaginary time.
We can go to Fourier modes; say the system is in a box, but it doesn't matter, and each Fourier mode will have exponential decay. That's the difference between real time and imaginary time: instead of oscillation you get exponential decay. Of course, if you have many, many exponents you can get anything, like power laws and so on. But look at very long times, times longer than what's known as the Thouless time, which is basically the time corresponding to the smallest available momentum, of order 1/L. You might recognize this time, L^2/D: it's the time to reach the boundary for a diffusive process, which propagates as the square root of t. If you look at the solution at such times, all terms with large k decay like crazy, because you have huge exponents, so to find the asymptotic decay you just keep the smallest one: the smallest-k mode decays exponentially, and the higher-k modes decay exponentially even faster. So you see that after a long time (and I want to emphasize that this time has nothing to do with the level spacing; it is a completely classical time which doesn't contain any hbar) this n(t) decays exponentially. But now we can take the Fourier transform of an exponential, and the Fourier transform of an exponential, as we know, is a Lorentzian; so we see that Phi(omega) should be constant at small frequency. It's interesting that we arrived at the same conclusion as random matrix theory, as ETH, right? It tells us that the spectral function should be constant at small frequency, without really talking about quantum mechanics, microscopics, random matrices and so on. Then you might say, well, maybe this is a coincidence, but it's interesting that this Thouless time is exactly the time when random matrix theory appears; this is, I would say, an observational result, at least to my knowledge. When you define the Thouless time, for example, it's when the spectral form factor starts becoming linear, or when you start to see level repulsion; there are various such tests for when a Hamiltonian basically becomes like a random matrix. It turns out it's exactly the same time: the Thouless frequency, or energy, is one over this time. And if you think a little more, the diffusion equation is really not special here. Exponential relaxation always appears in all kinetic-type theories: when you write a master equation, you always make this exponential-relaxation approximation. It's basically when a bath becomes a bath, when you forget the memory. And for any system, not necessarily local, not necessarily one-dimensional: the moment you can say that all the other degrees of freedom act as a proper bath, so you forget all the memory, you get exponential relaxation. So it's broader than diffusion, and this exponential relaxation leads to a constant spectral function, the result of random matrix theory. So even though, I would say, mathematically it is not yet completely clear, at least physically
it is reasonably well established that random matrix theory is deeply connected to hydrodynamics, to this kinetic theory. OK, so we see there are two completely complementary explanations of the same result. Now, instead of saying that my fidelity susceptibility is e^S, which as I said is a result meaningless in the classical thermodynamic limit, I will state an asymptotic result: as my cutoff mu goes to zero, so the time 1/mu goes to infinity, my chi should behave as 1/mu. This is the result for ETH, for ergodic systems, and there is one explanation which is purely quantum mechanical, ETH, where matrix elements become energy independent, and another (I'm just repeating myself, but it's important): diffusion, hydrodynamics and so on, where the spectral function is constant, and it's the same Thouless scale. From this we find a very interesting conclusion, which was actually confirmed numerically recently: long-time diffusion and hydrodynamics are closely connected to ergodicity. In integrable systems, this long-time diffusion and hydrodynamics can only last until the system feels the boundary, and then they have to stop. Maybe some of you heard that recently (Tamás was again involved in some of this work, among other people) it was found that even in integrable systems you can observe diffusion and so on, and there were some controversies, because people never expected that. But it turns out this is, in a sense, a bit of a fake diffusion, because the moment you reach the boundary the diffusion equation should break down: the diffusion equation is only consistent with ETH; it contradicts integrability, it contradicts the fact that the fidelity susceptibility doesn't diverge. So hydrodynamic transport in integrable models is more subtle. OK, so now to the last part of my talk.
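The diffusion argument above can be checked in a few lines (the diffusion constant, system size, and equal mode weights are my own toy choices): sum the Fourier modes of the diffusion equation, verify that beyond the Thouless time 1/(D k_min^2) the correlator decays with the single smallest rate, and note that the Fourier transform of that exponential is a Lorentzian, hence a flat spectral function below that rate.

```python
import numpy as np

D, L = 1.0, 10.0                        # diffusion constant and system size (toy values)
k = 2 * np.pi * np.arange(1, 50) / L    # Fourier modes of a box/ring of size L
gamma_min = D * k[0]**2                 # slowest relaxation rate, ~1/t_Thouless

def C(t):
    """Density autocorrelator: sum of diffusive modes with equal weight."""
    return np.sum(np.exp(-D * k**2 * t))

# beyond the Thouless time only the smallest-k mode survives
t1, t2 = 3.0 / gamma_min, 5.0 / gamma_min
rate = (np.log(C(t1)) - np.log(C(t2))) / (t2 - t1)
print(rate / gamma_min)   # close to 1: a single pure exponential at rate gamma_min

# Fourier transform of exp(-gamma|t|) is a Lorentzian 2*gamma/(w^2 + gamma^2):
# flat for w << gamma, i.e. the constant (ETH-like) spectral function
spec = lambda w: 2 * gamma_min / (w**2 + gamma_min**2)
print(spec(0.0), spec(gamma_min / 10))  # nearly equal at small frequency
```

The only scale left at long times is gamma_min ~ D/L^2, which is exactly the classical, hbar-free Thouless scale mentioned above.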
You have to be careful: by spectral gap I mean an exponential spectral gap. Anything polynomial I still consider gapped, because we are talking about very long times. If you consider the ground state instead, you get a similar story but with different polynomial powers, and that's actually why the fidelity susceptibility can be sensitive to critical points. So here, when I say spectral gap, I only refer to exponentially close level spacings, and there are actually lots of open questions along this line, so I don't want to pretend that we understand it all. So now, in the last part of the series, I want to study how ergodicity emerges when you start breaking integrability, and there are some interesting and unexpected results here. This is numerics from the same paper with Mohit. Now I have to be careful, with respect to the question Lorenzo asked, about which fidelity susceptibility I'm looking at: what I want to do here is look at the fidelity susceptibility in the integrable direction, but in the presence of a small integrability-breaking perturbation. We consider two types of perturbations (there are more papers with more types; they all look about the same). One is that we break integrability just at one site: it turns out that if you take an integrable spin chain and add a magnetic field on just one site in the middle, you break integrability. Another possibility: you take this chain which is integrable (integrable means you can solve the Schrödinger equation; it's a very complicated solution, but it exists, so you can find the eigenstates)
and then you add, say, second-nearest-neighbor interactions, and this also breaks integrability. This was checked, and people know you get random matrix statistics and so on. But now I want to make this term very, very small. So what's happening? Remember, we were discussing that for classical systems nothing much happens: the conservation laws get slightly dressed, but roughly I get the same nice orbits and so on. So let's just look at the fidelity susceptibility again; it just highlights the sensitivity of the eigenstates (there are no canonical transformations here, because it's a genuinely quantum system). If my integrability-breaking perturbation is zero, I just recover the result I showed: this is polynomial. But if I break integrability by a very, very small number, like 10^-3, I see an extremely sharp crossover from this integrable behavior to exponential behavior. Moreover, the slope seems the same; of course this is a numerical result, you cannot make very strong claims, but at least the slopes are nearly parallel. So you break integrability with a tiny, tiny perturbation; for these system sizes it's something like 2 x 10^-4, a very small number. At this value you will not see even traces of ergodicity in standard probes; the system is too small for that. In this sense the fidelity susceptibility is very good, and this is natural: it's sensitive to very long times. When you ask yourself how you would detect weak integrability breaking in a physical system, well, you have to look at long times, because obviously if you break it a little you cannot get strong level repulsion right away. So what we see is that the transition from integrable to chaotic behavior is very sharp, and of course chi is a very sensitive probe.
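To make the probe concrete, here is a minimal exact-diagonalization sketch (the chain length, couplings, and the averaging window are my own toy choices, not the paper's): an L = 8 transverse-field Ising chain with a longitudinal field as the integrability-breaking knob, and the cutoff-regularized susceptibility chi_n(mu) = sum over |omega_mn| > mu of |<m|d_g H|n>|^2 / omega_mn^2, averaged over mid-spectrum eigenstates. Lowering the cutoff mu (probing longer times) can only increase chi, since each term is nonnegative.

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def op(single, site, L):
    """Embed a single-site operator at `site` into an L-site chain."""
    out = np.array([[1.0]])
    for j in range(L):
        out = np.kron(out, single if j == site else np.eye(2))
    return out

L, g, h = 8, 1.05, 0.2
SX = [op(sx, j, L) for j in range(L)]
SZ = [op(sz, j, L) for j in range(L)]

# transverse-field Ising chain (integrable) + longitudinal field (breaks it)
H = -sum(SZ[j] @ SZ[j + 1] for j in range(L - 1)) - g * sum(SX) - h * sum(SZ)
dH = -sum(SX)                      # probe direction: the integrable (g) direction

E, U = np.linalg.eigh(H)
M = U.T @ dH @ U                   # matrix elements of dH in the eigenbasis

def chi_avg(mu):
    """chi_n(mu) = sum_{|w|>mu} |M_mn|^2 / w^2, averaged over the middle half."""
    W = E[None, :] - E[:, None]
    terms = np.where(np.abs(W) > mu, M**2 / np.where(W == 0, 1.0, W)**2, 0.0)
    chis = terms.sum(axis=1)
    mid = slice(len(E) // 4, 3 * len(E) // 4)
    return float(chis[mid].mean())

print(chi_avg(0.1), chi_avg(0.01))  # smaller cutoff = longer times = larger chi
```

The cutoff mu plays the role of the inverse time discussed throughout: the sharp sensitivity to tiny integrability breaking shows up precisely in how fast chi_avg grows as mu is lowered.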
We need an exponentially small (in the system size) perturbation to break integrability, at least within our numerics; we don't want to make very strong claims, but it's basically tiny. But then there was a very big surprise when we first found this result. We thought, OK, we are done: this is integrable, this is ergodic ETH. But if you carefully examine the slope (of course it's hard to do visually, so you have to trust me), this slope is two times bigger than what you expect from ETH. So what you find is something crazy: this regime is more chaotic than ETH, more chaotic in the sense that your eigenstates are less stable. You add a little bit of integrability-breaking perturbation delta, and your eigenstates change much more. At first we were totally surprised, but after everything I have explained to you, it actually looks very natural. It turns out that this e^{2S} scaling is actually the maximum possible: not only is the slope two times bigger, it actually saturates the upper bound. The reason: if you remember, the fidelity susceptibility is the matrix element squared over the square of the energy denominator, but for a local operator the matrix element squared cannot be bigger than one, so one is the upper bound. It basically means that instead of the e^{-S/2} suppression, your matrix element essentially sits at order one on nearby states, and when it's one, you get the inverse energy denominator squared, which is e^{2S}; e^{2S} is the largest possible. So what's going on? Let's just look. (Yes, but again there is exponential dependence on the system size: the bigger the system, the smaller the perturbation you need to add. And yes, for a finite system size there is a threshold: if I take ten sites and a perturbation of 10^-100, nothing can happen, because I have a finite gap in my system. Again, all these statements are on average, so you have to be a bit careful about how we average things. And, to be honest, I forgot; they looked identical, basically very similar results; I should know, but I forgot.) So now: we related chi to the spectral function, which is a physical response, and I told you that in integrable systems it should vanish at small frequencies, while in ergodic systems it goes to a constant. But now, with small integrability breaking, what happens to this weight (these are the solid lines)? It does the opposite: it increases. The dashed line would be integrable, and if you increase the system size from twelve (yellow) to eighteen (blue), you get less and less weight; this is your spectral gap, and it gets smaller and smaller. The important thing is that the weight vanishes with frequency, and everything is consistent: if you extrapolate, it will keep vanishing forever. But if you look at small integrability breaking, you see that the weight actually increases; it's not a constant, it does the opposite. If you think about this for a second, it actually makes perfect sense, because when I said that there is a spectral bulk in integrable
models, I actually didn't tell you the complete story: there are also delta-function peaks, which are called Drude weights, and they correspond to conservation laws. The fact that integrable systems don't relax means that observables which are supposed to decay to zero according to thermal equilibrium actually don't decay to zero; they have memory, and memory means an infinite-time response, and an infinite-time response means you have a delta function in frequency space, the Fourier transform of a constant. These don't contribute to the connected correlation function, they don't contribute to the fidelity and so on. But the moment I break integrability, I start broadening this response. This is a log scale, so it looks broad, but on a linear scale this would look like a broadened delta-function peak. So what's the physics behind it? Well, the physics is called prethermalization: if I have an integrable system, I relax to some nonequilibrium state with some extra conservation laws. When I break integrability a little bit, these conservation laws start to decay. You can imagine that this is a super slow process, and because you have super slow processes, you must have a lot of low-frequency spectral weight; and as I explained, this means it's very hard to do canonical or unitary transformations, so your eigenstates are super fragile. OK, so a little more numerics, and then one more piece of analytics. We then collaborated with serious numericists like Marcos Rigol, but to be honest most of the work was done by his student at the time, Tyler LeBlond; it was really fantastic. They considered already serious system sizes; I was just admiring how they do numerics, I didn't contribute much to this project otherwise. So we consider basically the same integrable spin chain, but now we add second-nearest-neighbor interactions. And Marcos is very careful: you make sure all couplings are incommensurate, so there are
no accidental degeneracies; you take crazy numbers like the golden ratio for the couplings, to make sure there is no accidental resonance between anything. Then we also consider small integrability-breaking perturbations, and we look at different fidelity susceptibilities over a broader range of integrability-breaking perturbations. Again it's a log scale, and these are two different observables; I added two graphs because they have different insets, but let's just focus on the top one. What we do is divide chi, this fidelity susceptibility, by the ETH value; think of the ETH value as e^S, but this removes all the finite-size effects. When the integrability-breaking perturbation is relatively large, you see that the different curves, which are different system sizes (the largest is 24, the smallest 18, so much bigger system sizes now), start to collapse, and this is consistent with ETH: chi has this e^S scaling, we have collapse and so on. And you see that indeed, as the system size increases, your ergodic region gets bigger and bigger. But there is also this other part. If you are integrable, we would expect these fidelity susceptibilities to be smaller for larger system size (remember, because we divide by the typical ETH value; I showed this result in the previous part). We go to very small perturbations, like 10^-4, but we are not there yet; what we do know is that this is a crazily chaotic regime, because chi grows here faster than ETH, it keeps growing. So we see there is a whole region which is much more chaotic, much less predictable in terms of eigenstates and so on, than the ergodic one. And actually, to keep the analogy with my
blue ink example: you might think this makes sense. There are different definitions of maximal chaos, and I'm not insisting on this one, but let me just say that weakly non-integrable systems are actually less predictable. I know it from practice: we get a kind of hurricane, turbulence or whatever, when we have small viscosity, when we are almost integrable. If I consider blue ink: if I'm strongly ergodic, it strongly mixes, and after a while I have a very predictable state, a uniform blue color. But imagine I'm weakly ergodic: you get this intermediate-time pattern which is, strictly speaking, less chaotic than the uniform color (it has less entropy), but try to predict it and you will not be able to; it's very susceptible, very unstable. So we know from practice that what this little plot shows is what's actually happening: if you break integrability weakly, you are more chaotic than if you break it strongly; you cannot describe your system by simple equations. What we also find from this plot is a very good e^{-2S} scaling of the peaks, and we see that the onset of ergodicity goes down very quickly with system size, so it looks kind of exponential, though I don't want to make strong claims. And this critical coupling (before, I was talking about the coupling where you start to see ergodicity; here, the one where you start to see chaos) goes down even faster. It also seems to scale exponentially; it could be a high-order polynomial, but at least we can rule out weak polynomials: the data could be consistent with 1/L^3 or 1/L^4 or higher, and it's also consistent with an exponential. There is no theory of this as far as I know. So then, I'm not going to talk about many-body localization here, though I'm pretty convinced now that it was really a big mistake, and I can explain
why, but here is just the result. What happens if you take a disordered model (for those who know, an Anderson insulator) and add small interactions? Basically the story is that nothing really changes: you still see that these systems become ergodic as quickly as any other system. This is not the regime most people studied; this is the regime where all parameters are of order one. But anyway, there is a clear contradiction with the MBL claims. So, this is a kind of mini-summary of this part, and then hopefully I'll have some time for even newer results. This is a schematic diagram which seems to apply to all systems, quantum, classical and so on; we have some classical results, which I'm not showing, but they're completely in line with this. Basically, instead of system size, for classical systems you want something like 1/hbar as you approach the classical limit. Of course, if the integrability-breaking perturbation is very small, the system is integrable; if it's big, it's ergodic, as generically expected; and in between there is this maximal chaos where, as I said, we basically have the least predictive power about the system at long times. Classically it means our trajectories are the most unstable, so there is no way to predict them; quantum mechanically, our eigenstates are the most unstable. So we cannot really say much about the system in either a deterministic or a statistical sense. And actually many mistakes were made here, and this was of course very subtle; we didn't expect it, we were very surprised, because many people (including us) developed renormalization group approaches and so on assuming a direct transition from integrability to ergodicity, and if we examine those papers, they are all incorrect:
there is no such thing, a direct transition is impossible. And again the reason is this simple broadening of the delta functions, prethermalization and so on; this slow long-time dynamics is expected once you connect the dots. (Which exponent? Yes, this is totally unrelated to that as far as I can say, or maybe I just don't know how it's related. That's why I want to say again: I use "maximal chaos" for brevity, and people might disagree, since these systems have tiny Lyapunov exponents. Systems with large Lyapunov exponents, I would say, are the fastest thermalizing systems, but they are more predictable in the sense I mentioned. So again, it's terminology: this is maximal chaos in the sense of maximally sensitive eigenstates. You can basically define a Lyapunov exponent in parameter space, how sensitive you are if you change a coupling, and there you have exponential sensitivity; or you can define the exponential sensitivity of trajectories in time, the usual Lyapunov exponent. The former is maximal while the latter is not, and how to connect the two I still have no idea; I mean, that's not quite true, we have some ideas, but I don't know which ideas are correct, if any.) So let me spend maybe ten minutes showing one last analytic construction for this series of lectures, where we can see how these integrals of motion are destroyed. Remember, in the very first lecture I showed how, for a two-dimensional oscillator, we do perturbation theory and get improved integrals of motion, but then I kind of implied that chaos appears in classical and quantum systems almost immediately, so exact local integrals of motion do not exist even when the system is, I want to highlight, still not ergodic: the chaos appears much before ergodicity. And we were able to do this
analytically, completely, at least in one setup; actually there is a series of related works, and they all kind of show that this divergence of the integrals of motion is related to another hot topic studied now: operator spreading, Krylov complexity, Lanczos coefficients, all that stuff. There are very interesting connections here, so let me just illustrate it. The model we were able to solve: you have some system, and it doesn't really matter what it is, it doesn't have to be a spin chain, it could be some classical chain; let's call it a bath, which could be integrable or non-integrable. Then we have a probe spin (actually the exact same construction works if you have a probe photon, a probe oscillator), and we weakly couple it. And then what we want to ask: if the magnetic field, call it h, is very big, it's clear that for a small system my magnetization will be conserved, because with a strong magnetic field the spin will not relax. But then I want to ask how this conserved operator relaxes, or gets dressed, if I make the system bigger and bigger. It looks like a different problem: I'm now looking for a local integral of motion which commutes with the Hamiltonian and is adiabatically connected to sigma^z. But I hope I convinced you that this is equivalent to finding the adiabatic gauge potential: this Q is nothing but the operator G I introduced, d_lambda H + i[A_lambda, H]; it's a completely equivalent problem. Now let's try to use perturbation theory in 1/h to find a better integral of motion. The idea is that when the coupling epsilon is zero, the conservation condition is trivially satisfied:
this Q_0 is just sigma^z of the probe, the conserved magnetization (for Floquet systems it would be the photon number). But then I want to write an expansion: as you know, it's 1/h times some correction, plus 1/h squared times another correction, and so on. What I need to solve is again a very simple equation, but simple commutator equations sometimes have complicated solutions. I basically want the next-order integral commuting with the large term, which is h sigma^z_0, to cancel the previous-order integral commuting with the small term; this is standard perturbation theory, so let's see how it works. In first order it's easy: the commutator of Q_1 with sigma^z_0 should match the commutator of Q_0, which is sigma^z_0, with epsilon H_int; and Q_0 commutes with H_bath, because H_bath lives on the bath while Q_0 lives on the probe, so they obviously commute. Now you stare at this equation and say: oh, I solved it, Q_1 is proportional to epsilon H_int. So what you did is find a better conserved operator: to zeroth order it is sigma^z_0, but at the next order you just add some spin-spin coupling. Then you want to continue, and it turns out we could only continue in general to linear order in epsilon, because to solve this equation there is a technical requirement which I'm not going to explain in detail: you need the interaction to be odd under sigma^z, which is satisfied if I put sigma^x or sigma^y here, because if you dress sigma^x and sigma^y with sigma^z you get minus the same operator. It's a technical point, but the point is that to leading order in epsilon all the commutators that appear remain odd, so I can continue the series. Anyway, with this caveat, which I'm sure is unclear, if you write this equation and try to solve it, you will see in five minutes that you can actually write the full
solution: it's linear in epsilon, in my coupling, and it's a very simple sum of nested commutators, nested commutators of my coupling to the bath with the bath Hamiltonian. So all information about convergence or divergence of this series is contained in these nested commutators, and those who know will recognize that this is what builds your Krylov space. Now we can stop at nth order and ask how well we did: I want to know the norm of the commutator of my operator with H (remember, we want it to be zero; that's how the iteration works). And we find an analytic answer: you get the small parameter to the power n+1, just because each order contains one factor of it, and all nested commutators cancel except the last one, because I stopped at nth order, so you are left with one nested commutator (there is a short notation with the Liouvillian, but basically it means taking the commutator with H n+1 times). Now I can estimate the mistake, and this mistake has a physical meaning: it's basically the inverse lifetime of my operator. From a short-time expansion it's clear: if an observable commutes with the Hamiltonian it has infinite lifetime, it does not decay; if it does not commute, it decays, and the square of the commutator norm is the decay rate. So the result is this nested commutator squared, divided by the corresponding power of the large field. We therefore have a very simple criterion for when chaos does not happen. It doesn't tell you about ergodicity, but it tells you about chaos; at least it tells you when you can have good conserved integrals of motion. In particular, if I deal with finite-dimensional matrices, all these norms are bounded, so at large h the procedure works and I can dress my integral of motion. But if I deal with generic local interacting systems (I wrote 1D, but dimensionality doesn't matter), then to our rescue there were papers, the first one from the Berkeley group, Ehud Altman's group, in 2019, a very, very nice paper, and then follow-up papers by Dymarsky and collaborators, by Cao, and so on. They actually solved the problem of how these nested commutators behave: they grow factorially. And there is very rigorous math behind a very simple statement. Suppose I have a local operator; when I take the nested commutator with the Hamiltonian, I increase its support, first one spin, then two spins, three spins, and so on. Each time, my norm of course increases exponentially, because each time I add a coupling J I multiply by J; but on top of that I increase the number of sites, so each order multiplies by an extra factor that grows with the order, and this gives you the factorial. Of course they have much more serious, rigorous work; this is just the heuristic justification. The bottom line is that for local interacting models, disordered or not, this expansion is asymptotic: we can only go up to a certain order, and after that we have to stop. So we can get a good integral of motion, but not an infinitely good one. And you can ask how good we can get: you just look at the best order, basically where this norm stops decreasing, and again it's a very simple calculation, factorial versus exponential, and you find that the best order gives you a decay rate Gamma exponentially small in the inverse coupling. So you get a very, very good integral of motion, with an exponentially long but not infinite lifetime. OK, so in this very last part of the talk I will show actual results which just appeared today, and these are really nice pictures. The title of the work, which Hyeongjin, one of our students, came up with, is "Integrability is attractive"; for some people integrability is attractive in an aesthetic sense, but it's actually attractive in a geometric, or maybe dynamical, sense. I don't have much time, so let me just really try to explain what is going on.
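The "factorial versus exponential" competition can be made quantitative with a toy estimate (the form Gamma_n ~ v^{2(n+1)} (K^n n!)^2, with v the small parameter and K a bath coupling scale, is my schematic reading of the argument, not a formula from the lecture): minimizing over the truncation order n gives a decay rate whose logarithm scales like -1/v, i.e. exponentially small in the inverse coupling.

```python
import math

def log_gamma_min(v, K=1.0, n_max=300):
    """Toy optimal-truncation estimate:
    log Gamma_n = 2(n+1) log v + 2(n log K + log n!),
    minimized over the order n. The optimum n* ~ 1/(K v)
    gives log Gamma ~ -2/(K v): exponentially small in 1/v."""
    logs = [2 * (n + 1) * math.log(v)
            + 2 * (n * math.log(K) + math.lgamma(n + 1))
            for n in range(n_max)]
    return min(logs)

for v in (0.2, 0.1, 0.05):
    print(v, log_gamma_min(v))   # roughly -2/v: halving v doubles |log Gamma|
```

Working with logarithms (via `math.lgamma`) avoids the overflow/underflow that the factorials and tiny rates would otherwise cause.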
We are still trying to understand the whole picture of what happens with chaos and ergodicity near integrable points, but now we want to look really carefully at the full metric, not just one direction or another, but the whole metric tensor. This is the model (I'll maybe show a couple of models): the same transverse-field Ising model, which is integrable, I've mentioned it many times, plus a longitudinal field h which breaks integrability. There are actually two interesting results, and I don't want to overwhelm you, but basically one result is that this whole transition to chaos seems to satisfy a scaling theory, like a second-order phase transition, so it's universal; and the second result is that you can find integrable regions by following, in a sense, geodesics in the metric space. I'll now try to translate what I just said, coming back to Lorenzo's point. Let me take an integrable point, so h equal to zero, a free model, and look at the fidelity susceptibility with respect to the integrability-breaking perturbation. This quantum language is very convenient, because we know that an integrability-breaking perturbation violates all selection rules: in the integrable case I have many conserved operators and hence selection rules, and the perturbation breaks them all. If all selection rules are broken, I again expect the spectral function to be constant, just because matrix elements between close-by energies or not-so-close energies, who cares, all the gaps are opened at roughly the same rate. So we expect the fidelity susceptibility to still scale as 1/mu, the ETH scaling, even though the system is integrable, because I'm looking in a non-integrable direction. Now what happens if we look in the transverse, that is integrable, direction? Well, we know that when h equals zero, I just said the spectral function must go to zero at small frequency, which means that chi is a constant.
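In compact form (the subscripts are my notation: chi_h for the integrability-breaking direction, chi_g for the integrable one), the two statements just made are:

```latex
\chi(\mu)\;\simeq\;\int_{\mu}^{\infty}\frac{\Phi(\omega)}{\omega^{2}}\,d\omega,
\qquad
\chi_{h}(\mu)\;\sim\;\frac{\Phi(0)}{\mu}
\quad\text{(breaking direction, } \Phi(0)\neq 0\text{)},
\qquad
\chi_{g}(\mu)\;\sim\;\frac{\Phi(\Delta)}{\Delta}=\mathrm{const}
\quad\text{(integrable direction, spectral gap } \Delta\text{)}.
```

So at the integrable point itself, the breaking direction already shows the ergodic 1/mu divergence, while the integrable direction stays finite.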
But now, I'm skipping some steps, but let's assume that h is very, very small, like arbitrarily small. I said perturbation theory doesn't work, but say h is equal to e to the minus ten million, right? Then perturbation theory will work. And what happens is that this small h will induce transitions, because it lifts the selection rules which were there before. Then you can convince yourself that the spectral function will be h squared over omega squared: remember, it's the square of the induced matrix element, which is proportional to h divided by the energy denominator, and the denominator is the frequency. And then you translate it and you see that the susceptibility is h squared over mu cubed. So it's small, but it grows much faster. And now we see two interesting things. First, both results suggest the same scaling form. Again, how do you usually find a scaling form? You first find it perturbatively and then check whether it holds beyond perturbation theory. So both results can be cast in the form that chi is one over mu times a function of h over mu, where h is the integrability-breaking perturbation, right? It works in both cases. OK, sorry, getting tired, almost done. And then we get another interesting result. If we ask what the minimal direction of chi is, so where the wave function changes least, then when h is very small the minimal direction is parallel to integrability, say the g direction. But on the other hand, for the reason I just mentioned, the parallel susceptibility has a much stronger dependence on mu. Did I again mess something up? No, that's probably correct. So when h becomes big, the critical value being basically h equal to mu, they switch, and the minimal direction becomes orthogonal to integrability: it points towards integrability. And the reason behind this is physically very much expected. We already discussed that in integrable systems I
have spectral gaps, and basically it means that I don't have slow long-time dynamics: Phi of omega is small at small frequencies. But if I break integrability, my delta functions start to broaden and I start getting prethermalization, so instead of fast dynamics I'm getting very slow dynamics; the orthogonal direction was like that from the beginning, and nothing much changes for it. And this is indeed what happens. I know it's a bit overwhelming, but these are my last three slides, so I apologize if it's too much, I didn't have time to simplify. This slide basically shows that scaling theory indeed seems to work for the transition. We plot the product mu times chi, which should be a function of h over mu, and these are different system sizes. What you see is that if you increase the system size you get a bigger and bigger region of collapse, and you are already in the non-perturbative regime. So it seems that the whole crossover, almost all the way to the maximum (remember, after the maximum you start getting ergodicity), is described by scaling theory. So at least there is good numerical evidence that the emergence of chaos is universal; integrable points are sort of like critical points, but we look at infinite temperature. The other thing we can see is the same plot, but without rescaling the horizontal axis: now it's plotted against h, not h over mu, so this is the physical magnetic field. There, if we look carefully, we see, just as in the plot I showed earlier, that there is a growing region of ergodicity; remember, in the ergodic regime chi times mu should go to a constant. What you see is that if you make mu lower and lower, which you can only do if you increase the system size in parallel (this is again another subtlety), this ergodic region where you have a collapse gets bigger and bigger. So in this model everything is as expected: you have chaos, which appears very early on, and
then we have ergodicity, which appears later. Both go to zero, one faster than the other, but again we don't have a quantitative statement. So now, what's the physics? I already explained the physics, but here you can sort of see it in the numerics. Now we look into the autocorrelation function; this is exactly what you would measure in experiment, maybe its Fourier transform, this is your memory function. And what happens is exactly what I described in words. If you look at the solid lines, which are the orthogonal direction, you see relatively fast dynamics; I mean, you don't see diffusive tails and so on, because the systems are too small. But if you look at the parallel direction, so this is again small but not too small integrability breaking, you start to see super slow dynamics, and this is your prethermalization. This glassy slow dynamics is what is behind all those divergences you see in the parallel direction. There is some subtlety: if you move a little bit away from integrability, you have to decide what the parallel direction is, but you can decide it by minimizing or maximizing chi. So we actually see what is physically going on, and these minimal-chi directions are actually the directions of slowest relaxation, so they are extremely natural. There is another model; maybe I should skip it because I think it's too much, so let me just say it in words, because there is some amazing result we found, in my opinion. It's a model where we break integrability only at the boundary. The system is integrable, you have a boundary, and yet from the point of view of chaos it looks the same. But when you look at the same plots and at what happens with ergodicity, you see a totally different story. Now, if you look at the same plot for that model, so you look at mu times chi, you see that if you decrease mu, and correspondingly increase the system size, you never see a collapse. This is an indication that the system never satisfies ETH, even when
integrability breaking is small. So this is an example of a system which seems to satisfy KAM. This is probably not too surprising in hindsight: even though this is an extensive system, because you break integrability only at the boundary, it's kind of like a zero-dimensional system. In order for particles to thermalize, they need to travel to the boundary, come back, scatter there, travel again, come back, and so on. So from this point of view it's almost like a zero-dimensional system. Can you speak up? No, no, this way you will not get Wigner-Dyson or anything. So this is one of the indicators, but none of them is decisive by itself; basically, what you see is that this scaling of mu times chi, which increases as you lower mu, really tells you that the spectral function is never flat. And these results, I want to highlight, are in the thermodynamic limit; because it's the thermodynamic limit, our time cutoff is finite, we cannot go to a lower cutoff. It's like infinite time versus infinite space, and we decided to go to infinite time. This is the XXZ chain. OK, since you asked: this is the XXZ chain with a boundary magnetic field on one side, which doesn't break integrability, and with the last link different from one. It turns out that when g is one, this is the XXZ chain with a boundary magnetic field, which is integrable; when g is zero it's also integrable, the last link just doesn't exist; in between, the model is supposed to be chaotic. It's sort of a random choice; I mean, it's not that we searched, we just thought, OK, let's try to have two parameters where we break integrability close to the boundary. And for a long time, actually, we started writing the paper thinking that, oh, it's similar, it's similar, but then: no, no, it's not similar. This is an example of a model where we see no traces of Wigner-Dyson statistics or ETH whatsoever. I cannot rule out that if you go to some crazily small scale something happens, but these asymptotics are very convincing, in the sense that
what you see is that if you try mu squared times chi, which is the maximal possible scaling of chi, then the scaling works reasonably well. Again, I don't say it's perfect, but it works reasonably well. When you try the ergodic scaling, the ETH scaling, it just does not work for any parameter. We actually see the exact same story for classical systems where there is KAM, which are supposed to be never ergodic (I mean, there are some spin models which are never ergodic): we see the same story. So in this sense this is a good indicator to see that the model is always chaotic but never ergodic, or at least, to whatever numerical precision, never ergodic. I have two minutes left, so let me just show a nice picture justifying the title of the paper, "Integrability is attracting." What we show here are the directions of the smallest metric, so these are basically geodesics, loosely speaking. And the interesting thing, for this model, is that this demonstrates you can identify integrable regions even if you don't know them: these adiabatic flows, the minimal-direction flows, bring you towards integrability. This is of course a finite-size effect, so we expect that if you go to infinite size this becomes sharper and sharper, and actually I explained why it becomes sharper and sharper. So one can say, oh, this is a nice geometric result, but how is it relevant to physics? Well, first of all, I told you that these directions are actually directions of fast relaxation. So if you have some coupling manifold and you look at which conjugate observables relax faster, you just follow that direction: you move in that direction, modify the Hamiltonian, check again which direction you need to follow, update it, go on, and so on, and you will end up at integrable systems. But then, and this is already at the level of speculation, but it's the last slide so I hope you will forgive me, I would say that actually dynamically systems want to tune
themselves to these points. Imagine that my external fields are not external fields but dynamical variables. Then the orthogonal directions are the directions of strongest dissipation, strongest mass, and so on. You can imagine what a large metric means: it means there is a huge reconstruction of the wave function. Now think about a polaron. If I have a polaron with a big mass, it means it is dressed by many, many atoms, like an electron, and it's very hard to move. Now you ask, dynamically, will the polaron move in such an orthogonal direction? It happens with you also, right? If there is no path, and one direction is easy and one is hard, you always follow the easy direction. So in some sense this is an interesting point. And then, at the level of speculation: even though integrable points or lines are measure zero, they are actually attractive for nature, and of course we have many examples of nearly integrable systems around us; whether it's for this reason or not, I don't know. So this is my last slide, I'm a bit over time. This is basically a colloquial picture, just to finish what I was telling you. In a way, I would argue that integrability lines are like river basins, if I may use a loose analogy. We have rivers, which are the integrable lines; we have mountains, which are the chaotic regions; and our adiabatic flows, like streams, go towards the rivers all the way, then they turn and go parallel to the rivers, which is exactly what the adiabatic flows do, and then they end up at certain points, usually high-symmetry points, which are sort of like vortices of this geometry. So there seems to be an extremely interesting geometry here as well. I just showed the metric, but the metric defines a manifold; we even have some collaborations, actually with two people in the audience, Ruth Steminis, and maybe it will lead to some results, but
it's clear that the geometry encoded in this metric tensor is very interesting. OK, so I'm done. Let me give you a summary of what we discussed during these lectures. I remind you that we discussed the circular relation between chaos and determinism: chaos leads to determinism, determinism leads to chaos, a kind of yin-and-yang relation between them. We also talked about emergent random matrix theory and the ETH description of ergodicity, or strong chaos, or mixing, or whatever; I don't think the terminology is completely settled yet in quantum systems. And from this you get emergent thermodynamic relations. Then, in the later part of these lectures, we discussed that classical and quantum chaos and ergodicity can be understood through the complexity of, basically, the adiabatic transformations which preserve trajectories, and I argued, intuitively as well, that maximal chaos in this sense of sensitivity actually occurs very close to integrability, not when the system is, say, ergodic. In the very last part I mentioned that it seems, and I don't want to make strong statements because so far this is just numerics and some perturbative results, that emergent chaos satisfies some scaling relations, and that from the point of view of geometry, and possibly of dynamics, integrability is actually attractive; it's like a fixed point. Thank you very much, everyone. I'm here until tomorrow afternoon, so if you have any questions, please. Yeah, so in these MBL systems, it seems that what people thought was a localized phase is actually a glassy phase. If you have translational invariance, then there is no reason for slow spatial dynamics, but you can have slow dynamics, for example, in momentum space. That's what actually happens in the Fermi-Pasta-Ulam problem, which is classical: you have super slow dynamics of the occupations of the normal modes. So it probably depends a little bit on what the integrals of motion are, whether they are extensive or not, but it
seems that this glassy dynamics is a completely universal feature of all nearly integrable systems. Whether it's a universal glassy dynamics, always the same, or whether it depends on the system and so on, that is totally unclear. But this maximal chaos, and this is something I didn't mention, is also consistent with the spectral sum rule: the integral of the spectral function converges, it's your total spectral weight and so on. It means the spectral function cannot grow faster than one over omega in the asymptotic small-frequency regime, and one over omega translates into chi of order one over mu squared. So basically maximal chaos corresponds to glasses in this sense. And now the question is whether this is really asymptotically realized in all systems, asymptotically meaning when time goes to infinity. The results are suggestive that it is happening, but you need much more work, and maybe more analytic understanding, to see where it comes from. And actually, one over omega is one-over-f noise; this is something I forgot to mention, this maximal chaos connects to the famous problem of one-over-f noise, which has been with us for many years. I don't want to lie, I don't know the details. But I don't think entropy, at least entanglement entropy, will have a similar maximum; maybe it's related, because entropy is maximized when you are ergodic, but I was trying to argue that, even by our intuition, these states are not very chaotic, I mean, they are very simple. It could be that there is some relation, I don't know; maybe if you send me some references I can try to look at them. Yes, what they mentioned? Oh, this is the last part, the speculation. There is a belief, and I would say until this work I was part of this belief, that all models in the thermodynamic limit become ergodic. But this model is special: think about it, you have an infinitely large system which is integrable, and only at the boundary you break
integrability. And then think about how thermalization happens. If you are in the integrable part, you have your integrable excitations; I'm afraid to say something wrong, I asked Dimash not to kill me, but integrable models, I know, have solitons or some other excitations which propagate sort of like quasiparticles, and they move freely, nothing happens to them. Then they reach the boundary; that still doesn't break integrability, they reflect. But once they reach this link, you start getting three-particle scattering, right? So you scatter them a bit, then again they move, they come back, they scatter a little bit. In this way, from the point of view of these quasiparticles, it looks a bit like a zero-dimensional system. But again, this is just intuitive speculation, not a mathematical statement; we were trying to make sense of how this is possible. We actually checked this point very carefully, because we thought maybe we made a mistake somewhere, so we checked everything, and it seems that you never see ergodicity. Yes, so for example, what I was trying to say: imagine your parameters are dynamical degrees of freedom. Then what I was saying is that generically the evolution will be biased towards integrable points; in this sense, they will self-tune themselves to integrable points, just because of this rivers-and-streams picture. This geometry, in some sense, also points out the directions of this dynamical motion: the geometry also renormalizes the mass, it contributes to dissipation, and so on and so forth. So you have less dissipation in these directions, you have less mass renormalization in these directions, so it's more likely you'll follow them. Of course this anisotropy only becomes strong asymptotically, when you are very close to integrable points; when you are far, this anisotropy is very tiny. But then, if you have an integrable line or point nearby, you
are actually biased to move closer to it.
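As a purely illustrative caricature of this self-tuning picture (my own toy example, not the model or code from the talk; the quadratic toy metric and all numerical values are assumptions), one can repeatedly follow the smallest-eigenvalue direction of a metric tensor on a two-dimensional coupling space (g, h), with h = 0 playing the role of the integrable "river":

```python
# Toy "integrability is attracting" flow: follow the minimal-eigenvalue
# direction of a metric on coupling space (g, h). The metric below is a
# made-up stand-in for the chi tensor: moving parallel to the integrable
# line h = 0 is cheap only near the line, the orthogonal cost is constant.
import numpy as np

def toy_metric(lmbda):
    g, h = lmbda
    chi_gg = 1.0 + 20.0 * h ** 2   # parallel direction: cheap near h = 0 only
    chi_hh = 2.0                   # orthogonal direction: constant cost
    return np.diag([chi_gg, chi_hh])

def follow_minimal_direction(metric, lmbda0, steps=400, eta=0.01):
    """Step along the eigenvector with the smallest eigenvalue, orienting
    each step so the metric trace (total 'cost') does not increase."""
    lmbda = np.asarray(lmbda0, dtype=float)
    path = [lmbda.copy()]
    for _ in range(steps):
        w, v = np.linalg.eigh(metric(lmbda))   # eigenvalues in ascending order
        d = v[:, 0]                            # smallest-eigenvalue direction
        if np.trace(metric(lmbda + eta * d)) > np.trace(metric(lmbda - eta * d)):
            d = -d                             # pick the cost-decreasing sign
        lmbda = lmbda + eta * d
        path.append(lmbda.copy())
    return np.array(path)

path = follow_minimal_direction(toy_metric, lmbda0=(0.0, 0.8))
print(path[0], "->", path[-1])  # flow is drawn toward the line h = 0, then runs along it
```

Orienting each step so the metric trace does not increase is just one way to fix the sign ambiguity of an eigenvector; it encodes the "follow the easy direction" heuristic from the talk. With these toy parameters the flow first descends in h toward the integrable line, then turns and runs parallel to it, mimicking the streams-and-rivers picture above.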