I guess I want to add that. I mean, slowly I will go to more complex material, so I'll try to explain what I can, but I won't be able to do all the derivations. And for those of you especially who see it for the first time, you will be overwhelmed just by the amount of material I'll try to present. I'll try to give many summaries of what we learned and so on. But if you are interested in this topic, I don't think there is any other way than to go through it yourself at some other time and try to fill the gaps. So let's move on. Let me just, as a disclaimer, take a small detour. This is a kind of fun project which was done with Pieter Claeys, who is now faculty in Dresden. So I talked a little bit about eigenstates as a way to define quantum chaos. But let me actually say that these Wigner-Dyson statistics, all these conjectures, apply equally to classical systems. I'll just flash some results without going into details and continue. Actually, maybe many of you have heard about the Wigner function. We can represent quantum mechanics entirely in phase space, without even introducing objects like wave functions. We have our phase space, we have observables. But there are complications coming from the fact that the product is non-trivial, the so-called Moyal product, and so on. So the rules of quantum mechanics enter as rules for manipulating these numbers. But the inverse is also true: we can take a classical distribution and map it to wave functions. I'll just briefly flash how it's done. So let's assume we have some stationary distribution, say a Gibbs ensemble for simplicity, but technically it could be anything else. And then what I can do, I can just do a Fourier transform. For those who know, it's like an inverse Weyl transform with respect to momentum. The variable conjugate to momentum I will call the relative coordinate, the difference of two coordinates X1 and X2.
And the coordinate X I will call X1 plus X2 over two. Then I have to introduce some epsilon, which of course in quantum mechanics is h bar, but I don't have quantum mechanics, I just have epsilon. And then I get a function of two coordinates, X1 and X2. These could be vector coordinates, they can describe many particles, it doesn't matter. And if you stare at it a little, this function is symmetric, or more precisely Hermitian: exchanging X1 and X2 is like taking the complex conjugate, because the distribution is real. I guess Hermitian is the right word. And if it's Hermitian, I can treat X1 and X2 as discrete indices and diagonalize it. Then I can write it as a combination of eigenstates of this matrix, with some eigenvalues Wn. And then I ask, what are the properties of these eigenstates and eigenvalues? It turns out there are lots of parallels to quantum mechanics. These psi's satisfy an approximate Schrodinger equation; there's no time here. You can get tunneling, you can get Berry phases, band structures and so on. And this is not a semiclassical expansion, I just want to highlight: you can set this epsilon to be one, of order one. And it's interesting that this formalism was actually developed in the 60s, with some roots in the 40s, but then completely forgotten. So you sort of reduce the phase space: you map a continuous distribution W(x,p) into a set of discrete wave functions psi_n(x), like in quantum mechanics, and weights, which are like eigenvalues. And in this language your observables x and p become operators, and these are exactly the same operators which appear in quantum mechanics. In particular, if you want to compute the average of p, this is the same as computing the average of the operator p in these eigenstates.
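To make this concrete, here is a minimal numerical sketch of the construction just described (my own illustration, not the lecture's code): take a classical Gibbs distribution W(x, p) for a 1D harmonic oscillator, Fourier transform it over momentum (an inverse Weyl transform) with an assumed epsilon of 0.3, and diagonalize the resulting Hermitian kernel.

```python
import numpy as np

# Sketch (my own illustration): inverse Weyl transform of a classical Gibbs
# distribution for a 1D harmonic oscillator, then diagonalization of the
# resulting Hermitian kernel rho(x1, x2) into weights W_n and "eigenstates".
beta, eps = 1.0, 0.3                 # inverse temperature; eps plays hbar's role
N, L = 101, 8.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
p = np.linspace(-L, L, N)
dp = p[1] - p[0]

def gibbs(xc, pc):                   # W(x,p) = exp(-beta H)/Z, H = (p^2 + x^2)/2
    return np.exp(-beta * (pc ** 2 + xc ** 2) / 2) * beta / (2 * np.pi)

X1, X2 = np.meshgrid(x, x, indexing="ij")
Xc = (X1 + X2) / 2                   # center-of-mass coordinate
Xr = X1 - X2                         # relative coordinate, conjugate to p

# rho(x1, x2) = int dp W((x1+x2)/2, p) exp(i p (x1 - x2) / eps)
rho = (gibbs(Xc[..., None], p) * np.exp(1j * p * Xr[..., None] / eps)).sum(-1) * dp

assert np.allclose(rho, rho.conj().T)     # Hermitian, as claimed in the text
w, psi = np.linalg.eigh(rho * dx)         # weights W_n and eigenstates psi_n(x)
print(round(w.sum(), 3))                  # trace = total probability -> 1.0
```

For this Gaussian distribution the weights and states approximately reproduce the thermal occupations and eigenfunctions of a quantum oscillator with hbar = epsilon, which is the parallel the lecture draws.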
And if you ask what's the Gibbs Hamiltonian which describes these eigenstates, you can do a high-temperature expansion (otherwise it's a complicated problem), and then you will actually get your normal quantum Hamiltonian plus corrections. I'll just show some pictures. If you take a double-well potential, set all parameters to be one, and just put h bar to be 0.1, which is small but not too small, and ask for the lowest eigenstates of the Gibbs distribution, you actually get nearly perfect tunneling states. You recover them right away. Moreover, if you compute the level splitting between the symmetric and antisymmetric eigenstates, without any particularly small parameters, so everything of order one, and the temperature you can also choose to be, say, 0.1, you can get accuracy like 10 to the minus 13. So the Gibbs distribution knows a lot about quantum mechanics. I'll skip this: you can get Berry phases, interference and so on and so forth. So let me just say, related to this talk, that you can actually apply the Bohigas-Giannoni-Schmit conjecture to the Gibbs distribution. What I'm doing here, I'm just taking my classical Gibbs distribution of a particle in a cavity. In one case it's the Sinai billiard, in the other it's some ellipse, which is integrable. I take my Gibbs distribution, do the Fourier transform and diagonalize it. No quantum mechanics anywhere. And I ask, what's the distribution of eigenvalues? In the first case I get random matrix statistics; in the second I get approximately Poisson statistics. So all these conjectures work. And basically the statement is that I can say whether the system is ergodic or not by just analyzing static snapshots, no dynamics. So to get this Gibbs distribution, say experimentally, I just take many, many photographs, build the image, do the Fourier transform, look at the eigenstates, and I know whether the system is ergodic.
It's kind of funny. So in a way these Wigner-Dyson statistics and so on are not special to quantum systems; they are a way to see ergodicity, whether quantum or classical. Anyway, let me continue where I stopped. Let's try to apply random matrix theory to observables and see what the predictions are. I'll be a little sketchy here, but as I said, I urge you to fill the gaps if you are interested. So let's assume we have some observable which is not random. This observable is a matrix, some non-random matrix in some basis, say the Fock basis. This observable could be, say, a particle number, and then it will be diagonal in the Fock basis. And then we can ask, what are the matrix elements of this observable in eigenstates of the Hamiltonian? But if my Hamiltonian is a random matrix, these eigenstates are random: they are random unit vectors, which are orthogonal, of course. And then what you will get is essentially a simple expression: you have non-random matrix elements, and then you have these projections of random vectors on fixed axes. Think about each vector i and j as a non-random axis; the Fock space, the Hilbert space, looks like a vector space, right? Well, if my vectors are random, then on average these projections are uncorrelated, except when I project the same vector on the same axis; then I get a positive number, and on average this number is one over N. Just think about three dimensions: you have a random vector which you project on three coordinate axes. On average you'll get one third of the weight on each axis, and all other projections are uncorrelated. And actually, the higher the Hilbert space dimension, the smaller the fluctuations: the fluctuations around the mean become smaller and smaller.
So now, if I ask what's the expectation value of such a matrix element: if alpha and beta are different, it's on average zero. But if alpha and beta are the same, then I see that i and j should also be the same, right, by this rule. And then I immediately see that what I get is basically the average, trace of O over N. And think about this: this is really a microcanonical ensemble, except that I sum over all energy states. Because I have a random matrix and there are no selection rules whatsoever, my microcanonical ensemble actually becomes the infinite temperature average. There are no constraints, right? And this is sort of expected: if you start from a random matrix and ask what's the average, you will get basically the infinite temperature ensemble, in the language of statistical mechanics. So then, as I said, off-diagonal matrix elements are on average zero, but you can compute their fluctuations. It's a little longer: you take O alpha beta, take the complex conjugate and square. But then it's very easy to see that you'll get some cross terms which are non-zero, from the same rule. And what you will find is that the fluctuations are basically one over Hilbert space dimension squared times trace of O squared. And I just want to say that typically our observables don't scale with Hilbert space dimension. So this will be sort of intensive: trace of O squared is a sum over roughly N eigenstates, and you divide by N squared; one N disappears because of the trace, but the other N remains, because in each state O squared (think of O as some spin squared or whatever) doesn't scale with N. So we see that fluctuations are suppressed as one over N. This is a prediction. And if we combine things together, we can loosely say that our matrix elements within the random matrix approximation are given by some average O times delta alpha beta, the Kronecker delta.
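These two predictions are easy to check numerically. Here is a sketch under my own conventions (the GOE-like normalization is just a standard choice): matrix elements of a fixed observable in the eigenbasis of a symmetric random Hamiltonian.

```python
import numpy as np

# Matrix elements of a fixed observable in the eigenbasis of a random
# symmetric (GOE-like) Hamiltonian: diagonal elements cluster around
# tr(O)/N, the infinite-temperature average, with 1/sqrt(N)-suppressed
# fluctuations; off-diagonal fluctuations scale as tr(O^2)/N^2.
rng = np.random.default_rng(0)
N = 1000
H = rng.normal(size=(N, N))
H = (H + H.T) / np.sqrt(2 * N)       # symmetric random Hamiltonian
E, V = np.linalg.eigh(H)             # columns of V: random orthogonal eigenvectors

O = np.diag(rng.normal(size=N))      # observable, non-random with respect to H,
Oe = V.T @ O @ V                     # diagonal in the original ("Fock") basis

diag = np.diag(Oe)                   # O_{alpha alpha}
off = Oe[~np.eye(N, dtype=bool)]     # O_{alpha beta}, alpha != beta

assert diag.std() < 5 / np.sqrt(N)                         # suppressed as 1/sqrt(N)
assert abs(off.var() * N**2 / np.trace(O @ O) - 1) < 0.2   # variance ~ tr(O^2)/N^2
```

The diagonal mean equals tr(O)/N exactly (the trace is basis independent); the point of the check is that the fluctuations around it shrink with the Hilbert space dimension.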
So the diagonals are given basically by this microcanonical value, with the caveats I just explained, and the off-diagonal fluctuations are on average one over square root of N, with some distribution, say Gaussian, such that the square of this is roughly one. And this is a strong statement. As usual, it's not exactly correct, because if I start looking into higher-order correlation functions, like O to the fourth power or whatever, I can get higher-order correlations even within random matrix theory. I'm skipping this part completely, but there was recent very nice work by Silvia Pappalardi, Jorge Kurchan and Laura Foini applying free probability theory and going beyond this statement, basically extending it to higher-order correlation functions. But anyway, this is a prediction of random matrix theory, and of course we don't deal with random systems. But then a major breakthrough, I would say, in trying to reconcile quantum mechanics and statistical physics came due to Mark Srednicki, whose picture is here, and Josh Deutsch, independently, who already, like 30 years ago, generalized this random matrix theory to normal physical systems. They were driven by the same idea, that the time average should approach equilibrium: we basically know it happens, and they asked, independently, how to reconcile these ideas of Berry and random matrix theory with the fact that we should equilibrate to the thermal ensemble. And basically they said that, in order to agree with thermal expectations, each diagonal matrix element should now indeed be the microcanonical average. Because if you choose your distribution narrowly centered around some energy, then in order to reconstruct the microcanonical ensemble, this diagonal matrix element should be the same for each state. And the details of exactly how they argued are not really important.
And they also went a bit further and said that off-diagonal matrix elements should have this suppression, remember I said one over square root of Hilbert space dimension, and the Hilbert space dimension translates to e to the entropy, so the suppression is e to the minus entropy over two. So basically the ETH ansatz, which was formulated by Mark Srednicki, states the following, it's encoded in this expression: if we take some, usually people say local, physical, whatever, observable, and ask about its matrix elements in the eigenbasis of this ergodic Hamiltonian, then we get microcanonical averages on the diagonal and some random, suppressed matrix elements away from the diagonal, but with an extra ingredient: there is some smooth function of the energy difference. So omega here is Em minus En. Basically, random matrix theory would be the statement that this O of E is energy independent and this function of omega is constant, equal to one. So they introduced structure on top of the random matrix ansatz. There's no proof behind it, but there are lots of numerical tests. So what's the meaning of this function? To save time, I'll probably skip the calculation; let me just say how it works. This function is actually related to the spectral function, the Fourier transform of autocorrelation functions. It's very easy to derive, but it would still take me 10 minutes to go through, so I skip it. Essentially, you look into non-equal-time correlation functions, and it's not surprising that a frequency omega appears: you probe the observable at frequency omega. What you can do, you can insert a complete basis of states, insert m here, and then integrate over time. And if you do it carefully, you will see that the square of this function, f squared, appears. Maybe I'll just show this and then I'll skip.
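Written out, in the notation I believe is standard for Srednicki's formulation, the ansatz described above reads:

```latex
O_{mn} \;=\; O(\bar{E})\,\delta_{mn} \;+\; e^{-S(\bar{E})/2}\, f(\bar{E},\omega)\, R_{mn},
\qquad \bar{E}=\frac{E_m+E_n}{2},\qquad \omega=E_m-E_n,
```

where S is the thermodynamic entropy (so e^S is the number of states at that energy), R_mn are random numbers with zero mean and unit variance, and O(E) and f(E, omega) are smooth functions. The random matrix limit is O(E) constant and f constant, exactly as stated in the text.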
So what you do, you introduce a sum over intermediate states, right? And here you will get En minus Em. So this translates to a sum over m of O n m squared, times e to the i (En minus Em) t over h bar. And these, by the ansatz, are precisely given by this little f squared of omega. Then, when you evaluate the integrals, you have to be careful: when you take the symmetric and antisymmetric parts, you get delta functions, principal values and so on. So you need to play a little with the algebra, but essentially both symmetric and antisymmetric correlation functions are expressed through the same off-diagonal matrix elements, and with a little effort you find what this f is. And you might see a beta here; I just want to highlight, yeah, I think this is important, that there is no Gibbs distribution. The temperature appears entirely from the density of states. You probably know that temperature is defined through the derivative of entropy with respect to energy, right? This is how the density of states changes. And when you have this type of sum, you get f squared evaluated at energy En plus omega over two. This is the center-of-mass energy; remember, omega is Em minus En. What appears in the matrix element is the center-of-mass energy, because the ansatz is completely symmetric. But then, when you sum over states m, this sum becomes an integral, because ETH tells you all nearby eigenstates are alike. So you replace the sum by an integral over dEm, or over the final energy, which I'll write as d omega, because you're integrating over the energy transfer omega. But then you get the density of states at energy E plus omega, which is Em. So you get e to the S of E plus omega from the density of states, and e to the minus S of E plus omega over two from the ETH ansatz. And now, E is usually extensive, while omega is of order one for local matrix elements.
It's how much you can change the energy by, say, flipping a particle from the up to the down state. So in large systems you can do a Taylor expansion, and this difference becomes beta omega and beta omega over two. That's how you get beta. It was actually missed originally in some of the original papers; it's a subtle thing, but if you do it correctly you recover it. And then, maybe in the next 10 minutes, I'll just show how from the ETH ansatz you can recover many statements of thermodynamics in one line. In particular, if you play this game, you see that a certain combination of the antisymmetric and symmetric functions must be small: it's a derivative with respect to E, and E is proportional to volume, so it's like one over volume and vanishes in the thermodynamic limit. So you see that this thing should be equal to zero because of ETH. But this is nothing but fluctuation-dissipation: if you remember, there is a hyperbolic tangent of beta omega over two appearing, which connects response and fluctuations. Again, I'm not going there, but just to say: S minus is your response, the Kubo response, and S plus, the symmetric function, is precisely the fluctuation. So the fluctuation-dissipation theorem just follows in one line from the ETH ansatz. And actually many other statements of thermodynamics follow; I'll just show a couple which are maybe interesting. Maybe some of you have heard about the Jarzynski and Crooks equalities. Essentially they relate the following: if you have some dynamical process in the system, you ask about the probability of doing work W, of basically increasing the energy by W. I'll show the result in a second, but I just want to say how easy it is to make these statements using ETH. Suppose I have a process: I send light at the system, or I drop it, or I hit it with a hammer. I do some process. And as a result, I will get transitions between levels, right?
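To spell out the one-line statement (my own summary; signs and factors of h bar depend on convention): inserting the ansatz into the autocorrelation function gives

```latex
\sum_m |O_{nm}|^2\, e^{i(E_n-E_m)t/\hbar}
\;\approx\; \int d\omega\; e^{S(E+\omega)-S(E+\omega/2)}\,|f(E,\omega)|^2\, e^{-i\omega t/\hbar}
\;\approx\; \int d\omega\; e^{\beta\omega/2}\,|f(E,\omega)|^2\, e^{-i\omega t/\hbar},
```

and since |f(E, omega)|^2 is even in omega, the symmetric and antisymmetric correlation functions pick up cosh(beta omega / 2) and sinh(beta omega / 2) respectively, so that S_-(omega) = tanh(beta omega / 2) S_+(omega): the fluctuation-dissipation theorem.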
And I can ask, what's the probability of increasing the energy versus decreasing it? We all know the probability of increasing the energy should be higher, right? If I do something, I'm heating the system up; the energy increases. But microscopically the probabilities are the same. You can show that if the system is isolated and I apply some time-dependent process, then the probability to go from level m to level n is given by the square of the matrix element of the unitary describing the evolution. This is beyond any perturbation theory; it's really an exact statement. And the square of the unitary is symmetric, so the probability of the blue arrow and the probability of the red arrow are exactly the same. Do I contradict myself? No, because we are not interested in the probability of going from one particular state to another particular state. We're interested in the probability to go into some window of energies around E plus W. And then, as you know, I have to sum over all final states. Now I use ETH: all the probabilities are the same, because all the matrix elements are the same. Again, there is some smooth function, but if I look into a narrow energy shell this function is always the same; I have the same energy difference. So what I get is basically this microscopic probability times the density of states at the final energy E plus W. Now, if I do the reverse process, going back, it's the same process, but now my final density of states is at energy E, and it's not the same. And now I ask, what's the ratio of these probabilities? I see it's the ratio of the densities of states, and again, if my W is small compared to the total energy, I can expand. Think for simplicity of a cyclic process, and this ratio is just e to the beta W. If you're more careful, you'll also find the free energy. And this simple statement led to the actually very famous Jarzynski equality, which you can get from here in one line.
I just move e to the minus beta W to the left and the probability of the reverse process to the right, and integrate over W. This side will give me one, and this side will give me the average of e to the minus beta W. And it's interesting, it's actually one of these famous results, because it's totally independent of any details. It tells us that whatever I do, if I measure the probability of doing work and compute the average of e to the minus beta W, where beta is the inverse initial temperature and W is the energy change, I will actually get the exponent of the equilibrium free energy change for the same change of parameters. If I do a cyclic process, this is one, because my free energy doesn't change, I come back; so I get that the average of e to the minus beta W is one. This is one of these recently found equalities. Actually, to be honest, it was found before, but it was not really appreciated until recently, and now it's used a lot. So, another very famous example: you know about detailed balance. Let me now assume that A is some simple system, could be, say, a three-level atom. It doesn't have to satisfy ETH or anything. And I couple it to a bath which satisfies ETH, so a big chaotic system. I couple them, let time evolve, and I again ask, what's the probability of the blue process versus the red process? The blue process is when A goes from the down state to the up state, and then, because I have to conserve energy, in the bath I have to go down; in the red process I do the opposite. Again, microscopically these are the same, by unitarity of the evolution; they're completely equal to each other. But you already see from the picture that I have more states in the bath for the red process. So it looks like the red process should be more probable, and this is the origin of detailed balance. And again, from ETH I derive it in one line: I just ask what are the probabilities that within A I go from state n to state m, versus the probability that I go down.
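As a numerical sanity check of the Jarzynski equality (my own toy illustration, not from the lecture): for a cyclic process the free energy change is zero, so the average of e to the minus beta W should equal one. A Gaussian work distribution whose variance satisfies the fluctuation-dissipation-like relation sigma^2 = 2<W>/beta (an assumption of this toy model, consistent with the Crooks ratio in the Gaussian limit) does exactly that:

```python
import numpy as np

# Monte Carlo check of <exp(-beta W)> = exp(-beta dF) for a cyclic process
# (dF = 0), assuming a Gaussian work distribution with variance 2<W>/beta.
rng = np.random.default_rng(1)
beta, mean_work = 2.0, 0.5
sigma2 = 2 * mean_work / beta
W = rng.normal(mean_work, np.sqrt(sigma2), size=1_000_000)

jarzynski = np.exp(-beta * W).mean()
assert abs(jarzynski - 1) < 0.02    # the equality holds despite <W> > 0
assert W.mean() > 0                 # on average we do positive work: heating
```

The point made in the text survives here: individual realizations can have negative work, but the exponential average is pinned to the equilibrium free energy change, independent of the details of the process.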
And again, what I need to do, I need to sum over all final states of B, because I'm not asking about them. Then I get the density of states only in the reservoir B, because I'm summing over B at different energies. And the ratio is just e to the minus beta, the inverse temperature of my reservoir B, times the energy difference. And this is detailed balance. So I got it from ETH in one line. Actually, you can go through many, many thermodynamic relations, and if you assume ETH you just prove all of them, at least all the ones I know. So I just want to highlight again: ETH is not a necessary condition for thermalization, at least it was never formulated as such, but it's sufficient. And it's kind of interesting that initially this quantum chaos or ergodicity business was very complicated, and people couldn't really understand how to describe quantum chaos or ergodicity for a long time. But in the end they developed a language which made all the statements much easier, because once ETH is formulated, the statements become simple. To make the equivalent statements in classical physics you have to say many more words, that the memory is forgotten, blah, blah, blah. Here you just say there is one ansatz which you can use. Then another statement: as far as I can tell, Norm will talk about this in his lectures; another thing which follows from ETH is this sort of entanglement. So I have a simple cartoon; let's imagine, and this is even a non-interacting system, that I have a superposition of a particle over all three states. This is my square well with only three positions, for simplicity. And I want to contrast it with a localized state, where the particle is in one position. Now let's imagine that I just don't measure the right position at all: I can put a black screen there, so I can only get information from the other two sites. And then you immediately see that in the localized case I still find a pure state: I always get the same result, because the particle will be in the second well.
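Schematically, with my own notation (epsilon up and epsilon down for two levels of system A, S_B for the bath entropy, beta for the bath inverse temperature), that one line is:

```latex
\frac{P_{\downarrow\to\uparrow}}{P_{\uparrow\to\downarrow}}
= \frac{e^{S_B(E-\varepsilon_\uparrow)}}{e^{S_B(E-\varepsilon_\downarrow)}}
\;\approx\; e^{-\beta(\varepsilon_\uparrow-\varepsilon_\downarrow)},
```

which is exactly detailed balance: the microscopic transition probabilities cancel by unitarity, and only the ratio of the bath densities of states remains.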
If the particle was in the third well, again I will get the same result: I will never see a particle in the system. But if I'm delocalized, then with probability one third I will see no particle, and with probability two thirds I will see the particle in this superposition. So I created entanglement in this case. And actually ETH, if you do it more carefully, and there are actually mathematical works, says that if you take a state which satisfies ETH, which is sort of random, and look into the reduced density matrix, it will look thermal. This is a much stronger statement than the one about observables, because it implies that all local observables are thermal. In a few words, the simplified statement would be that ETH implies that for small subsystems the entanglement entropy is the thermal entropy. Again, this is not a necessary condition, but it's sufficient. Okay, so now I'm slowly going to what I announced: trying to distinguish chaos and ergodicity. And I want to highlight that level statistics is really a measure of ergodicity. I didn't even define quantum chaos, but whatever the definition is, we will all agree that this system is chaotic, and I argued it's also ergodic. Well, if we just double it, so I have two boxes, the system is still chaotic in any sense, and I cannot predict anything. But according to level statistics, or according to thermalization, it's not ergodic, because I now have two boxes. Of course, this is a trivial example, and it's clear what I need to do: I have to say, oh, there is an extra conservation law, like the particle number in each well, and so on and so on; but I need to start saying extra words. And then you can ask, what if they're weakly coupled, what happens, and so on. So in the thermodynamic limit chaos usually implies ergodicity; at least there are no counterexamples. I mean, some people talk about many-body localization, but I think it's clear by now that there are many mistakes in those statements.
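The two-box example can be checked numerically with the r-statistic of consecutive level spacings (my own sketch; the r-statistic is a standard diagnostic that avoids unfolding): a single random matrix gives the Wigner-Dyson value of about 0.53, while the combined spectrum of two identical uncoupled "boxes", a block-diagonal Hamiltonian, moves toward the Poisson value of about 0.39.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_r(E):
    """Mean ratio of consecutive level spacings, min(s_i, s_{i+1}) / max(...)."""
    s = np.diff(np.sort(E))
    return (np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])).mean()

def goe(n):
    """Symmetric random (GOE-like) Hamiltonian."""
    H = rng.normal(size=(n, n))
    return (H + H.T) / 2

N = 1000
r_single = mean_r(np.linalg.eigvalsh(goe(N)))        # one chaotic box
r_double = mean_r(np.concatenate(                    # two uncoupled boxes:
    [np.linalg.eigvalsh(goe(N // 2)) for _ in range(2)]))  # block-diagonal H

assert abs(r_single - 0.53) < 0.03   # Wigner-Dyson (GOE) value ~0.5307
assert r_double < 0.47               # degraded: heading toward Poisson
```

With only two blocks the statistics are intermediate (mean r around 0.42); the Poisson value 0.386 is approached as the number of independent sectors grows, which is the sense in which the doubled system fails the ergodicity test while remaining chaotic.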
So then, I'm slowly going to the second part, and I will actually switch to another presentation shortly. The idea we had actually came about completely by accident; it's not that we purposely worked on it. And I'm now slowly moving from the general overview to our own work. It came from Mohit Pandey, who was a student at BU at the time, he has graduated already, and Dries Sels, who is now a professor at NYU. And the idea was: can we try to approach quantum chaos not through the sensitivity of trajectories, which we don't have, but through the sensitivity of eigenstates? Classically, I mentioned that we define chaos through the fact that if we change our system a little bit, be it the initial condition, the Hamiltonian, or something else, then trajectories will strongly diverge from each other. Now, for ergodic eigenstates, we at least know that they are almost random, and it's intuitively clear that if they are almost random, they should be highly unstable: you change the Hamiltonian a little, and they change a lot. This is as opposed to integrable systems, where we expect that eigenstates will be stable. Again, it looks like I'm going in a purely quantum direction, and you might ask, oh, but we have level statistics and so on; but maybe in the rest of this lecture and the next, I'll try to convince you that this idea of sensitivity of eigenstates is not really quantum, it has well-defined classical analogs, and hopefully I will show you how this idea connects many, many different dots. But before going there, let me summarize the first part by saying how we go from chaos, or ergodicity if you want, back to determinism, and how we have this kind of duality. I'm coming back to the very first picture, where I have this blue drop of ink which, when chaos takes over, relaxes to a very simple state.
If any of you took a lab, like a freshman lab in physics, you probably did this pendulum problem, where you measured the motion of the pendulum and compared it with some damped oscillator and so on. And then you can ask: well, do I contradict myself? Because I just told you that even two particles, or even one particle in more than one dimension, is chaotic. And now I have a situation with many particles; you can think of these as the particles of the air, which hit the pendulum. How is it possible that everything is deterministic when I'm doing my freshman lab and I know nothing about chaos? I just say everything is predetermined, yes, with extra friction and so on. And actually, the reason we have determinism is precisely chaos, if you think about it. Because we have chaos, we can say that the molecules here thermalize, with equal probabilities. In a way, in a mathematical sense, chaos gives a measure to our distribution: we are saying the states are equally probable on some energy surface. And then I can use the central limit theorem; if I don't have a measure, I cannot use the central limit theorem, I would have to postulate the probabilities by hand. And if you use the central limit theorem, you find that macroscopic observables actually have very small fluctuations, and that's the reason why the pressure in this room is the same everywhere. I have a lot of chaos around me, but the pressure here is pretty deterministic. And from this we actually see that when we look at macroscopic objects, like a piston, the motion is again deterministic, and we don't really need to solve any chaotic equations of motion. Moreover, if we tried to solve them, we would be in trouble. So we use various approximations, for example Born-Oppenheimer, mean field and so on. So in a way we see that there is a sort of loop, where chaos leads to determinism and determinism leads to chaos.
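The central-limit-theorem step can be illustrated in a few lines (my own toy model: independent, exponentially distributed molecular contributions to a "pressure"):

```python
import numpy as np

# Relative fluctuation of a macroscopic sum over N independent molecular
# contributions falls off as 1/sqrt(N): this is why pressure looks
# deterministic even though each molecular impact is random.
rng = np.random.default_rng(3)
sizes = (100, 10_000, 1_000_000)
rels = []
for N in sizes:
    # 50 "snapshots" of the total force from N molecules
    sums = np.array([rng.exponential(1.0, N).sum() for _ in range(50)])
    rels.append(sums.std() / sums.mean())    # relative fluctuation

assert rels[0] > rels[1] > rels[2]           # fluctuations shrink with N
assert all(0.5 < r * np.sqrt(n) < 1.5        # and scale as 1/sqrt(N)
           for r, n in zip(rels, sizes))
```

For the exponential distribution the single-molecule mean and standard deviation are both one, so the product of the relative fluctuation and sqrt(N) stays of order one while the fluctuation itself drops by a factor of ten per decade of N.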
What I didn't tell you is that even the dynamics of this whole thing, even the Lagrangian equations of motion, in a sense also come from chaos, but this would lead me away from the topic, so I'll skip it. Essentially we see that friction, viscosity, pressure, electromagnetic response, elastic properties, whatever, they all come from chaos. So chaos leads, in a way, to probabilistic determinism. Very often you might ask: why is it that I have reversible microscopic dynamics but irreversible macroscopic dynamics? People discuss what's missing and so on. But in a way there is no paradox, because what's happening is that chaos leads to probabilistic determinism, meaning that I go to probabilistic descriptions which are highly peaked around the maximum of the probability distribution, and that's the reason why friction is irreversible. Once I write a kinetic equation, I don't deal with microscopic equations of motion; I basically include the assumption of chaos implicitly, that the system is maximally random within the constraints. And the whole situation is also like Yin and Yang in Chinese philosophy, if you think about it: determinism leads to chaos, unstable solutions of the equations of motion, but chaos in turn leads back to determinism. We have statistical mechanics, which gives the measure, I mentioned the central limit theorem, and then we can make predictions again. And this is something which I guess is good to keep in mind. So I think this is the point where I want to switch to presentation two. Let me see, I'm not really late. Well, see, I didn't finish this one, but that part is for tomorrow; I have two presentations supposed to cover three lectures. So this is part one of this talk, which I already motivated and will talk more about: a sort of geometric approach to chaos and ergodicity. I will try to connect them, at least.
I will try to connect this eigenstate sensitivity to geometry and say how we can think about chaos in this way. I will start by introducing different notions, and at some point, while there will be no complicated equations, there will be a lot of new material for many of you, and it could be overwhelming. So I apologize in advance; I'll try to give a summary, because I'm going to relate this eigenstate sensitivity to many different things and then, in the end, connect the dots together. Hopefully I'll show you some numerical results and some arguments, and show what's the physics behind all of this. Well, it's an excellent question; I will not be able to answer all of them. What I will try to do now is describe both quantum and classical systems using the same setup: the sensitivity of, you can think, stationary observables. I introduced stationary observables as time averages, classically as well, and now I can ask: if I slightly deform the Hamiltonian, how does this stationary observable change? How this is related to Lyapunov exponents we don't know yet; we are trying to work it out. The relation is not that direct, but it's there, we know it's there, and maybe two or three years from now I will know how to answer this question. So I'm going to change gears. I'm repeating what I just said; I was not sure where I finished, so I thought I would start by reminding you of these things, but since I just told them, I will not. So now let me spend some time defining what adiabatic transformations are, and I will again start from classical systems. The idea to keep in mind is that eventually I want to ask what an adiabatic transformation of quantum states is, and I want to simultaneously talk about adiabatic transformations of classical trajectories. So I need to define what I mean by adiabatic transformations. Quantum mechanically we kind of know it; I still have to introduce some objects.
Classically we know it less, so I'll start from classical systems. Again, I will be skipping some calculations, but I hope to highlight the main results. I will talk about Hamiltonian systems only, because they have these direct connections between quantum and classical, and everywhere in this talk systems are closed. Actually, there is a parallel workshop on open systems — another story, I guess. You probably all know, at least you should know, that time-dependent evolution can be thought of as a canonical transformation generated by the Hamiltonian. Canonical transformations are very, very important in classical physics because they preserve canonical Poisson brackets, which are analogs of commutation relations: that's what we call canonical variables, those whose Poisson bracket is one. They also preserve the structure of the equations of motion. And if you remember the proof that the trajectory gives you a canonical transformation, it's not based on the structure of the Hamiltonian; it's based on the equations of motion themselves, dx/dt = +∂H/∂p and dp/dt = −∂H/∂x, where H is basically an arbitrary function (I always assume everything is differentiable and so on; I'm not even discussing this). But now instead of H I can take any other function — I will call it A, and I will call it a generator of continuous canonical transformations. It's actually related to the generating function, if you know it, but this is a much easier object. It's an arbitrary function of x and p; I'll mention this. And then if I deform my trajectory — this is not a time evolution now, this is a change of variables — I basically define x and p as functions of some continuous parameter. For example, I can rotate my space, or translate, or dilate, or do something else. As long as I have this function A, I get a family of canonical variables x and p. So, an example: suppose I do translations. It's a very natural example.
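In formulas — with the sign convention that matches the examples below — the flow generated by A(x, p) reads:

```latex
\frac{dx}{d\lambda} = -\frac{\partial A}{\partial p},
\qquad
\frac{dp}{d\lambda} = +\frac{\partial A}{\partial x},
```

which has the same structure as Hamilton's equations with λ playing the role of time; the overall sign of A is purely a convention.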
I have a particle in a box in a train, and for the position of this box I will use λ, the parameter of the generator. If I want to deal with the particle in this train, whether it's moving or not, it's much more convenient to use the coordinate representation with respect to the train. If I'm staying in this room and want to tell you something, I want to put my coordinate system somewhere around this room. But there could be some lab frame where positions are measured with respect to, I don't know, say the Greenwich Meridian, some place in the UK. So then I can say: if λ is the position of my train, I will define my coordinate as x₀ minus λ, where x₀ is some fixed lab coordinate ('naught' here means lab, not initial — I didn't want to use L for various reasons). It's clearly a canonical transformation, a Galilean transformation in fact, so momentum doesn't change, and I ask which function generates this transformation. So I play this game: dx/dλ is clearly −1, and it should be −∂A/∂p; dp/dλ is just zero, and it should be ∂A/∂x. I stare at it and I see that A = p. So momentum generates translations in classical mechanics. Of course we all know this from quantum mechanics, but in classical mechanics the situation is exactly the same. Let me do example two. Suppose I want to do another simple transformation: dilations. For example, I can imagine I'm squeezing the box; I'll show it. Then I can take x₀ and divide by some parameter λ, but in order to preserve Poisson brackets I need to multiply p₀ by λ, otherwise the factors will not cancel. And I can ask: what generates these transformations? Well, dx/dλ will be −x₀/λ², and since x₀/λ is x, I'll get −x/λ. And dp/dλ will be p₀, but p₀ is p/λ.
So again I stare at this and I see that A = xp/λ is my generator, because −∂A/∂p gives −x/λ, which is dx/dλ, and ∂A/∂x gives p/λ, which is dp/dλ. I'm not going through the next example, but I guess you can believe me now that if you do rotations around the z axis, the generator of this canonical transformation will be the angular momentum — again, exactly as in quantum mechanics. Now, why is this needed? So far I'm not talking about chaos or anything; I just want to show where you can use these transformations in practice. There is a standard problem, which many of you maybe saw in Olympiads: a particle moves in a box, the wall of the box moves, and you ask how many collisions there are — a standard picture. Of course you can solve it in some complicated way, find a recurrence relation and so on, but it's actually much easier to solve this problem by going to coordinates where the box doesn't move or shrink. So I basically say that x over λ(t), where λ(t) is now the box size, is my new coordinate. Then I can ask what the equations of motion will be in this frame, and now you see what happens. First of all, my particle moves in the box, so my dx/dt will include dx/dt at fixed λ, but the definition of the coordinate also changes in time, because at each moment of time, by 'coordinate' I mean a different object, x over λ(t). So if I use the chain rule I will also get (dx/dλ) times λ̇. I just want to highlight that this is not a motion — it's the fact that I change the definition of what I mean by x at every time, right?
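The two generators above can be verified symbolically; this is just an illustrative check (using sympy, with the sign convention dx/dλ = −∂A/∂p, dp/dλ = +∂A/∂x used here):

```python
import sympy as sp

x0, p0, lam = sp.symbols('x0 p0 lambda', positive=True)
x, p = sp.symbols('x p')

def check_generator(X, P, A):
    """Check that A(x, p, lam) generates the family X(lam), P(lam)."""
    # evaluate the derivatives of A on the transformed variables
    dA_dp = sp.diff(A, p).subs({x: X, p: P})
    dA_dx = sp.diff(A, x).subs({x: X, p: P})
    ok_x = sp.simplify(sp.diff(X, lam) + dA_dp) == 0   # dX/dlam = -dA/dp
    ok_p = sp.simplify(sp.diff(P, lam) - dA_dx) == 0   # dP/dlam = +dA/dx
    return ok_x and ok_p

# translations x = x0 - lam, p = p0: generated by A = p
assert check_generator(x0 - lam, p0, p)
# dilations x = x0/lam, p = p0*lam: generated by A = x*p/lam
assert check_generator(x0/lam, p0*lam, x*p/lam)
```

The same function immediately rejects a wrong candidate generator, which makes it a handy way to experiment with other transformations.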
But dx/dλ is, as we said, −∂A/∂p, and dx/dt at fixed λ is ∂H/∂p; then you can do the same for p, and you will again get Hamiltonian equations, but now with a moving-frame Hamiltonian, which is the original H minus λ̇A. So you have managed to move the part which comes from the coordinate transformation into a new dynamical term in the Hamiltonian. Now you can solve the problem at fixed box, with this extra term in the Hamiltonian, and usually it's much easier: you can solve the Hamiltonian equations, find how many collisions there are if you want, and so on. So now I want to go from general canonical transformations to adiabatic transformations. What are adiabatic transformations? These are transformations which are trajectory preserving. How can I motivate this? Let me go again to my time-averaged distribution, from where I started. Basically, now my Hamiltonian is fixed, I average over time, I get some stationary distribution, and generally, as we discussed, it will be a function of some conserved quantities — in one dimension only the energy, so it's microcanonical, as we discussed, but maybe you have more. But then you see: if you had a special perturbation which commutes with the Hamiltonian, then generally this stationary distribution will not change (again, there is an issue with degeneracies). If you want, this is the analog of the statement that eigenstates will not change. So you can keep in mind that stationary states are like time-averaged probability distributions, and you know, in quantum mechanics at least, very well that if you add a perturbation which commutes with the Hamiltonian and doesn't lift extra degeneracies and so on — I assume this is not the case — then your eigenstates will not change. Basically it comes from the fact that if you add this diagonal, commuting perturbation, it still commutes with the stationary ρ. And mathematically it's just: you say that if you have a stationary probability distribution, which means it has vanishing
Poisson bracket with H, it will also have vanishing Poisson bracket with H plus V. So these are special perturbations, but now I am dealing with a generic perturbation: I modify my Hamiltonian, I change the parameter λ, which means my Hamiltonian goes to H plus (∂H/∂λ)δλ — an infinitesimal change — and this generally does not commute with the Hamiltonian; that's the reason why we get all the dynamics and so on. So my trajectories do change. But then I can ask the question: can I find a canonical transformation which undoes this change? Basically I ask: if I shift my x and p with some generator A, can I obtain a perturbation which commutes with H, one which doesn't change trajectories? Pictorially it looks like this — it's more or less clear — and then I'll show mathematically what it means. Translations are kind of trivial: if I move the box, I need to move my particle. So let me consider dilations, which are also intuitive but a bit less trivial. Suppose I have some potential and this type of phase-space trajectory which I mentioned, and my stationary distribution is precisely this blue line. Now suppose I squeeze my potential: of course my trajectory will change — it will be bigger in p and smaller in x. So it's clear what I need to do: my adiabatic transformation should rescale x back up and rescale p back down. But now, how do we say mathematically that I rescale by the right amount — and which amount? We can use the exact same logic as before: I want to create a perturbation V which commutes with the Hamiltonian, but this perturbation now consists of two parts: I change my parameter λ, i.e. the potential, and I also change back my coordinates. There is a sign issue, but if you go through this example you can see that you want to undo the transformation; that is why there is a minus sign — anyway, this is a convention, plus or minus. So basically I want to say that my new Hamiltonian
— that is, H at λ plus δλ, in the new coordinates — commutes with the old Hamiltonian in the old coordinates. That's what I mathematically want. Once we realize that, it's very easy to formalize. I'll just say that the full derivative of H with respect to λ is the partial derivative plus a contribution due to the change of variables, and because dx/dλ = −∂A/∂p, what I find is that the total change in the Hamiltonian should be the partial derivative of H with respect to λ — that's your physical derivative — minus the part which is due to changing coordinates. This might be a little too fast, but if you do it you will see. Okay, so here is what I'm telling you: adiabatic transformations in classical mechanics are such that this object — the derivative of the Hamiltonian minus its Poisson bracket with A — has a vanishing Poisson bracket with, i.e. commutes with, the Hamiltonian. And actually, just from this very fact we already see how this is related to integrability — I didn't say the word 'integrable' in this part of the talk at all — because if you think about what G_λ is: well, it is a conservation law. So basically I'm telling you already that if I can undo my adiabatic transformation, I get a conservation law, and this immediately hints that adiabatic transformations are related to chaos. So again: if A exists — I didn't tell you that A exists, but if it does, in the sense that it's a local, well-defined operator and so on — then I can form a conservation law conjugate to λ. In some sense this is an extension of Noether's theorem, because symmetries are the special transformations where ∂_λH = 0: if I have rotational symmetry, I rotate and my Hamiltonian doesn't change in the new frame. So I'm just saying it goes a little beyond Noether's theorem: all adiabatic transformations, not just symmetry transformations, come with conservation laws.
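Condensing the statement above into one line: with A_λ the generator that undoes the deformation, define

```latex
G_\lambda \;\equiv\; \partial_\lambda H \;-\; \{A_\lambda,\, H\},
\qquad
\{G_\lambda,\, H\} = 0,
```

so whenever such an A_λ exists, G_λ is the conserved quantity conjugate to λ; for a symmetry, where the Hamiltonian does not change at all under the transformation, this reduces to Noether's theorem, with A_λ itself conserved.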
There is an interesting corollary of this which I'm not going to discuss — it could be a separate course — but which I think is very interesting: if we have this generator of adiabatic transformations, basically if this equation can be solved, then I can do what's known as counterdiabatic driving, which was introduced actually not so long ago by Demirplak and Rice in Chicago, and then independently by Berry. The idea is very simple. It's like — I'm not going to experiment with this — suppose I have a glass of water and I want to carry it across the room. I can do it fast, and then I splash the water; these are the non-adiabatic effects I'm talking about. But if I tilt it, I will not splash it, or splash less. It's the same idea here: if I apply as a drive the Hamiltonian H plus λ̇A, then in the moving frame my Hamiltonian will be just H, and it preserves trajectories. It means that nothing will happen to my stationary trajectories, to the stationary distribution. And then of course you can do infinitely fast adiabatic processes; you can reach Carnot efficiency in engines; you can do many, many wonderful things; you can basically beat dissipation, and so on and so forth. Of course there is a trick, and there is one caveat: in generic chaotic systems this equation actually does not have a solution. The first person who realized this, to my knowledge, was again Chris Jarzynski, a long time ago, back in '95. It's interesting — he was not really interested even in these questions; he was asking whether you can define a Berry phase in chaotic classical systems, and I'll come back to that a little. So chaos is important. So now let me go to quantum systems, where it's much more intuitively clear, I would say, what we mean by adiabatic transformations. First let me again start from the same setup as in classical systems: suppose I want to solve the time-dependent Schrödinger equation where the Hamiltonian depends on some parameter λ(t) which changes in time.
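Here is a minimal numerical sketch of this counterdiabatic idea for the simplest case, translations (A = p); the harmonic trap, the ramp profile, and all parameters are my own toy choices, not from the lecture. Dragging the trap fast excites the particle, but adding the λ̇A counter-term to the Hamiltonian suppresses the excitation:

```python
import numpy as np

# Toy counterdiabatic driving for translations (A = p): drag a harmonic
# trap across distance 1 in time 1. All parameters are arbitrary choices.
m, k = 1.0, 1.0

def lam(t):                      # trap position: smooth ramp 0 -> 1
    return 0.5 * (1.0 - np.cos(np.pi * min(t, 1.0)))

def lam_dot(t):                  # ramp velocity
    return 0.5 * np.pi * np.sin(np.pi * t) if t < 1.0 else 0.0

def final_energy(cd, dt=1e-4, T=2.0):
    """Moving-frame energy after the drag, with/without the lam_dot*A term."""
    x, p = 0.0, 0.0              # start at rest at the trap minimum
    for i in range(int(T / dt)):
        t = i * dt
        # drive: H = p^2/2m + V(x - lam)  (+ lam_dot * p if counterdiabatic)
        x += dt * (p / m + (lam_dot(t) if cd else 0.0))
        p += -dt * k * (x - lam(t))
    return 0.5 * p**2 / m + 0.5 * k * (x - lam(T))**2

print(final_energy(cd=False))    # fast drag excites the oscillator
print(final_energy(cd=True))     # with the counter-term: essentially zero
```

With the λ̇A term the particle simply rides along with the trap: in the co-moving frame the Hamiltonian is the unperturbed one, so no excitation is produced no matter how fast the ramp is.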
So think about the same setup — I have a piston which moves and so on — but now my system is quantum. Then, instead of defining time-dependent transformations, I can define a time-dependent basis: in this case a basis which depends on some parameter λ, which can change in time. Usually when we do exercises in quantum mechanics we expand our wave function in a fixed basis, but nothing prevents us from expanding in a moving basis. Think about it: if I'm moving a box, it's much more natural to expand my wave function in eigenstates which move together with the box. Now if I do this, the time dependence comes from two sources: one is that my coefficients still depend on time — my wave function in the train depends on time — but my basis states also depend on time. And here I immediately have a freedom in defining what these eigenstates are; I will focus on eigenstates of the instantaneous Hamiltonian — this is what is related to adiabatic transformations. Now if I plug this into the Schrödinger equation, I again get two contributions: one comes from the derivative of the coefficients, and another comes from the derivative of the eigenstates, where I can use the chain rule, because my φₙ's depend on the parameter λ and λ depends on time. And now what I'm going to do is exactly the same trick as in classical mechanics: I will move this term to the right and interpret it as a correction to the Hamiltonian. (Of course, I pressed the wrong button.) The derivative which acts only on the coefficients I will call the moving-frame derivative, with exactly the same meaning as in classical systems: this derivative ignores the fact that the basis depends on time. Then I can say that my Schrödinger equation in the moving frame — where I expand in the moving basis but pretend that the basis doesn't depend on time, so I only look at the coefficients — still has the form of the Schrödinger equation, but with a modified Hamiltonian, and the
modification is exactly the same as in classical physics: the −λ̇ times A, where A here I define as a matrix — the matrix of the derivative d/dλ. I can basically say my A will be the derivative operator, iℏ d/dλ. So now I can sandwich it between eigenstates, and I say: my Hermitian operator A_λ is defined through matrix elements of derivatives acting on eigenstates. This is the definition in the eigenstate basis, but if I know it in one basis, I know it in any basis. So I can define this adiabatic gauge potential — this is the definition — as a derivative operator, in the sense I just described. You can immediately see it's a Hermitian operator. It follows from the fact that the derivative of the overlap ⟨φ_m|φ_n⟩ is always zero, for m not equal to n and actually even for m equal to n, because the overlap is a Kronecker delta. Then you see I get a left derivative plus a right derivative, so ⟨∂_λφ_m|φ_n⟩ = −⟨φ_m|∂_λφ_n⟩ (sorry, the indices should be in the same order — I just switched between this and this). So the derivative by itself is anti-Hermitian, but if I multiply by i it becomes Hermitian. So this is a Hermitian operator. Then I can actually define it in a basis-independent way: we know that going to a new basis is always some unitary rotation — it's basically the matrix which diagonalizes your Hamiltonian. And if I act with the derivative operator on this, I only act on the unitary, and there is a simple trick: I take dU/dλ and multiply by U†U, which gives me back an eigenstate, and then I find that this adiabatic gauge potential is just given by the derivative of the unitary times U†. Again, it's a one-line exercise to check that it's a Hermitian operator. And let me now show through examples that this is exactly the same object as we defined in classical systems.
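Collecting these definitions in one place — the adiabatic gauge potential in the instantaneous eigenbasis and in basis-independent form:

```latex
\mathcal{A}_\lambda = i\hbar\,\partial_\lambda,
\qquad
\langle \phi_m(\lambda) | \mathcal{A}_\lambda | \phi_n(\lambda) \rangle
  = i\hbar\,\langle \phi_m | \partial_\lambda \phi_n \rangle,
\qquad
\mathcal{A}_\lambda = i\hbar\,(\partial_\lambda U)\,U^\dagger,
```

where U(λ) is the unitary that diagonalizes H(λ). Hermiticity follows from ∂_λ⟨φ_m|φ_n⟩ = ∂_λδ_{mn} = 0, so ⟨∂_λφ_m|φ_n⟩ = −⟨φ_m|∂_λφ_n⟩.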
Instead of showing it in full generality, let me go to the examples we just analyzed. Consider translations: my eigenstates φₙ(x, λ) are just φₙ(x − λ). If I have a particle in a box or some potential, and λ is the position of the box, then my eigenstates depend only on x − λ. Now if I differentiate this with respect to λ, it's the same as minus the derivative with respect to x, and −iℏ times the derivative with respect to x is the momentum operator. So I see that my gauge potential is the momentum, which of course, as we know, generates translations. Now dilations: my eigenstates will be φ(x/λ), but I need to divide by the square root of λ, because the integral of φ² should be one, λ-independent. Now if I differentiate this with respect to λ, I get one contribution from the prefactor — the iℏ comes from the definition — giving −1/(2λ); the one-half comes from the square root. And I get another contribution from the x/λ: differentiating it with respect to λ gives −x/λ², which together with the derivative d/dx translates into −(x/λ)∂_x. Anyway, if you stare at this, you will see that you get nothing but (xp + px)/(2λ), and this is of course the symmetrized, Hermitian form of the dilation operator. Remember, classically it was xp/λ — it's the exact same thing. So you see that this is basically the same object. Well, I defined it through derivatives of eigenstates, but this is of course not a very practical definition, because if I know the eigenstates, I don't even need the generator — I already have them. But we can make this definition more practical by using first-order non-degenerate perturbation theory. And again, you don't really need it; it's one way to derive results which are general.
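The dilation result can be checked symbolically in one line, for a generic (unspecified) wave function f:

```python
import sympy as sp

x, lam, hbar = sp.symbols('x lambda hbar', positive=True)
f = sp.Function('f')

# dilated eigenstate: phi(x, lambda) = f(x/lambda) / sqrt(lambda)
phi = f(x/lam) / sp.sqrt(lam)

# left-hand side: A_lambda phi = i*hbar * d(phi)/d(lambda)
lhs = sp.I * hbar * sp.diff(phi, lam)

# right-hand side: symmetrized dilation operator (x p + p x)/(2 lambda)
# acting on phi, with p = -i*hbar*d/dx
p_phi = -sp.I * hbar * sp.diff(phi, x)
px_phi = -sp.I * hbar * sp.diff(x * phi, x)
rhs = (x * p_phi + px_phi) / (2 * lam)

assert sp.simplify(lhs - rhs) == 0  # the two expressions agree identically
```

The check goes through for an arbitrary function f, which is exactly the statement that iℏ∂_λ acting on the dilated family equals the symmetrized dilation operator.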
You know what the derivative of eigenstates with respect to λ is: it's the infinitesimal change of the eigenstate if I change the parameter λ to λ + δλ, and first-order perturbation theory tells me what happens. What I need to do is sum over all virtual states m ≠ n, with the matrix element of the perturbation, which is ∂_λH, divided by the energy denominator. Now I multiply by iℏ — oh sorry, this should be ∂_λH, I apologize — and I see that the matrix elements of A are given by this. Again, this looks a little abstract — yeah, I think I skipped this — but you can now recover the classical equation by a simple trick: I multiply both sides of this equation by Eₙ − Eₘ, and then, if you stare carefully, (Eₙ − Eₘ) times A_λ is nothing but the commutator of A_λ with H, because H hitting |n⟩ gives me Eₙ and H hitting ⟨m| gives me Eₘ. So what I see is that for all off-diagonal matrix elements — I'm using non-degenerate perturbation theory, so here m ≠ n — the object ∂_λH + (i/ℏ)[A_λ, H] should have no off-diagonal matrix elements. The diagonal matrix elements are actually arbitrary — and in fact the expectation value of this adiabatic gauge potential is the Berry connection; that's what you usually know for the ground state, and it's what gives the Berry phase, the Aharonov–Bohm effect and so on, many interesting features — but they are arbitrary. Now, how do you say mathematically that a matrix can have arbitrary diagonal elements and no off-diagonal elements? If you think about it, this is a matrix which commutes with the Hamiltonian. So once we connect the dots, we actually see that this adiabatic gauge potential satisfies exactly the same equation as I wrote classically, but instead of the Poisson bracket you have (i/ℏ) times the commutator — and this is exactly as expected. I'm coming to that — yes, yes, it's a very good point.
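To make the perturbation-theory formula concrete, here is a small numerical check on a two-level toy Hamiltonian of my own choosing, H(λ) = λσ_z + σ_x: we build A_λ from the matrix-element formula and verify that G_λ = ∂_λH + (i/ℏ)[A_λ, H] indeed commutes with H.

```python
import numpy as np

hbar = 1.0

def H(lmb):
    # toy two-level Hamiltonian (arbitrary choice): H = lambda*sigma_z + sigma_x
    return np.array([[lmb, 1.0], [1.0, -lmb]])

def agp(lmb, dlmb=1e-6):
    """A_lambda via first-order perturbation theory:
       <m|A|n> = i*hbar*<m|dH/dlambda|n> / (E_n - E_m) for m != n."""
    E, V = np.linalg.eigh(H(lmb))
    dH = (H(lmb + dlmb) - H(lmb - dlmb)) / (2 * dlmb)   # numerical dH/dlambda
    dH_eig = V.conj().T @ dH @ V                         # rotate to eigenbasis
    A_eig = np.zeros_like(dH_eig, dtype=complex)
    for m_ in range(len(E)):
        for n_ in range(len(E)):
            if m_ != n_:
                A_eig[m_, n_] = 1j * hbar * dH_eig[m_, n_] / (E[n_] - E[m_])
    return V @ A_eig @ V.conj().T, dH                    # back to original basis

lmb = 0.3
A, dH = agp(lmb)
G = dH + (1j / hbar) * (A @ H(lmb) - H(lmb) @ A)
comm = G @ H(lmb) - H(lmb) @ comm_check if False else G @ H(lmb) - H(lmb) @ G
print(np.max(np.abs(comm)))      # vanishes up to finite-difference accuracy
```

The diagonal of A was set to zero here, which is the usual gauge choice; the off-diagonal elements of G cancel by construction, so G is diagonal in the eigenbasis and commutes with H.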
Quantum mechanically it always exists — but yeah, give me five minutes and I'll get there. Let me just say in words that there is an alternative derivation of the same result, where you don't need to introduce eigenstates or perturbation theory; it's also very simple. What you want to say is: if you have some Hamiltonian H(λ), you want to find a unitary which diagonalizes it. But what does that mean? It means that for any λ, U†HU is a diagonal matrix, so for any λ this matrix commutes with itself. Now I just make a differential statement out of this: if I differentiate U†HU with respect to λ, the result must still commute with U†HU. If you do this math, you arrive at the same equation. Yes, I'm coming to this. So, as I promised, from time to time I will give short mini-summaries; if you skipped the details, this is what you really need to know. In classical systems we can define this adiabatic gauge potential as the generator of canonical transformations which preserve trajectories, and it is associated with a conservation law: this object commutes with H. In quantum systems I introduced a similar object by inspection, which is basically the derivative of the unitary operator which diagonalizes the Hamiltonian, or the derivative of eigenstates, and it satisfies basically the same equation: I have an associated conservation law which commutes with the Hamiltonian, and I keep ℏ explicitly just to highlight that the classical limit is well defined — it's the same object. And I think now I'm getting to the question: how is this related to chaos? In both quantum and classical systems, the existence of local adiabatic transformations is associated with the existence of a local conservation law, so somehow it's related, as I mentioned, to chaos or integrability. And classically we did not prove that the AGP exists — I just wrote the equation, but I
didn't give you a solution, right? How do we know this equation has one? Quantum mechanically I wrote you a solution: there is always a unitary, so there is always a solution. So I'm kind of contradicting myself, right? Because we can always think about classical mechanics as some limit of quantum mechanics, ℏ going to zero. But the question is: is this solution meaningful, in the sense that it is well defined in the limit ℏ → 0? And this is what I'm going to address. Of course you can imagine that it's meaningful if I have integrable systems, and not really meaningful if I have chaotic systems. Okay, I'll try to finish on time. So I'll consider a couple of simple examples, and they already show how this machinery is related to very interesting physics like integrability. I'll sketch it, but it's actually a very fun calculation; I urge you to do it. Let me consider a very simple, integrable example: I have a particle in some potential V(x, λ), and I can ask when I can write a simple adiabatic gauge potential — when I can solve this equation. I didn't specify what λ is. Essentially I'll do the quantum derivation, but it's the same classically. I know that there should be some conservation law, as I formulated. Now, what is this conservation law? Because it's a particle in one dimension, it can only be built from powers of H — there is nothing else conserved, we just discussed it. So I can say that my G should be a₀H + b₀H² + ⋯. Now if you stare a bit more: ∂_λV is real for my Hamiltonian, so the other term must be real as well, which means that A should be imaginary — but if A is imaginary, it should be odd in momentum, because p is. So I can look for A as an odd polynomial in momentum. Let me try to do this. The first ansatz would be A = p times some function f(x) — symmetrized, (pf + fp)/2, in the quantum case. This is the
first-order ansatz. So I'll finish this example and then we'll go for lunch. You can probably follow, and if it's too fast you can do it yourself offline — it's a very simple calculation. I have this ansatz and I want to compute the commutator; it's actually usually easier to compute the Poisson bracket. ∂_λH is just ∂_λV, and then what will I get from the Poisson bracket? One term is (∂A/∂x)(∂H/∂p): ∂A/∂x is p f′, and ∂H/∂p is p/m. The opposite term is (∂A/∂p)(∂H/∂x): ∂A/∂p is f, and ∂H/∂x is ∂V/∂x. So G has this form, and it's quadratic in p, so I can use only the first power of H, because higher powers give me p⁴. So I can have only one coefficient, and then I need to be consistent, and immediately I see that f′, i.e. ∂ₓf, must be constant, because that's the only way the p² terms can match. So this works only when f is a linear function of x, something like αx + β. Then I have another equation to solve, and this part I skip; you can just check that the solution is of the given form, where γ and ψ are arbitrary functions. But to make a long story short: if f is linear in x, then remember — β alone gives just translations, because if f is constant my A is p; and α gives dilations, because then A is xp. So it turns out that if I want my adiabatic gauge potential to be linear in p, the only things I can do are translations and dilations; there are no other adiabatic transformations. I gave you two simple examples, and you could ask why I don't add one more — it doesn't exist; this family is it, in one dimension. And then V has to satisfy extra constraints.
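Spelling out the calculation just sketched, in the classical case with the ansatz A = p f(x, λ) and the sign conventions used above:

```latex
\{A, H\} = \frac{\partial A}{\partial x}\frac{\partial H}{\partial p}
         - \frac{\partial A}{\partial p}\frac{\partial H}{\partial x}
         = \frac{p^2}{m}\,f'(x) - f(x)\,\partial_x V,
\qquad
G_\lambda = \partial_\lambda V - \{A, H\}
          = \partial_\lambda V + f\,\partial_x V - \frac{p^2}{m}\,f'(x).
```

Demanding that G_λ be a function of H = p²/2m + V alone, the p² term of G_λ can only match that of a term linear in H, which forces f′(x) = const, i.e. f = αx + β: dilations plus translations.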
It should be a function of x minus ψ, which corresponds to translations — this you can figure out — together with the requirement that V is scale invariant: if I dilate it, I get basically the same Hamiltonian. So if I have x² plus x⁴, there has to be a certain relation between the coefficients to allow a dilation. Okay. And this was actually found — see, I'm already coming to relatively recent years — by Deffner, Jarzynski and del Campo; they didn't use exactly the same method. I'll take two more minutes and just flash the result of what happens if you go to the next-order polynomial. So how about we now make our A a little more complicated, with a p³ term? Then — I'm skipping lots of details, but it's really straightforward; you write exactly the same consistency equations — amazingly, you recover the KdV equation, the Korteweg–de Vries equation, which describes solitons in water. Actually, this result is also known in gravity theory, but here it just comes from the very simple requirement that your generator of adiabatic transformations is local: your potential should satisfy this sort of equation. The third-derivative term is actually quantum, and it stabilizes solitons. So I'll finish this part with a sort of interesting statement: if your potential has the shape of a soliton, then you can actually implement dissipationless driving arbitrarily quickly, and I think this is a little non-intuitive. These are pictures of solitons — very reasonable potentials: this is a single soliton of KdV, this is a double soliton. And what I'm saying is, I would say, already a very unintuitive statement. Imagine we have non-interacting particles, a gas of particles, and I want to move this potential very, very quickly — arbitrarily quickly. Then, if I add the counter-term to the Hamiltonian, which is exactly my adiabatic gauge potential (it's negative here, but it comes with a plus sign), then
I will suppress any dissipative effect: I'll just move my potential through the particles without creating a ripple. It's sort of like this picture from the movie "Being There" — there are two amazing things: one is that this guy walks on the water, but the other is that there is not a single ripple, so he walks on the water adiabatically. And this is what you can do. Moreover, it's even more paradoxical for the two-soliton solutions: for those who know, the bigger soliton moves faster than the slower one, so you can create this potential — it's like two tweezers — and exchange them. And again, for this setup, if you apply this counter-term — which is a bit weird, cubic in momentum, not something we usually see — then you can suppress dissipation completely. Okay, I think it's probably a good point to stop. Tomorrow I will try to connect this to chaos more precisely. Thank you for your attention. Are there any questions? Looks like everyone is hungry. Up there is a cafeteria, and we meet here at two. Thank you.