[Pre-talk remarks in Slovenian, off topic.] This is more at the level of a lecture than a research talk, and so is the topic. If you work in statistical mechanics, you can sleep for 45 minutes and wake up for the last three minutes or so; we are going to play with only one slide at the end. OK, very good.

OK, so, the idea is the following. First of all I want to explain to you what the aim of statistical mechanics is, and what results of Hamiltonian dynamics we need in order to justify the use of statistical mechanics. Statistical mechanics is a discipline that was born to try to connect thermodynamics to mechanics. Thermodynamics had already been established, with its own set of axioms, by the middle of the 1800s, and it seemed to stand as a discipline by itself. It was very useful for building engines, the steam engine and other things. But it had a set of axioms, laws, that seemed to fall from the sky. Why the first law of thermodynamics? Why the second law? So people started thinking that somehow this must be connected to classical mechanics in some way, and that it should be possible to use the laws of classical mechanics (we are talking about the middle of the 1800s) to derive these laws of thermodynamics.

So the idea is the following. You take a number of particles which is very, very large, say something like 10^23. Classical mechanics tells you that for a Hamiltonian, energy-conserving, isolated system you have to take all the coordinates in phase space, the q_i and p_i. Since I am mostly talking to students: if you have questions, or you don't recognize the notation, just raise your hand; better that than me speaking forever. At this point I think more or less everybody recognizes this. The index i goes from 1 to 3N if you are in three dimensions, 4N in four dimensions, and so on. Then we have Hamilton's equations of motion; more generally, if we introduce Poisson brackets, any function of the phase-space variables evolves through its Poisson bracket with the Hamiltonian. Who doesn't know what a Poisson bracket is? Who knows? It's not compulsory to answer. There are some people who are not sure whether they know, so I'll assume more or less everybody does.

So you have the evolution in this very high-dimensional space, and the evolution is very complicated. I should be careful here because, as you know, in phase space a trajectory cannot intersect itself: there is only one way to go forward and one way to go backward. So in order to describe the motion of the system we need all the 6N values of the coordinates, we need the function H, and we need to evolve the system according to the equations of motion.

By the way, I want to stress a philosophical point here. The fact that the laws of Newtonian mechanics actually apply to very small particles and to very many particles is a hypothesis. We need to test it, and there is nothing strange a priori in thinking that there is a critical number of particles beyond which these laws stop working. If I put it like this, you think I'm crazy, because we've been taught this in school.
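For reference, in standard notation (the speaker does not write this out explicitly), the phase-space evolution being described is Hamilton's equations together with the Poisson-bracket form of the dynamics:

\[
\dot q_i = \frac{\partial H}{\partial p_i}, \qquad \dot p_i = -\frac{\partial H}{\partial q_i}, \qquad i = 1, \dots, 3N,
\]

and, for any function \(f(q, p)\) of the phase-space variables,

\[
\frac{df}{dt} = \{f, H\} = \sum_i \left( \frac{\partial f}{\partial q_i}\frac{\partial H}{\partial p_i} - \frac{\partial f}{\partial p_i}\frac{\partial H}{\partial q_i} \right).
\]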
You can use F = ma to describe the motion of a ball, or the motion of a molecule, and describing the motion of a billion molecules should work just the same. But there are fields of physics in which this is still not clear, and one we have certainly already encountered is the theory of quantum measurement. In quantum mechanics a measurement is not a unitary operation; we say it's a projection. Some people have developed the idea that quantum mechanics, because of some nonlinear effects, behaves less and less like the Schrödinger equation the more particles you put in, and people have explored, even with experiments, whether this hypothesis is falsifiable. So it's not such a crazy idea. You should keep in mind that it's a hypothesis: it works fine, but maybe one day we will find that it's not really true.

All right. So we have 6N coordinates, with N around 10^23. In principle we would need to measure all of them and evolve all of them, but we really don't care about all of them. The only things we care about are macroscopic quantities: the energy, the pressure; if it's a ferromagnetic material, maybe the magnetization; if it's a conducting material, maybe the resistivity, the resistance of a piece of metal, how much current flows through it. A few observables, five, six, ten, depending on the object under scrutiny, but certainly not 10^23.

So the question is: for physically reasonable Hamiltonians, Hamiltonians that describe the things we actually observe, can we have a simplified description when N becomes very large? When N = 1 we have a very simple description: a single particle, use whatever method you want to integrate the equations of motion. When N = 100 it becomes very complicated. When N is a billion, we are claiming that things simplify again: a simplification in the N going to infinity limit.

So the first question is: is this simplification true? Is it true that things simplify in the large-N limit? The answer is yes. The description of the air in this room, or of a piece of metal, at the level we are really interested in (energy, pressure, magnetization, current and so on) is usually a simple one, so thermodynamics works very well.

[Question from the audience about integrability.] No, it's exactly the opposite of the integrability of the system; the system is highly non-integrable, it sits at the opposite end of the spectrum. If you have an integrable system, you are back at the point where you really need all 6N variables to describe the state of the system: in the new (action-angle) variables all the conserved quantities stay conserved and you need all of them to describe the state. So it's exactly at the other end of the spectrum.

So, for example, there is an equation of state of a gas, some relation f(P, V, T, N) = 0, possibly involving other properties of the gas: it could be a diamagnetic or paramagnetic gas and have some magnetization. For the ideal gas this is PV = N k_B T.
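As a minimal numerical illustration of why such a few-parameter description can work at all (this is not from the talk, just a standard statistics sketch in Python): an observable built from N roughly independent single-particle contributions has relative fluctuations that shrink like 1/sqrt(N), so at N around 10^23 it is effectively a sharp number.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model a macroscopic observable (say, the total energy) as the sum of N
# roughly independent single-particle contributions, repeat the "measurement"
# 100 times, and look at the relative size of the fluctuations.
for N in (10**2, 10**4, 10**6):
    samples = np.array([rng.exponential(scale=1.0, size=N).sum() for _ in range(100)])
    rel_fluct = samples.std() / samples.mean()
    print(f"N = {N:>8d}   relative fluctuation ~ {rel_fluct:.1e}   (1/sqrt(N) ~ {1/np.sqrt(N):.1e})")
```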
Another thing that works extremely well is the first law of thermodynamics, which says that the change of energy of the system in a transformation is the heat exchanged minus the work done. This works extremely well for everything macroscopic. Another thing that works extremely well is that if I take the heat exchanged and divide it by the temperature (along a reversible transformation), I get a state function, which we call entropy. All these things seem to work extremely well. They work so well that they have their own logical standing, and they were derived from observation without even thinking about whether the system was made of atoms, or whether there were Hamiltonian equations underneath.

Now, another question. So, is it true? Yes. Another question: can you prove it? The answer, unfortunately, is no, or at least not in general. There are some cases in which you can actually start from the Hamiltonian equations and derive what we will derive below, but for most systems we cannot. So, for example, take the Hamiltonian which is the sum of the kinetic term plus interaction terms, where we give some particular form to the potential: something that decays with distance, or a potential which is partly attractive and partly repulsive at close range, which is a good approximation to the interaction potential between real molecules, called Lennard-Jones. I give you this and I ask: can you prove it? Can you find f(P, V, T, N)? No, in general we do not know how to do that.

But if you think about it, the first thing we need to prove is that the system, if left to itself for a sufficiently long interval of time, reaches equilibrium. This is the zeroth law of thermodynamics: you put two systems in contact, you allow them to exchange energy, and they go to an equilibrium state. So the first thing we need to prove is that such an equilibrium state is reached. And in order to prove that something is reached, that it exists, the first thing we need to do is to define it. So, what is equilibrium?

The idea is the following. How do we define pressure? Consider a piston, and put some kind of dynamometer on it. The particles come and hit the wall, and every time a particle hits the wall there is a momentum transfer, which is 2m v_z. (Only the z component, yes: it is only the z component which changes, because the particle bounces back along the z axis.) So the total momentum transfer Δp_z in a time Δt is the sum of these contributions over all the collisions in that time. If I plot this as the interval Δt grows, there is a trend and there are fluctuations. The average force, which is the pressure times the surface of the piston, is Δp_z/Δt, and it has a constant part plus some fluctuations. If my averaging time Δt is sufficiently large, these fluctuations are negligible.
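In symbols, the piston argument just described is (standard kinetic-theory bookkeeping, with the notation filled in):

\[
\Delta p_z(\Delta t) = \sum_{\text{collisions in } \Delta t} 2 m v_z, \qquad
F = \frac{\Delta p_z}{\Delta t} = \langle F \rangle + \text{fluctuations}, \qquad
P = \frac{F}{A},
\]

so for a sufficiently long averaging time \(\Delta t\) the fluctuating part becomes negligible and the pressure \(P\) is well defined.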
They are unmeasurable, and I therefore get my constant pressure. Another thing one can measure, for example: take a small volume ΔV and count the number of particles inside it. These particles move, some of them get out of the volume, some get in from the environment, so the number of particles in the volume as a function of time is a random integer which fluctuates. There is an average to this, and if I divide it by the volume I get the density: some constant plus fluctuations. Now, if the fluctuations are small with respect to the density, then this is a good approximation. In order to have this, I have to take a ΔV which is not too small, and also an averaging time which is not too small. So equilibrium must be defined through some kind of time average: there must be some coarse graining, either over time or over phase space. This needs to be defined.

By the way, just to be precise: there is an alternative way of defining equilibrium states, based on what is called ensemble theory. You don't take a single system; you take many, many copies of the system, different from each other, drawn from some distribution, let them evolve, and ask about the averages. Mathematically, in the end you get the same equations, and I think mathematicians and mathematical physicists sometimes like this approach better because it's cleaner. But in the last years we have reached technological control over mesoscopic systems and can isolate them very cleanly from the environment, and so there has been a bit of a paradigm shift: instead of thinking of many, many copies of the system, I find it more convenient to think about a single system undergoing isolated quantum dynamics. I find it more logically appealing, because you don't really have many, many copies of the system, and in the end the results should be the same, because the physics we observe is the same.

OK. So from these examples we see that we have to do some sort of time average. And here my friends in the math department will forgive me (correct me if I'm wrong, Stefanos) if I quote Birkhoff's theorem. The important thing is that the evolution map of Hamiltonian dynamics is measure preserving. What does measure preserving mean? You have to define a measure space: a space with a sigma-algebra and a measure. In the case of Hamiltonian evolution the measure is simply dp dq, the Liouville measure. Good. So if you have such a measure-preserving transformation, take a summable function and take its values along an orbit starting from (q(0), p(0)); then the limit of the time average of this function exists almost everywhere. So it makes sense to consider this limit, and, from the small examples I gave you before, this limit will essentially describe some sort of equilibrium situation.

Good. This theorem is by Birkhoff, from the late 1920s or the beginning of the 1930s. At about the same time, I think actually slightly before, because Birkhoff's paper quotes von Neumann as having the result, there is an analogous theorem for the unitary evolution of wave functions.
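Stated a little more explicitly (a standard formulation, not the speaker's exact words): if \(T^t\) is a flow on phase space preserving the Liouville measure \(d\mu = \prod_i dq_i\, dp_i\), then for any integrable function \(f\) the time average

\[
\bar f(q_0, p_0) = \lim_{T \to \infty} \frac{1}{T} \int_0^T f\bigl(q(t), p(t)\bigr)\, dt
\]

exists for \(\mu\)-almost every initial condition \((q_0, p_0)\). This is Birkhoff's ergodic theorem.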
If you take the time average of the expectation value of an operator, where you take your wave function and evolve it unitarily, then for almost all the wave functions that you put at the beginning of the evolution you get a limit which is well defined. Good.

So this limit exists, and therefore it defines some sort of measure. The study of these limiting, invariant measures is a big and wonderful field of mathematics: dynamical systems. OK, so one important class of maps is the following; I want to say this correctly. Take the evolution for a time one, for example; this defines a map. This map has some invariant sets, sets which, if I apply the map to them, give back the same set. If the only invariant sets of this map are the entire space and the null set, then the map is said to be ergodic. Now, forget for a second my rough definition here. The idea is the following: the system evolves in phase space and, if let go for a sufficiently long time, covers essentially all the available points consistent with the conservation of energy. So the motion is exactly the opposite of that of an integrable system: integrable systems are constrained to lie on tori, whereas this thing goes everywhere. Good. (OK, I'm extremely slow.)

So in particular, the distribution induced by this time average, the equilibrium distribution of my system, has to be a function of the Hamiltonian alone, and in a physicist's notation this is the microcanonical distribution. Now, the difficult step is: given the map, or the Hamiltonian, prove that the distribution you actually get is this ergodic distribution.

Now I need to make a historical remark. Just by chance I was reading a paper by Gallavotti that came out a couple of weeks ago, commenting on the old papers by Boltzmann, Maxwell and other people, and I found something very interesting. I had always been taught that the one who discretized phase space, introducing a constant of nature with the dimensions of angular momentum, Planck's constant, was Gibbs; but indeed it was Boltzmann. Of course he didn't have quantum mechanics in mind, and he didn't talk about constants of nature like Planck's constant, but he described the motion as follows. Let's imagine we discretize phase space into little cells, which is what we do when we run numerical simulations: every cell is labelled by an integer between one and two to the number of bits of memory in my computer. Then the Hamiltonian evolution is a permutation of these cells. Why is it a permutation?
Because there is only one way to go from one point to the next, and no two points can go to the same point. So, for example, this is a possible permutation. All the possible orbits in phase space are permutations, elements of the permutation group of a huge set made of all these little cells of my phase space. Now, with a finite number of cells, whatever permutation you have is going to be periodic: at a certain point you come back to the original configuration. And the definition that Boltzmann gave of ergodicity is a permutation which has a single cycle, so there is no subset of cells which is permuted only among itself. This was Boltzmann's definition, and he went on and derived everything using this picture in his mind in his calculations, so he did not have to deal with differentiable flows. Good.

So ergodic theory is a whole branch of mathematics; its implications are important, but not too binding for physics. I can say this for two reasons. First, in physics we are typically interested in the limit in which N goes to infinity, and this limit is very difficult to control in the theory of dynamical systems. Second, except for a few very important examples like Sinai's billiard, the Hamiltonians for which we can prove things are usually too simplified to describe realistic situations. The important thing I want to keep in mind is the thermodynamic limit; that's where the physics becomes interesting.

Good. OK, so this time average describes the equilibrium state: the equilibrium state is an average over a long time of the evolution of my system. So my original question has now turned into the question of whether you can prove that the long-time evolution of your particular Hamiltonian gives you this, because once you have this, you have all of statistical mechanics and all of thermodynamics. This is called the microcanonical ensemble.

In particular, let me show you how to derive from it the canonical ensemble, to which we are more accustomed. The idea is that you take your big system and divide it into a smaller subsystem and a reservoir. This division is completely arbitrary; arbitrary means that you could even take as the system every other particle, it doesn't have to be a contiguous region. It's just a smaller part of your system. And the idea is that the Hamiltonian is the Hamiltonian of the variables pertaining only to the reservoir, plus the Hamiltonian of the variables pertaining only to the system, plus some interaction: H = H_R + H_S + H_int. We assume that the values of H_R are much larger than those of H_S, which in turn are much larger than H_int. This is an assumption, but typically, when we divide the system like this, H_R and H_S are volume effects (the reservoir simply has the bigger volume) while the interaction is a surface effect, so unless you are in very high dimension you can assume this.

So, if the microcanonical distribution is the distribution of the whole system, what is the induced distribution on the smaller system? First of all, let's put a normalizing factor Z(N, E): since these measures have to be normalized to one, because they come from an average, Z(N, E) is simply the integral of the delta function δ(E - H) over phase space.
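For concreteness, the objects being used here are, in standard notation (the talk only names them):

\[
Z(N, E) = \int \delta\bigl(E - H(q, p)\bigr) \prod_i dq_i\, dp_i, \qquad
\rho_{\mathrm{mc}}(q, p) = \frac{\delta\bigl(E - H(q, p)\bigr)}{Z(N, E)}, \qquad
S(E, N) = k_B \ln Z(N, E),
\]

with the last relation being exactly the "partition function as the exponential of the entropy" used in the next step.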
Now, the induced measure on the system is this factor, 1/Z(N, E), times the integral over all the reservoir variables of δ(E - H_R - H_S); we said we can neglect the interaction part, so I drop it here. But you see, this is the same integral as Z(N, E), only with N_R variables instead of N and with energy E - H_S instead of E. So I can write the induced measure as the ratio of two partition functions. And if I write a partition function as the exponential of something, that something is the entropy; so I have the exponential of a difference of entropies.

Now, H_S, as I said, is much smaller than H_R, much smaller than the total energy of the system, which is conserved, so I can expand this in powers of H_S; and N_S is much smaller than N_R, so I can also expand in these variables. Therefore this thing becomes the exponential of minus (∂S/∂E) H_S minus (∂S/∂N) N_S. Boltzmann told us that ∂S/∂E is 1/(k_B T): we can interpret this through Boltzmann's entropy postulate, and it is the definition of temperature. So this becomes a constant, which I call 1/Z again, times e^{-β H_S}. So if I have a big system described by the microcanonical ensemble and I take a smaller subsystem, the induced probability distribution on the smaller subsystem is the canonical ensemble. Good. From the canonical distribution we can go to thermodynamics; this is quite straightforward and I'm not going to do it.

So what is the chain of reasoning we have followed so far? Hamiltonian dynamics, ergodicity, microcanonical ensemble, canonical ensemble, thermodynamics.

Now, one thing that I want to show is that in doing this approximation, even though we never talked about forces or anything, we have effectively said that the reservoir acts on the system through Langevin forces, which are well described by a random white noise. This will lead me to the last part of my talk: somehow, in this simplification, under the hypothesis of ergodicity, white noise comes out.

The idea is the following. Let's consider for simplicity one particle, so I don't have to carry indices around. The change of momentum dp is the force times dt, which is -(dV/dx) dt if we put in a potential; then let's assume there is friction, -(γ/m) p dt; and then there is some white noise. These are called Itô stochastic differential equations. And dx is (p/m) dt. So if there is random noise acting on my system, this is the mathematical way to describe it.

Now, this description is equivalent to an evolution equation for the probability distribution P(x, p, t), which satisfies the so-called Fokker-Planck equation; I will write it out cleanly right below. If you don't know about Itô calculus, it's very, very interesting, and I suggest you go and read about it. This is a complicated partial differential equation in three variables; it is linear, because it describes a probability, and it contains both functions of x and functions of p. Magically, if you look for the stationary solution of this equation, let me just give you the answer: it is a constant, which we call 1/Z, times e^{-βH}. If you take this function and plug it in, you will find that it satisfies the differential equation, provided you choose β appropriately in terms of γ and D, namely k_B T = D/γ.
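Written out, the equations just described are (reconstructed in standard notation; the \(\sqrt{2D}\) normalization of the noise is a convention chosen here so that \(D\) is the momentum-diffusion coefficient appearing in the Fokker-Planck equation):

\[
dx = \frac{p}{m}\, dt, \qquad
dp = \left( -\frac{\partial V}{\partial x} - \frac{\gamma}{m} p \right) dt + \sqrt{2D}\, dW_t,
\]

with the associated Fokker-Planck (Kramers) equation for \(P(x, p, t)\)

\[
\frac{\partial P}{\partial t} = -\frac{\partial}{\partial x}\!\left( \frac{p}{m} P \right)
- \frac{\partial}{\partial p}\!\left[ \left( -\frac{\partial V}{\partial x} - \frac{\gamma}{m} p \right) P \right]
+ D\, \frac{\partial^2 P}{\partial p^2},
\]

whose stationary solution is \(P_{\mathrm{st}} = Z^{-1} e^{-\beta H}\), \(H = p^2/2m + V(x)\), provided \(\beta = \gamma / D\), i.e. \(k_B T = D/\gamma\).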
It's the same result from a completely different calculation. It's not a coincidence; I don't want to claim too much, but it's not a coincidence. What does it mean physically? The reservoir is doing two things to my system: it is giving energy through the random noise and it is taking energy out through the friction term. There is a balance, give some, take some. So the larger the diffusion coefficient D at fixed γ, the larger the temperature, and vice versa, the larger γ, the lower the temperature. Notice that in this result the only thing that matters is the ratio D/γ: you could actually take the limit in which both D and γ go to zero with the ratio fixed, and this would describe the same equilibrium situation. However, the time to reach this equilibrium is given by the gap of this linear operator: the smallest eigenvalue is zero, and its eigenfunction is the stationary solution we have already found, while the second eigenvalue λ_2 sets the inverse of the time it takes to get to that solution; that time diverges when D goes to zero. Good.

In some sense, the ergodic hypothesis is the assumption that, for one part of the system, there are no external forces acting on it except for this potential, but the system itself develops white noise. This is related to the chaotic hypothesis: the system develops chaos, and chaos here means a continuous spectrum of the fluctuations. So, in classical mechanics, in the thermodynamic limit, what is the status of this hypothesis? Quite good, actually. Even the exceptions, like the invariant tori arising from the Kolmogorov-Arnold-Moser theorem, become less and less important in the limit in which the number of degrees of freedom grows. So this hypothesis is well satisfied.

And now let's come to a surprising thing: what about quantum mechanics? We recently discovered that in quantum mechanics things are not so simple, because in quantum mechanics it is apparently not so easy to develop white noise in the effective forces acting on the system. Let me give you an example. Let's take just a single particle. Now the Hamiltonian is an operator, so I have to put hats over all the variables. For a single particle my wave function ψ(0) evolves as e^{-iHt/ħ} ψ(0); let's keep the ħ here. And I can look at expectation values, for example of the position operator, of the momentum operator, or of the time derivative of the position operator; derivatives of operators in quantum mechanics are just commutators with the Hamiltonian. So I look at this, and it is some function of time, x(t).

If instead I look at the x(t) coming from the Itô stochastic equation above, I realize that that function is actually not even differentiable; that's why we use the dW notation. It does something jagged, and on some appropriate average the mean square displacement grows linearly in t. It's not differentiable; it's like the stock market. In fact, people use these stochastic equations in the analysis of financial markets all the time.

Good. But now let's look at the quantum function x(t) and take its spectral decomposition. The frequencies involved in the sum are given by the differences between the energy levels.
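In formulas, expanding the initial state on the energy eigenbasis, \(|\psi_0\rangle = \sum_n c_n |n\rangle\) with \(H|n\rangle = E_n |n\rangle\) (standard notation, filled in here):

\[
x(t) = \langle \psi(t) | \hat x | \psi(t) \rangle
= \sum_{m, n} c_m^* c_n\, x_{mn}\, e^{i (E_m - E_n) t / \hbar}, \qquad x_{mn} = \langle m | \hat x | n \rangle,
\]

so the only frequencies that can appear in \(x(t)\) are the level differences \(\omega_{mn} = (E_m - E_n)/\hbar\).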
The spectrum of this function is built from the squares of these coefficients: S(ω) is a sum over the pairs m, n whose energy difference is ω, up to a small window δ, of the occupations of the initial state, let me call them p_0(m) and p_0(n), the squared overlaps of ψ_0 with the eigenstates, times the squared matrix element of the operator, |x_{mn}|^2. So I have to sum over all these transitions, and typically, if the potential is smooth, what I find is that this spectrum is smooth, and could very well be the spectrum of white noise.

If I take instead a situation in which the disorder in my potential is strong, then I observe a complete change in the dynamics of my system: the particle that I put in this potential, because of interference effects, doesn't move much. And if I go and look at that spectrum, it actually looks peaked: peaked at frequencies which are not rationally related, and therefore the motion of the particle is quasi-periodic. So we go from a particle that starts here and wanders around, to a particle that starts here and traces a complicated Lissajous figure around the origin. The difference is that the first particle goes out to a distance which is proportional to the square root of t, while the second one always stays at order one. And if we have this, we cannot have equilibration.

[Question from the audience.] Yes, in this case V is strongly disordered, strongly inhomogeneous, let's say: here the potential looks something like this, while there it has some disorder but not too much. This would require a seminar by itself; what I want to give you in the last ten minutes is just a counterexample, a generic Hamiltonian which does not equilibrate. Yes, exactly, that's why Anderson got the Nobel Prize. The issue is that it is not intuitive what the difference is: for a classical model there is no difference between this potential and that one; if there is some disorder and you go to sufficiently high energies, the particle just takes a smaller diffusion coefficient but diffuses away. But here there are interference effects: even if the particle goes above the barrier, there is an amplitude for being reflected back, and these amplitudes accumulate, so much that they actually build these peaks in the spectrum, and the motion of the particle changes completely. And this remains true even if you have many particles, if the disorder is sufficiently large.

What I want to show you now, and let me tell you what I'm plotting, because this is the last thing I'll do and then I will stop: I take the following Hamiltonian, where the disorder is now in the z field. These S^α are spin operators, a representation of the SU(2) algebra, and this is now a Hamiltonian on L spins; the Hilbert space has dimension 2^L. J can be set equal to one, and the h_i are random variables, completely IID, distributed between -h and h. This is a caricature of the picture before, in which the disorder strength is h. If h is small I should have diffusion, not of particles but of spin excitations: I can start from a reference state, create an excitation in which I tip a spin a little bit, and then see whether it propagates as a wave, propagates diffusively, or does not propagate at all. So, for small disorder: this is numerics, done with a computer, shown up here.
On the y axis there is energy: I create a state in which there is a lot of energy on one side, a very excited state, and very little energy on the other, close to the ground state. Along the other axis there are the spins, at discrete locations. Then I let the system evolve with the Hamiltonian. If the system is ergodic, it has to equilibrate, and in fact you see that it actually equilibrates: there are fluctuations, but it equilibrates. You start with an energy imbalance and then it goes away, poof. If I increase the disorder, I start with the same energy imbalance, exactly the same, and now I'm going to press a button and the dynamics is going to run: the system does not equilibrate. It's like having a rod of iron, you hold one end in your hand, you put the other end in the fire, and you stand there and nothing happens. This is called many-body localization; it's the thing I've been working on for a few years, three or four, and it is still going on. [Question about the fate of the excitation.] So yes, if we ran this for the age of the universe, it actually should come back. So, because of these interference effects, quantum mechanics can violate the ergodic hypothesis, even for potentials which already have disorder inside. The ergodic hypothesis doesn't have anything to do with disorder; actually, disorder and ħ conspire against the development of ergodicity. And this is everything that I wanted to tell you.
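As an illustration of the kind of numerics being described, here is a minimal Python sketch. It assumes the model is the random-field Heisenberg chain, H = J Σ_i S_i · S_{i+1} + Σ_i h_i S_i^z with h_i uniform in [-h, h] (the talk only specifies spin operators, disorder in the z field, J = 1 and a 2^L-dimensional Hilbert space), and the initial state and the observable are likewise illustrative choices, meant only to mimic creating a local excitation and watching whether it spreads.

```python
import numpy as np

# Single-site spin-1/2 operators
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
id2 = np.eye(2, dtype=complex)

def site_op(op, i, L):
    """Embed the single-site operator `op` at site i of an L-site chain."""
    out = np.array([[1.0 + 0j]])
    for j in range(L):
        out = np.kron(out, op if j == i else id2)
    return out

def build_hamiltonian(L, h, J=1.0, seed=0):
    """Random-field Heisenberg chain (assumed model): J sum S_i.S_{i+1} + sum h_i S_i^z."""
    rng = np.random.default_rng(seed)
    fields = rng.uniform(-h, h, size=L)
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L - 1):
        for s in (sx, sy, sz):
            H += J * site_op(s, i, L) @ site_op(s, i + 1, L)
    for i in range(L):
        H += fields[i] * site_op(sz, i, L)
    return H

L, h = 8, 0.5                      # try h = 0.5 (spreads) versus h = 8.0 (stays localized)
H = build_hamiltonian(L, h)
E, V = np.linalg.eigh(H)           # exact diagonalization; only feasible for small L

# Initial state: all spins up except one flipped down in the middle (a local excitation)
psi0 = np.zeros(2**L, dtype=complex)
psi0[1 << (L - 1 - L // 2)] = 1.0

sz_ops = [site_op(sz, i, L) for i in range(L)]
c0 = V.conj().T @ psi0             # coefficients of the initial state on the eigenbasis

for t in (0.0, 1.0, 5.0, 50.0):
    psi_t = V @ (np.exp(-1j * E * t) * c0)        # |psi(t)> = sum_n e^{-i E_n t} c_n |n>
    profile = [float((psi_t.conj() @ (op @ psi_t)).real) for op in sz_ops]
    print(f"t = {t:5.1f}   <S_i^z> =", np.round(profile, 3))
```

Running it with small and large h shows, qualitatively, the two behaviours described in the talk: for weak disorder the flipped spin spreads over the chain, while for strong disorder the profile stays essentially frozen.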