Stefano's fifth lecture. Matteo, can you hear me? Okay. [The opening remarks, about this year's medallists, are largely inaudible.] ... he is from Bangalore, at the Tata Institute, a fantastic scientist, and also a very nice person; I know him well. By the way, I could mention that I had one of his students... [inaudible] ... And the other one is John Hopfield. You know him well, no? He was very active in the 80s. Maybe you can say something about him, because you know better, but okay: essentially it's neural networks, and as you know, artificial neural networks have a long history. They started in the 50s with perceptrons, and then there were the contributions of McCulloch and Pitts, but I think John Hopfield was the first one to really formulate attractor neural networks — the concept that the dynamics of artificial neural networks could lead to an attractor, to something that could look like a memory. That opened the field to how you could store different attractors that would correspond to different memory patterns. So I think that was the contribution of John; maybe Matteo knows the field better than me, so he can add some words.
Okay. Unfortunately the medals will not be given this year, because the start of this conference has been moved to next year due to the COVID pandemic, so they will be delivered only next year. I would like to add that there is also a Young Scientist Prize of the Commission on Statistical Physics; I know they are working on the candidates now, and it will be announced later during the year. So, okay, I don't know if you want to ask something more about John Hopfield, but I think it's enough... He also worked on kinetic proofreading: why is it that, when DNA replicates in our cells, the error rate is much, much smaller than what you would expect just from random errors? Very good. So, last year, as you know, we had the second Nobel Prize in statistical mechanics, we could say: the first one was to Wilson, around 1980, and the second was to Giorgio Parisi. So the field has been honoured by two Nobel Prizes; one of the winners, unfortunately, has since died.
I was a member of the Commission on Statistical Physics, and of the C3 Commission, for a few years, and I was always sending messages to Kenneth Wilson; he never replied, so he was a very peculiar person — both Nobel Prize winners are very peculiar persons. Okay, so before starting the talk I would like to give you an exercise; I hope you will devote some time during the weekend to solving it. You have seen in several of my examples that there was a line of second-order phase transitions, with the canonical tricritical point appearing on it, and usually the line extended farther in the microcanonical ensemble: the second-order line ends at the canonical tricritical point in the canonical ensemble, but if you go to the microcanonical ensemble it extends farther, to the microcanonical tricritical point, which is the endpoint of the second-order line there. And this is exactly the region where you can find negative specific heat. This can be proven formally in a Landau expansion. So what I propose as an exercise is: take the Massieu potential as a function of beta and m (or beta and x), and the entropy as a function of epsilon and m, and show that there is a gap between the two, and that the microcanonical second-order line extends beyond the canonical tricritical point. As a hint, try to expand beta and then s; beta, as you know, is a function of epsilon in the microcanonical ensemble. It can be proven formally that this is true in a Landau expansion, so it's not only a property of the specific models we have looked at, but a generic property of the Massieu potential. This in some sense proves the genericity of negative specific heat in this context, because if you have this extension of the line of second-order transitions, it is exactly in this region that you hope to find the negative-specific-heat region. So now let's go to this lecture. The first part is, if you want, a continuation of what we have learned, but most of the lecture will be devoted to dynamics. Let me try — I have a pointer now, thanks to our technical assistant today, a very clever assistant... oh, it doesn't change... ah yes, it works. Okay, so all the lecture will be devoted to an extremely simplified version of all the models that you have seen. It's a model that I introduced long ago, in the 90s, and there were earlier studies of such a model by Kunihiko Kaneko from Japan and also, in the astrophysics community, by Shogo Inagaki; as one used to say, for any problem there is always someone who did it before, and in this case there were these two persons. Okay, so the model can be seen as a model of several different physical systems: you can think of it as an XY model, but you can also think of it as a system of particles moving on a circle, where the position of a particle is its angle, and the particles interact through an attractive cosine potential. Since the particles attract, the potential energy has the tendency to bring them to the minimum of the potential, but there is a kinetic energy that moves the particles around. So there is a competition between the kinetic energy, which tends to spread the particles on the circle, and the potential energy, which tends to gather them at the minimum of the potential, and at some point they balance: the phase transition point is the point at which the particles go from being clustered to a phase where they are sort of moving around
on the circle, and so we are in the gas phase. I arrived at this Hamiltonian in a very funny way. At that time I was visiting a plasma physics group, and the head of the group put me in front of a horrible Hamiltonian with many, many different terms, and then he asked me what a phase transition is. I said, okay, a phase transition is a singularity of the free energy, and he looked at me and said: don't tell me this bullshit, I don't understand anything about singularities, you should tell me what a phase transition is. So I started to cancel several terms in that Hamiltonian and I arrived at a Hamiltonian not too different from this one. I went to his office and said: a phase transition is a point at which kinetic energy competes with potential energy, and then he understood — he found it interesting that I could answer his question. And this competition happens, of course, also for finite systems, and so on. Okay, what is the order parameter? The order parameter is the magnetization M — I have to learn how to use this pointer, it's a Ferrari in the hands of an inexpert driver — which is the limit, as N goes to infinity, of the average of the cosines and the average of the sines; you see that these are sample averages. It's interesting, no? You could solve this model using large deviations, but I will not do it today; I will solve it in a different way. It's a model with a second-order phase transition, and the solution of the model is pretty simple, but you will see that the dynamics is extremely interesting. So let's look at the solution in the canonical ensemble; I will be slightly less sloppy today, because I can do it on a very simple example. For the configurational partition sum, you have to do an integral over all the angles of the exponential of minus beta N times the potential, with the square of the sum of the cosines and the square of the sum of the sines. You have two squares here, so you can introduce two auxiliary fields in the Hubbard-Stratonovich transformation, x1 and x2, and then you can integrate, because the squares linearize, and you get the logarithm of the modified Bessel function of zeroth order as a function of the auxiliary fields. Then you exploit the fact that there is a rotational invariance in the model — you can rotate all the angles freely, a property that you can also see through the modified Bessel function — so that you can write the Bessel function of the two variables, cosines and sines, in terms of a single variable, the modulus z of the two fields. This is just to be slightly less sloppy than what we have done quite quickly in other models, so you can see how it works. Then go into polar coordinates — what is called x here was z before, there is a mismatch in notation. In the thermodynamic limit I look for the maximum in x of this, or rather the minimum at large n, because there is a minus sign here, and so I get the free energy; this is its expression, I have to take an infimum of this quantity. If I differentiate with respect to x, I get this consistency equation, in terms of the ratio of the modified Bessel function of first order to the Bessel function of zeroth order, that I have to solve. I don't think I have put the graph... oh yes, okay. So this ratio is something very similar to the tanh, something like that, with two asymptotes — I'm only plotting the right part — and you see that either you have a non-zero solution, which corresponds to the broken phase, or you have only the zero solution. So if you vary the temperature, you have the usual mean-field type of second-order phase transition. It goes in the wrong... ah, yes, sorry, I have to learn how to use it.
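To see the consistency equation at work, here is a small Python sketch (my own illustration, not the lecture's code, and all names are mine): it solves m = I1(βm)/I0(βm) by fixed-point iteration, computing the modified Bessel functions from their power series, and shows the second-order transition at β = 2, i.e. T = 1/2, consistent with the critical energy 3/4 of this model.

```python
import math

def bessel_i(n, z, terms=40):
    """Modified Bessel function I_n(z) summed from its power series."""
    return sum((z / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def magnetization(beta, m0=0.5, iters=2000):
    """Fixed-point iteration of the consistency equation m = I1(beta*m)/I0(beta*m)."""
    m = m0
    for _ in range(iters):
        z = beta * m
        if z == 0.0:
            return 0.0  # only the paramagnetic solution survives
        m = bessel_i(1, z) / bessel_i(0, z)
    return m

# Below beta = 2 (T > 1/2) the iteration collapses onto m = 0;
# above beta = 2 a non-zero magnetization appears (broken phase).
for beta in (1.5, 2.5, 4.0):
    print(beta, magnetization(beta))
```

Since I1(z)/I0(z) ≈ z/2 for small z, the zero solution loses stability exactly when β/2 = 1, which is where the mean-field second-order transition sits.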
Okay, so this is a plot of the temperature versus the energy — you have seen this type of plot — and in the inset there is the plot of the magnetization versus the average energy. You see that the simulations agree quite well with the lines, which are obtained in the canonical ensemble, and they agree almost everywhere, apart from here. There is a region here where they don't agree, and this was the sign that something went wrong for this model; it was the beginning of the story of quasi-stationary states and ergodicity breaking in this model. Moreover, these points correspond to the largest system, so if it were a finite-size effect you would expect that going to larger sizes would bring the points closer to the lines — but in fact it doesn't. And another remark: some of these points lie on the continuation of the high-energy line, and indeed, if you go there and check, instead of being ferromagnetic the state is paramagnetic. So these are paramagnetic states that strongly resist converging to the ferromagnetic state — to the collapsed phase, if you want — and as you increase the size of the system, they resist going to equilibrium even more. I will tell you the end of the story: these are states whose lifetime increases with system size, which you usually don't find in this type of model unless there are other features that induce such behavior. But for completeness, let us also look at the microcanonical solution, to convince ourselves that the ensembles give the same prediction. Let's use the min-max method that we have learned: I have to invert the infimum in x with the... I'm always confused about sup and inf, but don't worry, it's working; the reason is that sometimes the Legendre transform is defined one way or the other, so at some point I will decide to fix this point in my slides. Okay, so you will check that the stationary points are the same in the two ensembles. You can derive the Massieu potential as a function of beta and x, and at the level of the first derivative it will give the same result: these are the derivatives with respect to beta and with respect to x, and this system of equations can be solved in beta to give a consistency equation in x, in a slightly different form from what we got in the canonical ensemble; b here is the inverse of this function, so you have to rotate your head to see the inverse. There is a unique solution, x equals zero, of this consistency equation for energies above 3/4, as you can check, and otherwise there is a non-zero solution. The non-zero solution, since it must come from the intersection of this curve with a line, must lie between minus 1 and 1: although x extends from minus infinity to plus infinity, the equilibrium solutions are confined to the interval [-1, 1]. And you get the entropy, which has this expression: once you have solved the consistency equation, you substitute it here, and you get the entropy. If you derive the free energy from this entropy by the Legendre transform, you get the free energy of the canonical ensemble. The entropy is concave — you can check that it is concave — and there is a second-order phase transition, a point where the two branches don't have the same curvature; so the function has a singularity, but this is not enough to create an ensemble inequivalence. When I started to work in this field, I knew that the solution was the same in the two ensembles, but it took time before we developed all the techniques that
could allow us to solve this model in the microcanonical ensemble and be sure that the two were the same. In the end it's very simple, but as you know, once things are done they look simple; before, it's not like that. The model has a very interesting dynamics, and I would like to point this out to you: the equations of motion of the model are the equations of a perturbed pendulum, and for a long time we tried to understand the dynamics using this analogy. You see that I can write the equation in this form for a single particle: there is an external field m, which is like gravity, but both m and phi, the phase of this pendulum, are time dependent. And in fact, if you look at the motion of the particles in this phase space, you will see that they move like in a pendulum. For a long time we wanted to use the analogy — in fact, in the original paper we used it even to compute more or less where to locate the phase transition of this model. Okay, but what is the clue for the analysis? It comes with the idea of the water bag. What is a water bag? The water bag is a distribution that is non-zero and constant in a given domain, and it is clearly a non-equilibrium distribution, because you will see the equilibrium one in a while. With it we can define a set of initial states with given delta theta and delta p, which are directly related to the energy and the magnetization. Now, from statistical mechanics you know that if the trajectories move on the surface of constant energy, the averages should depend only on the energy. So if I fix delta theta and delta p and I let the system go, there should be no dependence on m0, the initial magnetization: what counts is the energy, the motion should evolve on the constant-energy surface, and the detail of what m0 is should not matter. The term "water bag" was used in plasma physics, so I took it from there.
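To make the water bag concrete, here is a small Python sketch (my own, with my own notation): particles drawn uniformly in [-Δθ, Δθ] × [-Δp, Δp], for which one expects m0 = sin(Δθ)/Δθ and energy per particle u = Δp²/6 + (1 − m0²)/2, so fixing Δθ and Δp indeed fixes both the energy and the initial magnetization.

```python
import math
import random

def water_bag(n, dtheta, dp, seed=42):
    """Uniformly fill the rectangle [-dtheta, dtheta] x [-dp, dp] in mu space."""
    rng = random.Random(seed)
    theta = [rng.uniform(-dtheta, dtheta) for _ in range(n)]
    p = [rng.uniform(-dp, dp) for _ in range(n)]
    return theta, p

def magnetization(theta):
    """Modulus of m = (1/N) sum_i (cos theta_i, sin theta_i)."""
    n = len(theta)
    mx = sum(math.cos(t) for t in theta) / n
    my = sum(math.sin(t) for t in theta) / n
    return math.hypot(mx, my)

def energy_per_particle(theta, p):
    """u = <p^2>/2 + (1 - m^2)/2 for the cosine (mean-field) interaction."""
    n = len(theta)
    kinetic = sum(pi * pi for pi in p) / (2 * n)
    return kinetic + (1 - magnetization(theta) ** 2) / 2

theta, p = water_bag(20000, dtheta=math.pi / 2, dp=1.0)
m0 = magnetization(theta)          # close to sin(dtheta)/dtheta = 2/pi
u = energy_per_particle(theta, p)  # close to 1/6 + (1 - (2/pi)**2)/2
print(m0, u)
```

The point of the exercise in the lecture is precisely that, after relaxation, the final state should depend only on u and not on m0 — and the surprise is that it does depend on m0.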
The water bag is used also in gravitational systems; you will see a slide later. So this is a sort of very-far-from-equilibrium initial state, and you expect that the dynamics drives it to equilibrium after some time. How do you check equilibrium? The final result should not depend on m0; it should depend only on u, the initial energy. And you will see very interesting results showing that in fact this is not true: it depends on m0 as well. So, okay, this is the equilibrium in phase space — in the so-called Boltzmann mu space. These are 10,000 particles, and I'm in a region where the water bag relaxes to equilibrium; the initial water bag was a rectangle here, in the middle of the plot. This was drawn by a Japanese collaborator, and for him light is more density and dark is less density — I would have plotted the inverse, but okay, it is by the way reasonable. So essentially what is this distribution? If you have q and p, this is nothing but e to the minus beta h of q and p, so you see that there is more mass near the minimum. I've shown you that the dynamics of this system is the one of a pendulum, and here you see the phase space of the pendulum, because the single-particle h is nothing but p squared over 2 minus m cosine theta. So the system should reach this equilibrium, and when it does, beta depends only on the energy. Okay, now, to give you a feeling: what happens when it does not go to equilibrium? This is, for instance, a simulation where you see the value of m, the magnetization, and I'm in a region where I should go to the Boltzmann-Gibbs equilibrium, which is this level here — quite high, around 0.3.
On the contrary, I do simulations with increasing values of n, from 10 to the 2 up to 10 to the 4, and you see that there is an initial phase where the system looks like it moves away from m equals 0, but then it flattens, and eventually it converges to equilibrium — but the time it takes to converge becomes longer and longer as you increase the number of particles. You can see that there is in fact a constant shift to the right of the point at which the system starts to go to equilibrium; since this axis is log t, a constant shift to the right tells you that there is a power law behind this relaxation to equilibrium. In this case the water bag initially has delta theta equal to pi, so it's a homogeneous water bag: you distribute the particles homogeneously on the circle, so initially m0 is 0. The final equilibrium — the one that does not depend on the initial m0 — should have non-zero magnetization, but the system wants to stay close to the initial state and doesn't want to go to equilibrium, and as you increase the system size the effect is even more drastic: the system stays out of equilibrium for a very, very long time — 10 to the 4, 10 to the 5, 10 to the 6. So we have learned a lot about equilibrium in these four lectures, and then we discover that equilibrium might not be relevant for systems with long-range interactions. It's not always true, of course: you have seen that in the plot of temperature versus energy there were regions in which everything was okay, and in those regions I can trust equilibrium statistical mechanics; but there are other regions where I cannot trust it, because if I start the system in some region of phase space, it doesn't go where I expect it to go.
So we called these states quasi-stationary states, and they appear almost everywhere; we discovered plenty of them. I will maybe put on the Slack the Physics Reports paper by Yan Levin and collaborators, which is a good review, because this will be the only lecture devoted to quasi-stationary states — to go further I would have to introduce a bit more kinetic theory, which is a course in itself — but I want to give you a feeling that there is a problem with the dynamics of long-range systems, and I will derive a few simple results. If I have time — I have a code for this model, with comments in yellow; I prepared it for you and I will put it in the Slack, and maybe I can show it to you. It's a Fortran code, because I'm old enough to continue to use Fortran; if you want to transform it into a Python code, that's okay. It's a 100-line code, and you can play with it; it's very simple, and it's an alternative way to learn kinetic theory: playing with a code and seeing what happens. You can even reproduce these results — these are simulations from '94, '95, so nowadays you can run them on a cell phone; at the time it was harder. Okay, so what is the power law?
Our estimate at the time was 1.7 — you see, we were limited to 10 to the 4, 10 to the 5 particles — but then came CUDA, which allows you to run very, very large systems on graphics cards, and this is a group from Brazil. You see that the fit with 1.7 is okay in the region that we had explored, but if you enlarge the region the exponent goes to 2 — 1.99, with systems of millions of particles instead of 10,000. So it means that asymptotically the relaxation time goes like n squared: it takes a time of order n squared to relax to equilibrium. Understanding this n-squared relaxation took a very long time; I will tell you only some parts of the story. One has to write kinetic equations for these systems, which are in the class of the Lenard-Balescu equation — the finite-n correction to the Vlasov equation, and I will show you the Vlasov equation — and then one has to compute corrections to the Lenard-Balescu equation to very high order. Only recently Pierre-Henri Chavanis and collaborators in Toulouse, in a tour de force using Maple and Mathematica and computing hundreds of diagrams, were able to show that the correction gives n squared. So this is analytic now: it takes a time of order n squared to reach equilibrium. If you look at the system on a short time scale you will not see it reach equilibrium; it will stay very far from equilibrium for a very long time. I think this is an interesting result, and it could explain the behavior of many long-range systems. Here is the scaling, in a second paper from the same group: they show that in a region where the system is homogeneous, the time to relax to the Boltzmann value — this is the Boltzmann value — is n squared, because if you rescale time by n squared the different curves collapse on top of each other. So it is now well assessed, both analytically and numerically, that there is this very long time scale in the system; it's not exponential in n, but it's
power of n; but it is nevertheless a relaxation to the equilibrium value that is extremely slow. Okay, so there is a paradigm emerging in this area of research, which is the following. You start the system somewhere in phase space, with some initial condition, and there is a phenomenon — called violent relaxation in astrophysics, I will give you some information about that — by which you have a relaxation to a new type of equilibrium that we call Vlasov equilibrium; I will try to say a few words about it. Then you have collisional relaxation, on a much longer time scale, towards the Boltzmann equilibrium. The initial relaxation is the one that you have seen, and maybe at the end of the lecture I will also show you some movies of how the relaxation goes on. So this is essentially the scheme: there is an initial phase, well described by the Vlasov equation, where finite-n corrections are not important and everything happens on a time scale of order one — of the order of the parameters of the Hamiltonian — and then there is a much slower relaxation to equilibrium. And remember, yesterday, when I showed the relaxation of the intensity of the laser: there was this initial fast relaxation, and then, on a much longer time scale, convergence to the Boltzmann equilibrium. That was exactly the same phenomenon: the laser was relaxing to a Vlasov equilibrium, and then very, very slowly it was going to the Boltzmann equilibrium. Okay, so what is the Vlasov equation?
I don't know if this is the time to enter into this, but okay, maybe I'll just show some pictures that are good for the end of the lecture. You see, for instance, this is the initial water bag; it is smoothed at the boundaries, because if it's not smoothed there are singularities in the simulation and you get problems at the boundaries — you can see the smoothing as the orange boundary here. Then you run it in time, with the Hamiltonian dynamics, just solving the equations for the particles, and you see that it forms a sort of two vortices here; then the vortices split and become two vortices at the center of the band, and if you go on in time you will see these two vortices moving, one with positive momentum, the other with negative momentum. Joking with my good friend Freddy Bouchet from Lyon, I told him this looks like Jupiter's red spot — but then he took it seriously, and he did the Jupiter red spot with similar concepts: there is an analogy between the Vlasov equation and the Euler equation in two dimensions, and you can in fact show that these quasi-stationary states arise also in those systems, in geophysical systems. And this one is thinner — it was after the diet: the water bag is now narrow, and you see a more turbulent behavior, there are more vortices, but still you are very far from equilibrium; the system relaxes to states that are very, very far from equilibrium. And this is an old picture by Lynden-Bell, from a paper in the 60s — there were no computers at the time, and the system is totally different, it's a self-gravitating system — and he was drawing this sort of picture of what should happen in that case. There is the water bag: the distribution is constant in a region, and the dynamics deforms the water bag; there are boundaries to the phase space, so you
can cut the distribution; but what is important is that the area of this initial water bag is conserved, and the distribution is stretched and folded on finer and finer scales, until you reach this sort of band. It's amazing how similar it is, no? There is no relation between the two systems, but the picture is very similar — and this one is drawn by hand, while this one is done on the computer. I think he had a good imagination, this guy, to understand what was going on without being able to see it. So, is there an increase of entropy in this system or not? If you compute the Gibbs entropy on these states, you find that it does not grow, that it is constant; but you can define a sort of coarse-grained entropy. For instance, if I cover this space with boxes and then look at the density in each small box, then here, for sure, the density is more homogeneous within a box than here, because here you find boxes that are empty and boxes that are filled. So there is an evolution of a coarse-grained entropy, and in the next hour I will give you the theory by Lynden-Bell of how you can define an entropy that grows during the evolution of this system. Then you can define maximum-entropy states of this entropy, solve a variational problem using it for this model, and find the quasi-stationary, out-of-equilibrium states of these systems. Okay, this subject brings me closer to the other courses: this is the first time I mention relaxation to equilibrium, dynamical entropy, and so on.
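As an illustration of this box-counting idea (my own sketch, not the lecturer's code): cover mu space with boxes, estimate the probability p_b of each box from the particle counts, and take S = -sum_b p_b ln p_b. A compact patch and an area-preserving "filamented" stretch of the same mass then score differently, with the stretched state higher.

```python
import math
import random

def coarse_grained_entropy(points, box=0.25):
    """Box-counting entropy -sum_b p_b ln p_b over occupied phase-space boxes."""
    counts = {}
    for q, p in points:
        key = (math.floor(q / box), math.floor(p / box))
        counts[key] = counts.get(key, 0) + 1
    n = len(points)
    return -sum(c / n * math.log(c / n) for c in counts.values())

rng = random.Random(1)
# Compact patch: all mass in a 0.5 x 0.5 square (area 0.25).
compact = [(rng.uniform(0, 0.5), rng.uniform(0, 0.5)) for _ in range(5000)]
# Stretched, filamented version of the same mass: a 5 x 0.05 strip
# (same area 0.25, mimicking the sheared water bag).
stretched = [(rng.uniform(0, 5.0), rng.uniform(0, 0.05)) for _ in range(5000)]

print(coarse_grained_entropy(compact), coarse_grained_entropy(stretched))
```

The fine-grained (Gibbs) entropy of the exact distribution is unchanged, since the dynamics preserves the area; it is only at the box scale that the stretching is seen as an entropy increase.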
So, I need a little bit of theory. Take the Hamiltonian H, with kinetic energy and potential energy U, of a system of particles, and then define a discrete one-particle time-dependent density function: you are summing over all trajectories up to time t — this is the empirical measure — so you are counting what the density is around theta and around p. It is a sum over all trajectories, and the distribution is normalized to n; it is called the Klimontovich distribution. Then, using a property of the delta function — x delta(x minus y) equals y delta(x minus y) — one can derive the Klimontovich equation. The Klimontovich equation looks very much like the Vlasov equation: there is the time evolution of the density, plus a transport term — the derivative in the theta direction times the momentum — plus a term due to the force, which comes from the potential V, and the potential is given by an integral over phase space, in theta and p, so you get a closed equation. It looks like the Vlasov equation, but it is essentially a rewriting of the Hamiltonian motion: all the information about the trajectories is contained in this equation.
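In symbols (my reconstruction from the verbal description, with the normalization as stated — f_d integrating to N — and the cosine interaction of this model assumed):

```latex
f_d(\theta, p, t) = \sum_{i=1}^{N} \delta\big(\theta - \theta_i(t)\big)\,\delta\big(p - p_i(t)\big)

\frac{\partial f_d}{\partial t} + p\,\frac{\partial f_d}{\partial \theta}
  - \frac{\partial v[f_d]}{\partial \theta}\,\frac{\partial f_d}{\partial p} = 0,
\qquad
v[f_d](\theta, t) = \frac{1}{N}\int d\theta'\, dp'\,\big[1 - \cos(\theta - \theta')\big]\, f_d(\theta', p', t)
```

The force term reproduces exactly dp_i/dt = -(1/N) sum_j sin(theta_i - theta_j); averaging over initial conditions and letting N grow turns the same structure into the Vlasov equation for the smooth one-body distribution, with corrections of order 1/N.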
What I have to do now is smooth this distribution, because it is extremely singular — it is concentrated on the orbits — so you have to average the distribution, say, over the initial conditions. It is somewhat hard to do, so I'm just giving you the rough idea; in my review and in my book you will find the derivation. You then realize that the averaged equation has an averaged potential here, and there is a correction on the right-hand side, and one can show that this correction — a product of fluctuations — is of order one over n. So if n is large you get the Vlasov equation, where V is the average potential: the evolution of the one-body distribution function f is transport, this part, plus the effect of the average potential, and as the system gets larger and larger this correction term gets smaller and smaller. Then, okay, to finish this part of the lecture, I will show — I hope I can do it; am I still on screen, no?
Yes, I hope it works — it's a movie. These are evolutions of the model at various energies: this was energy 0.55, now I go closer to the phase transition, and even closer. You see the filaments of the evolution, the formation of these clusters. The relaxation to equilibrium is driven by the granularity of the system: if the system is finite, after some time — and this time is of order n squared — it will realize it is granular, that it is made of particles, and then it will remember, oh, there is Boltzmann around, and it will converge to equilibrium. So the granularity plays the role of collisions: in fact, in the kinetic equation that I've given you, the right-hand side is sometimes called the collisional part, but it is not due to collisions, because the particles do not see each other; it is the granularity of the system that brings it to equilibrium. In fact, one can prove an H-theorem for the full equation, including the right-hand side: one can prove that the Boltzmann entropy increases with time once you include the effect of that right-hand side. So, okay, this is the end of the first part of the lecture — five minutes of rest. Very good, so take five minutes of break. [break] No, it's green. Okay, maybe it was not working.
Okay, so this is the code that I will briefly describe, because otherwise you will be lost. It's Fortran code, and it uses an algorithm that is quoted here, the McLachlan-Atela algorithm, a symplectic algorithm of fourth order. You see here there is the number of particles, positions and momenta, x and xv. Then, in order to start the code, you have to initialize a random number generator, and there is a transient; this, for instance, is the seed for the random number generator. So, let's go on: these are numbers that are important for the routine that evolves the Hamiltonian, numbers that I took from the paper of McLachlan and Atela. This is a water-bag initial condition, and an alternative is to have a Gaussian, Maxwell-Boltzmann, initial condition. Then you enter the code, you compute the magnetization, the two components of the magnetization, there is a transient, and then this is the central loop. You see that all the program is here, from instruction 100 to instruction 100 here; all the code is here, but it calls the leapfrog, this routine that is here below. It's a routine that I wrote myself to integrate the motion, and this is the code that produces the pictures I've shown. So, if you run it for, say, 100,000 particles instead of 1,000 particles, then you can plot the positions of the particles, and with a density plot you get the evolution. It's really simple; I wanted to show it because it's nothing complicated, and I will put it on the Slack if you want to try. How many of you know Fortran? No one? One? And what do you usually code with, C?
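The Fortran code itself goes on the Slack; as a rough illustration of the same logic, here is a minimal sketch of the central loop in Python. This is not a translation of the original code: the leapfrog step (second order, rather than the fourth-order McLachlan-Atela scheme), the water-bag widths, and all parameter values below are illustrative choices.

```python
import numpy as np

def magnetization(theta):
    """Two components of the magnetization, mx and my."""
    return np.cos(theta).mean(), np.sin(theta).mean()

def force(theta):
    """Mean-field force on each particle for the HMF model."""
    mx, my = magnetization(theta)
    return -mx * np.sin(theta) + my * np.cos(theta)

def energy(theta, p):
    """Specific energy: <p^2>/2 + (1 - mx^2 - my^2)/2."""
    mx, my = magnetization(theta)
    return 0.5 * (p ** 2).mean() + 0.5 * (1.0 - mx ** 2 - my ** 2)

def leapfrog(theta, p, dt, nsteps):
    """Second-order symplectic (leapfrog) integration of the HMF dynamics."""
    f = force(theta)
    for _ in range(nsteps):
        p = p + 0.5 * dt * f                     # half kick
        theta = (theta + dt * p) % (2 * np.pi)   # drift
        f = force(theta)
        p = p + 0.5 * dt * f                     # half kick
    return theta, p

# Water-bag initial condition: uniform in a rectangle of phase space
# (the widths here are illustrative, not the ones used in the lecture).
rng = np.random.default_rng(1)
n = 1000
theta = rng.uniform(-1.0, 1.0, n)
p = rng.uniform(-0.5, 0.5, n)

e0 = energy(theta, p)
theta, p = leapfrog(theta, p, dt=0.05, nsteps=2000)
e1 = energy(theta, p)
print(f"energy drift: {abs(e1 - e0):.2e}")  # stays small: the integrator is symplectic
```

Plotting (theta, p) as a scatter or density plot at successive times, with a larger n, reproduces the kind of filamentation movies shown above.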
Python, Python, Python. So, the Python code would be 10 lines; this is 100 lines, but I think the Python code would be 10 lines, because you will use vectors and everything like that, and also the graphics; it will be much shorter in Python. But maybe you understand the logic, and it's easy to translate it into Python. By the way, are there artificial intelligence networks that translate Fortran into Python? Perhaps, yes, it should be easy, I think. Okay, so, the last part of the lecture of the week. I don't understand, when it starts doing like this, I go crazy. Okay, so, this is the Vlasov equation for the HMF model, a very simple partial differential equation. Just to resume: the distribution function is a function of theta and p, and you have to solve this partial differential equation, but with a caveat: V is itself a functional of the distribution, so it's a nonlinear partial differential equation. Here it is linear, okay, but this term, because V is a function of f, makes the equation nonlinear. So it's a highly nontrivial nonlinear partial differential equation. But you know what V is: V is the average potential. You see, mx and my are the two components of the magnetization: mx is the integral of f cosine theta, and my is the integral of f sine theta. You can prove, and I think this could be an exercise also, given this equation and these definitions, that the specific energy, which is now a functional, is conserved, and also that the momentum is conserved: the time derivatives of e and p vanish, they are constant. So this is the equation for the one-body distribution function. Each particle of my simulation will obey, on average, this equation, and will move by itself, because there is a momentum, so there is transport. You are seeing, for instance, the vortices that are
transported in the positive-momentum part; it's because of this transport term: to the right with momentum p if p is positive, to the left if the momentum is negative. And then there is a part which is due to the interaction; for instance, the formation of vortices is due to the interaction. But this is a closed set of equations, it's very interesting, and there have been a lot of mathematical studies of it. If you go to the literature, you will find that the mathematicians call it the Vlasov-HMF equation, because in the past they had studied more complicated equations, and for this set of equations it's much easier to prove some theorems in kinetic theory. So it's an interesting set. You could take a solver for this equation which is different from the particle solver: the solver that I gave you solves the Hamilton equations that are behind this partial differential equation, by integrating the trajectories of all the particles. This is the connection with Klimontovich: Klimontovich is the set of particles, no? And then the average gives you the Vlasov equation. But you can also start from the continuum equation and integrate it directly with a solver. So, what is, otherwise I don't have time, what is the Lynden-Bell theory? It's a very interesting statistical theory; I like it, so I like to teach it. It's due to a British astronomer, Donald Lynden-Bell, famous for other things that have to do with astrophysics, but he also has a very nice theory. The idea of constructing the Lynden-Bell entropy has to do with this picture. One can prove that, given a shape in the single-particle distribution function, if you fill the shape with a constant distribution, any shape will conserve its area; this is a theorem, it's a volume
conservation for the Vlasov equation. Okay, so how can a drop that you put into a glass of water mix, if the area should be conserved? It will be deformed, and then it will create filaments, and the stirring of this drop will never reach mixing, but the stirring can be so fine, so fine, that it is close to mixing. So take any little square that you put on these: if here it was lemon and this vodka, here it's all lemon, and here it's all vodka; in the end, if I take a small square here, the size of my drop, the size of my pointer, for instance, there will be more or less a certain percentage of lemon and a certain percentage of vodka. So it's not mixed, but it's almost mixed. How do you describe this mixing? That's the idea of Lynden-Bell. He says: okay, I have vodka, red vodka, and what I can do with my vodka is to move the vodka from here and put it there, but I cannot take vodka from here and move it here, because otherwise it would be blue vodka, and blue vodka is not the same vodka as I had before. So, this is how I see it, at the time Lynden-Bell didn't have this idea, but this is like Monte Carlo, no? If you want. But with a different interpretation: you can do this move, yes, but you cannot do this move, because you reduce the area, because you put mass on top of other mass. So you can see this as a sort of exclusion principle: in each part of your phase space you can have either zero vodka or one vodka, but you cannot have two vodkas. Vodka is like fermions, okay? And you will see that the statistics will look very much like Fermi statistics, although it's a classical system; it will look very much like Fermi statistics. Okay, so how do you cook up this idea? This is not exactly as Lynden-Bell did it; this is my own reconstruction, and in the book you will find a
better derivation with all the details. So, you take a system and you define microcells and macrocells. There are nu microcells of volume omega in a macrocell, and a macroscopic configuration has n_i microcells occupied with level f0 in the i-th macrocell, and the remaining ones occupied with level 0. This is my vodka: in each of the macrocells I put a certain number of microcells that are occupied and a certain number that are not occupied, and the distribution is coarse-grained on a macrocell. You will see that it is the coarse-grained distribution, f-bar, that has a growing entropy, not the fine-grained distribution f: the fine-grained distribution has an entropy which does not increase, while the coarse-grained distribution, the distribution that I average over a macrocell, will have an increasing entropy. The total number of occupied microcells is big N, such that the conserved total mass, which is nothing but the integral of the distribution over d theta and dp, equals big N, the number of occupied microcells, times omega, the volume of a microcell, times f0, which is the level of the vodka; f0 is how much vodka I put. It's sort of complicated; I've tried to put it all in one slide. And this is generic: you can do it for k levels, so in that case it was level 0 and level f0, but you can do it for k levels, there is no problem, you define the mass of each level. And there is a result, a very important result for the Vlasov equation, which is the fact that there is an infinite set of conserved quantities: the integrals of all functions of f, the Casimirs, are conserved in time. The Gibbs entropy, of course, is a particular Casimir, because it's f log f; so it does not increase, it is constant. But you can define a coarse-grained distribution for any set of levels; here, in the previous slide, it was restricted to two levels, but I could increase the number of levels. So what happens is the following: you
can prove a mapping, it's very interesting, from the Casimirs to the levels. You can prove that if I have a distribution with many levels, each level will be conserved, and the conservation of the infinite number of level masses corresponds to the conservation of the infinite set of functionals of f. This is a complicated theorem, but essentially it means the following: if you have zero vodka or one vodka, the one vodka and the zero vodka will be conserved; if you have zero vodka, one vodka, and two vodkas, the zero, the one, and the two vodka will be conserved. And this is the conservation of the Casimirs. Okay, you would like to know this distribution, but how can you get it? You would have to integrate the equations of motion, and you cannot. So the idea is to introduce a statistical mechanics, to do accounting: there is a given set of microstates, and you would like to count how many there are. And, okay, you remember what big N is, maybe you have lost it: big N is the total number of occupied microcells. There are macrocells, and there are microcells, and all these small black dots here, you count all of them, and they are big N; the number of all the small dots is big N. So, that's Boltzmann counting, no? Boltzmann counting is nothing but N factorial divided by the product of the n_i factorials, as if you had a gas of small dots: you have many boxes, the macrocells, and you distribute the dots in the boxes. But then you have an exclusion principle, because you cannot stack them, and this count, this term, is like Fermi counting. If you go to the book of Huang, that I attacked very, very strongly in my lecture, but okay, go back to the book of Huang and go to the chapter on Fermi statistics, it will have exactly this term, because
if I have a certain number of drops, and I have to put them in the macrocells, I first put one, and then I cannot put one on top of the other: so you have Fermi counting. So, this is Fermi, and this is Boltzmann. It's a very strange distribution, you don't find it in any book: it has two counting terms, one is Boltzmann counting, and the other one is Fermi counting. And now you do Stirling; this is why I like this theory, because it brings me back to the other part of my lectures. If I use Stirling's approximation and compute the log of omega, I get this entropy, which, if I rewrite it in terms of the coarse-grained distribution function, is nothing but a Fermi entropy: f log f plus one minus f times log of one minus f. So the Lynden-Bell entropy is nothing but the Fermi entropy, with an extra ingredient, which is the Boltzmann counting of the states of the macrocells. Now what I do is maximize this entropy at fixed energy; I've told you that the energy and the momentum are conserved, so I maximize this entropy under those constraints, and this is a very complicated variational problem. These are the consistency equations for the HMF model, and this is the coarse-grained distribution that I get for the HMF model; these are Fermi functions. I have to solve this variational problem, but there are no free parameters: given the initial condition, I solve this set of equations and I get the distribution, and then I compare the distribution with data. This is the comparison, with no free parameters. You see there are details that are not reproduced; here there were two peaks, and the peaks usually correspond to vortices, so this distribution is not perfect, but it's amazing that by this very simple theory you are able to reproduce it. And what does the theory say?
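In formulas, the two counting factors and the resulting entropy look roughly like this; a sketch in the notation above (nu microcells per macrocell, occupations n_i, a single level f0), to be checked against the full derivation in the book:

```latex
W = \underbrace{\frac{N!}{\prod_i n_i!}}_{\text{Boltzmann counting}}
\;\times\;
\underbrace{\prod_i \frac{\nu!}{(\nu - n_i)!}}_{\text{exclusion (Fermi-like) counting}} ,
```

and Stirling's approximation applied to ln W, written in terms of the coarse-grained distribution f-bar, gives the Fermi-like entropy

```latex
s[\bar f] = -\int \left[
\frac{\bar f}{f_0}\ln\frac{\bar f}{f_0}
+ \left(1 - \frac{\bar f}{f_0}\right)\ln\left(1 - \frac{\bar f}{f_0}\right)
\right] \mathrm{d}\theta\,\mathrm{d}p .
```

Maximizing s at fixed mass, energy, and momentum then yields a Fermi-function form for f-bar, with Lagrange multipliers fixed self-consistently by the initial condition.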
It says that the distribution depends on m0. You see that in these formulas, somewhere, it takes into account that the distribution depends on m0; it does not depend only on energy, it depends also on m0, on the properties of the initial state. I will be brief because I don't want to run over. Okay, this picture is recent, and there is a lot of computation in this picture. This is m0, and this is energy. I've told you that for this model, what statistical mechanics tells me is that there is a line here at 3 over 4, so 0.75: all states below this line are clustered, the particles are clustered, the magnetization is nonzero, ferromagnetic; and all states above are a gas, the particles are spread out. But I was finding in simulations that this is not true. Now, this black line is the Lynden-Bell line, the line that I obtained by solving this very complicated variational problem, and according to the theory it divides states that are clustered, here at small energy, from states that are a gas, in the initial convergence, so in the Vlasov equilibrium. In the Vlasov equilibrium, the states that are here are ferromagnetic, and the states that are here are paramagnetic. And you see that, okay, there are points where it fails miserably: here, no, it gives blue below the line; what is above the black line should be all blue, and what is below the black line should be all red. But on average it's not so bad. With a student from Brussels, we divided this big square into 100 small squares, even more, I think; you can see the size of each one, maybe it's one over 20, there are 20 by 20, something like that. And then we averaged the magnetization in the initial relaxation, before it goes to the Boltzmann equilibrium. So this is the Vlasov equilibrium, and there is a line of phase transitions which differs from the
Boltzmann line: the Boltzmann line would be a straight line at 0.75 irrespective of the value of m0, a transition at energy 3 over 4, while the Lynden-Bell theory gives me a phase transition, a non-equilibrium phase transition, on a line that reaches 0.75 only when m0 is 1; otherwise it lies below. I can compute this line analytically, and moreover, for this line of non-equilibrium phase transitions, it's a very interesting variational problem that you can solve, and you discover that there is a tricritical point here: the line is a line of second-order phase transitions that ends in a line of first-order phase transitions, and this is a point that we can compute exactly, 7 over 12. And something has also been done on other points more carefully. So, apart from the tour de force that you can imagine is behind all this theory, what is interesting, and what I would like to transmit before I close this lecture, is that there is a theory for the Vlasov equilibrium. It's maybe not a rigorous theory, but it's a theory that I like; it's very simple, and with this theory you can introduce and solve a variational problem. It's a theory that is exactly in the spirit of Boltzmann: you don't solve the partial differential equation, which would be very hard, and for which, for any initial state, you would have to solve a different equation; you solve a variational problem, which gives a prediction for the coarse-grained state. It's not the detailed state of the dynamics, it's a coarse-grained state; it's the same thing you do with a gas, when you want to understand more or less the pressure, the volume, and so on. Actually it's more detailed, because it gives the distribution function, not only the macroscopic variables; then from the distribution function I can derive the magnetization, the energy, the momentum, so the macroscopic variables. It's very
detailed information, and it can be used for different models. We have done it for the free-electron laser, and it describes the convergence to the first level for the free-electron laser quite well. It has also been used for systems like plasmas, in the papers by Yan Levin, and I put the review of Yan Levin in the references. And this is very nice, I think it's a very nice exercise. I don't know if this concept of coarse-grained entropy is used in other fields, but it's a very useful concept. It can be used for situations in which you don't have mixing but you have stirring of the distribution, so you have conservation laws that prevent you from supposing that the system is mixing; they can be different conservation laws, and if you take these different conservation laws into account, you define your proper coarse-grained entropy, and this coarse-grained entropy will reach a maximum somewhere, and this serves as a tool to solve the dynamics. So, I think we'll leave time for a few questions, a discussion if you want, about that: five minutes, three or four questions and discussion. A question? Yes? Yeah: so this stable state is due to maximization of this coarse-grained entropy? You can view it as this; it's a theory, not a theorem. There is a difference between a theory and a theorem: it's not a theorem, it's a theory. Yeah, so my question is: in this interpretation, for a long-range interacting system, sometimes it's like a competition: first entropy is the quantity that is more important, and after some long time energy starts being relevant, that's the idea?
Yes, yes, on the very long time the only entropy is the Boltzmann entropy, and the system will recognize, through granularity, that the right function to maximize is the Boltzmann entropy; as the trajectory invades all the phase space, you will reach the state of maximum Boltzmann entropy. But on a much shorter time scale, which is not too short, it is N squared, so if you have a system of a thousand particles, it will be a million proper times. You see that there is a convergence to some value, and you don't know anything about this value; what's going on? What's going on is this stirring of the distribution: the distribution is not mixed, it's stirred, and the Lynden-Bell entropy takes into account the stirring of the distribution by introducing a sort of coarse-grained scale over which this Fermi-like entropy reaches a maximum. If I solve the optimization problem using this entropy, I get the line, the black line that I've shown. It's not perfect, you see that there are blue parts below the line, but it separates two regions of initial states: one region of initial states where you converge to the Boltzmann equilibrium quite well, and another in which the convergence is much slower, and these are exactly those points in my simulations that had trouble reaching the temperature-energy relation that I get from solving the equilibrium model. In the end you will be Boltzmann, but on short times you will be Vlasov; this is more or less the message, and collisions will bring you to this final state at some point. Have you thought about this at a cosmological level? Sure, sure. It's in the review; so, I'm not an expert in this field, but for instance Pierre-Henri Chavanis is working a lot in this direction in Toulouse. So, for instance, this could explain why you observe many galaxies of many different forms, or, I mean, solar systems that are very different one from the other.
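The Fermi-like coarse-grained distribution behind the black line can be explored numerically. Below is a minimal sketch that evaluates it on a phase-space grid for the HMF model; the function name and all parameter values (f0, beta, mu, mx, my, the grid) are illustrative choices of mine, not the self-consistent solution from the lecture:

```python
import numpy as np

def lynden_bell_f(theta, p, f0, beta, mu, mx, my):
    """Fermi-like coarse-grained distribution of Lynden-Bell theory for HMF.

    eps = p^2/2 - mx*cos(theta) - my*sin(theta) is the one-particle energy;
    beta and mu play the role of inverse temperature and chemical potential.
    The distribution is bounded between 0 and the initial level f0.
    """
    eps = 0.5 * p ** 2 - mx * np.cos(theta) - my * np.sin(theta)
    return f0 / (1.0 + np.exp(beta * (eps - mu)))

# Phase-space grid (illustrative resolution and momentum cutoff).
theta = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
p = np.linspace(-3.0, 3.0, 129)
TH, P = np.meshgrid(theta, p, indexing="ij")
dth, dp = theta[1] - theta[0], p[1] - p[0]

# Illustrative parameter values; a real calculation fixes them self-consistently.
f0, beta, mu = 0.25, 4.0, 0.1
mx, my = 0.5, 0.0

fbar = lynden_bell_f(TH, P, f0, beta, mu, mx, my)

# Mass and magnetization of this trial distribution.
mass = fbar.sum() * dth * dp
mx_out = (fbar * np.cos(TH)).sum() * dth * dp
print(f"mass = {mass:.3f}, mx from fbar = {mx_out:.3f}")
```

A real calculation iterates on (beta, mu, mx, my) until the mass, energy, momentum, and magnetization computed from fbar match the conserved values of the water-bag initial condition; those are the consistency equations mentioned above.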
So it could be that the reason why the sky is so varied is that it's not a gas, and the long-range interaction brings the universe into a mixture of states that are interesting for us also, although we are a bit stupid and we do worse, so at some point we will disappear. But also in plasmas, for instance, this idea has been used. In plasmas you really observe systems that have really big problems relaxing to equilibrium, and you see that the plasma is confined into some states. Yan Levin thinks that the Lynden-Bell theory is not perfect, and he does not like it the way I do: I like the Lynden-Bell theory very much, he does not. He has alternative theories based on resonances. And, okay, my point about the Lynden-Bell entropy is that it is very simple, so even a baby can understand it, and it's general: there is a principle behind it, and there is a symmetry behind it. I like it so much that I tend to overstretch it, and to say that it works even when it doesn't work; this is my failing, because I like it. It's a beautiful theory, and it's an example where I mix two different statistics, the Boltzmann one and the Fermi one. So it's a very nice theory to teach, and it's simple: you change the model and the entropy is the same, you don't have to change the entropy when you change the model, so it's sort of universal. Maybe when it was formulated, the power of the theory was not clear; Lynden-Bell formulated it as a new entropy, but in fact it's not a new entropy, you find it in books. You just have to do things properly and define these micro and macro cells. And by the way, the same theory was developed independently for the Euler equation; it's very similar to the Lynden-Bell theory, formulated for the Euler equation.
Even there, in two dimensions, there is a conservation law for the Euler equation, and you apply it to the climate, to understand why you see cyclones, anticyclones, heat waves, why you see all this complexity in the climate. Freddy Bouchet, for instance, was my postdoc for three years and we developed a big part of this approach together; he is now working on heat waves, he has specific programs to understand heat waves, because these are sorts of equilibria that appear regularly in the climate and you don't know where they come from. Climate is chaotic, so it should be mixing, but we know that climate is not mixing, because we see from time to time the same patterns. So the idea is that these theories could serve as a frame, not as a specific and predictive tool, but as a frame for understanding the complexity of the climate, for instance. This is another system with long-range interactions: fluids in 2D are long-range. The correlation function in fluids in 2D does not decay sufficiently fast; it decays sufficiently fast in 3D, but in 2D it does not. So the Euler equation is a good example of a long-range system. And then you put in the topography and you get the equations that everybody uses for large-scale climate. I taught it at the master's level. These are very simple models, but they hide a lot of complexity that is very interesting to project onto the complexity of the universe, of the climate, but I don't want to exaggerate, because I tend to be modest. So, for instance, similar things have been done for the Euler equation. You can study a very interesting problem, slightly more difficult, which is a shear flow in Euler: you take a 2D fluid and you shear it, and a vortex forms at the center of the fluid, which grows by the instability. There is an opening of an eye in the center of the flow; it's an instability, a Rayleigh instability. And that's exactly the same as in the HMF.
You've seen in my pictures the fact that it shears and then at some point it opens an eye in the center. You can formulate that problem exactly as a Rayleigh instability, but in Euler it is slightly more difficult, and there are still problems in Euler, technical problems. This shear instability you can see in this model; it's a very simple model, and there is shear because you have a part with positive momentum and a part with negative momentum: particles are going in this direction because they have positive momentum, and in this direction because they have negative momentum. So there is shear, and in the middle, below a certain kinetic energy, you get the opening of this eye in the center of the phase space. So these phenomena, you find them here and there. For instance, I have a movie; I mentioned to you the confined plasma. So maybe I'm doing too much, I don't know, but I have a movie that I took from my visit to Japan. I think it's this one. Yes. This is not my model. This is what happens for, now I know that it's put here. So, I told you that you can confine electrons onto magnetic lines, the swirl of magnetic lines, and then you confine the electrons like this, and then you can make a cut here somewhere and you can observe the charge density here. This is exactly that: the charge density initially is on a ring, and as time evolves it forms these swirls, these vortices. And there is a theory for this charge-density evolution that is exactly the same as the Lynden-Bell theory: this charge density is described by maximum entropy. Again, it's not perfect, but it can be done. This is a lab experiment. It sounds quite general. Sorry? It sounds like it could be really general. Yeah, maybe it's general but wrong; this is my suspicion. If a theory is too general it can be wrong, but it's beautiful, so I like to teach it. Okay, very good. What about the coffee? So, thank you very much, Stefano. We reconvene in 15 minutes or so.