[The opening remarks here are garbled in the recording; the discussion concerns the probability distribution of the demon energy in the Creutz algorithm.] So, to the next order, the distribution of the demon energy is P(E_D) proportional to exp(−β_m E_D − β_m² E_D² / (2 C_v)), where β_m is the microcanonical inverse temperature and C_v is the extensive specific heat: C_v grows like N, the number of particles. Plus, of course, there are higher-order corrections. So you can see that, since the energy of the demon is of order one and β is intensive, the second term carries a factor 1/N, and in the large-N limit it will be small. That is very important for long-range interactions, because C_v can be both positive and negative. In case it is negative, this term would get a plus sign, and the distribution would be unstable. But the factor 1/N avoids the instability due to the negative specific heat, and the algorithm works also in the negative specific heat region. So this is the first thing; call it exercise one. Then a much simpler one, exercise two, which I already mentioned yesterday: for the Blume-Capel model, use the expressions for the entropy and the energy, and if you want also the bounds like the one I proved — the number of defects per spin is larger than zero and less than 1 − m — to draw the accessible region in (ε, m). This is the region that, you remember from yesterday, looks something like this. You have to solve some quadratic equations; yesterday I went through it very fast. And then exercise three: you remember the estimate of the... sorry, this is for the Nagle-Kardar model.
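As an aside, the demon dynamics whose energy distribution was just discussed can be sketched in a few lines. This is my own minimal toy illustration for a nearest-neighbor Ising chain (all parameter choices are hypothetical), not the production algorithm of the lectures:

```python
import random

def creutz_ising_chain(n=1000, e_demon_init=8.0, steps=200_000, seed=1):
    """Minimal Creutz (demon) microcanonical dynamics for a periodic
    1D Ising chain with H = -sum_i s_i s_{i+1}.  The spin system plus
    the demon conserve the total energy exactly."""
    rng = random.Random(seed)
    s = [1] * n                     # start in the ground state
    e_demon = e_demon_init          # the demon carries the initial energy
    demon_samples = []
    for _ in range(steps):
        i = rng.randrange(n)
        # energy change of the chain if spin i is flipped
        de = 2 * s[i] * (s[i - 1] + s[(i + 1) % n])
        if de <= e_demon:           # demon pays (or absorbs) the energy
            s[i] = -s[i]
            e_demon -= de
        demon_samples.append(e_demon)
    return demon_samples

samples = creutz_ising_chain()
# To leading order P(E_D) ~ exp(-beta E_D), so the mean demon energy
# gives an estimate of the temperature of the chain.
mean_ed = sum(samples) / len(samples)
print("mean demon energy:", mean_ed)
```

The demon energy stays non-negative by construction, and since the demon is a single degree of freedom its fluctuations do not perturb the extensive energy of the chain.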
This exercise can be done also for the Blume-Emery-Griffiths model, or for the Blume-Capel: if someone wants, they can try the Blume-Capel too, it is possible. So, you remember the phase-space volume that I derived for open boundary conditions in the large-N limit was something like this. Prove it also for periodic boundary conditions. I proved it only for U odd; try to extend the proof to U even. That is the first step. And the second step: do the proof for periodic boundary conditions — what is the exact counting factor? OK. So this is it in terms of exercises; I will leave them to you. OK, yesterday I finished with this slide on the min-max method. I would like to motivate it, to give you a feeling of how to prove it. I will not do much of the proof; it is just a justification. You see that s(ε), the microcanonical entropy, is the sup over x and the inf over β of βε − φ(β, x), where φ plays the role of a potential as a function of x. φ is defined here: φ(β, x) is simply β f(β, x), the rescaled free energy before taking the inf over x; the canonical β f(β) is then the inf over x of φ(β, x). So, how to prove this? One way is the saddle-point method. You know that Ω, the phase-space volume in the microcanonical ensemble, is the trace of δ(E − H). Then you represent the delta by its Laplace representation, so Ω becomes an integral along a vertical path in the complex plane, from β − i∞ to β + i∞ at fixed real part β, of dλ Z(λ, N) e^{λE}, where λ is the integration variable and Z(λ, N) is the trace — trace meaning the trace over the dynamical variables of H — of e^{−λH}. So you have to perform this integral. In the complex λ plane there could be saddle points away from the real axis, but in our book we prove that the only relevant saddle point for this integral is on the real axis.
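Collecting the spoken formulas into one place (this is my reconstruction of what is on the slide, in the notation just introduced):

```latex
\Omega(E) \;=\; \mathrm{Tr}\,\delta(E-H)
\;=\; \frac{1}{2\pi i}\int_{\beta-i\infty}^{\beta+i\infty}\! d\lambda\;
e^{\lambda E}\, Z(\lambda,N),
\qquad
Z(\lambda,N) \;=\; \mathrm{Tr}\, e^{-\lambda H}.
```

Writing the partition sum as an integral over the macroscopic variable, `Z(λ,N) = ∫ dx e^{−Nφ(λ,x)}`, gives

```latex
\Omega(E) \;\propto\; \int d\lambda \int dx\;
e^{\,N\left[\lambda\varepsilon - \phi(\lambda,x)\right]},
\qquad
s(\varepsilon) \;=\; \sup_x \inf_\beta
\left[\beta\varepsilon - \phi(\beta,x)\right],
```

where the inf over β comes from the saddle point in λ on the real axis (a minimum along the real direction) and the sup over x from the saddle point in the x integral.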
And if there were saddle points off the real axis, they would create oscillations in Ω, and you could get negative values of Ω. The proof is quite lengthy — I can give you the reference in my book — but one can prove that the main saddle point is on the real axis, and on the real axis it is a maximum along the imaginary direction and a minimum along the real direction. So you should concentrate on this saddle point for the evaluation of the integral at large N. And since this is a saddle with a maximum along the imaginary axis and a minimum along the real axis, you see that here you have to take an infimum over β, the minimum along the real axis. Then, when you perform the integral in x — if you express the partition sum in the form ∫ dx e^{−Nφ(β,x)} — you have to do a saddle point in x, which is a sup over x. So essentially the difficult part of the proof is to show that the main contribution to the integral comes from a saddle point on the real axis, and that this saddle point is a minimum in the direction of the real part of λ and a maximum in the direction of the imaginary part. I will put the book on the web and you will find the proof there. It is quite a lengthy proof: you have to exclude some regions of the λ plane and prove that they give subdominant contributions. It is a bit involved, but in the end it is a proof — maybe not in the mathematical sense, but acceptable for theoretical physics. So there are two formulas here. One is the formula for the entropy in the canonical ensemble; the other is the formula for the entropy in the microcanonical ensemble, where I have inverted the sup and the inf. And the inequality between sup inf and inf sup tells you that the entropy in the microcanonical ensemble is always smaller than or equal to the entropy in the canonical ensemble — not always equal, because we have seen examples. I cannot raise the blackboard... where is it? Ah, OK.
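The inequality invoked here — the one the lecturer leaves as a short exercise — can be spelled out in two lines. For any function F(x, β) and any fixed pair (x̄, β̄):

```latex
\inf_{\beta} F(\bar{x},\beta) \;\le\; F(\bar{x},\bar{\beta})
\;\le\; \sup_{x} F(x,\bar{\beta}).
```

Taking the sup over x̄ on the left and then the inf over β̄ on the right,

```latex
\sup_x \inf_\beta F(x,\beta) \;\le\; \inf_\beta \sup_x F(x,\beta),
```

and with F(x, β) = βε − φ(β, x) the left-hand side is the microcanonical entropy while the right-hand side is the canonical one, so s_micro(ε) ≤ s_can(ε).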
Did you copy everything here? OK. So, typically, if you remember, we had a situation of this kind, with a bounding line — it is called a supporting line. This will be the entropy in the canonical ensemble, the concave envelope s**, and this is s, the microcanonical entropy, which is always smaller than or equal to the canonical one. So you can have a look at how to prove this. It is a simple proof that the sup inf is less than or equal to the inf sup: it is just playing with the two variables x and β, and you will prove it easily. OK. So let us now see an application of this; it is a useful tool. Let us solve another model, which is very close to the Nagle-Kardar model, but now instead of discrete spin one-half variables I have a cosine. So it is an XY model: the spins are now planar (XY) vectors, and the interaction is the scalar product of two vectors. There are two terms in the Hamiltonian. One has coupling J and the Kac scaling 1/N, and is the product of all vectors with all vectors, so it is a Curie-Weiss term of XY type. I added a constant to the energy for a simple reason: the minimum of the energy will be around the point where all the vectors are aligned — so this is ferromagnetic — and in this minimum the Hamiltonian is quadratic, because if you expand the cosine, the constant one is cancelled by the first term and the Hamiltonian becomes quadratic. This is just to have a good normalization of the energy. And then we are on a one-dimensional lattice, and there is another, short-range term, which is the product of two spin vectors on neighboring sites, with coupling K. So it is very similar to the Nagle-Kardar model, but the variables are continuous, so I cannot simply count configurations as in the discrete case. So how do we handle a continuous set of variables? The trick is exactly this one: I use the Hubbard-Stratonovich transformation to introduce a field z, OK.
The field comes from the Gaussian transformation on the cosine squared. In fact, I am a bit sloppy here: there are really two fields. If you do the exercise properly, there is a field for the sum of the cosines and a field for the sum of the sines, z1 and z2. But due to the invariance under rotations, you can break the symmetry in the direction of the cosine, because the result cannot depend on a global rotation: I can rotate all the angles by the same amount and, since only angle differences appear in this model, it is invariant. If you do not like doing it this way, do it with the two fields, but then you will have to pass to polar coordinates — and you see here the trace of the polar coordinates, because there is a z here: the modulus of the vector of the two fields z1 and z2. Anyway, I think I have put on Slack a paper where this calculation is done carefully. So you have the nearest-neighbor term and the Hubbard-Stratonovich term, and then you have to do the integral over the angles θ. Now, this is not easy, because you have to solve an eigenvalue equation for an operator: it is essentially the same as diagonalizing the transfer matrix, but for continuous variables. So this is the operator equation that you have to solve: you have to find the eigenvalues and eigenfunctions of this operator. The eigenfunctions are easy — they are plane waves — so you can indeed solve it and get the maximal eigenvalue of this operator, which is a function of the couplings and of the field z. And then, in order to get the microcanonical entropy, you just use the formula: sup over z and inf over β of the expression here. I think here there is a u where it should be an ε; there is a mistake that I will correct. OK, so this can be done numerically, of course, the calculation.
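Before looking at the result, here is a minimal numerical illustration of the sup-inf evaluation. To keep it self-contained I use the Curie-Weiss φ(β, m) — an assumption made for illustration, much simpler than the XY-model operator problem on the slides — for which the exact microcanonical entropy is known and can be compared against:

```python
import math

J = 1.0  # Curie-Weiss coupling (illustrative choice)

def phi(beta, m):
    """Rescaled free energy phi(beta, m) = beta*J*m^2/2 - ln(2 cosh(beta*J*m)),
    as obtained from the Hubbard-Stratonovich transformation of the
    Curie-Weiss model."""
    return beta * J * m**2 / 2 - math.log(2 * math.cosh(beta * J * m))

def entropy_minmax(eps, m_grid, beta_grid):
    """Crude grid evaluation of s(eps) = sup_m inf_beta [beta*eps - phi]."""
    best = -float("inf")
    for m in m_grid:
        inner = min(beta * eps - phi(beta, m) for beta in beta_grid)
        best = max(best, inner)
    return best

m_grid = [i / 200 for i in range(200)]           # m in [0, 1)
beta_grid = [0.02 * i for i in range(1, 501)]    # beta in (0, 10]
eps = -0.18                                      # so that m0 = sqrt(-2*eps/J) = 0.6
s_mm = entropy_minmax(eps, m_grid, beta_grid)

# exact Curie-Weiss microcanonical entropy for comparison
m0 = math.sqrt(-2 * eps / J)
p = (1 + m0) / 2
s_exact = -(p * math.log(p) + (1 - p) * math.log(1 - p))
print(s_mm, s_exact)
```

The two numbers agree to grid accuracy; for the XY model of the lecture one would replace φ by βJz²/2 minus the log of the maximal eigenvalue of the transfer operator and proceed in exactly the same way.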
And we get this phase diagram, which is very similar to that of the Nagle-Kardar model. For negative K we have the continuation of a second-order line that ends at the canonical tricritical point; in the canonical ensemble you then have this line of first-order phase transitions. In the microcanonical ensemble, instead, you reach this point, which is the microcanonical tricritical point, and then you have the branching of two lines. So you see that the structure of the Nagle-Kardar model repeats exactly. So, using this min-max — or sup inf / inf sup — method, it is possible to compute the microcanonical entropy for a model with continuous variables, and one can get information on this class of models. OK, as a final slide I have put this, which is known to everybody: the transfer matrix of the one-dimensional Ising model. I will give you the slides. I put the explicit expression of the larger of the two eigenvalues. And the exercise is: use the maximum eigenvalue to derive the free energy of the Nagle-Kardar model in the canonical ensemble, OK? Derive the critical line and the tricritical point in the canonical ensemble. And then, as the third part, use the min-max method to derive the entropy. Because there is another way to derive the entropy of the Nagle-Kardar model: you remember that in the Curie-Weiss term you can use the Hubbard-Stratonovich transformation to introduce an auxiliary field, and this transforms the Nagle-Kardar model into a one-dimensional Ising model in an external field.
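For the transfer-matrix exercise just stated, here is a minimal numerical sketch of the first step (my own illustration with standard conventions: J is the nearest-neighbor coupling, h the external field; the couplings used below are hypothetical):

```python
import math

def lambda_max(beta, J, h):
    """Largest eigenvalue of the standard 1D Ising transfer matrix:
    lambda = e^{bJ} cosh(bh) + sqrt(e^{2bJ} sinh^2(bh) + e^{-2bJ})."""
    return math.exp(beta * J) * math.cosh(beta * h) + math.sqrt(
        math.exp(2 * beta * J) * math.sinh(beta * h) ** 2
        + math.exp(-2 * beta * J)
    )

def free_energy_per_spin(beta, J, h):
    # in the thermodynamic limit f = -(1/beta) ln lambda_max
    return -math.log(lambda_max(beta, J, h)) / beta

# sanity check at h = 0, where f = -J - (1/beta) ln(1 + e^{-2 beta J})
beta, J = 1.3, 0.7
f0 = free_energy_per_spin(beta, J, 0.0)
exact = -J - math.log(1 + math.exp(-2 * beta * J)) / beta
print(f0, exact)
```

For the Nagle-Kardar exercise one would then treat h as the Hubbard-Stratonovich field and optimize over it, as described next.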
Then you use the transfer matrix to solve this model, where the external field is now the Hubbard-Stratonovich field, over which you integrate. Instead of taking the sup over x to get the free energy, you take the inf over β — so you invert, as in the min-max method — and this gives you a formula for the entropy of the Nagle-Kardar model, which should be the same as the formula that we derived before. Of course, you first have to substitute the number of defects using the relation that gives the energy in terms of the number of defects and the magnetization, because the min-max method does not know about the number of defects. So you take this expression, you replace u by ε/K plus (J/K) m² (as on the slide), and you get an expression in ε and m; and then you can check that when you maximize over m you get the same expression as when you take the sup over x in the min-max. So the min-max is also an alternative method to get the entropy of the Nagle-Kardar problem. It is a bit involved, but I hope you have understood. So this ends this lecture, and now we will go to another topic. We have learned that ensemble inequivalence is present also for spin models with both nearest-neighbor and mean-field interactions. We are now working on models with nearest-neighbor and next-to-nearest-neighbor interactions, which are interesting, and ensemble inequivalence remains also for this class of models. Simulations in the microcanonical ensemble can be performed using the Creutz algorithm, and they reveal ergodicity breaking, with exponentially long transition times when more than one local entropy maximum exists — that is, metastability is present. And then we have learned that the min-max method is a useful tool to obtain the microcanonical entropy for the specific class of models in which the partition sum can be expressed as an integral over a dummy variable, which in the case of the Hubbard-Stratonovich transformation is nothing but
the Gaussian field that arises when you perform the Gaussian transformation to eliminate the square in the Hamiltonian. OK, so now it is perhaps not the right moment, because we are in the middle of the session, but we go to a totally different topic. This is the end of lecture 2, and I am in the middle of lecture 4 — I prepared much more material than needed, unfortunately, as usual. So let us go to lecture 3, and now you can relax, because I will say trivial things that surely everybody knows, just to warm up. This lecture is devoted to large deviations, and I will show how you can use large deviations to compute entropies. The entropy will be the analogue of the rate function in large deviation theory, and the free energy will be what is called the moment generating function. You know that in large deviation theory the two are related by a transform, and you will see that I will use this language to compute, for some models, the rate function — which is minus the entropy — and then connect it with the moment generating function, which is nothing but the free energy. That will be the topic of this lecture, and I will show you examples of calculations using a theorem by Cramér. There are other important theorems in large deviations that could be used, for instance Sanov's theorem, but even the other, more complicated types of models that I will solve use only Cramér's theorem. So I will state Cramér's theorem, and we will use it to solve the three-state Potts model, which is one of the simplest examples I know of ensemble inequivalence; and then we will use it also to solve the modified XY model and the model of a free-electron laser. And we will discover, in the mean-field φ⁴ model, that besides a negative specific heat you can also have a negative susceptibility — another response function that in the microcanonical context can be negative. OK, so let us start by warming up. You will see this law of large numbers again later in the context of large deviations,
but it is better to recall it. So, consider a sample of N independent, identically distributed random variables x1, ..., xN, each with PDF p(x) and with expectation μ given by the integral of x p(x). Then you take a sample mean — this is the usual object in large deviation theory: you average over a finite number of extractions of the random variable. For instance, you extract 10 random numbers and take the average of the numbers you have extracted with this rule. Then the law of large numbers states that the probability that this sample mean (this is the usual name) converges to μ in the large-N limit is 1. This is called the law of large numbers, but it is not a law, it is a theorem, as you know very well, and there are several ways to prove it. There are also several versions of the law of large numbers — the weak law, the strong law, and so on. I do not want to state all of them, because otherwise this would become a course in probability; those who are interested can look them up. What is important is this concentration of the probability onto the average as you increase the size of the sample: the sample mean is a good estimate of the average, and it becomes better and better as you increase the size of the sample. This is part of the warm-up of this lecture. Next, let us state the central limit theorem. Consider a function g of the random variables and the sample mean of this function, and then define the difference between the sample mean and the average, divided by the square root of the variance of the sample mean. The variance of the sum, as you know, is N times the variance of the function, so you can rewrite this expression with a square root of N in the numerator. If we call σ² the variance of g, then the central limit theorem can be stated in these two forms. In one form,
you say that the probability that this ratio lies between A and B converges, in the large-N limit, to the Gaussian integral with limits A and B. In the other form, you say that the distribution of the sample mean of g is Gaussian with variance σ²/N. So what happens is that, irrespective of what g(x) is, the distribution of the sample mean will be a Gaussian around the average, and the width of the Gaussian shrinks to zero — which is another way of seeing the law of large numbers: as you increase the sample size, the probability concentrates. OK, so let us try a calculation where we can compute the probability explicitly and check whether we can see the traces of these two theorems, the law of large numbers and the central limit theorem. We will also discover something about the probability distribution of the sample mean: although it is Gaussian near the center, if you go away from the center it is not Gaussian at all. If you read the presentation of large deviation theory by Touchette, there is an interesting chapter that asks: what is the probability of a very unlikely event? So suppose I toss coins. What is the probability that, over ten throws, I get a string of ten heads or ten tails? Of course the probability of that particular string is the same as that of any other sequence, because it is one event out of 2^10. But now take the sample mean — let us count heads as minus one and tails as plus one. The probability that the sample mean takes its extreme value is much, much smaller than the probability of a typical value, because there are many ways of obtaining the typical values and only one of obtaining the extreme one. The reason lies in the binomial distribution, which, as you will see, is also related to large deviations: you are far away from the center of the distribution, because you are asking for a very unlikely event, and you can no longer use the Gaussian estimate for such events. OK, so I will do exactly this calculation, which, by the way, is the same as evaluating the entropy as a function of the magnetization in the
Ising model. So let us take heads and tails: I throw, and the results xk are either plus one or minus one. Is it OK? You are not sleeping? OK. Yesterday I had a good dinner with wine, so I hope you also enjoyed the evening. So this is clear to everybody, I think. OK, so this is the only calculation of today which is slightly more demanding. I define the sample mean over the random variables xk, and I want to know the probability that the sample mean is x. I can get this probability very simply by counting: in the sequence I fix the sum, and the sum is Nx. I get a certain number of heads and a certain number of tails, and I can permute all the heads and all the tails, because I get the same sum; but I have to divide by N+ factorial and N− factorial, because those permutations give the same configuration, and I have to divide by 2^N in order for this number to be a probability — it should be positive and less than 1. Now I rewrite N+ and N− using the sum, which is Nx, I use the Stirling approximation, and I get this. It is very illustrative of the fact that you have a large deviation principle: the log of the probability, neglecting subdominant terms in the Stirling approximation, can be approximated, up to a prefactor, as minus N — the number of throws — times a function of x, which you recognize very well: it is nothing but minus the entropy of the Curie-Weiss model as a function of the magnetization. This is the so-called rate function of large deviation theory. The rate function I(x) has a minimum at x = 0: the most probable value — which is the average of N throws, each throw being plus or minus one — is 0, and this is the law of large numbers. And you can check that the function is quadratic around x = 0, and this is the central limit theorem. But there is much more information
in this function: you also get the probability far away from the average. So you can get the probability of states of the system — magnetizations, I would say — that are very far from zero magnetization, and that is why this theory is so useful for phase transitions: it tells the whole story about the probability of a function of the order parameter. It is a way of getting the probability distribution of the order parameter, and it is used in many different areas of statistical mechanics. So the coin-toss experiment can be thought of as a microscopic realization of a chain of N non-interacting spins, and I(x), which corresponds to the opposite of the Boltzmann entropy, characterizes the macrostate with a given fraction of up spins in the configuration, which is related to the magnetization. The rate function and the large deviation function are the same thing — different names that arise depending on how you formulate the large deviation principle: if I formulate it by saying that the probability is of exponential type, then the function inside the exponential is the rate function. OK, so this is an extremely simple example, but you can amuse yourself by making it slightly more complicated. Oh, I don't have it here... ah, yes, OK — I did it using Cramér. You can do it directly too: bias the coin, and try to get the binomial result when the coin is biased, and try to do it before we do Cramér. So I will now present Cramér's theorem. The formulation is as follows. Take a random variable — a vector random variable, because I will need vector random variables to solve some of the models — with a given PDF, and then take a sample of such vector random variables x, in such a way that the sum of these vector random variables, divided by their number, is the sample mean. Then the question solved by Cramér is: what is the PDF of the sample mean of these IID random variables? You have to do it in steps. First of all you have to compute the generating function, and the generating function is the average
over the PDF of the variable x of the exponential of the scalar product λ·x; you have to do this average. Then, once you have performed the average, if ψ(λ) is finite and differentiable — these are the hypotheses of the theorem — then the probability that the sample mean is x behaves as the exponential of −N I(x), so it obeys a large deviation principle. And, what is important, the theorem is constructive, because it gives a way of computing I: I is the Legendre-Fenchel transform of log ψ. OK, so now you understand why I introduced the Legendre-Fenchel transform and all the rest, because this is a very important result. OK, so let us use it for the exercise where the coin is biased. The probability distribution dμ of the biased coin — which in the balanced case is the sum of two deltas — in the biased case is (1 − α) times the delta at x = 1 plus α times δ(x + 1); sorry, there is a parenthesis missing on the slide. So you can do it with Cramér: you compute the moment generating function ψ, and then you do the Legendre-Fenchel transform — I leave it to you as an exercise. It is a bit involved and requires a little work: you see an exponential of λ and the terms with α, and then you have to invert in order to do the Legendre-Fenchel transform. And you get this large deviation function, which in the unbiased case is centered at zero; in the biased case it is not centered at zero, because the average has moved away from zero, and also the shape of the large deviation function is no longer symmetric around x = 0. But you see that it is a tool that can be adapted to solve slightly more complicated problems in probability. Now, this slide is where I tried to adapt a method that was used before in two different areas of research; we used it for spin systems. I learned this method from a postdoc of mine, Freddy Bouchet, who came to Florence for three years, and he
was coming from a group working on geophysics. So it is very important to know this: it is very important that people bring new methods from very different areas of research. The method had been used to study the statistical mechanics of the Euler equation — a completely different area of research — but then we read the papers carefully and understood that it was possible to adapt it to spin systems. And I will present the method to you; it is very powerful and allows one to compute the entropy of a really wide class of models. The aim is to treat models in the microcanonical ensemble as generally as we can. So, first of all, the Hamiltonian is a function of the dynamical variables, and you have to reduce this set of dynamical variables to a finite set; this turns out to be particularly simple when the dynamical variables can be expressed in terms of mean fields. For instance, in all the Hamiltonians that you have seen, you can express the energy through the order parameters m, q — only a finite set of variables. Then you have to prove that the rest is small. If you express the Hamiltonian in terms of these global variables γ — so the Hamiltonian H_N is a function of the dynamical variables ω_N, but I would like to write it as a function of the global variables γ, which are themselves functions of the dynamical variables ω_N — then I would like the remainder to be small when N goes to infinity. Once I can do this, I can say that the Hamiltonian is well represented by a set of global variables γ. Then, using Cramér's theorem, I can compute the entropy function — the large deviation function, if you want — which is the volume of the phase space of the Hamiltonian at fixed values of these global variables. So, for instance: you have seen in the examples that I fixed the number of defects, which gives me u, and then I fixed the magnetization, which gives me the global variable m, and I computed the entropy with respect to these global
variables. The energy can be expressed in terms of these global variables — you have seen the relation for both models — so the energy can be written as a function of the global variables. You will see how I use Cramér's theorem when we do it. Then there are two problems that you can solve, according to the ensemble. In one problem you maximize the entropy over the global variables at fixed energy, and this is the microcanonical problem. In the other you minimize the functional βε(γ) − s(γ), the Legendre transform of s(γ), and this gives you the canonical free energy. I think this is a nice point to make one remark: the optimizations are different. In one case you take a sup of one function; in the other case you take an inf of a different function. And it is now explicit and evident that the two problems can of course give the same result, but it can also happen that they give different results — and this is in fact ensemble inequivalence. Solving the first problem you get the microcanonical entropy; solving the second you get the canonical free energy; and the canonical free energy is not, in general, the Legendre transform of the entropy, because the entropy can be non-concave. This general principle has in fact been used also in areas different from statistical mechanics. So I think it is time now to stop, because next I would like to solve the three-state Potts model, which is very simple, using this method, but I prefer to stop here: you now have the ingredients to do this step. If there are questions... if there are no questions, you rest five minutes and we start again in five minutes. So we take a break of five minutes.
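To close the circle on the coin-toss and Cramér examples from this lecture, here is a minimal numerical sketch (my own illustration, not the lecturer's code) of the constructive recipe: compute ψ(λ) for the biased coin and take the Legendre-Fenchel transform on a grid:

```python
import math

alpha = 0.2  # illustrative bias: P(x = -1) = alpha, P(x = +1) = 1 - alpha

def log_psi(lam):
    """Log of the moment generating function of the biased coin,
    psi(lambda) = (1 - alpha) e^{lambda} + alpha e^{-lambda}."""
    return math.log((1 - alpha) * math.exp(lam) + alpha * math.exp(-lam))

def rate(x, lams=None):
    """Legendre-Fenchel transform I(x) = sup_lambda [lambda*x - log psi],
    evaluated on a crude grid (good enough for illustration)."""
    lams = lams or [i / 100 for i in range(-800, 801)]
    return max(l * x - log_psi(l) for l in lams)

mean = 1 - 2 * alpha           # E[x] = (1 - alpha) - alpha
print(rate(mean))              # ~ 0: the rate function vanishes at the mean
print(rate(0.9), rate(0.0))    # positive away from the mean
```

Setting alpha = 1/2 recovers the symmetric rate function of the unbiased coin — minus the Curie-Weiss entropy as a function of the magnetization — and for alpha ≠ 1/2 one sees directly that the minimum moves to 1 − 2α and the function loses its symmetry, exactly as described in the lecture.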