I think we should start; is everything okay? Okay, so we have lecture three. Today we will do two things. First, an exercise which many of you may have already done, but it is nice to revisit: it concerns extensivity, and how we use the saddle point in practice, in everyday life. Then we will do what I think is an extremely beautiful exercise: we take a particle in a potential, even one-dimensional, it doesn't really change anything, and we connect it to a lot of other particles, which are going to be our thermal bath. Then we will solve them away, or as we say, integrate them away, and in this way recover the stochastic equation of motion, which will be the object of a lot of what we are going to do, which is the object of a lot of current work, and which you will surely use. So, what we said is: we have a system. And here is the part you always guess. Suppose the system is composed, for example, of sites on a lattice, and suppose it has short-range interactions, meaning there is no interaction across the whole system. If it has long-range interactions, then you have to rethink the whole thing. Then you ask: how does the energy scale with the system size? Well, if I break the system mentally into pieces, then, except for the interactions I cut at the boundaries when breaking it up, the energy is roughly a sum over the pieces, so it is roughly proportional to the number of pieces. So I expect the energy to scale like the number of degrees of freedom times an energy per degree of freedom. Remember, I use lower case for quantities per degree of freedom; these are the quantities that have a nice thermodynamic limit. This is the step you typically take on intuition. If you want to do it mathematically, you go to the books that do this rigorously, you suffer for a few months, and then you come out with a rigorous statement.
But most of you will not do that and will just use your intuition. Okay? Now, there is the partition function, as I mentioned yesterday, which is a sum over all configurations: here is the inverse temperature beta, here we put the energy of each configuration, and we sum over all of them. And we ask ourselves: is this a quantity that is additive? But then we realize that because the total energy is itself a sum of energies of the subsystems, roughly speaking this is not a sum but a product: cutting the system arbitrarily into M pieces, Z is more like a product of M factors. So we don't expect Z itself to be a nice quantity, because a product is not additive. If you want a very simple example of something that is not additive, think of resistances in parallel: if you add more, the total does not simply add. Okay, so this suggests very strongly that we look at the log of Z, also written as e to the G, or e to the minus beta F. The notation and the name for this change a lot from book to book, but they denote the same object; the temperature in the middle is, for the moment, purely conventional. This is what we call the free energy. And because of the product structure, the log of Z is going to be N times a function of the temperature, so that the quantity per degree of freedom has a nice limit. So what is additive is not the partition function but its logarithm. We give it a name; we know it is going to be important, and you will see why. Let me give you another example so that you get some intuition. Remember that yesterday, when we were talking about the evolution of a system, we discussed the volume of, for example, an ink stain in phase space. Is the volume of something in phase space an additive quantity or not?
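The board work is not in the transcript; here is a hedged reconstruction of the extensivity argument in formulas, using the conventions just stated (G for log Z, lower case for quantities per degree of freedom):

```latex
% Short-range interactions: cutting the system into M weakly coupled pieces,
E \;\approx\; \sum_{k=1}^{M} E_k
\quad\Longrightarrow\quad
Z \;=\; \sum_{C} e^{-\beta E(C)} \;\approx\; \prod_{k=1}^{M} Z_k
% Z multiplies, so its logarithm is the additive (extensive) object:
\log Z \;=\; G \;=\; -\beta F \;=\; N \times \big(\text{a function of } T \text{ alone}\big)
```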
So you think of it like this: imagine I have a dynamical system with some degrees of freedom, and the composite is just the composition of two such systems; in it I have a certain volume. We are again cutting the system a bit arbitrarily, imagining two independent systems which, because I am mentally confused, I lump into one, okay? So what is the volume? The volume of the composite is the volume of this one cross the volume of that one, because they are separate degrees of freedom. So you see that the volume scales as the product of the volumes in the two subspaces, if they are not interacting. So already you realize that volumes in phase space are not additive in the number of degrees of freedom; their logarithm is, because if the volume is multiplicative, the log of the volume is additive. For example, one question many of you will face in life if you do complex systems is this: if I have a dynamics that takes me to points downhill, what is the size of the basin of attraction? It's a volume, in terms of all the coordinates of the system. How do I take a thermodynamic limit of that? This kind of argument tells you: don't ask for the volume of the basin of attraction (the basin of attraction being all the points that the dynamics leads to the same minimum); don't ask for the limit of the volume in the thermodynamic limit. What you should calculate is the log of the volume. This is the quantity that scales with the number of degrees of freedom, and this is the quantity you should aim for. We will see a particular case of this, when the log of the volume has a name: it's called the entropy, as we said yesterday. But usually, unless you want to be a mathematician, you have to develop an intuition by reasoning in this dirty way: you cut the system in two in a reasonable fashion.
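In symbols (a reconstruction of the argument, since the board is not transcribed):

```latex
% Two sets of independent degrees of freedom: phase-space volumes multiply,
\Omega_{A+B} \;=\; \Omega_A \times \Omega_B
\quad\Longrightarrow\quad
\log \Omega_{A+B} \;=\; \log \Omega_A + \log \Omega_B \;\propto\; N
% so for a basin of attraction, the quantity with a good thermodynamic limit
% is (1/N)\,\log(\text{volume}), not the volume itself.
```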
You check that you are not doing anything really immoral, that the pieces are reasonably non-interacting, and you look at the quantities you are interested in and decide whether they multiply, add, or neither. Okay? Very good. So let me do one calculation. Yes. Yes. Oh, he didn't see it: I took the logarithm away. Matteo is not paying attention. Okay. We will do one calculation that illustrates these things a bit. I would like to preserve this board; I don't know how I'm going to manage. Okay, the calculation. Some of you may find it technical; if you don't follow all the details, you can of course ask me later, but if there are steps you don't understand, skip them in your mind, go to the conclusion, and at least remember how the argument went. That is the way we read books: we don't follow line by line, we read diagonally, and that is part of the experience of life. Okay. So imagine I want to calculate the volume of phase space that has a given energy. Over all the coordinates, there is a volume at a given energy. A way to put it symbolically is to write a delta function imposing this fact, and integrate. Yes, this is a Dirac delta function. And now we're going to do a series of steps that take us to another quantity. These steps you have to learn, because they are what you do every day if you do statistical mechanics. When you have something in statistical mechanics, you immediately put it in an exponent; this is rule number 101: everything has to be written in an exponent. So we will do this. Remember from quantum mechanics, or wherever, that you can write the delta function as an integral. Beta is for now just a variable; I call it beta for reasons we will see. There should be an i here, and then this would be a representation of the delta function.
Remember, it's the Fourier representation of the delta function; you did it in quantum mechanics, for example. But we don't put an i; we put a minus. This is harmless because beta is integrated over. In the books we never put the i, which strictly I should; we just say that the contour of integration runs along the imaginary axis, which is the same as keeping the i. Why don't we put the i? Because it makes the notation tedious. Furthermore, most papers wouldn't even tell you this; they would just say "with an appropriate contour of integration". Okay, very good. And now, let me try not to make too many mistakes. I will follow tradition and not write the i anymore. Just distributing... oh, sorry, I forgot this factor that came from above. And I am going to go to lower case, because I expect this to be a nice quantity, the energy per degree of freedom, as we said before; so I explicitly write it in lower case, in preparation for the fact that it has a nice limit. Okay. Now, we look at this, and it looks very much like the partition function we discussed. Here comes the big step: we are going to say, I expect, and of course one has to check, but I expect, for the reasons given above, that this quantity here scales with the size; and you will see the power of making this assumption. I'm going to use the G; it's the same, but it makes life a little easier. This integral is going to scale with the size this way, because of the argument above. This is the only crucial step here. Notice that this is indeed a partition function for a given beta; the fact that beta is actually imaginary, because we are integrating along the imaginary axis, we take for the moment with philosophy. And then here, this is e to the beta times E. And then we have to do this integral. Now I'm going to write it again so that we can discuss it a bit. Good. And now this is a function of beta.
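Since the board work is not transcribed, here is a hedged reconstruction of the chain of steps just described, in the conventions used above (Z = e^{-N g(beta)} with g = beta f, and e = E/N):

```latex
\Omega(E) \;=\; \int dx\; \delta\big(E - H(x)\big)
          \;=\; \int \frac{d\beta}{2\pi i} \int dx\; e^{\,\beta\,(E - H(x))}
% (the beta contour runs along the imaginary axis; the i is dropped by convention)
% The x-integral is a partition function, Z(\beta) = e^{-N g(\beta)}. With E = N e:
\Omega(E) \;=\; \int \frac{d\beta}{2\pi i}\; e^{\,N\,[\,\beta e \,-\, g(\beta)\,]}
```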
This is a function of beta, this is constant, and this is N, which is large. Don't copy this; it's a repetition. Now comes the technical step we saw yesterday: I have an integral of an exponential with a big number N outside. Assuming, and this is our argument above, that the exponent is extensive, the function beta e minus g of beta is some curve, and we are doing e to the N times this function, with N very large. So we expect one point to dominate the integral: the stationary point. When N is large, away from that point the exponent differs by an enormous number, so the actual integral is sharply peaked there, just as happened with the volumes in phase space. Again, concentration of the measure. So we can consider that this point dominates. Now, if you want a bit more mathematics: what is going on is that you are integrating along a line in the complex beta plane, from minus i infinity to plus i infinity; remember, it came from a delta function. Somewhere there is a saddle point, the stationary point of this exponent, and it sits precisely on the imaginary axis. This is why I chose, or why everybody chooses, to call this variable directly without the i; it's just notation, okay? And what is the idea? If you remember your complex analysis, your original integral had to run along this line. Then you argue: if I deform the contour through the reals, along a path of constant phase, the integral is dominated by the saddle, and in this way you justify your saddle point calculation. In practice, in physics books, you do saddle points without any complex analysis.
In the mathematics books, in practice, they tend not to do saddle points. Although they come from analysis and all that, their problem is that it is hard to make a saddle point rigorous, and this is one of the differences between how theoretical physicists and mathematicians work: we don't care about rigor in the same way. So when you read a physics paper, they don't even apologize for this part; they don't even put the i, et cetera. Don't be worried about that if you're going to do physics; if you're going to do maths or mathematical physics, it's a different story. Okay, so after all this talking, we take the saddle point. We differentiate the exponent with respect to beta and look for the stationary point, which happens at a certain value I'm going to call beta star, the saddle point in beta. Okay, I took the derivative of the exponent. How do I read this equation? It tells me that instead of doing the integral, I can estimate it as e to the exponent, evaluated at this particular value of beta, the one read off from this equation. G is this function, which you have to calculate somehow. And what is it? It's a partition function, morally; not a partition function that I proposed directly, but one that appeared because of the representation of the delta. Okay, very good. And then from here we have a dictionary between the saddle-point beta and the energy, which is the same as a dictionary between the temperature and the energy. Yes, this saddle point equation gives us the relation between temperature and energy for this system. You have to know the G, but that's another story; the only thing we used about G is how it scales with the size. Of course, if you want to solve the problem completely, you have to get somebody to compute this G for you, which is tough, but that's a separate problem: it's a partition function.
Okay, so now you put it in. You evaluate the exponent at the saddle point: here e to the minus N times g, and here e to the plus N times beta star times e, where beta star comes from the blue equation, e equals g prime of beta star. So I use this dictionary, put it in, and I conclude that the volume I wanted to calculate can be approximated to leading order, or, to be more precise, its log can be computed this way. And now, this is a function of beta, that is, of the temperature, or equivalently of the energy, since energy and temperature have a dictionary. This function I am going to call small s, because it is per degree of freedom: the entropy. That is the entropy. We will check in two ways that this is the usual notion of entropy; you will see it is. And it all came from one crucial assumption: the extensivity of the log of the partition function. Extensivity means being proportional to the number of degrees of freedom. You choose; you have a dictionary here. Yes, beta star of e, yes. So, remember I told you that g was also called beta times the free energy. And perhaps you remember from thermodynamics that the free energy is E minus T times the entropy. So beta f is beta e minus s, which means that beta e minus g is exactly s: this thing coincides with that thing. Okay. So we have recovered the fact that this g is the free energy that comes from thermodynamics. And let us do one more verification. Before doing it, sorry, you seem confused. Before doing one more verification, tell me: questions about what I did? Yes: here we assume that there is no pole on the contour. Yes, you assume a lot of things.
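In formulas (again a reconstruction of the board work, same conventions as above):

```latex
% Saddle point of the exponent N\,[\beta e - g(\beta)]:
\frac{\partial}{\partial \beta}\big[\beta e - g(\beta)\big] = 0
  \quad\Longrightarrow\quad e = g'(\beta^*)
% which gives, to leading order,
\Omega(E) \;\approx\; e^{\,N\,[\,\beta^* e \,-\, g(\beta^*)\,]} \;\equiv\; e^{\,N s(e)}
% Consistency with thermodynamics: g = \beta f and f = e - T s give
\beta^* e - g(\beta^*) \;=\; \beta^*\,(e - f) \;=\; \beta^* T s \;=\; s
```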
And let me tell you, there is a complete divorce between the more mathematical literature and the physics one. In a physics paper, physics physics, you will find this kind of calculation ad nauseam, but you will never find any discussion of non-analyticities or things like that. You just go on and have faith in life. The mathematical literature would of course care about all that, but my feeling, and maybe Matteo has some idea on this, is that it is so hopeless to discuss whether there are poles or not that they hardly use saddle points at all. But remember that saddle points are a silver bullet: we can solve lots of things that mathematicians cannot, because we use them. Question: I don't know if it's related, but if there is a first-order transition in the system, does the partition function behave in a way that still lets us do this? You can get all the transitions in this way, including phase transitions. What happens is that the saddle does something strange: it moves to another place, or something like that. But until the Onsager solution, which was in 1944, people didn't know whether this approach included phase transitions; they were afraid that phase transitions made the approach invalid. Then, thanks to Onsager's solution of the Ising model, which, as people sometimes forget to say, is done exactly, without any of these approximations, and still gives the phase transition, they said: okay, partition functions are fine. Another question: at first you said that beta was just a parameter, an integration parameter, but afterwards, aren't you treating it as the beta we already know? When did you make this switch? When you calculate the saddle point, beta is no longer an integration variable: the saddle point is telling you that the integral is dominated by that particular beta, the one that has this connection with the energy.
Next, when you look at it, you say: oh, this looks like the inverse of a temperature. Well, actually it's the contrary: the temperature, as we know it, comes from this calculation. The justification of the thing called temperature is, in fact, this calculation. So once you see it there, you say: oh, I have discovered temperature theoretically. Right, thank you. Okay, so let us make a small verification; I'm going to erase this. Just to check that what I called entropy is what we sometimes call entropy, too. Those of you who have seen the Boltzmann formula know the probability of a configuration: e to the minus beta times its energy, divided by the partition function. This is the Boltzmann formula: the probability of a certain configuration with energy E_i. Maybe you've seen it; it's even on the tomb of Boltzmann. Whether as a sum or an integral, you have seen this formula. So now I plug this into the entropy and take the logarithm. This could be an integral or a sum; let me do an integral over all the variables. Then I have to add the logarithm of this, and the sign is positive because there was a minus here. So I take the log; there is an integral of P times the log of Z, but the log of Z is a constant, so I take it out, and the remaining integral is just the integral of a probability, so it's one. And this other integral is the probability of a configuration times its energy, so it is the average energy of your system. And here there is the log of Z, which we said was, by definition, minus beta times the free energy, or, if you want, the G function I introduced. And you see that this is the usual definition, as in the calculation over there, of the total entropy; capital letters because I am not dividing by the number of degrees of freedom. Okay, so the Boltzmann definitions of the probability and of the entropy are perfectly compatible with everything we are saying.
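The verification just described, written out (a reconstruction, since the board is not in the transcript):

```latex
% Boltzmann weight and Gibbs entropy:
P(x) \;=\; \frac{e^{-\beta E(x)}}{Z}, \qquad
S \;=\; -\int dx\; P(x)\,\log P(x)
  \;=\; \int dx\; P(x)\,\big[\beta E(x) + \log Z\big]
  \;=\; \beta \langle E \rangle + \log Z
% With \log Z = -\beta F:\quad S = \beta\,(\langle E\rangle - F),
% i.e.\ F = \langle E\rangle - T S, the thermodynamic relation.
```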
Okay, I should add: remember the logic we were following. Having a delta function, writing it in the exponent with an integral representation of the delta, integrating over the imaginary axis, taking a saddle point, obtaining a saddle point equation that gives a dictionary between two things, and then substituting back: this whole procedure put together goes under the name of a Legendre transform. What did it transform? You started with an energy and you ended with a temperature; that's the transform. You started with the microcanonical and you ended with the canonical; that's what you transformed. Okay, just to repeat things infinitely many times, I'm going to do another exercise, which is the same, but I think I'm going to erase; if I'm erasing while you're writing, tell me to stop. Sorry if I'm insisting a bit too much, but these things, it's nice if you can really learn them. So imagine I have an experiment with independent trials; the outcome of trial i is x_i, where i is the index of the trial. Say each outcome has a probability of happening which is the same for every trial and depends on a parameter, the humidity of the day. I repeat the experiment many times, n trials, and I define the average. I would like to understand the distribution of this quantity, the average. You know how to do this calculation, but now you're going to do it in the spirit of what we have been saying. You know how to do it otherwise, but let me do it like this. So I want to study P hat of e; I put a hat because it need not be the same as the other P. What do I do? Same philosophy as before: I integrate over all the possibilities of all the experiments, this is my space, and I impose with a delta function what the average is; this is just saying it in delta language.
Then I say that all these probabilities are the same function, depending on the humidity. And that's it. Now remember, statistical mechanics is the art of taking things to the exponent. This time I spare you the i-infinities: I just wrote the representation of the delta, possibly with a wrong sign. Indeed, this one should be positive... no, this is minus; I can change the sign because the delta function is even. Okay. And now, remember, I said all the probability functions were the same. So I can write this as, and remember this is really an integral over the complex plane, but we never say it, and I use small letters because it's an average quantity, yes, this is just reshuffling the equation. But now look at this integral here. This is a function of beta, because once you integrate over one of the x's, you see that these integrals are all the same: whatever the i, i is a dummy variable. Again, a trick you will find very often. This is an integral over dx_i; it gives some function of beta that doesn't care what i was. They are completely independent, and it's the same integral. So all this term, including the product, is this one integral, which, just to invent a notation, I'm going to call z: the integral of dx, alpha of x, e to the minus beta x, raised to the power n, a product of n copies of the same thing. So what do we have now? An integral over d beta of e to the n beta e, times z to the n; and this z, just to call it in some fancy way, is going to give n times a function g of beta in the exponent; without the n for a single factor, but the n appears because I have the product n times. The x has disappeared because it was integrated. You see, this is an integral. Thank you. Okay, now it's fine. Now you do it.
You do the integrals, the x's die, and you are left with something that depends on beta, which I decided to call this way. Thank you. Okay. And now, once again: you see this is a function of beta, all multiplied by n. As usual, this has to be a reflex that you have, like the Pavlovian dog, when you see something like this. Two reflexes: first, whatever you have, put it in the exponent; second, if it is in the exponent with a big number outside, do a saddle point. This has to become second nature, and this is all of physics, not only statistical mechanics. You agree? So now I do the saddle point, because there is this n outside. I look for the stationary point: I differentiate with respect to beta and I get e equals g prime of beta. This is the famous dictionary we were talking about. The g, somebody had to calculate it for me; and let me say that it depends on the humidity, alpha. Very good. Now, this means that we can estimate the probability as e to the n times beta star e minus g sub alpha of beta star, evaluated at the saddle point, where g sub alpha is what somebody calculated for me. This is also known as the law of large numbers: the experiment could be throwing a coin n times, scoring zero for heads and one for tails; the average tends to one half, and this is the concentration of the measure. But you see, and I did it on purpose, to convince you that when we do Legendre transforms, we are doing exactly the same thing as when we do the central limit theorem in probability. It's always a saddle point calculation. You can see very clearly that it is the same idea as before, only a little easier. Now, one last thing, so that you keep it in mind; I hope next week I can get there. I'm not going to do the calculation for this next thing, but just so that you think in these terms, I'm going to erase.
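To make the coin example concrete, here is a small numerical sketch (my own illustration, not from the lecture; the function names are mine). It compares the Legendre-transform rate function for a fair coin against minus one over n times the log of the exact binomial probability that the average comes out 0.75 instead of one half:

```python
import math

def rate_function(x, p=0.5):
    """Cramer rate function I(x) = sup_beta [beta*x - log E[e^(beta*X)]]
    for a Bernoulli(p) coin; the Legendre transform has this closed form."""
    return x * math.log(x / p) + (1 - x) * math.log((1 - x) / (1 - p))

def minus_log_prob_per_trial(n, k):
    """-(1/n) log P(sum of n fair coin flips = k), from the exact binomial."""
    log_p = (math.lgamma(n + 1) - math.lgamma(k + 1)
             - math.lgamma(n - k + 1) - n * math.log(2))
    return -log_p / n

x = 0.75                       # "three quarters heads"
I = rate_function(x)           # the saddle-point prediction, about 0.1308
for n in (100, 1000, 10000):
    # the exact result approaches I as n grows: concentration of the measure
    print(n, minus_log_prob_per_trial(n, int(x * n)))
```

The sub-leading difference between the two numbers shrinks like (log n)/n, which is exactly the part the leading-order saddle-point estimate ignores.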
This next thing I'm going to describe is, in a way, fashionable. So imagine I have an experiment, but now it is not measuring a thing n times; the experiment is made over time. The example that has been discussed a lot: I have water, and something that stirs it. Inside the pot, all sorts of nasty things happen of which I don't know much, and I am spending a work per unit time, an energy per unit time. I plot it versus time, and I see that it fluctuates, because the water is madly moving inside, so it's not quite constant: sometimes a vortex comes and touches my stirrer, and then I get a kick, et cetera. So you want to describe this situation. I am not going to do the calculation; I just want you to think and convince yourselves that it is more or less the same story. So imagine there is a typical time over which the system decorrelates, forgets what it was before; it takes some time, for example the eddy time, the time for a turbulent structure to decay. Call it tau zero. My experiment: I would like to know the distribution of the total work I have done, per unit time. So I integrate from zero to t, and this gives W(t); I want the distribution of that. Now, the total work is, you see, more or less a sum of contributions that I can consider independent, because tau zero is the typical time: I consult a physicist who knows the problem, and they tell me this is the time over which the system forgets what it was before. So this experiment reminds you of those independent trials, except that now the time divided by the characteristic time, t over tau zero, plays the role of the number of experiments: when you add up the total work, you are adding results that are more or less independent.
And now you can do more or less the same calculation. Look, this formula reminds you of that one: you are doing an average over time. Then you ask yourself: what am I going to get? What is the additive quantity, what is the multiplicative quantity? You can reason roughly the same way, and what you conclude is that the probability of the total work is going to be e to the minus t over tau zero, the time divided by the characteristic memory time of the system, times a function that is like our g before: intensive. What does intensive mean here? There is no thermodynamic limit; intensive means per unit time. So it is a function g of the total work divided by the time, which I usually write with small letters. You see that the analogy with what we did before is total, except that now time is playing the role that space played before, when we were doing canonical and microcanonical. Yes? This is what is called a large deviation function, or a Cramér function; the two names refer to the same object. You see, the logic is exactly the same as before. The only thing is that additivity in space has been replaced by additivity in time, and independence of different patches of your system has been replaced by forgetfulness in time. Of course, if things are very correlated in time, you cannot do this. But this means the following: Mahesh does a measurement, repeats the experiment a thousand times, measures the W; and the longer the experiment, the more peaked the distribution of the average is going to be. As usual: you throw a coin many times and the distribution gets more peaked. So how is he going to plot it? He won't plot the distribution of W itself; he will plot the log of the probability, and against the work dividedded explicitly by the time, W over t; otherwise he would be dividing by the time twice.
That's okay. But then, sorry, what am I saying: this is the logarithm of the probability of this thing, and he's going to get some curve. Okay, now it's okay. But there is a problem: if he makes the experiment longer, the curve will peak more and more, so that is still not the nice way to plot things; making the experiment longer is the same as throwing a coin more times. What this suggests is that he should plot this thing taking the factor t over tau zero into account: if you divide the log by that factor, then you're okay. So the log of P, divided by t over tau zero, is the g of W over t, which is a universal function at large times. So what he has to do is take this log of P, divide it by the time (a constant factor doesn't matter), and plot it against the work per unit time; then this will tend to a universal function. I am just putting into words what the calculation tells you. So again, these are the large deviations. The more times you do an experiment, as when you throw a coin many times, the more the data collapse; but the collapse is governed by this factor outside, and if you divide by that factor, the curve is universal. Is this okay? Universal means that once you have your experiment and you go to larger and larger times, this curve reproduces itself: it now depends on quantities per unit time and nothing else. Time has disappeared from this part of the expression; it sits only in the factor outside, which is what makes the raw curve sharper and sharper. Or let's put it this way: once he finds this curve, he can put it back in, and the time will take care of the saddle point concentration, let's say. So this is the way to plot, and what I have described here is what is called the large deviation principle. Why large deviation?
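Here is a numerical sketch of this collapse (my own toy model, not the lecturer's): take the work contributed in each memory time tau_0 to be an i.i.d. exponential variable, so the total work over n = t/tau_0 blocks is exactly Gamma-distributed, and check that minus log P divided by n settles onto a single universal curve:

```python
import math

def minus_log_density_per_block(n, w):
    """-(1/n) log of the density of the time-averaged work w = W/n when the
    n independent "patches" contribute i.i.d. exponential(1) increments.
    Then W ~ Gamma(n, 1), so w has density n^n w^(n-1) e^(-n w) / (n-1)!."""
    log_f = n * math.log(n) + (n - 1) * math.log(w) - n * w - math.lgamma(n)
    return -log_f / n

def rate(w):
    """Legendre-transform (Cramer) function g(w) = w - 1 - log w
    for exponential(1) increments."""
    return w - 1.0 - math.log(w)

w = 2.0   # a work per unit time twice the mean: a genuine large deviation
for n in (10, 100, 1000):
    # as the experiment gets longer (n grows), the rescaled curves collapse
    print(n, minus_log_density_per_block(n, w))
```

Without the division by n, the raw log-probabilities at w = 2 would drift apart linearly in n; divided by n, they converge to the single number rate(2.0), which is the collapse being described.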
Because, you see, when the time is larger, reaching this value here, because of the t factor, becomes harder and harder. But that's no mystery; it's exactly like throwing a coin many times: if you want three quarters heads, the more times you throw, the more difficult it is going to be. Three quarters heads is an enormously large deviation: in the limit of a million throws, the probability of getting three quarters heads is really, really very small. But the nice thing is that there is a universal function, and I'm showing it to you today because I want you to see that the idea of extensivity in thermodynamics is exactly the same as the idea of large deviations, which is exactly the same idea as the central limit theorem. Only that large deviations, for example, happen in time. But it's exactly the same idea: independent pieces, or repeated independent experiments; the measure concentrates; and then you have to divide by the right factor, et cetera, et cetera. And the technical method is the one we used before with the trials; when you do it here, you will see it's exactly the same idea. You just have to map one onto the other, and you're done, okay? I'm telling you this because, unfortunately, the literature on this belongs to the mathematicians for historical reasons. So if you go to read the mathematical books that talk about it, you are going to struggle. But in fact you are doing exactly the same thing that we learned in physics, without so much pain, because historically the statistical mechanics side was the territory of physicists, not of mathematicians. Questions on this? Because I'm changing the subject. Good. We're going to start our calculation, but I don't think we're going to finish it, because I don't want to hurry. This is a beautiful exercise, but I'm not very sure who did it first; it is the following.
You want to study a particle, which could be in any number of dimensions; we will call its coordinate Q. And we want to study it in contact with a thermal bath at temperature T. Have you done this exercise in the master's in France? Well, the source is the same, so the mistakes are going to be the same. Okay, so you want to study it in contact with a bath. What does "a bath" mean? Nothing special. There is the coordinate Q and possibly its momentum P — I am doing it in one dimension because you gain nothing from putting more. But the bath, yes, has to be very large: many degrees of freedom with their own coordinates. I'm sorry, I've changed notation — I usually use X for both P and Q, and now I've changed it. Anyway, there are many, many of them composing this bath. Now, what does a bath mean? It means there is an interaction between this and this, so that the total energy can be written as: a system part, which depends on P and Q, in our case only one of each; a Hamiltonian of the bath, meaning something of all the PIs and XIs; an interaction term between the bath and the system, which we will make a function of Q and all the XIs; and then another term, which unfortunately we will have to add, that depends only on the system — you will see why. Okay, so our trick is that we want to account for the effect of all this but never talk about it again. The idea of a bath is simply that you have a big system next to your system; you study its effect on your system, but you don't want to talk about it. A bath is just a lot of degrees of freedom that you are not talking about. Now, we cannot do this for almost anything, because we don't know how to solve almost anything. The only thing you actually know how to solve in this life is the harmonic oscillator.
So we will make the bath out of harmonic oscillators. The masses I'm going to set to 1 because they don't matter. Their frequencies can indeed be different; we will have a full distribution of them. So the bath is just a lot of harmonic oscillators. Why harmonic oscillators? Because it's the only problem we know how to solve. The interaction we also make simple: linear, with constants that we choose. We are going to call M the number of these oscillators, which we assume is extremely large. And this counter-term — we're going to compute it just now; it's a bit of a nuisance — comes from the fact that there is some energy in the interaction term that we have to take away if we want to recover the original system alone. I will justify it in a moment, but you will see what it is. So what defines the bath? For the moment, only the frequencies of the oscillators and the strengths of their couplings to my system; that is all we are saying about the bath. And I have to justify this counter-term. I want my problem not to contain this extra energy, which you can think of as an energy of interaction: I want the system to interact, but I don't want to count this energy. I would like to be able to isolate the part of the energy that belongs to my original system. So what is this typical energy at a given temperature? I have to compute the partition function. So what's the reason for taking the counter-term? Sorry? For taking the counter-term — I don't understand. Okay, let me do the calculation and I'll try to answer again. I was thinking that this kind of counter-term appears when there is some divergence. Forget about field theory; that is just a name. Yes, it's true —
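In formulas, the total energy described in words above reads as follows — this is the standard oscillator-bath form (the model usually associated with the names of Zwanzig, Feynman–Vernon and Caldeira–Leggett; unit masses assumed, and the sign of the counter-term anticipates the Gaussian calculation discussed next):

```latex
H_{\rm tot} \;=\;
\underbrace{\frac{P^2}{2} + V(Q)}_{\text{system}}
\;+\; \underbrace{\sum_{A=1}^{M}\left(\frac{p_A^2}{2} + \frac{\omega_A^2 x_A^2}{2}\right)}_{\text{bath}}
\;+\; \underbrace{\sum_{A=1}^{M} c_A\,Q\,x_A}_{\text{interaction}}
\;+\; \underbrace{\sum_{A=1}^{M} \frac{c_A^2}{2\,\omega_A^2}\,Q^2}_{\text{counter-term}}
```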
The name "counter-term" is also used in renormalization, but this is not that; it is a compensating term. So let me compute the partition function of the bath together with the interaction term. If I only had these two pieces, what happens? This would be, integrating over all bath variables, e to the minus beta times (pA squared over 2, plus omega A squared over 2 times xA squared, plus cA q xA), for each oscillator A. And you see, even without the system this is a Gaussian integral, easy to do, and what you get is an extra term in the exponent. So this gives your system an extra potential, which you do not want, because you want to isolate in the system part what was your true energy. The counter-term is simply minus this thing, to compensate for it. It's just that part of the energy — this is only the oscillators coupled to my system — wasn't in the original problem, so it's nice to treat it separately: we put it in as a separate term so that what is left is the original energy. Okay, this is a nasty little thing, not conceptually important; we just want to take away something artificially added by the bath. Very good. But note the counter-term doesn't do much: it adds a harmonic potential to your potential and nothing else. Okay, so what do we do with this? I think I won't do the calculations today, but here is the beauty of this exercise: I can now do not the statics but the full dynamics of the problem, in the following way. The harmonic oscillators have a dynamics I completely know how to solve. Their coupling with my system doesn't make them less solvable: this Q is my system, about which I know nothing, but the equations are still linear in the harmonic oscillator variables.
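A quick numerical check of that Gaussian integral, in Python (the values of beta, omega, c and q are arbitrary test numbers of mine, not the lecture's): integrating one oscillator away at fixed q produces an extra factor exp(+beta c² q² / 2ω²), i.e. an induced potential −c² q²/(2ω²) on the system, and the counter-term +c² q²/(2ω²) cancels it exactly.

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary test values (mine, not the lecture's):
beta, omega, c = 1.3, 2.0, 0.7

def integrate_oscillator_away(q):
    """Integral dx of exp[-beta (omega^2 x^2/2 + c q x)] at fixed q.
    (The momentum integral factors out and is q-independent, so it is omitted.)"""
    f = lambda x: np.exp(-beta * (0.5 * omega**2 * x**2 + c * q * x))
    val, _ = quad(f, -np.inf, np.inf)
    return val

gauss = np.sqrt(2 * np.pi / (beta * omega**2))     # value at q = 0
for q in (0.5, 1.5):
    # Completing the square predicts the induced factor
    # exp(+beta c^2 q^2 / (2 omega^2)), i.e. an extra potential
    # -c^2 q^2 / (2 omega^2) felt by the system:
    predicted = gauss * np.exp(beta * c**2 * q**2 / (2 * omega**2))
    # Multiplying by the Boltzmann factor of the counter-term,
    # +c^2 q^2 / (2 omega^2), removes the q-dependence entirely:
    compensated = integrate_oscillator_away(q) * np.exp(-beta * c**2 * q**2 / (2 * omega**2))
    print(q, abs(integrate_oscillator_away(q) - predicted) < 1e-6,
          abs(compensated - gauss) < 1e-6)
```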
So what I can do is take the whole system, assume some motion for Q — this is our question, how does Q move — and with that assumed motion I can explicitly solve for the oscillators, take their solution, plug it back into the equation for the system, and the oscillators are gone from the problem. You are left with a Q, but a Q that is now dressed: its motion is altered by what were the oscillators, which, thanks to the fact that they are oscillators, you managed to solve completely. So you have a dynamics of Q that depends on everything else, and a dynamics of the x's that depends on everything else — on the x's and on Q. The one for the x's you can solve, so that the x's are expressed exclusively in terms of Q; you plug that back in, and now you have a more complicated dynamics of the Qs, with the x's solved in terms of Q itself. At the end you get a monstrous dynamics of the Qs. This is the exercise we shall do tomorrow, and the nice thing is that so far we haven't said anything about the bath beyond those two ingredients. But now comes the nice part, which will allow us to philosophize a little. There was no randomness in the problem so far; now we will say something about how the bath starts off, energetically speaking. We will say that the bath itself started in a nice state — in equilibrium for the bath, which is innocent — and then we connect the system to it, and we will obtain a dynamics that is truly that of a system in a bath. The nice thing about this exercise is twofold. First, it is first-principles: you are not assuming anything. And second, there is nothing specifically classical in it.
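For reference, a sketch of where this procedure lands, under the conventions above — this is the standard result of the exercise, so take it as a preview rather than the lecture's own derivation:

```latex
% Each oscillator, driven by Q(t), is solved exactly:
x_A(t) = x_A(0)\cos\omega_A t + \frac{p_A(0)}{\omega_A}\sin\omega_A t
       \;-\; \frac{c_A}{\omega_A}\int_0^t \sin\!\bigl(\omega_A(t-s)\bigr)\,Q(s)\,ds .

% Re-inserting this into the equation for Q and integrating by parts
% (the boundary term is exactly cancelled by the counter-term force):
\ddot Q(t) = -V'(Q) \;-\; \int_0^t \Gamma(t-s)\,\dot Q(s)\,ds \;+\; \xi(t),
\qquad
\Gamma(t) = \sum_{A=1}^{M} \frac{c_A^2}{\omega_A^2}\,\cos\omega_A t ,

% where xi(t) collects the bath initial conditions. Once those are drawn
% from equilibrium at temperature T (with k_B = 1), xi becomes a Gaussian
% noise obeying the classical fluctuation--dissipation relation:
\langle \xi(t)\,\xi(t')\rangle = T\,\Gamma(t-t') .
```

Note the counter-term's dynamical role here: without it, the integration by parts would leave behind an uncompensated harmonic force on Q.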
You can do it quantum mechanically — it's a little bit harder, but not much. The same philosophy works because in quantum mechanics harmonic oscillators are also solvable. So that's perfect. And this exercise is going to be important for us, because it will give us the stochastic equations of motion that you will find everywhere around you in your life. Sorry — so here essentially you are computing the partition function of the red part? Yes; this is a sort of side remark. But this is appropriate if you think that your system and your bath are inside an even larger bath. Yeah, absolutely. Shouldn't one treat the whole thing in the microcanonical ensemble? Yes, but that is only, if you want, an auxiliary step to justify adding this term, so that at the end of the day the energies you get are those of the system alone and not of that extra piece. For the moment take it as an arbitrary thing that I'm adding. This is just to hint at why we do it: the problem is that if we don't, then the system in equilibrium will behave as if it had that extra potential. When you go quantum, the counter-term becomes more subtle, for the reason you say: then you cannot say the same things, it depends on time, and it has all sorts of nasty properties. Okay, yes, another question. So the counter-term arises because, treating the XA and Q pieces together as a single harmonic potential and completing the square, there is an extra Q-squared term, and that extra term should be removed with a minus sign? Yes: I don't want it, because I'm adding the coupling and I would like to see something that has only to do with V. So I put it in explicitly with a minus sign, so that in the absence of V I really have nothing. It's a way of bookkeeping. So it's an effective potential? Yes.
It's an effective piece of potential that comes from the oscillators, which is not interesting for me, so I explicitly take it away. It doesn't change anything important; it's only to keep the bookkeeping straight, because otherwise, even if I start with no potential at all, I end up with a potential that comes from the oscillators, and that's not what I want: I want the oscillators to equilibrate me, not to change my potential. Okay, one more thing — this is fine, but about the large deviation function calculation, can you motivate it with some physical example? Well, the one I gave you, of stirring water, exists and is an important thing. There is an enormous amount of activity here: the water is one example, but there are thousands where you have a system and you drive it in some way — it could be with an electric current, with an alternating field — and the system responds. On average it sucks energy from you, always, because of the second principle, but there are fluctuations, because the system is a bit random or very complicated, no? And you want to understand their distribution. Or take a small conducting system between two temperatures — who was doing the quantum dots? The system is very small; if T is larger than T-prime the current usually goes in one direction, but there are fluctuations because this is a hot system, so the current fluctuates, and on rare occasions, because of what the electrons or whatever are doing, there can be fluctuations that even reverse your current for a very short time. Now you want to understand these fluctuations, and you are measuring over a long time — so how should I plot them, how should I make sense of them?
Okay, of course I want the distribution P of the total charge passed, but clearly I want it per unit time on average, no? Is that enough? No: I need a logarithm, because the fluctuations are in the exponent, according to our calculation. But then I have another problem: if I do a longer experiment, this average concentrates even more on the mean, and I want to remove that effect, to be able to plot one curve that works for all times. What we did says that we should divide this logarithm by the time, and then you get a single curve with all your experimental points, from experiments done over different times. So the scaling of the fluctuations of the electric current across this tiny little object has to be plotted this way. The longer the experiment, the more concentrated on the mean it will be — just the central limit theorem — but dividing by the time compensates exactly that fact, so you can do experiments of every length and your points should fall on a universal curve, called the large deviation function. Which measures what? It measures the rare occasions on which the current was not close to its average; these are the large deviations. Now, there is an infinite literature on this, and maybe we will get there: if you consider the rare moments when the current reverses, which is very improbable, there is a very nice theorem, a relation between this and this, called the fluctuation relations, and these have been the object of 3,000 papers at least. Can you give a specific reference?
We will, later on. Now, in practical life, the nicest applications of large deviations that I know are in weather. A cyclone, for example, is of course a large deviation: it's a rare thing, but when it happens to you, you feel it. So these people want to understand such things: you have the equations of turbulence in the atmosphere, and it is the large deviations that interest you, so there is a whole line of research this way. In the same way, the waves in the sea have a typical size, given the wind and other things, but every now and then there is a rogue wave, a wave that can be 10 meters or more. Nobody knows yet exactly what they are, I think, but they are not necessarily produced by an earthquake; they are just random fluctuations that happen to be very large. Again they are rare, but if you are on the ship that receives one, they are important for you. So there is a whole activity on large deviations that is very active, let's say. And then there are the fluctuation theorems, which say something about these large deviations — which can be a lot or not so much, depending on how optimistic you are — but they are among the very few results we have out of equilibrium, so they have received enormous attention too. But you are pointing at things away from the mean, things that are rare. When you divide the logarithm of the work by t, is it similar to having an unbiased estimate? I don't see the connection — maybe there is one, I don't know. What do you mean? You said that if you don't divide by t you get a very narrow curve, and when you divide by t you don't. In statistics, when you have an estimator — say the mean, as a sum of the xi — the expectation of that sum scales with n, and if you don't divide by n you don't recover the true mean...
But here it's a bit more than that, because it's not just the mean: here we are studying all the deviations, so it's an estimate of all your deviations from the mean. In fact, if you are doing large deviations, this gives you a lot more information than the mean alone. Hello — coming back to the bath: so the logic is that you want to somehow introduce fluctuations into the system, but excluding the effect of the interaction with the bath — is that correct? Excluding part of it: excluding how it changes the measure, but not how, in time, it bothers you. This is just a static calculation, and we are taking away only the effect it has through this extra potential, let's say. But once we take that into account, the bath is not excluded at all: in our calculation with time — there's no time here yet — the bath will be there, and it will be doing things to you. Okay, so we are just subtracting a thing that would bias the measure a bit. Why?
Because part of your energy is stored here, and this is fine, this is physical — but normally you want to study your original system and not talk about this much. It's almost conventional to put the counter-term in explicitly and take this piece away. In physics, this part of the energy is perfectly correct; there's nothing wrong with it. It's a piece of energy that sits in the link between bath and system. You take it away just because you don't want to write an equation where you keep specifying something about the bath. In taking it away, you recover the equilibrium measure you would have had with the system alone — but dynamically the bath will still do a lot to you. So it is similar to when we derive the canonical ensemble conventionally, where we have the two parts of the system in equilibrium but we don't care about the interaction? Yes, exactly. We care and we don't care: here we don't care in the sense that we don't put it in the partition function, in the energy, but we will show what it does to you over time. Thanks. Actually, just a moment on this: your heat bath is going to infinity, I guess, no?
So you have this idea that the heat bath is going to infinity — yes — but at the same time, usually the interaction is negligibly small with respect to the bulk energy, so how does that work here? Are your cA going to zero? No: what goes to infinity is M, and that's all. Here the interaction energy can be as strong as the self-energy; that is not something we need to suppress. You can sometimes make the interaction with the bath negligible, but we don't do that. What we do is make the bath so large that it doesn't care what you do to it, up to a certain point. It's like Brownian motion: you have one particle of pollen in water, and the glass of water is enormous with respect to your pollen. But — I didn't understand the remark you made, Matteo, about having to treat the whole thing in the microcanonical ensemble. Because if you think of the experiment we were describing — an isolated system made of the bath plus the system — then the total energy is constant, so you should describe it microcanonically. Yes. So what Matteo is objecting to — and he will object even more tomorrow — is that I will say that my oscillator bath is in the canonical ensemble. Now, you may ask how it got there, since I am trying to use this very construction to convince you that you tend to the canonical ensemble in contact with a bath. It's a kind of recursive argument, in the sense that I will assume it for the oscillators and show you that this induces a dynamics that takes your system to the canonical measure, so the system inherits the canonical property. It is in fact true — and we will maybe go back to this — that being canonical is something you inherit: I am canonical and big, I make you canonical after some time, and then you can act as a bath for something smaller and make it canonical, and so on. Where does the story start? It's a bit like the creation of the
world: where do you start it? There is a little step where you should say, okay, there must be one system about which I can prove something directly. But for us this will be enough: the canonical property is hereditary, and this is what we shall prove. He would prefer — because it would be more logical — to make system plus bath microcanonical and then prove everything, which could also be done. It's more work, but if you're a purist it's a bit more proper. Okay, so I think it's a good time to have a coffee or whatever you want, and we will resume at eleven in the computer lab, which is on the other side of this hall. Be sharp.