Okay, so this is a quick reminder, so that you see the formal connection with what we're going to do later. Today is going to be one of the most technical days; if you find the technique hard to follow, at least try to hop to the conclusions, or ask me. (You cannot see? It's hard because it's yellow and small — okay, I'll make it larger.) So, for those of you who did physics: the Schrödinger equation is an equation for the evolution of a wave function, which is a vector that we denote with the bracket notation |ψ⟩, and it says iℏ ∂_t|ψ⟩ = H|ψ⟩, where H is the Hamiltonian. How do we solve this problem? It's a linear problem, so we first construct a basis of eigenvectors and eigenvalues of H. Next, we write our initial condition as a linear combination of these eigenvectors — it's a basis, so I can do it — and then the equation in this basis takes a very simple form, and in this way I can solve it exactly, thanks to the fact that I have diagonalized H. So, why am I telling you this?
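For reference, the eigenbasis solution described in words above can be written out explicitly (a standard textbook fact, not taken from the board):

```latex
H\,|n\rangle = E_n\,|n\rangle, \qquad
|\psi(0)\rangle = \sum_n c_n\,|n\rangle, \quad c_n = \langle n|\psi(0)\rangle,
\qquad
|\psi(t)\rangle = \sum_n c_n\, e^{-iE_n t/\hbar}\,|n\rangle .
```

Each eigenmode simply picks up a phase, so diagonalizing H solves the evolution completely.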
We are not going to do quantum mechanics this time, but I suppose all of you who did physics have studied this in a course, and have studied in particular the bracket notation, where you denote vectors |ψ⟩ instead of writing them as column vectors; the dagger means the vector is horizontal instead of vertical, and the scalar product is written ⟨φ|ψ⟩. What will happen to us is that, in the course of what we're going to do, we will find things that are technically the same, although the interpretation is completely different, and I think it's useful to connect them. You find this notation in my notes for sure, and I think Kadanoff and Swift did the same; but for historical reasons, this formalism is used by physicists, not by mathematical physicists or mathematicians. So what happens is that the same physicist, without realizing it, is exposed to two different literatures to do things that, in many senses, are the same. I told you already that this happened before, with the idea of large deviations; here it happens again. We will now see the stochastic problem, which is not a quantum problem; but because in your life you usually study stochastic problems after studying quantum mechanics — it's like this in every country, I think, unless the courses have changed a lot — you learn a whole set of techniques for elementary quantum mechanics, and then you go to stochastic processes and change gears, because the history is different and the subject has been in the hands of a different set of people. I don't like this, so I try to use as much as possible the notation and the analogy with quantum mechanics, also because the analogy is important. So, I keep these formulas here.
Now that you know them, you can perhaps recognize them, but I cannot leave them on the board in the big version, because otherwise I won't be able to do anything with the black equations. So: this equation, in terms of the eigenvalues and eigenvectors, becomes trivial and can be solved, and we can study the evolution of any wave function just by diagonalizing H. With this said, let's apply it to what we're doing. We had arrived at two different equations, in the case where the noise is Markovian — delta-correlated and proportional to the temperature. These were the two equations from last time, and F is optional: sometimes we will want the system to be driven by forces that don't derive from a potential, so I write it explicitly; sometimes I will omit it, and then the system is only subject to forces that do derive from a potential. Remember the logic: we did the bath, we got a Langevin equation with memory; then we said, let's imagine that the memory of the noise is very short, and we arrived at the first equation; and at the end we said, suppose the inertia can be neglected and we are overdamped — then the second derivative drops and we fall into the overdamped equation. Notice that the first one is an equation that happens in phase space, and the second one happens only in configuration space. If I want to write the first one as a first-order system, as in Hamilton's equations, I have to make it live in the whole space of coordinates and momenta; if I want to eliminate p, I get a second derivative in time, which we don't like, because then we don't know how to solve the equation easily. So, just as when you do Hamilton: the first lives in phase space, the second in ordinary space.
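The two board equations referred to here are not visible in the transcript; in a standard notation (friction coefficient γ, optional non-potential force F, k_B = 1) they read:

```latex
m\,\ddot q = -\gamma\,\dot q - V'(q) + F + \eta(t)
\quad \text{(inertial, lives in phase space)},
\qquad
\gamma\,\dot q = -V'(q) + F + \eta(t)
\quad \text{(overdamped, lives in configuration space)},
```

```latex
\langle \eta(t)\rangle = 0, \qquad
\langle \eta(t)\,\eta(t')\rangle = 2\gamma T\,\delta(t-t').
```

Conventions for γ and the noise strength vary between texts, but this is the form consistent with the Markovian limit described in the lecture.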
And now, in the corresponding spaces, you start from some initial condition and evolve. The point is that these equations are called stochastic, both of them, because they have a forcing that is a random function of time, delta-correlated: white noise. If I put one of these equations in the computer — take the inertial one as an example — I have to launch a simulation: there is a force, possibly an extra forcing, and then I have to generate random numbers and let q and p follow the dynamics. When I do this, I get one trajectory. I can do that, and it's fine; but there is another strategy. Because this randomness corresponds to the randomness of the bath, I can repeat with another realization of the noise — in the computer, you do that by changing the seed — and another, and another. At the end of the day, at time t, I will have a certain distribution of probability in my space. This distribution is over the ensemble of possible noises. So what we're going to do today — our job today — is this: instead of following single trajectories, I want to understand the probability distribution I get when I sample all the possible trajectories and collect the data; at each time, I have a cloud of probability that is expanding. Can I get an equation directly for this distribution, derived from the stochastic equation? These two equations are going to be counterparts of the same phenomenon. Of course, for the inertial equation your space is (q, p), and for the overdamped one it is only q. It is (q, p) in the first case because I don't want a second derivative, so I do as when you do Hamilton.
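As a concrete illustration of this "change the seed, repeat, collect the cloud" procedure, here is a minimal sketch (my own example, not code from the lecture), integrating the overdamped equation q̇ = −V′(q) + η with V(q) = k q²/2, γ = 1, and ⟨η(t)η(t′)⟩ = 2T δ(t−t′):

```python
import numpy as np

# Overdamped Langevin dynamics in a harmonic well, many noise realizations.
rng = np.random.default_rng(0)
T, k = 0.5, 1.0              # temperature and spring constant (arbitrary choices)
dt, n_steps, n_traj = 1e-3, 5000, 20000

q = np.zeros(n_traj)         # every trajectory starts at q = 0
for _ in range(n_steps):
    noise = rng.standard_normal(n_traj)
    q += -k * q * dt + np.sqrt(2 * T * dt) * noise   # Euler-Maruyama step

# The array q now samples the probability cloud P(q, t) at t = 5.
print(q.var())   # close to the equilibrium value T / k = 0.5
```

At long times the cloud stops expanding and settles onto the Gibbs distribution proportional to exp(−V(q)/T), a Gaussian of variance T/k — the stationary solution of the probability equation derived later in the lecture.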
So now we're going to derive the equations satisfied by P(q, t) and, in the inertial case, by P(q, p, t). Is this okay? Okay. One word first — and this is quite a horrible thing, but I have to tell you. Suppose you have the overdamped equation. Look what happens: you have q̇, which is a velocity, and on the right-hand side there is white noise. So q̇ has a component that comes from the noise, and white noise looks like this: at each point, it's completely uncorrelated. You already see that if I wanted to take a derivative of this and find the acceleration, it would be awful. So this equation has problems. The inertial one less so, because there it is the acceleration that has the problem: thanks to the inertia — there is a mass — the velocity is well defined, and it is the derivative of the velocity that is problematic. Now, these things bring a set of problems in mathematics. I'm going to be brief on this, but it's something you will need to know; it's a chapter of this subject that I dislike profoundly, but it's inevitable. So, imagine you have this equation, say without the forcing, and you want to simulate it. You take intervals of time Δt and move step by step, as you do in a computer: to get the next time, you take a random number, and you have to normalize it by Δt^(1/2). I have to explain this bizarre fact. Normally, when you solve an equation step by step, you discretize it and each term is multiplied by Δt. Why is there a Δt^(1/2) here? The reason is this: if my Δt has a given size, and I pick one random number every Δt, I do it n times over a total time n Δt.
So I have, let's say, m random numbers: an η here, an η here, and so on; and of course m is the total time interval divided by Δt. Now imagine that I halve Δt, so that m goes to 2m: I'm now taking 2m random numbers. But what is the typical size of the sum of 2m random numbers? When you think of it, it's not twice as big: it grows by √2, because when you sum independent variables it is the variances — the squares of the standard deviations — that add, and to see the typical size you take the square root. For example, take a score that can be +1 or −1 at random, and repeat it 10 times. The average is zero, but what is the amplitude? The trick is: you sum the squares and take the square root. So when you make 10 random steps, your typical displacement is √10 and not 10, because the signs are random; and when you do 100 steps, it's √100. Noises add as a square root, and this is the explanation of the Δt^(1/2). This is very concrete: when you put this in your computer, you will choose your Δt, and if you make Δt so small that the number of steps is multiplied by 10, then the noise term has to be scaled not by 10 but by √10. It's a tricky business — remember it. I don't like this part, because I think that conceptually it's not especially interesting, but you have to know that it exists, because most of you, if you go into complex systems, will surely integrate equations of this kind, whether you do economics, econophysics, ecology, or whatever.
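The square-root bookkeeping can be checked in a few lines (again, my sketch): sum m random ±1 kicks many times and measure the typical size of the total.

```python
import numpy as np

# The RMS size of a sum of m random +/-1 steps grows like sqrt(m), not m,
# because variances (not amplitudes) add.
rng = np.random.default_rng(1)

for m in (10, 100, 1000):
    steps = 2 * rng.integers(0, 2, size=(10000, m)) - 1   # 10000 samples of m kicks
    rms = np.sqrt((steps.sum(axis=1).astype(float) ** 2).mean())
    print(m, rms)   # rms is close to sqrt(m): ~3.2, ~10, ~31.6
```

This is exactly why the discretized noise carries Δt^(1/2): halving Δt doubles the number of kicks within a fixed time window, but their sum must only grow by √2, so each kick must carry √Δt.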
So, you will find this problem. On top of it, I tend to say that it's not very important; you will find that books consider it more important. Okay, there is another problem: how am I going to discretize? There is another way of writing this equation, which takes the increments in a different way. I do the same thing in the computer, but I evaluate things not at the beginning of the interval, but using both the beginning and the end of the interval. This version is not very nice from many points of view: for a computer program it's a bit of a disaster, because the unknown you want to find at the next step also appears on the right-hand side, so it's not nice for a step-by-step program. But mathematically, this form has better properties. I'm not going to get into the details, but the words you have to remember are: the first is called the Itô convention, and the second the Stratonovich convention. I promise you that if you do any sort of complex-system dynamics, you will run into these two names. This is just a reminder; I don't think it is too important conceptually. Why? Because the whole problem comes from having taken the Markovian limit. Remember that we had a memory function, and then we said: we prefer a Markovian, local-in-time thing, so we took the limit in which it becomes δ(t − t′); and this is how we got these equations, and how we got that ⟨η(t)η(t′)⟩ is a delta of t − t′. It all comes from having done this limit.
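To see that the convention really matters, here is a sketch (mine, not the lecture's) with a multiplicative noise, dX = X dW: the Itô rule evaluates X at the start of each step, the Stratonovich rule at the midpoint, and the two ensembles end up with different means.

```python
import numpy as np

# Same formal equation dX = X dW, two discretization conventions.
rng = np.random.default_rng(2)
dt, n_steps, n_traj = 1e-3, 1000, 100000   # integrate up to t = 1

x_ito = np.ones(n_traj)
x_str = np.ones(n_traj)
for _ in range(n_steps):
    w = np.sqrt(dt) * rng.standard_normal(n_traj)   # the Delta t ** 0.5 rule again
    x_ito = x_ito + x_ito * w               # Ito: X evaluated before the kick
    # Stratonovich midpoint rule X' = X + w * (X + X') / 2, solved for X':
    x_str = x_str * (1 + w / 2) / (1 - w / 2)

print(x_ito.mean())   # Ito gives E[X(1)] = 1
print(x_str.mean())   # Stratonovich gives E[X(1)] = exp(1/2), about 1.65
```

For the additive noise of the Langevin equations discussed here the two conventions happen to coincide; it is multiplicative noise that exposes the difference.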
So: Itô versus Stratonovich, how you normalize with Δt^(1/2) — all of this is a headache that we have gained because we insisted on a noise that is local in time, delta-correlated, that is, white noise. And when you have white noise, the function is really horrible, because at each point it takes a completely random value. Now, I personally think that conceptually this is not so important. Why? Because, at least conceptually, I can always imagine that I am solving the colored-noise problem, which is what life is truly about. I mean, there is no white noise in life. All noise has some correlation time: as Amakhesh asked the other day, when you are being bombarded by molecules, they take some time to replace themselves, so before the second one comes there is an interval. That interval is very short, but it's not completely instantaneous. So the noise is never truly white. But, you know, for every time you use the colored-noise equation in your life, you use the white-noise one a thousand times. So, unfortunately, this is life. It's a technical problem, but you have to be very careful, because if you don't discretize correctly, your results will be different. Let me put it this way: it's not a nice fact of life, this problem, but you have to bear it in mind. [Question:] About the Itô convention — isn't it dimensionally incorrect? The two sides of the equation initially have the same dimension, and then a square root of Δt appears. [Answer:] Yes, but remember that ⟨ηη⟩ is a delta, and when you rescale, the delta carries the dimensions inside. So at the end of the day, don't worry: there are some dimensions hidden there, namely the correlation time with which you regularize your delta. [Question:] And the difference with Stratonovich is that we take the midpoint convention? [Answer:] The midpoint convention, yes.
The idea is that the same physical situation can be written in Itô or in Stratonovich. If you write an equation in the Stratonovich convention, you can transform it to Itô so that the physics is the same. But if you did the wrong thing — you wrote it thinking you were in one convention when you were in the other — you can change your physics. So: the two conventions can represent the same thing; but if you take one and the same equation and interpret it à la Itô and à la Stratonovich, you can get different results. This happens especially when there is multiplicative noise. The whole point is this — and the same happens in many domains of physics: you write things that are continuous, like this equation, but they don't have a real meaning in the continuum. At some point you need to say: wait, what do I mean by this? I should discretize. And the convention is the recipe to discretize correctly. To tell you a personal thing: there is such a thing as Itô calculus. Itô made an entire calculus out of this and did it properly, and there are experts on it. I swore to myself at some point of my life that I was going to die without learning it, and I'm on a good path to reach that ambition. [Question:] Shouldn't there be a plus-or-minus sign in the second equation, before the square root of Δt? [Answer:] Here there is an equals sign, yes, but it doesn't really matter whether the noise enters with a plus or a minus, because its sign is random. Okay, good. Having told you this — remember it, be careful with it — one extra comment on something we may do later: the distribution of work. Work, as I said many times, involves the force — let's say a force — in a scalar product with the velocity; force times velocity is the power you're injecting. Now, look what happens.
Velocity, I told you, is a horrible object here. You already see it in the overdamped equation: the velocity equals something that might be smooth plus white noise, and white noise is the worst function you can think of — it hops madly from one point to the other. So the velocity looks like that, and you're multiplying it by the force; but then it's a mess, because is it the force at the same time, or Δt earlier, or Δt later? You run into all of these problems. In the inertial equation, thanks to inertia, something lovely happens: the acceleration is horrible, but the velocity not so much, because the velocity is an integral of the acceleration. So the velocity is continuous in the inertial equation and not in the overdamped one — thanks to the mass. The power f·v is horrible in the overdamped case, but it's fine when you have a mass, when you have the top equation. So you see: all these problems coming from white noise, which seems so terrible — just add a mass and this problem goes away. This is why it is sometimes better to leave the mass there and only send it to zero at the very end. This gives you a measure of how conceptual these issues are. But if you read, for example, books on econophysics, I think they start with a very long, very mathematical introduction on this problem. Having said this, let's go back to our problem. And of course, if you're going to put stochastic equations into the computer, you have to be very careful with how you discretize. So, as I said, you can consider these equations trajectory by trajectory — for one realization of the noise — or at the level of the distribution, and there is always a back and forth between one and the other.
One thing I would like to do for you next week is to show a couple of things that have been done quite recently for single trajectories, which are very cute: you really work at the same time with the average and with the single trajectory. This has been one of the big developments of the last years, with thousands of papers; those we will see next week. So, let me take the overdamped equation as an example, and let me write an equation for the probability distribution, forgetting about individual trajectories and considering a sampling of the whole thing. I will use a method that is, I think, nice, but it is not the one you find in the books. Imagine first that the force term were absent and you only had the noise, in one dimension — by the way, we don't gain anything by doing many dimensions, so most of the time I will work in one; what you learn carries over. What is this? It's just a random walk: you are kicked randomly to the right and to the left. What would the distribution be in this case? It would obey the diffusion equation: if I consider all the possible trajectories, then at a given time my distribution is a Gaussian spreading in time. In one dimension, it is simply ∂_t P = T ∂²_q P. Now imagine that, on the contrary, I had the other term without the noise — I'm doing the two cases separately. The solution to this is just a probability that is advected by the force field, and this is a standard exercise. In many dimensions it involves a divergence; in one dimension it is simply ∂_t P = −∂_q (f P). This is the equation for how a probability is carried along by a force field.
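A quick numerical check of the pure-noise case (my sketch): the random walk's cloud is a Gaussian whose variance grows as 2Tt, which is the solution of the diffusion equation ∂_t P = T ∂²_q P started from a point.

```python
import numpy as np

# Pure diffusion: q_dot = eta with <eta(t) eta(t')> = 2 T delta(t - t').
rng = np.random.default_rng(3)
T, dt, n_steps, n_traj = 0.5, 1e-3, 2000, 50000
t = n_steps * dt                                  # final time t = 2

q = np.zeros(n_traj)                              # cloud starts as a point at 0
for _ in range(n_steps):
    q += np.sqrt(2 * T * dt) * rng.standard_normal(n_traj)

print(q.var())   # close to 2 * T * t = 2.0
```

The cloud is Gaussian at every time and never stops expanding, since there is no confining force.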
And in fact, because even in a computer I could simulate this by switching between the two pieces over very small intervals, the result of the whole evolution is the addition of the two terms. And this is the equation we wanted to get, also in many dimensions. (I'm changing notation all the time, sorry — let me go back to q.) So we arrive at this point, and we're going to talk about it a lot now. This equation is usually called the Fokker–Planck equation, which I think is unfair; perhaps the correct name would be Smoluchowski. But mostly you will see it referred to as Fokker–Planck, and it is the one related to the overdamped Langevin equation. I want to pause a little and make sure that you understand the meaning of this equation, even if you didn't follow how I got there. First of all, the forcing f: I have added it, but often it is not there, or sometimes it is the only thing present and there is no potential — this depends on your problem. You can be driven by a force that does not derive from a potential, or only by one that does, or any combination you want. When we do an example of active matter next week, there will be only a force that doesn't derive from a potential. So that's easy. The next thing you should see is that there is a temperature here, which comes from what we did before: it sits in the second-derivative term, the diffusion term. You are driven by the force and at the same time you are diffusing, and the measure of the diffusion is the temperature. If you send the temperature to zero, for example, and f is zero, this is just a probability cloud sliding down the hill of the potential. What else is there to notice? There are second derivatives, so in this sense it resembles the Schrödinger equation a lot.
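Putting the advection and diffusion pieces together, in the one-dimensional, γ = 1, k_B = 1 conventions used in this lecture, the Fokker–Planck (Smoluchowski) equation being discussed reads (up to sign conventions for the operator):

```latex
\partial_t P(q,t)
= \partial_q\!\Big[\big(V'(q) - f(q)\big)\,P(q,t)\Big]
+ T\,\partial_q^{2} P(q,t)
\;\equiv\; -\hat H_{\mathrm{FP}}\,P(q,t).
```

The first bracket is the drift (advection by the total force), and the T term is the diffusion inherited from the white noise.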
However, it doesn't have exactly the same form, because there is a term with a first derivative, which the Schrödinger equation doesn't have. Another important point of comparison with the Schrödinger equation: you see it is a linear equation in P. It's nonlinear in q, but as a function of P it is linear. So I should be able to solve it just as I solved the Schrödinger equation, if I can expand in eigenvalues and eigenvectors of this operator. So let me already do one thing, just to make the analogy stronger: I'm going to call this object Ĥ_FP — a hat because it's an operator, FP for Fokker–Planck — just to underline the analogy with the Schrödinger equation. This is a linear operator with a bit of the flavor of Schrödinger's, although, as you see, the second derivatives are not in the same places, and it has a first-derivative term, which Schrödinger doesn't have. [Question:] How did you obtain the equation for the probability from those stochastic equations? [Answer:] I broke the equation in two. I asked what I would have if I only had the noise term, and I used — without proving it — the fact that a purely random force gives ordinary diffusion; that produces the diffusion term. Then I forgot the noise and kept q̇ equal to a force field: this is simple advection, which gives the first-derivative term. But I didn't do the whole derivation in detail. If you want the full derivation, you can find it in many places; the references are in my notes, and in general Risken's book is a good place to find it. At any rate, at this stage, take it as half proven — the important thing at this stage, because if I did the whole proof I would lose you completely,
is to see what this equation means: what is it doing for you? [Question:] Can I suggest that maybe we do a tutorial on this — stochastic differential equations to Fokker–Planck? I can do it this afternoon at 2 p.m. [Comment from the end of the room:] I just want to say that in the diploma course we have some lectures online in which we talk a lot about Itô–Stratonovich; they are available. I have six lectures on that online — stochastic differential equations, Itô–Stratonovich, et cetera — and I can find which ones, if it's useful. [Answer:] Of course it's useful — and you have the person who made them here in person. So we will send you an email pointing to these lectures. Excellent. And if there is an in-person tutorial eventually, of course I will participate — except if you want to make me an expert in Itô calculus, which, as I explained before, is something I would rather avoid. But for the purposes of what we're going to do now, it will be all right if you understand perfectly well what this equation does for you. Okay, I hesitate a little about the order of what comes next. I will now do, even more briefly, the equation that corresponds to the inertial case. It's a different equation; I'm going to write the result and we will discuss it for two minutes. Why am I doing this? Because I want to discuss it, and because it's probable that next week we're going to use it. So: what happens if I take the inertial equation and study the probability of a cloud, but now in a space that has both q and p instead of only q? Now my probability cloud moves in this space of q and p, and the equation it satisfies is very similar.
The previous one was for the overdamped case; now P is a function of q and p, and instead of the Fokker–Planck operator there is another one. The equation is called the Kramers equation, and now I have to tell you what Ĥ_K is. It has the identical interpretation: a cloud of probability moving, this time in q and p, the cloud being constituted by the ensemble of trajectories of the inertial equation. The Kramers operator is even uglier. I have to say that I'm going to denote by H — calligraphic, because I don't want you to confuse it with the operator — the whole classical Hamiltonian, the whole classical energy. It's awful; it is what it is. But the important thing, again and again and again, is that you understand what this thing is doing. Let me finish writing, and then we can discuss the different terms of this equation, which have a nice interpretation. [Question:] The last term is minus the derivative with respect to p_i of f_i(q), isn't it? [Answer:] You could put it that way, yes, I agree; you could commute them if you want. Sorry — let me move my formula once again: H is p²/2 plus V, and if you prefer to be clearer you can put the force inside the derivative; I just commuted them. It is a linear operator containing derivatives — some second, some first — just like the Smoluchowski/Fokker–Planck one. Now let us discuss the different terms a little. First of all, the temperature (in red): as you would always expect, the temperature comes together with a second derivative, because it is a diffusion term — the only diffusion term in this equation. The fact that you have a bath is said there. And then there is a term that those of you who have studied classical mechanics should recognize: this term is, by definition, a Poisson bracket.
This is the Poisson bracket of H acting on whatever the operator acts on, and it is telling you that this term is your evolution given by Hamilton's equations (Newton's, if you will). The other way to see it: if there is no dissipation and there is no extra force, then what remains is Hamilton — a clever way of writing Hamilton, à la Liouville, in the sense that the passage from a trajectory to a cloud moving in phase space is also possible for Hamilton's equations. You act on a cloud of initial points — the famous ink blot I keep telling you about — which distorts, and in the pure classical case it is ruled by this equation. Then, to take advantage of Matteo's remark: this next piece is the bath, and notice — it is not a coincidence — it is very similar to the corresponding piece of the Fokker–Planck equation, only acting on p. If this term is absent, there is no diffusion and no friction, and you have pure Hamiltonian motion. And finally there is this term, which may be there or not, which is the forcing; the only thing that makes it a separate forcing is that perhaps the force doesn't derive from a potential. (I'll give you a special microphone — it's good, don't be ashamed.) [Question:] Why isn't there a diffusion term in the spatial coordinate? [Answer:] Because I decided, unilaterally and arbitrarily, that my noise acts on the equation for p. Could I have put some noise in the q equation too?
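Collecting the terms just discussed, the Kramers equation in a standard form (my transcription; sign conventions for the Poisson bracket vary between texts) is:

```latex
\partial_t P(q,p,t)
= \underbrace{\{\mathcal H, P\}}_{\text{Hamilton/Liouville}}
\;-\; \underbrace{f(q)\,\partial_p P}_{\text{forcing}}
\;+\; \underbrace{\gamma\,\partial_p\!\Big(\tfrac{p}{m}\,P\Big)
+ \gamma T\,\partial_p^{2} P}_{\text{bath: friction and diffusion}},
\qquad
\mathcal H = \frac{p^2}{2m} + V(q),
```

```latex
\{\mathcal H, P\}
= \partial_q \mathcal H\,\partial_p P - \partial_p \mathcal H\,\partial_q P
= V'(q)\,\partial_p P - \frac{p}{m}\,\partial_q P .
```

As stated in the lecture: the γT term is the only diffusion term, it acts on p only, and with γ = f = 0 the equation reduces to the Liouville equation of pure Hamiltonian dynamics.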
Yes yes but people don't do it because usually you don't expect a bath to talk directly to the velocity you would expect it to talk to the coordinates like a potential no but you could have put the noise here also some noise here in this equation and then you would have diffusion in both of course you do you do not have to you have to go back remember ah sorry and where does this where where was this bias arbitrary bias where was it born that remember the equation we did the exercise we did the other day so I'm going to erase here it's a nice question I remember that there was an interaction term when we had the bath and the oscillators and I decided to do q which was the coordinate of my system and then I here I did sum over ci xi and these were both coordinates and there was a term like this which was the interaction between system and bath here already I chose because I could have added for the same price and even I could have added for example p other ci's or even pi i's here I didn't do this but I could have done it and if I do everything redo the whole thing I will get a term that will couple to the p and then at the end of the day it will become a noise that will be here and this is perfect perfectly legitimate I don't know maybe Edgar or Matteo know of any place where this is explicitly done but it could very easily be done um can we get Fokker Planck operator from this Kramer's operator ha that's a nasty question um yes but it's a mess okay um okay that's a delicate and and very delicate question because remember that I said remember how we got here we we said okay let me try the case where the mass goes to zero the mass you can put it here if you want and then boom boom I throw away a second derivative and land up here okay remember it was very easy last time from here to here easy now your question is how do I go from here to here or let's say from here to here no it's the same thing I'm changing operators and so this is uh usually a very difficult 
problem, one we face a lot in physics and in mathematics. It is the case where you neglect a derivative, here the acceleration. Neglecting a derivative is a nasty thing: you can neglect a term because it is small, but if you neglect a derivative, how do you know it is small? In our case what we neglected is the mass times the second derivative of the position with respect to time, and we said: this is small, I throw it away. Do we have a right to do that? Well, more or less, because imagine that the function changes very fast in time; then even though the mass is small, the term can be very big. So neglecting derivatives is always a tricky business.

Comment from the audience: I think the idea is that you can neglect it when you look at long times.

Right, you can neglect it when you look at long times, but you also have to check a posteriori, when you solve the new equation, that its solution does not vary fast, because a fast variation feeds back into your original problem. You will find this done in Risken, and let me tell you how it goes, because it is complicated and tricky. If you want to do it at the level of operators you should be able to, your question is good, if there is any justice in this world you should be able to, but there is a problem: you have to change spaces. Here you have a space made of p's and q's, and you want to get rid of one of them. How do you approximate an equation so that you lose coordinates? Let me give you a quick hint of how this is done, because it's worthwhile; you will find it in Risken, well done. Imagine a quantum problem: a channel with periodic boundary conditions, and very narrow, and you want to study the dynamics. So this
coordinate, along the channel, I'm going to call q, and the transverse one I'm going to call p; it's just a name. How do I exploit the fact that the channel is very narrow? My wave function is a function of p and q, and I would like to use the approximation that the transverse direction is super narrow compared with the length. The way to do it is, slice by slice, to first solve the transverse problem as if q didn't move, because the transverse motion is super fast; you solve it completely, eliminate the fast variable, and you get an effective equation for q. I'm going fast here: this is called the Born-Oppenheimer approximation. It is an approximation where you say: I have a very fast variable, I solve it completely at each instant, and then I carry it along adiabatically. It's a tricky business, not super difficult, but you have to take some time to read it, and Risken does essentially this for our problem. What does that mean here? Imagine the mass is very small. You cannot simply neglect the velocity, but you can say that the rattling of the velocity is very, very fast compared with the motion of q. So you do the analogue of Born-Oppenheimer: you solve the equation for p at fixed q, integrate p away, and then move along q, and you end up with the other equation. For those of you who are interested, look it up in Risken. But keep the word of warning: throwing away derivatives in equations is a tricky business.

Okay, I think I have made you suffer a bit; we will continue next week, but we still have to say a couple of important things, and with this we will finish today. The analogy with Schrödinger we will come back to. I leave you as an exercise, if you don't remember well how one solves the Schrödinger equation in this way, to have a look; it's in the first three pages of any quantum mechanics text, so that you go
back to the bracket notation. So, what is the nice thing about these equations? Both have the form dP/dt = -L P, where L is an operator, which could be the Fokker-Planck (Smoluchowski) operator or the Kramers operator; the minus sign is there because I like a minus there. This evolves P(q, t), or P(q, p, t) in the Kramers case. What these equations have that is super nice is the following. You start in your space from some distribution, an initial ink stain, but now there is noise, so the stain will move. Let us think for two seconds about the case with no forcing; everything I am going to say now is without forcing. The distribution P evolves, and one can show, and we will show, that it eventually settles into the Gibbs measure: in the Kramers case, P(q, p, t = infinity) = e^{-beta H(q,p)} / Z, where H is the ordinary energy function and Z the normalization; and in the Fokker-Planck case, P(q, t = infinity) = e^{-beta V(q)} / Z. So this has the lovely property, when the forcing f is not there, and this is super important: this holds when f is absent. What happens if there is an f that does not derive from a potential? Well, you're on your own; there is nothing you can do, really nothing: there is no general rule for finding the target distribution. There may well be a target distribution, but you have to compute it brutally, by actually solving the equations.

Question: what about perturbation theory?

Perturbation theory is exactly the brutish way. You're on your own, in the world of theoretical physics: perturbations, approximations of various kinds, and so on. But an a
priori formula like this one or that one, forget it. Then you could ask even more: is a formula like that really what I want for an out-of-equilibrium problem? Is it really the information I want? I would say not really, but that's another question. This is an important thing to bear in mind, because all driven problems, when two temperatures are kicking the system out of equilibrium, or when your particles are motorized as in active matter, sort of driven with their own propulsion, have an f implicitly; they are out of equilibrium, and there is no way to solve for the distribution in closed form.

Question: the stationary solution of the Fokker-Planck dynamics looks like the Boltzmann distribution, but without the kinetic term. How can I make sense of that? The particle is obviously moving.

Exactly. It's a tricky business, related to the question asked before. You have integrated away the velocity, but when you threw away the mass and set it to zero, the velocity became infinite all the time, because particles of zero mass have infinite velocity. So if you ask what happened to the velocity, you shouldn't ask that question: in throwing the mass away you have made a mess of the velocities. This notwithstanding, this is the equation you will find most often. You gain, because you have a simpler equation than the Kramers one, which looks really awful, but you pay for it, because then you have the whole Itô versus Stratonovich mess, which is a remembrance of the fact that you have been playing dirty. Of course, remember that if I had kept the memory kernel as we had it the first day, with the generalized Langevin equation, everything would be sweet and innocent, but then the equation is nasty.

Okay, let's check the stationarity; it's very easy. Let me plug the Gibbs measure into the equation. Now let me be a bit more
precise. What is this Gibbs measure? We are in the Kramers case, so it is e^{-beta (p^2/2m + V)} divided by Z. You see that already the streaming term kills this measure, because that term is the Poisson bracket of H with e^{-beta H}, and the Poisson bracket of a function of H with H itself is zero. Or, if you want, calculate it explicitly: apply the operator to this measure and you will see that you get zero. And it is no coincidence: this part, up to here, is pure Newton, and of course Hamilton's equations leave this measure stationary; they don't do anything to it. If you want to do it explicitly, just compute the derivatives acting on the exponential: H comes down, you get a term dH/dq times dH/dp from one piece, and exactly the same term with the opposite sign from the other, and they kill each other. The forcing term we said was absent, so let's cancel it here and here, so that our discussion is clear. Now this term is more interesting: this is the bath. And the bath, the way it is written, kills the measure too: the derivative brings down a p over m, with a beta that is cancelled by the temperature, and with a minus sign, and the other piece gives the same thing with the opposite sign. You don't even need to take the outer derivative: the bracket inside already kills the measure, and since the whole term is a derivative of that product, it kills everything; check it. Does it look funny that this only involves the kinetic part? You can be worried about this, but if you're not, I'm not going to worry you. Good: we found that when the operator acts on the Boltzmann measure it kills it, so the time derivative is zero, so the Boltzmann-Gibbs distribution is stationary. Then we will have to discuss whether it is the only stationary one; we will discover
that, roughly, it is, unless you do something really nasty, though we will have to decide what nasty means. Now let's look at the Smoluchowski case, which is even easier. We want to show that the operator applied to the measure gives zero, and here it is much easier than that: each one of the terms kills the measure separately, and you don't even need to take the outer derivative. Why? The measure in this case is e^{-beta V}, so when you differentiate once with respect to q, a dV/dq comes down, with a beta that is killed by the temperature and a minus sign from the exponent, and this exactly cancels the dV/dq term in front. So already the bracket inside kills the measure, each term kills the measure, and we get zero. So you have proven that the Gibbs distribution is stationary.

Now think a bit about what we have discovered. All the exercise of getting rid of the bath, and so on, that we have been doing for three days: we now have an ergodic theory. We have a system with noise that, if we let it go, goes to equilibrium, as one would like, provided there is no forcing. Again, also in this equation, if there is a forcing that does not derive from a potential, you're on your own; even in the simplest cases there is nothing general you can do, which means you are left with the tools of theoretical physics, perturbations and the like, but there is no clever trick that gives you a generic formula like this one. Questions? Even if you didn't follow every step, I hope you see the strategy.

Ah, yes, perhaps I should have said: if you have a forcing term, you are potentially putting energy into the system, and you are taking it out through the friction term. In those cases, if the potential is reasonably confining, you can show that you have a stationary distribution, but it has to be shown.
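The stationarity computations above, both the Kramers case and the Smoluchowski case, can be verified symbolically. A minimal sketch with SymPy (my own check, not from the lecture), taking k_B = 1 so that the temperature is 1/beta, for an arbitrary potential V(q):

```python
import sympy as sp

q, p, m, beta, gamma = sp.symbols('q p m beta gamma', positive=True)
V = sp.Function('V')(q)  # arbitrary confining potential

# Kramers case: Gibbs measure on phase space, P ~ exp(-beta*(p^2/2m + V))
P = sp.exp(-beta * (p**2 / (2 * m) + V))
stream = -(p / m) * sp.diff(P, q) + sp.diff(V, q) * sp.diff(P, p)  # Hamiltonian flow
bath = gamma * sp.diff((p / m) * P + sp.diff(P, p) / beta, p)      # friction + diffusion
print(sp.simplify(stream), sp.simplify(bath))  # 0 0

# Smoluchowski case: P ~ exp(-beta*V); each flux term kills the measure by itself
Ps = sp.exp(-beta * V)
current = sp.diff(V, q) * Ps + sp.diff(Ps, q) / beta               # drift flux + diffusion flux
print(sp.simplify(current))  # 0
```

Note that, exactly as said in the lecture, the bracket inside the bath term and the current in the Smoluchowski case each vanish on their own, before any outer derivative is taken.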
Because if your forcing term is too cruel and it overcomes the friction term, then you have a problem.

Question: I want to go back to when you wrote the Stratonovich discretization. The noise term has to be computed using the next time step; is there a meaning in that, or is it just the definition?

No, it's all constructed. Stratonovich has the technical advantage that if you do things à la Stratonovich, you can apply the ordinary rules of calculus, with the chain rule; it is constructed for that. So for theoretical manipulations it is more powerful, but in a simulation it makes your life completely miserable, because you need to know the next step in order to make the next step. So it's bad numerically, but it is a formal thing. Edgar, did you cover Itô versus Stratonovich? It is a subtle thing and a nasty thing, but it's life.

Question: I don't want to create controversy, but Itô, on the other hand, also has some advantages for calculating averages.

Yes. Stratonovich is better when you want to do calculations with derivatives because, as I was explaining, you can use the standard rules of calculus; in Itô you cannot differentiate with the chain rule, for example, though it's not that bad: you just have to add one extra term in the derivatives. But Itô has the advantage that it is very convenient for computing averages; the details I can explain to you. And let me add that when you do things with path integrals and such, we always use Itô; I have never seen otherwise. It is a subtle problem, and I think the risk is that the trees don't let you see the forest: when you face this Itô-Stratonovich question, it is rough, and for a moment you forget what you were doing. This is why I insist on it a lot. Okay, so thanks; we reconvene at 11 in the computer lab.
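As a footnote to the Itô versus Stratonovich discussion above: with multiplicative noise the two discretizations converge to genuinely different answers, not just different approximations. A sketch (my own example, not from the lecture) for dX = X dW with X_0 = 1, where the exact means are E[X_1] = 1 in Itô but e^{1/2} in Stratonovich; the Itô scheme evaluates the noise amplitude at the left endpoint, the Stratonovich (Heun) scheme needs a predictor step, which is exactly the "you need the next step to make the next step" complaint:

```python
import numpy as np

rng = np.random.default_rng(3)
paths, steps, dt = 20_000, 100, 0.01   # total time t = 1

def sigma(x):
    return x                            # multiplicative noise amplitude

x_ito = np.ones(paths)
x_str = np.ones(paths)
for _ in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal(paths)
    # Ito / Euler-Maruyama: amplitude evaluated at the *left* endpoint
    x_ito = x_ito + sigma(x_ito) * dW
    # Stratonovich / Heun: predictor-corrector, midpoint evaluation
    pred = x_str + sigma(x_str) * dW
    x_str = x_str + 0.5 * (sigma(x_str) + sigma(pred)) * dW

print(x_ito.mean())   # ~ 1.0   (Ito: E[X_1] = 1)
print(x_str.mean())   # ~ 1.65  (Stratonovich: E[X_1] = e^{1/2} ~ 1.649)
```

For additive noise (sigma constant) the predictor changes nothing and the two schemes coincide, which is why the question only bites once the mass has been thrown away and the noise becomes effectively multiplicative.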