So, welcome back to the Spring College; let's start with the third lecture of Edgar on stochastic thermodynamics. Okay, so welcome to the third lecture. Today I will briefly discuss fluctuation theorems, which are among the key results in the foundations of stochastic thermodynamics. But before going into this, I will go again through one of the proofs I showed yesterday, because I believe it was totally unclear what I did; even I, looking at my notes yesterday, couldn't remember what I did. And this happens a lot in physics. So, yesterday I was showing the second law, namely that S_tot dot, the time derivative of the total entropy, is on average always positive. I was going through this proof, and at some point it was very unclear, I must recognize. So I put a big question mark here and try to explain what I did in this step. You recognize that this is the system entropy and this is the environment entropy; that much is easy from what I explained, and the first term is zero on average simply because of conservation of probability. So the only term we have to take care of is the second one, which is j/(D p) times dx. Here I want to show you that this transforms into a simpler form if we change from Stratonovich to Itô; this part wasn't clear in my explanation, so I'll try to explain it a bit better. First of all, to understand this proof, forget about this term; you don't need it, and you can jump directly from here to here. Let's use the equation I have in the middle, this theorem. What I'm doing is transforming a Stratonovich integral, a Stratonovich product, into an Itô product of the same thing: the Itô product of j/(D p) with dx. And then there is a second term with a ds — well, I write ds, but it's dt.
It involves g squared, as I show here, divided by two, times f prime. What is g squared? It is the amplitude of the noise, so g²/2 is nothing but D. So the D here cancels against the 1/D that appears in the next terms; this is the first thing. The second thing is the f prime. The theorem is written for f ∘ dx, but now we have (j/(D p)) ∘ dx, so we have to take the derivative of j/(D p) with respect to x. Computing ∂x of j(x,t)/(D p(x,t)), the D comes out, and we get two terms: (∂x j)/p and, with a minus sign, j (∂x p)/p². These are the two terms that appear here and here. So why did I then say that this one equals zero? Because I applied the Fokker-Planck equation. The Fokker-Planck equation — okay, I'm making a big mess — says that ∂x j = -∂t p, which I show up here; that's why we can cancel this term. So we are just left with two terms: this one, the Itô product with dx, and this one, which goes with dt. These are the only terms that survive in this calculation. Now I continue: in the previous line I used the Fokker-Planck equation, and in the next line I use the Langevin equation, writing dx = μF dt + √(2D) dB. That's why the first term appears with μF, then there is a dB term, and then there is the dt term from before. So I have three terms, but the one in the middle is zero. Why? Because of a theorem from Itô calculus.
A function evaluated at x_t, multiplying the increment of the Brownian motion in the Itô sense, is zero on average. That's why the middle term is zero, and we are left with just these two terms. — Shouldn't the first term be multiplied by dt? — Actually, yes: we are missing a dt here, exactly; there is a dt missing. On the right you see j (∂x p)/p² times dt. So there are two terms that go with dt in this average. Actually, what I'm calculating here is the average of dS; then I have to divide by dt to get dS/dt, right? So two terms survive, and these are the two terms put together. What I use in the next step is the definition of the probability current: μF p - D ∂x p = j. So I get j²/(D p), and all the quantities in this formula are positive; that's why this quantity is always positive. There is another way to show the second law, which will come later in my course. We know that the total entropy change ΔS_tot for a trajectory x_t is k_B times the logarithm of the probability of the trajectory divided by the probability of the time-reversed trajectory, which I call x_t⁺. When you take the average of this, you multiply by P[x_t] and integrate over all possible trajectories — a path integral over Dx_t. And this you can see as ∫ P log(P/Q): it is a Kullback-Leibler divergence, really, between the probability of seeing a trajectory and another measure, which I call Q, but which in reality is the probability of seeing the time-reversed trajectory. And we know from information theory that any Kullback-Leibler divergence is non-negative. So you get a simple proof of the second law purely from information theory.
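The information-theoretic step above can be illustrated numerically. Here is a minimal sketch (my own, not from the lecture): for any two discrete distributions P and Q, the Kullback-Leibler divergence is non-negative, which is the property that makes ⟨ΔS_tot⟩ ≥ 0 in this argument.

```python
import numpy as np

# Minimal illustration: D(P||Q) = sum_i P_i log(P_i / Q_i) >= 0
# for any pair of distributions P, Q (zero iff P == Q).  This is the
# information-theoretic core of the second-law proof sketched above.

rng = np.random.default_rng(0)

def kl(p, q):
    return np.sum(p * np.log(p / q))

for _ in range(1000):
    p = rng.random(8); p /= p.sum()   # random "forward" path distribution
    q = rng.random(8); q /= q.sum()   # random "time-reversed" distribution
    assert kl(p, q) >= -1e-12         # non-negativity in every trial
print("KL divergence non-negative in all trials")
```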
So here I use the definition, or rather the result I showed you yesterday: the stochastic entropy production associated with a trajectory is k_B times the logarithm of the path probability of seeing x_t, divided by the path probability, in the time-reversed process, of seeing the time-reversed trajectory. This is another way, shorter maybe, but the other one is more physical; both are correct. I hope this helps — I think yesterday wasn't my best day. So this is the first thing. Please, if you have other questions, this is the time: ask me anything. Otherwise I'll go on with the lecture. — Excuse me, I have a question. Yesterday, when we went through the case of a discrete system with a Markov process, at a certain point we defined the environmental entropy as k_B times the log of the ratio of the jumping rates. I didn't get whether that was a definition, or if there were other reasons for it; I didn't get the physics out of it. — This is often an assumption, a key assumption, in stochastic thermodynamics. You have a state one and a state two, and when you are out of equilibrium, you just assume that the rate for going from one to two and the rate for going from two to one are related through the environment entropy: their ratio involves e to the minus the environmental entropy change in going from state one to two, divided by k_B. This is something we assume in stochastic thermodynamics; it is how we can do thermodynamics at all. Without this constraint it is very, very complicated to do stochastic thermodynamics, because we don't know, in every trajectory and every jump, what is going on either in the system or in the environment. This is called local detailed balance.
But many physical systems obey this relation — for instance molecular motors or colloidal systems. The environmental entropy change would typically be minus the heat absorbed in going from one to two, divided by the temperature. This is a key assumption that we use throughout stochastic thermodynamics, and it is valid in many small systems that sit in a thermal environment, an environment in equilibrium. So the key point is that we have a system immersed in a thermal bath, and this one bath, or several baths, are in equilibrium: you can have nonequilibrium in the system, but the environment is in equilibrium. That is what allows you to prove this. You can also go from microscopic dynamics to mesoscopic dynamics and prove this result, but that is even more technical, so I'm just sketching the assumptions briefly. If the environment is out of equilibrium, this is not valid. So in all the models I'm going to show, I assume there is a system, maybe driven out of equilibrium, in an environment that is in thermal equilibrium. — Okay, thank you so much. — Perfect. Other questions? — Is this usually a good assumption — assuming, or modeling, systems to be in thermal baths? Is it a good assumption, looking at the experiments? — Yes, looking at the experiments, yes. For instance, you have a colloidal system with one degree of freedom in a bath of water molecules, and there is typically an enormous number of water molecules around it, so I would expect the bath to have reached equilibrium at some point. Also, for the variables of the bath, something very important is the timescales: how long it takes for a bath variable to relax to equilibrium. If you compare an atom or a water molecule, which has a size of angstroms, with a colloid that has a size of microns, the water molecules relax to equilibrium much faster than the colloid.
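As a concrete sketch of local detailed balance, here is a two-state example (the energies, rate parametrization, and variable names are my own assumptions, not from the lecture): rates chosen to satisfy the relation reproduce both the environmental entropy per jump and the Gibbs stationary state.

```python
import numpy as np

# Sketch of local detailed balance (LDB) for a two-state system in a bath
# at temperature T:  w_12 / w_21 = exp(dS_env(1->2) / kB), with
# dS_env = -q_12 / T and q_12 = E_2 - E_1 the heat absorbed by the system.

kB, T = 1.0, 1.0
E = np.array([0.0, 1.5])          # energies of states 1 and 2 (assumed)

# Any rates consistent with LDB will do; a symmetric Arrhenius-like choice:
w12 = np.exp(-(E[1] - E[0]) / (2 * kB * T))
w21 = np.exp(-(E[0] - E[1]) / (2 * kB * T))

dS_env_12 = kB * np.log(w12 / w21)          # entropy change of bath in 1->2
assert np.isclose(dS_env_12, -(E[1] - E[0]) / T)

# LDB also guarantees that the stationary state is the Gibbs distribution:
p1 = w21 / (w12 + w21)
p_gibbs = np.exp(-E / (kB * T)); p_gibbs /= p_gibbs.sum()
assert np.isclose(p1, p_gibbs[0])
print("local detailed balance consistent with Gibbs stationary state")
```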
So in many situations you can really assume that the bath is in equilibrium whereas the system is out of equilibrium. Of course, you cannot do this atom to atom: you cannot do stochastic thermodynamics of one water molecule interacting with another water molecule in vacuum; that is not governed by stochastic thermodynamics. You clearly need a separation of timescales: degrees of freedom that relax very fast and reach equilibrium very fast, and degrees of freedom that are slow, like the position of this colloid, and take a longer time to reach equilibrium. That is the setting in which stochastic thermodynamics makes sense. — Thank you. — Questions? — Hello. What is the difference between this stochastic entropy and, for instance, the entropy in statistical mechanics? — Very important question. Take the entropy production in classical thermodynamics. I told you the idea: you have a gas with, say, 10^23 particles, 10^23 degrees of freedom, and you compress it. Along this process you dissipate heat, and you have a change in entropy: initially the system has higher entropy, and in the final state it is confined, so there is less disorder. If you do this process at a given speed, you can say that the total entropy production is minus the heat over the temperature, plus the entropy change of the system. Fine. The key point, however, is that in macroscopic thermodynamics this is a number. The entropy of compression of a gas — if you go to the books, you can calculate it; it is a number, I don't know, five k_B for example for a microscopic system, and it will be more for a bigger system.
So here what happened was the following: you had a protocol — for example, you were changing the volume of the gas, or the pressure; you were decreasing the pressure, something like this. And you had the response of your system, which was, for instance, the volume, decreasing or whatever; you have a response of the system, some curve like this. It means you do one protocol, and you get out of it one response. For a small system, this is not the case. You do, for example, a compression — an example of a compression could be the following. You have a harmonic trap like this, with a particle initially at zero, and you compress the system by making the trap stiffer, so you go to a situation where the trap is like this. And the protocol here, if the trap has energy U = (1/2) κ(t) x², is the stiffness — I think this is not very nice, I should change the color of this, sorry. This means that I am changing the stiffness of the trap in time, κ(t). If you have a small system, fluctuations are important. You can apply this protocol once and get a response — here the response will be the position of the particle — and in one run you may get this. But if you repeat the same process the next day, at the same time, with the same initial condition, you won't get the same response, because of the fluctuations of the system; you can get a totally different response. That's why we do stochastic thermodynamics: we have the same physical process, the same driving on the system, even the same initial condition, and totally different responses.
That's why we introduce the notion of stochastic entropy: it is an entropy that depends on the evolution of the system, which is fluctuating. If you and I do the same process — compression of a colloidal particle in the same experiment, starting from the same conditions — we will get different responses from the system and different measurements of the heat. You get the point? — It's like we are recovering the information about the fluctuations; we lose the fluctuations by using the standard methods. — Yes: using stochastic methods we can analyze not only the dynamics — where the particle is at time t, or what the distribution of the particle is — but also the thermodynamics. Each of these trajectories that we obtain has a different value of the heat, of the work, of the entropy production. And this is clear because the system has fluctuations; it is really a way of analyzing the fluctuations of a system. So we are interested not only in where the particle is, but also in the thermodynamic signature of this motion. I hope this was clearer. — Thank you. — Other questions? These questions are good, because I think they clarify better what I'm talking about. All right, I'll go on with the lecture. So today I wanted to introduce, at least briefly, the fluctuation theorems. Let me just erase this. Fluctuation theorems are, in a few words, statistical properties obeyed by the fluctuations of work, or heat, or entropy production, with a degree of universality. I call them theorems here, but most of the time they are not theorems, because we physicists don't do theorems — we obtain results. [At this point there was a problem with the tablet and the drawing app.]
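The point about fluctuating responses can be illustrated with a minimal overdamped Langevin simulation of the stiffening-trap protocol (parameters and discretization are my own assumptions, not from the lecture): the same protocol and the same initial condition give different trajectories in different runs.

```python
import numpy as np

# Sketch: overdamped particle in a harmonic trap whose stiffness kappa(t)
# is ramped up ("compression").  dx = -mu*kappa(t)*x dt + sqrt(2D) dB.
# Two runs with identical protocol and initial condition differ.

mu, kB, T = 1.0, 1.0, 1.0
D = mu * kB * T                       # Einstein relation
dt, n = 1e-3, 2000
kappa = np.linspace(1.0, 5.0, n)      # the protocol: stiffen the trap

def run(seed):
    rng = np.random.default_rng(seed)
    x = np.empty(n); x[0] = 0.0       # same initial condition every run
    for i in range(n - 1):
        x[i+1] = x[i] - mu*kappa[i]*x[i]*dt + np.sqrt(2*D*dt)*rng.normal()
    return x

x1, x2 = run(seed=1), run(seed=2)
print("final positions:", x1[-1], x2[-1])
assert not np.allclose(x1, x2)        # same driving, different responses
```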
[After restarting the app:] Okay, we're back. All right. So, fluctuation theorems are results that are obeyed by many physical systems. For example, you can find a result that is true for Langevin systems and also for Markov jump systems, etc. This is, in a sense, their degree of universality. [There were further problems with the tablet, and the recording was paused briefly.] I will send you the references later; for lack of time, I go directly to one of the key results, which is from around the year 2000: the Crooks fluctuation theorem. As with every theorem — we usually refer to these relations as theorems — there are some assumptions, so we first have to introduce the setup, and the setup I can explain with a kind of experiment. Imagine you have a particle in an optical trap; this is like having a particle in a spring. At time t = 0 we are in equilibrium. There is a small system, formed by this colloid and also a small chain, which could be, for example, DNA, attached to a pipette here. And we can control things: in our case, the control parameter will be the position of this pipette. So we have a reference frame with origin at zero, and X is the location of the pipette. Actually, we can also call it λ, because it is our control parameter, so we can say that this is λ(t).
So we start in equilibrium, in this configuration, and we have a process in which we take this pipette and move it away from the trap. The particle will undergo fluctuations, moving left and right randomly, while the pipette moves, in principle, in a deterministic way. At a later time, we will have the particle in the trap, perhaps slightly out of the focus, so it feels a harmonic interaction — the trap is like a spring for the particle. And the pipette will be a bit farther away than it was before time zero; it will be, for example, here, and the DNA could now even have a loop like this. In the middle of the process we are out of equilibrium. The process has a finite duration, and at the very end we let the particle relax again to equilibrium. So we are pulling on this DNA chain with the pipette, doing a protocol in which we stretch the DNA. You can find this very well explained in papers by the lab of Félix Ritort, one of the pioneers of these single-molecule experiments. You can reach a final equilibrium state with the DNA a bit overstretched, like this, and the pipette in a farther position. This will be time t = t_final, and we are in another equilibrium state with a different value of λ: here λ had a small value, and here λ is bigger. Related to the question asked earlier: as you see, when you have a small system like DNA and you pull on it, you will have different responses in different realizations; this will become very clear here. The key point is that we define the process going in this direction as the forward process — forward in time.
And we will also look at the reverse process, in which we start at time zero in this final equilibrium state, with nonequilibrium driving in the middle, and go backward; we do something like this. This is the so-called backward process. The setup for the theorem of Crooks can be found in two papers, both single-author: Crooks, PRE 1999, and another one, PRE 2000. Both are great to read and very well explained; they are real classics in the field. So we are saying that we start in equilibrium and finish in equilibrium, and what is important in equilibrium is the partition function. Something we can say about this process is the following: at time zero we are in a canonical equilibrium state at temperature T, and at time t_f we are again in a canonical equilibrium state at temperature T. In particular, I will focus on what is said in this paper and follow its notation. First I introduce a trajectory in the forward process. The trajectory — I will call it x̄, sometimes x̄_t — is a sequence of observations, x_0 up to x_t. This is a trajectory of the forward process, and we also introduce the time-reversed trajectory. Yesterday I used a tilde notation, which I now realize is not the best, so I think it is better to use x̄⁺ for the time-reversed trajectory. Its first element I will call x⁺_0, then x⁺_1, and at some point we will have x⁺_t.
In such a way that x⁺_0 equals the final value of the forward trajectory — it is like time-mirroring the forward trajectory. So x⁺_0 = x_t, then x⁺_1 = x_{t-1}, and so on, ending in x⁺_t = x_0. This is the time mirror of the forward trajectory. As for notation, I will sometimes call this x̄ and sometimes x̄⁺; the bar stands for trajectory. So, at time zero I am in equilibrium. It means that the distribution of the position of the colloid at time zero is a canonical distribution: P(x_0) = e^{-β[E(x_0, 0) - F_0]}. — I have a question. Is x_0 equal to x⁺_t? — Yes, exactly. So what I'm saying is, for example, you have a forward trajectory like this — this is the forward trajectory x_t — and for the time-reversed trajectory I start from here: first this point, then like this, then like this. I have done a time mirroring of the trajectory. And maybe another example: the forward trajectory is something like this, and the backward trajectory is its time mirror, something like this. Do you follow? If not, please say so. Okay, so this is the initial distribution of x at time zero. Here F_0 is nothing but the free energy built from the partition function: F_0 = -k_B T log Z_0, where the partition function is Z_0 = Σ_x e^{-β E(x, 0)}. Remember, I write E(x, 0) because the energy — the Hamiltonian — changes in time. This is the initial distribution, and the final distribution is similar, because I end in equilibrium: P⁺(x⁺_0), because this is the beginning of the backward process.
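The time-mirroring notation, which the question was about, can be sketched in one line of code (my own illustration): reversing the array of observations gives x⁺ with x⁺_0 = x_t and x⁺_t = x_0.

```python
import numpy as np

# Tiny illustration of time mirroring: the reversed trajectory x_plus
# satisfies x_plus[0] == x[-1], ..., x_plus[-1] == x[0].
x = np.array([0.0, 0.3, -0.1, 0.5, 0.2])   # a forward trajectory x_0..x_t
x_plus = x[::-1]                            # its time reversal
assert x_plus[0] == x[-1] and x_plus[-1] == x[0]
print(x_plus)
```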
So in the backward process, at time zero, I am in equilibrium. So this is e^{-β[E(x, t) - F_t]}. Why am I using E(·, t)? Because time zero in the backward process is time t in the forward process, so this is the energy at forward time t; and F_t is the equivalent free energy — the same expression as before, but with e^{-β E(x, t)}. So these are the initial and final distributions for the position of the particle. And now, something that is very simple to prove: you can compute the probability of the trajectory x̄_t divided by the probability, in the time-reversed process, of seeing the time-reversed trajectory. You do an experiment, you get a trajectory; you do many experiments, and this gives you the probability of seeing that trajectory. Then you ask yourself: what is the probability that, if I run the process in reverse, I get the time reversal of the trajectory I just observed? You can show easily that the numerator is the probability of x_0 at time zero, times the probability of the rest of the trajectory given the initial value. And in the backward process, likewise: P⁺ of x⁺_0 at time zero, times the probability in the backward process of the rest of the backward trajectory given its initial value — because at time zero in the backward process we are at x_t, the final point of the forward process. All right, so now comes something very important. Actually, this is related to what I explained yesterday: first, recall that the ratio between the conditional probabilities is related to the heat. As I assumed yesterday, this part is just the exponential of minus β times the heat associated with the trajectory x̄_t.
And second, because we start and end in equilibrium, we can use this relation and see that the ratio of the boundary probabilities is the exponential of β times the difference of the energies minus the difference of the free energies: e^{β(ΔE - ΔF)} — maybe there is not much space here. Very important point, and when you read the Crooks paper I don't know if it is emphasized enough: the heat is stochastic, it depends on the trajectory; ΔE is also stochastic, it depends on the initial and final values; but ΔF is not stochastic — it is deterministic, a number. It comes from the normalization of the probability: you see, we are summing over all possibilities. So ΔF is not stochastic, but all the rest is. So how else can I write this? Using the first law, as e^{β(W[x̄_t] - ΔF)}: the work done along the trajectory minus the free-energy change. This W is the nonequilibrium work, and ΔF is the equilibrium free-energy change. We often call W - ΔF the dissipation, W_diss, of the trajectory x̄_t. And we can also show — this is the same thing I explained yesterday — that you can write it as e^{ΔS_tot[x̄_t]/k_B}. So this is very basic, and I think it is easy to prove. And now, in order to prove the fluctuation theorems, I am going to follow a rather unconventional route, and I think it is a nice one: I will reproduce a result that appears in the paper by Crooks in PRE, which is really nice. The result is the following. We will consider averages of functionals. So we take a functional, a function of the trajectory, Ω[x̄] — instead of writing x̄_t, let me just write x̄; you understand that what I'm talking about is a trajectory. A functional, as you know, is a mathematical object that transforms something — a trajectory, a point in R^n — into a real number. It could be anything.
What Crooks shows is as follows — and the functional could be, for example, the entropy, the heat, the work, et cetera. Consider a functional and take its average, for example the average of the work, in the forward process. This will be the sum over all possible trajectories of the probability of seeing a trajectory, times the functional — sorry if I'm not following my notes well. This is just the definition of the average in the forward process. Now, what I do is use the relation above: the probability of any trajectory equals e^{βW_diss[x̄]} times the probability, in the time-reversed process, of the time-reversed trajectory. So this will be a sum over x̄ of the probability, in the backward process, of the backward — time-reversed — trajectory (let me keep this notation, because I'm just following the notation of Crooks' paper), times the exponential of β times the dissipation of this trajectory — careful, here it is not the time reversal — times Ω[x̄]. Very simple: I just applied the relation. Next, I recognize that W_diss is odd under time reversal — the dissipation, the entropy production, calculated over a time-reversed trajectory, is minus the dissipation of the forward trajectory. You can see this very easily: if you replace x̄ here by x̄⁺, the ratio becomes the inverse of the previous one, so if you take logs, you get the minus sign. Because of this property, I can replace this factor, getting the sum over x̄ of P⁺[x̄⁺] e^{-βW_diss[x̄⁺]} — the dissipation of the time reversal. And now I will introduce an assumption. So first, it is a fact that the entropy production, the dissipation, is odd under time reversal; and now I introduce the assumption that the functional Ω is even under time reversal.
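The oddness of the trajectory entropy production under time reversal can be checked on a toy discrete Markov chain. This is a simplified sketch of my own, with a static protocol (so the forward and backward path measures coincide) and an arbitrary transition matrix:

```python
import numpy as np

# Sketch: for a discrete Markov chain with a static protocol, the trajectory
# quantity dS[x] = kB * log( P[path] / P[reversed path] ) is odd under
# time reversal, as used in the proof above.

kB = 1.0
W = np.array([[0.7, 0.3],
              [0.4, 0.6]])        # transition matrix (rows sum to 1), assumed
p0 = np.array([0.5, 0.5])         # initial distribution, assumed

def path_prob(path):
    pr = p0[path[0]]
    for a, b in zip(path, path[1:]):
        pr *= W[a, b]
    return pr

def dS(path):
    return kB * np.log(path_prob(path) / path_prob(path[::-1]))

path = [0, 1, 1, 0, 1]
assert np.isclose(dS(path), -dS(path[::-1]))   # odd under time reversal
print(dS(path))
```

Taking logs of the inverted probability ratio is exactly the one-line argument given in the lecture; the code just makes it concrete.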
Let me write it like this: Ω⁺[x̄⁺] = Ω[x̄] — the functional of the time-reversed trajectory, in the time-reversed process, equals the functional of the forward trajectory. This is an assumption of the result, and it means Ω is even under time reversal. If I assume this, I can replace Ω evaluated at x̄ by Ω⁺ evaluated at x̄⁺. It is an assumption, but you will see that there are many observables, many functionals, with this property, and then it is very nice — many interesting functionals. So I reach this point, and you see that everything now carries pluses, so I can also write this as the average of e^{-βW_diss} times Ω⁺ in the reverse process. This is the result of the theorem: the average of an even functional in the forward process equals the average of the exponential of minus β times the dissipation, times the functional, in the ensemble of the reverse process. This looks very mathematical, and in reality it is. But there is another version of this theorem: imagine I put here e^{-βW_diss} times Ω. If I run the same argument, the e^{-βW_diss} will cancel with this factor, and I get the analogous version: the average of Ω e^{-βW_diss} in the forward process equals the average of Ω⁺ in the backward process. This is the same as what I showed. This theorem is very important. It is not so much highlighted by many people in the field, but to me it is the nicest way to obtain the fluctuation theorems. A collaborator of mine and I call this the mother fluctuation theorem — we call it like this because you can derive many children out of it. One of them is very famous, the Jarzynski equality, and another one is also very famous, the Crooks theorem.
So we use this mother fluctuation theorem — you can call it whatever you want — which is: the average of Ω e^{-βW_diss} in the forward process equals the average of Ω⁺ in the backward process. This is the mathematical result. And I will take one example of these functionals — the simplest functional you can think of: Ω = 1. This functional takes a trajectory and always gives one as output. It is even, because if you apply it to the time-reversed trajectory, it is also one, equal to Ω[x̄]. So if we apply the equation to Ω = 1, we get the average of e^{-βW_diss} — I won't write "forward"; when I write nothing, in my notation it means the forward process. So this is ⟨e^{-β(W - ΔF)}⟩. Remember: W is stochastic, but ΔF is deterministic. This equals the average of 1 in the reverse process, so it is equal to one. ΔF, I told you, is deterministic, so I can take it out of the average and move it to the right, and write ⟨e^{-βW}⟩ = e^{-βΔF}: a stochastic W on the left, a deterministic ΔF on the right. This is probably the most famous, the most important result in stochastic thermodynamics — I'm drawing a big square around it — and it is called the Jarzynski equality. So I am not proving this result the way the paper by Jarzynski does; I am proving it from here, in one line. This is a very, very important result in thermodynamics, and you should be very surprised by it, because we are getting a quantity from equilibrium, this ΔF, from nonequilibrium measurements. You can do a process arbitrarily fast and compute the work — this stochastic work I was talking about a moment ago, which seemed something very strange — and it gives you information about the equilibrium properties.
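Here is a numerical check of the Jarzynski equality for the harmonic-trap example discussed earlier. The protocol, parameters, and discretization are my own assumptions, not from the lecture: ramping the stiffness κ from 1 to 2 at finite speed, the exact result is ΔF = (k_B T/2) ln(κ_f/κ_i), and the exponential average of the nonequilibrium work recovers it.

```python
import numpy as np

# Sketch: Jarzynski equality <exp(-beta W)> = exp(-beta dF) for an
# overdamped particle in a trap U(x,t) = kappa(t) x^2 / 2, kappa: 1 -> 2.
# Exact free-energy change: dF = (kB T / 2) ln(kappa_f / kappa_i).

rng = np.random.default_rng(42)
mu = kB = T = 1.0
D = mu * kB * T
beta = 1.0 / (kB * T)
dt, nsteps, ntraj = 5e-3, 200, 50_000
kappa = np.linspace(1.0, 2.0, nsteps + 1)

# Sample the initial equilibrium (Gibbs) state at kappa_i:
x = rng.normal(0.0, np.sqrt(kB * T / kappa[0]), size=ntraj)

W = np.zeros(ntraj)                    # work accumulated per trajectory
for i in range(nsteps):
    W += 0.5 * x**2 * (kappa[i + 1] - kappa[i])   # dW = (dU/dkappa) dkappa
    x += -mu * kappa[i + 1] * x * dt + np.sqrt(2 * D * dt) * rng.normal(size=ntraj)

dF_exact = 0.5 * kB * T * np.log(kappa[-1] / kappa[0])
dF_jarzynski = -kB * T * np.log(np.mean(np.exp(-beta * W)))
print(dF_exact, dF_jarzynski)          # the two should agree
assert np.mean(W) > dF_exact           # <W> exceeds dF for a finite-time process
```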
So, normally, to get the free energy change you would do the process infinitely slowly: ΔF is the work when you drive the system quasi-statically. But some processes have enormous relaxation times, so you cannot do them infinitely slowly. Also, in an experiment you don't have all your life to wait. So this equation is extremely insightful, because you can get equilibrium free energies from non-equilibrium measurements of work. Okay, this is just an appetizer of all I want to say about this equality; for now, this is the proof, and I'm just introducing the key results. There is one that is very spectacular, a kind of generalization of Jarzynski: it's called the Crooks fluctuation theorem, and you can obtain it with the following choice. You take as Ω the following delta function for the work: δ(W[x] − w). Be careful, because capital W is the work as a functional of the trajectory and small w is a number. Okay. And as the other functional you pick the following: δ(W̃[x̃] + w), the work in the time-reversed process applied to the time-reversed trajectory, set equal to −w. This is related to the probability in the time-reversed process of seeing a work of value −w. You can show easily that these functionals are even: since W̃(x̃) = −W(x), we have δ(W̃(x̃) + w) = δ(−W(x) + w) = δ(W(x) − w). Okay, so this is very important, and it is just applying identities of the delta function. So when you apply these two functionals in our mother fluctuation theorem, we get: the average of δ(W[x] − w) exp(−β W_dis), evaluated along the forward trajectories, equals, sorry, I'm making a mistake here, give me a second to check my notes, the average of δ(W̃[x̃] + w) in the reverse process. Okay, so the left side is in the forward process and the right side is in the reverse process.
So, very important: here again I take out the exp(β ΔF), because ΔF is deterministic. What I get next is: the average of δ(W[x] − w) exp(−β W) in the forward process equals exp(−β ΔF) times the average of δ(W̃[x̃] + w) in the reverse process. Okay, and now some of you will notice what comes out of here. We are doing an integral with a delta function, so we can take the value of the work out as w: the factor becomes exp(β ΔF) exp(−β w), and what is left is the average of a delta function, which with a probability distribution becomes a probability, the probability in the forward process to get a work value w. Here, an average of a delta means we are counting only those events where this happens, so it is also a probability: in the reverse process, to get a work of value −w. This is a very nice result. And if you divide one by the other, you get this equation: P_F(w)/P_R(−w) = exp(β(w − ΔF)), where w is a value and not the stochastic work. And this is also one of the key results in stochastic thermodynamics, a really important result. So this is Crooks. No, be careful with the spelling: the name is Crooks, Gavin Crooks, don't write it otherwise. The Crooks fluctuation theorem, or fluctuation relation. What this means is the following: you can relate the probability in the forward process to get a work w to the probability in the backward process to get a work −w, and they are related by the dissipation. So, in other words, if you do an experiment, remember we had the forward experiment, say you do this process many, many times from left to right, and you collect a histogram of the work, the distribution P_F(w). You will obtain something like this, and of course the peak will sit at the average work in the forward process.
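The Crooks relation can be checked pointwise in a toy case I am adding here for illustration: hypothetical Gaussian work distributions. For Gaussian work, consistency with the theorem forces the variance to be 2(μ_F − ΔF)/β and the reverse-work mean to be μ_F − 2ΔF; all numerical values below are made up.

```python
import numpy as np

beta, dF, mu_F = 1.0, 2.0, 5.0
var = 2.0 * (mu_F - dF) / beta   # variance forced by the Crooks relation
mu_R = mu_F - 2.0 * dF           # mean work of the reverse process

def gauss(w, mu):
    # normalized Gaussian density with the shared variance above
    return np.exp(-(w - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

W = np.linspace(-2.0, 10.0, 7)
ratio = gauss(W, mu_F) / gauss(-W, mu_R)        # P_F(w) / P_R(-w)
print(np.allclose(ratio, np.exp(beta * (W - dF))))   # True: Crooks holds exactly
```

For this Gaussian pair the identity P_F(w)/P_R(−w) = exp(β(w − ΔF)) is exact, not just approximate, which is why Gaussian work statistics are a convenient test bed.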
Of course, the average work, as I told you, is greater than the free energy change, so typically the free energy change will be over here, to the left of the peak. Unless you do the process infinitely slowly: if you drive infinitely slowly, this peak will move onto ΔF. Okay. This is the probability density in the forward process to get an amount of work w. What does this theorem say? It says that if you do the backward process and plot the distribution of minus the work, P_R(−w), it will typically be different, something like this. This will be the probability in the backward process. We flip the sign because we are doing the process in reverse: instead of doing work on the system, we extract work. So this peak will sit at minus the average work of the reverse process. Okay. And very importantly: at what value do these histograms cross? You can get it from the equation: they cross when P_F(w*)/P_R(−w*) = 1. If you put a one on the left, exp(β(w* − ΔF)) = 1, and this happens only when w* = ΔF. So if you do a process and collect the work distribution, then do the time-reversed process, collect its work distribution and plot the distribution of minus the work, the two curves cross exactly at ΔF. And this happens at any speed of the process; this is a universal result, true for any driving velocity, of course. This is very nice and very insightful. And okay, I'll just finish by saying, well, I think I will need a second lecture on this topic, because this is very important in the field, that the way experimentalists test this, and I'll show you experimental results in the next lecture, is that instead of plotting the distributions like this, they take logarithms. If you take logarithms, you get ln[P_F(w)/P_R(−w)] = β(w − ΔF), so it is linear in the work.
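The crossing at w* = ΔF can be located numerically. This sketch (my addition, with hypothetical Gaussian work distributions whose variance and reverse-process mean are fixed by the Crooks relation) scans a grid for the point where P_F(w) and P_R(−w) are equal.

```python
import numpy as np

beta, dF, mu_F = 1.0, 2.0, 5.0
var = 2.0 * (mu_F - dF) / beta   # variance forced by the Crooks relation
mu_R = mu_F - 2.0 * dF           # mean work of the reverse process

def gauss(w, mu):
    return np.exp(-(w - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

w = np.linspace(-2.0, 10.0, 100001)
diff = gauss(w, mu_F) - gauss(-w, mu_R)   # P_F(w) - P_R(-w)
crossing = w[np.argmin(np.abs(diff))]     # grid point where the curves meet
print(crossing)   # ~ dF = 2.0
```

The crossing point lands at ΔF = 2 regardless of how far apart the two peaks are, i.e. regardless of how strongly the process dissipates, which is the "any speed" statement in the lecture.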
So what experimentalists typically do is take the ratio between these histograms, and they get points like this: they plot the logarithm of P_F(w) divided by P_R(−w) versus w. They get some data like this, I'll show you; I've also done this type of analysis with experimentalists. Then it turns out that it goes like a line: it is linear in w. The slope is β, one over kT, so you can get the temperature of the system by measuring the work. This is very nice. And the intercept... sorry, it's the x-intercept: when the logarithm of the ratio is zero, w equals ΔF. Yes, exactly, it's here, the x-intercept; that's a good point. So from the x-intercept you get the free energy, and the slope gives you β. You can get the equilibrium free energy by doing a process at any speed you want, even infinitely fast. Okay, so these are the consequences. If you do a process that has a different free energy change but the same temperature, what you will get is, mainly, something like this... no, sorry, this is when you change the free energy: a parallel line whose x-intercept moves to the new ΔF′. Here ΔF is negative, that's why you are on this side; typically ΔF is positive and you would come over here. And if you change the temperature, you change the slope: a higher temperature gives a smaller slope. Okay, so this shows that these are universal results. They are true for all the systems I was explaining, Langevin dynamics, Markov chains, electronic systems, et cetera; they have been tested for all these types of processes. In particular, the first test was a paper with Félix Ritort on biological systems, RNA hairpins, and it was very spectacular.
So I'll show you this more carefully in the next lecture, but I'd prefer now that you ask me questions, because I went through more or less half of what I wanted to explain today, but I think it's important. Do we have questions? Can you explain the forward process? I mean, I understand starting at some initial condition which is at equilibrium, but how does one end at equilibrium? Okay, something very important is that you don't need to end in equilibrium. In the backward process you start in equilibrium. So you manipulate the system: you start in equilibrium, and you end out of equilibrium at the final value of the control parameter λ. And now you wait for a while, you let the system relax, and you start the backward process. So the backward process also starts in equilibrium. Okay, and actually I can show you something which is probably interesting. Give me a second, because I have to find it in the paper. We did a test with optical tweezers. One second, I have to find the right reference... okay, I can show you some extra data. This is a paper we published in 2013. Here we have optical tweezers, and we are dragging the trap. This black line here is the center of the trap. Initially, for a while, the center of the trap is at, for example, minus 50 nanometers, and we wait until we reach equilibrium. Then we drive by moving the trap to plus 50 nanometers, and then we let the system relax. And then we go backwards. So we start the backward process in an equilibrium state: here we are out of equilibrium, but we let the system relax, and then we start the forward process again in equilibrium. This is the setup that is used in the Crooks theorem.
And I can show you, look here: we are plotting the histogram of the work in the forward process divided by the histogram of the work in the backward process. Actually, there's a minus sign missing here, that was a typo; with the minus, what you see here is correct. And you see that there is a linear relation between this asymmetry quantity and the work. And here, these are different temperatures, because we could control the temperature of the system, and they all cross in the same place, at the same free energy change, close to zero. So here you see a clear manifestation of the theorem. This is an experimental test; you can see it in this 2013 Physical Review paper with Martínez. It's a very nice test, but you will find hundreds of papers with experimental tests, so I'm not saying this is the most important one, it's just one more of a big class of experimental tests. Thank you. So I had another question, it was regarding... once again... Okay, look, you can also send me the question later by email if you need more time, it's okay. Ah, I just remembered it. You mentioned that these theorems work for Langevin systems, Markovian systems and discrete systems, I think. So, do these fluctuation theorems also work for underdamped dynamics? Yes, yes, totally. Because, let me go back one second, I'm having a lot of difficulties today, the proof that I showed you today is based only on probability-theoretic arguments. I used almost nothing specific here; I just defined the entropy of a trajectory, and that was for overdamped dynamics. So here there are no momentum degrees of freedom, but you can easily extend this to momentum degrees of freedom by considering that when you time-reverse, you also change the sign of the momentum. So it is not much work to do this for underdamped systems. This holds for all systems in which the dissipation takes this expression. Okay.
And Markovian systems in general: this covers overdamped Langevin equations and underdamped Langevin equations as well. That's why these theorems are important: they are valid for a very big class of physical systems. Okay. Yes, that does not change this result. Okay. Just on this: these time-reversed trajectories, is it that the protocol itself is time-reversed, or is it the same protocol but with a different initial condition, and it has to retrace the original trajectory? How does it work? Okay, it is simpler than what you're thinking. The forward process has a protocol λ(t). The reverse process has the protocol λ̃(t) = λ(τ − t), where τ is the final time of the protocol. So one protocol is doing, for example, like this, give me a second, I think you'll understand my drawing: one protocol does this, and the other protocol does the mirror image in time. This one is λ, and this one is λ̃. Okay. And the protocol λ produces trajectories. So what I'm drawing here is a given trajectory under the forward protocol λ, which could be, for example, this one. And here is the probability, in the backward process, to see exactly this same trajectory. It's tricky, because the backward process starts from here, so this is a very, very unlikely trajectory, but it can happen. We assume that all trajectories have some probability, even if very small. That's why it is weighted by the dissipation. When you do a process close to equilibrium, these two things are very similar: the probability of a trajectory and of its time reverse are nearly the same. The more non-equilibrium the process, the more different they are, and the more it dissipates. But this is just one ingredient of the proof.
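The protocol and its time reversal can be written down in two lines. A minimal sketch, with a made-up linear ramp from −50 nm to +50 nm mimicking the trap-dragging example:

```python
import numpy as np

tau = 1.0                          # duration of the protocol
t = np.linspace(0.0, tau, 101)

def lam(t):
    # hypothetical forward protocol: trap center ramped -50 nm -> +50 nm
    return -50.0 + 100.0 * t / tau

lam_F = lam(t)                     # forward protocol lambda(t)
lam_R = lam(tau - t)               # time-reversed protocol lambda~(t) = lambda(tau - t)

print(lam_F[0], lam_F[-1])   # -50.0 50.0
print(lam_R[0], lam_R[-1])   # 50.0 -50.0
```

Only the driving is mirrored in time; the trajectories themselves are whatever the dynamics produces under each protocol, which is the point of the answer above.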
Okay, but this is just a mathematical trick to get to the key result. And the key result says: I do a forward process many times and I collect a histogram, and I do a backward process many times and I collect a histogram. I don't need that, after one trajectory of the forward process, I see the same trajectory in the backward process. No, no, I don't need that. Okay, thank you. Okay, maybe other questions? Yes, please. I want to know if I got it right: can I say that the phase space, in the way we are doing these calculations, is split into, for example, two different parts, where in one part there is equilibrium, and there is a path between these parts which makes the balance? Okay, so I think what you're imagining is much more complicated than what I'm explaining. There is no breaking of anything. There is just a process that I run forward, and this process generates trajectories; and then there is a backward process that generates trajectories. The first and the last parts are in equilibrium; in the middle part the trajectories are not in equilibrium. All right, so I start in equilibrium, and I drive the system away from equilibrium. Okay. So the system is initially in equilibrium at the value λ₀, and now I'm driving the system with λ(t). Along this driving the system is out of equilibrium, doing whatever. Okay. And now I finish at a value of the protocol which is here, λ_f. This is the forward process. In the backward process, okay, this forward trajectory can end up wherever, but then I let the system relax, because I keep my protocol at λ_f. Here I'm not doing work, I'm doing nothing; there is only relaxation. And now I execute this process backwards. So I go like this: backwards, and I go like this.
So in other words, I do like this, then I relax to equilibrium, and then I do the opposite, which will be like this. Okay. So here I'm in equilibrium; here I am out of equilibrium all the time; here I'm relaxing to equilibrium, a new equilibrium state; and then I'm out of equilibrium again. That's the setup. Thank you. Okay, so we have just a four-minute break before the next lecture. So thank you very much again, Edgar, and thanks to all of you. See you in the next lecture. Thank you.