Sure, one second. So welcome, everybody. Today we start with our fourth lecture. Before starting, is there any question about the previous lecture, or any doubt? OK, not a doubt, but a clarification. The entropy definition based on trajectories is clearly not a state function, but in thermodynamics entropy is a state function. Why do you say it is not a state function? In the sense that state-dependent means it depends only on the initial and final states, right? Here it depends on the whole trajectory, if you look at it. Yes, but be careful. One thing is the total entropy production, which depends on the whole trajectory. The other thing is the system entropy. The system entropy is what is a state function in classical, macroscopic thermodynamics. Here, in a way, it depends only on the initial and the final state, but it does so through p: it involves p(x_0) and p(x_t) at the final time. So you could say it depends only on the initial and final points, but it depends on them through p, and p depends on the whole dynamics. So in reality it is not the same; it is different, essentially. What enters the equation are the logarithms of p(x_0) and p(x_t). So it is not exactly the same thing as in classical thermodynamics. Be careful when you identify what is a state function and what is not, and be careful not to confuse system entropy with entropy production. The entropy production is related to the probability of the entire trajectory, whereas the system entropy is related to the probability at the initial time and to the probability of the final value of the trajectory at the final time. In other words, the system entropy depends on the solution of the Fokker-Planck or master equation at times 0 and t, whereas the entropy production depends on the solution at all intermediate times as well. OK, yes, yes.
OK, I can go to the whiteboard later and we can discuss this if you want. OK, any other question? Right, then I'll go ahead. Yesterday we were a bit slower than expected, so I split the lecture on fluctuation relations into two parts; today, in the second part, it will be much more relaxed. I still had that question from last time: you mentioned that in the reverse trajectory we start with x_t at equilibrium. But one thing is not clear to me. In an experiment you measure: you start with x_0, say, and go to x_t, and you measure the work only up to x_t, right? OK, I have the answer to this in the next slide; be patient. So this is a recap of what I explained yesterday, Crooks and Jarzynski, and this is the setup. You have a forward process. The forward process starts in an equilibrium state with control parameter λ_0; this is what you see in the blue curve below. So we need an equilibration time, and we start with a Hamiltonian with control parameter λ_0. Then we drive λ with a protocol λ(t) for a finite time τ; this is what the blue line going up means. So we are driving the system, and at some point we stop changing the control parameter and freeze it at the final value λ_τ. The main point you have to realize is that in this process, work is done on the system only during the ramp. In the beginning, in equilibrium, no work is done on the system, because the Hamiltonian is not changing. Keep this in mind. So you start in equilibrium, you drive out of equilibrium, and then you relax back to equilibrium. The only work you care about is what happens in the middle: starting from equilibrium and ending, in general, in a non-equilibrium state; very important. Unless the process is very slow, in which case you end in equilibrium. This is the forward process.
The backward process is as follows. You start with the Hamiltonian in equilibrium, but with a different control parameter, λ_τ, and you drive it backwards in time. Well, time always goes forward; what you actually do is the time-reversed protocol. Instead of going from λ_0 to λ_τ, you go from λ_τ to λ_0, as I show in the red curve on the right. Again, we start in equilibrium, we drive, we reach a non-equilibrium state, and then we let the system relax to equilibrium. Very important: the only place where you do work is in the ramp. And you do not need, in the backward process, to start at the same x at which you ended the forward process. Very important. I hope this clarifies your question. Yes? All right. Then what you do is measure the work in the ramp of the forward process. This is the first formula: the integral of ∂H/∂λ dλ. You measure the work for a single trajectory, and you collect a histogram over many realizations of the same process. In each of these trajectories you measure the work from time 0 to time τ, and you obtain a histogram, a distribution. Then you do the same in the backward process: you start from a different equilibrium state, you drive with the reversed protocol, and you collect the work from time 0 to time τ. This is the second formula. Just note that x† is the trajectory obtained in this backward experiment, λ† is the driving in the backward experiment, and so on. Very simple; much simpler than what you were imagining. This is the setup, and the result is what I showed yesterday: the distribution of the work in the forward process.
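To make the work measurement concrete, here is a minimal numerical sketch (not the experiment itself; the harmonic form of the Hamiltonian, the linear ramp, and all parameter values are assumptions) of an overdamped Langevin particle in a trap whose centre λ is ramped at constant speed, accumulating W = ∫ (∂H/∂λ)(dλ/dt) dt along each noisy trajectory:

```python
import numpy as np

# Sketch (assumed model): overdamped Langevin particle in a harmonic trap
# H(x, lam) = (k/2) (x - lam)^2, with the trap centre lam ramped linearly
# from 0 to 1 over a time tau.
# Work along one trajectory: W = int (dH/dlam) (dlam/dt) dt
#                              = int -k (x - lam) lam_dot dt.
rng = np.random.default_rng(0)
k, kBT, gamma = 1.0, 1.0, 1.0        # stiffness, thermal energy, friction
tau, dt = 2.0, 1e-3
n_steps = int(tau / dt)

def work_one_trajectory():
    x = rng.normal(0.0, np.sqrt(kBT / k))   # equilibrium start at lam = 0
    lam, lam_dot, W = 0.0, 1.0 / tau, 0.0
    for _ in range(n_steps):
        W += -k * (x - lam) * lam_dot * dt               # accumulate work
        x += -k * (x - lam) / gamma * dt \
             + np.sqrt(2 * kBT * dt / gamma) * rng.normal()
        lam += lam_dot * dt
    return W

works = np.array([work_one_trajectory() for _ in range(500)])
print(works.mean())   # fluctuates run to run, but the mean stays positive
```

Each realization gives a different W, exactly as in the pulling experiments. For this particular model ΔF = 0, since the free energy of a harmonic trap does not depend on where its centre sits, so the entire average work here is dissipation, and the second law shows up as ⟨W⟩ ≥ 0.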
So the probability to get a work value w in the forward process is related to the probability to get a work value -w in the reverse process. When you plot the two histograms, you see that they cross at the free energy change. This is a sharp prediction for all these types of models, Langevin equations or master equations. OK, that is the setup for Crooks. For Jarzynski, very important, give me a second, just be aware of something. In Crooks there are two physical processes, forward and backward: two protocols. In Jarzynski there is only the forward protocol: one single protocol. And what we showed, and you can see it very nicely, I proved this from the so-called mother fluctuation theorem, but you can also prove Jarzynski from Crooks. You want to know the average of e^{-β(W - ΔF)}, averaged over the forward distribution. In the second line I use Crooks' theorem: I replace e^{-β(W - ΔF)} P_F(W) by P_R(-W). What you get is an integral over all work values of the distribution in the backward process, and that is one, because it is a normalized distribution. So ⟨e^{-β(W - ΔF)}⟩ = 1. And since, as I said, the process starts and finishes in equilibrium, the free energy is not stochastic here, so I can take ΔF out of the average, and I get that the average of the exponential of the non-equilibrium work equals e^{-βΔF}: the equilibrium free energy change between the initial equilibrium state and the final equilibrium state, which is reached not at the end of the protocol but after relaxation. I hope it is clearer now. It is very important that you realize that all these theorems and relations have assumptions, and you have to go through the assumptions: read the papers carefully and understand them. In this field, it is more important to understand the concepts than to be a mathematician. This is conceptual, physical research.
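The chain of steps just described, proving Jarzynski from Crooks, can be written in one display:

```latex
\begin{aligned}
\left\langle e^{-\beta (W-\Delta F)} \right\rangle_F
  &= \int_{-\infty}^{+\infty} \mathrm{d}W \; P_F(W)\, e^{-\beta (W-\Delta F)} \\
  &= \int_{-\infty}^{+\infty} \mathrm{d}W \; P_R(-W)
     \qquad \text{(Crooks: } P_F(W)\, e^{-\beta (W-\Delta F)} = P_R(-W)\text{)} \\
  &= 1 \qquad \text{(normalization of } P_R\text{)},
\end{aligned}
\qquad \Longrightarrow \qquad
\left\langle e^{-\beta W} \right\rangle_F = e^{-\beta \Delta F}.
```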
All right, so now, for completeness, I will discuss experiments that people have done to test these results. The first experiment was a real breakthrough. It was published in Science, one of the most important journals in all fields of science, not only physics. In this paper they did the following experiment. On the left you have a particle in an optical trap and a second particle attached to a micropipette. They tether a DNA molecule between these particles and pull on it, much as if you took a rope and pulled it. But this rope is very small, so it is strongly affected by fluctuations. If you pull like this, you will not get the same response in each pull; it differs because of the fluctuations. This is what you see in panel B at the top. At fast pulling speeds, for example in the green curves, the force trajectories differ; here they can measure the force on the colloid because the optical trap can be used as a force sensor. The force is different in each realization of the same process: a manifestation of fluctuations. You can use these trajectories to measure the work done along each of them, and what they get is shown in panel D on the right: distributions of the work, or of the dissipation, for different pulling speeds. Now you can use the formula shown in the middle, which is just Jarzynski's equality solved for ΔF. You collect values of W, and you take the average not of W but of e^{-βW}; you take the log of that average, multiply by k_BT, and you get the free energy. This is what is shown in the last figure. Back then, experiments were not as precise as they are now, so there is an error bar of about 0.5 k_BT, which they can justify.
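The free-energy estimator just described (average e^{-βW}, take the log, multiply by k_BT) fits in a few lines. Here is a sketch on synthetic data; the Gaussian work distribution with mean ΔF + σ²/(2 k_BT) is an assumption chosen so that Jarzynski's equality holds exactly, which lets us check that the estimator recovers the true ΔF:

```python
import numpy as np

# Jarzynski free-energy estimator: delta_F = -kBT * ln < exp(-W / kBT) >.
# Synthetic data (assumption): Gaussian work values with
# mean = delta_F + sigma^2 / (2 kBT), which satisfies Jarzynski exactly.
rng = np.random.default_rng(1)
kBT = 1.0
delta_F_true = 2.0
sigma = 1.0
W = rng.normal(delta_F_true + sigma**2 / (2 * kBT), sigma, size=200_000)

# Numerically stable exponential average: shift by the minimum work value
# so the largest exponent is zero before averaging.
shift = W.min()
delta_F_est = shift - kBT * np.log(np.mean(np.exp(-(W - shift) / kBT)))
print(delta_F_est)   # close to delta_F_true = 2.0
```

The shift trick matters in practice: averaging raw e^{-βW} can underflow or be dominated by round-off when βW is large.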
And they see that the difference between the theoretical value of the free energy, what you get from molecular dynamics, and what you get from non-equilibrium pulling is very small. So the theorem was tested with great accuracy in this Science paper. OK, that is Jarzynski's equality. Jarzynski's equality has an issue, though: the average of the exponential is dominated by very low values of the work. You need to sample very rare events in order to get a precise measurement of the free energy. That is, let's say, a shortcoming of this equality: you really need to find rare events to have a very precise free-energy estimate. All right, that's Jarzynski. For Crooks' theorem there was an analogous experiment published in Nature, also a very big journal, in which they did a very similar experiment but with RNA; well, I wrote DNA here; I think they did it with both DNA and RNA hairpins. You can see the top-left figure: this is the hairpin, like a rope folded into a loop. They pull on the two strands and open and close it. You can see curves of opening and closing the hairpin in different colors: the orange going up and the blue going down. What you see, and this is very important, is a hysteresis loop: it is not the same going one way as coming back. This is a signature of non-equilibrium. With this data you can measure the work of unfolding and the work of refolding, and these obey the assumptions of Crooks' theorem. So you collect the unfolding and refolding work values and plot the histograms. On the right, look for example at the red histograms. First of all, the data are not perfect Gaussian distributions, because this data is very difficult to collect.
Maybe they have 20 or 50 runs. You have to come back to real life: this is not a simulation, not an equation on the blackboard. This is very difficult data to get; it may take years to do this experiment. What you see is a red histogram on the right, the distribution of the unfolding work, and a red dashed histogram on the left, which, by Crooks' theorem, is the probability of minus the work in the reverse process. And you see they cross at roughly 110 k_BT. This is done at a given pulling speed, 20 piconewtons per second. You can do it slower; that is what is shown in the green curves. When you pull more slowly, the histograms change because you dissipate less, so the average work gets closer to the free energy. But they still cross at the same place, at 110 k_BT. And when you pull even more slowly, the histograms get narrower, but they still cross at 110 k_BT. This is a manifestation of Crooks' theorem: same system at different pulling speeds, and the work histograms cross at the same place, which is the free energy change. Moreover, they can compare this crossing point, here given in kilocalories per mole (you can convert k_BT to kilocalories per mole very easily), with what you get from molecular dynamics. Molecular dynamics means simulating all the atoms and molecules of the hairpin: something very expensive and very precise. And they see that the free energy obtained from Crooks' theorem agrees, within experimental error, with the molecular dynamics value. So this theorem is really, really good. It is good because you do not need to pull infinitely slowly; you can do it quite fast. It is very practical.
And finally, on the bottom right, they plot the logarithm of the ratio between the forward histogram and the backward histogram, and it is linear in W, as I showed you yesterday in the theory. So this is something the theory predicts and something you can see very clearly in a very complex experiment, because this is not just a Langevin equation; it is something much more complicated: a biopolymer in water with excluded-volume effects, et cetera. That is why this paper was in Nature, actually. All right, there were more recent tests. Yesterday I told you about an experiment, well, not so recent, that we did with optical tweezers; nowadays many labs do these experiments. What we do is take a colloid in a trap and move the center of the trap to the right, measure the work, let it relax, move it to the left, measure the work, let it relax, and so on. You can do this at different temperatures and collect the histogram of the work. This is what you see on the bottom right: histograms of the work when you do the process at different speeds or at different temperatures. On the left I show the symmetry function: the log of P_F(W) divided by P_R(-W). You see the slope changing, because we could do the experiment at different temperatures; the temperature changes the slope of this curve. With this I finish the slides, and I will now share my whiteboard to continue a little with the theory. Do you see my whiteboard now? Yes, yes. OK, thank you. So the first thing I'll do, and it is something really nice, is to explore the consequences of Jarzynski's equality. Yesterday I explained that Jarzynski's equality says the following: ⟨e^{-βW}⟩ = e^{-βΔF}.
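The linearity of the symmetry function can be checked in a short script. This is a sketch on synthetic data (an assumption: near-equilibrium Gaussian work statistics with means ΔF ± σ²/2 in units where β = 1, which satisfies Crooks exactly), not a re-analysis of the experiment:

```python
import numpy as np

# Symmetry-function check of Crooks' theorem,
#   ln[ P_F(W) / P_R(-W) ] = beta * (W - delta_F),
# on synthetic Gaussian work data (assumed model, see lead-in).
rng = np.random.default_rng(2)
beta, delta_F, sigma = 1.0, 1.0, 1.5
n = 1_000_000
W_fwd = rng.normal(delta_F + sigma**2 / 2, sigma, n)   # forward work values
W_rev = rng.normal(-delta_F + sigma**2 / 2, sigma, n)  # reverse work values

bins = np.linspace(delta_F - 2, delta_F + 2, 41)       # overlap region
pf, _ = np.histogram(W_fwd, bins=bins, density=True)
pr, _ = np.histogram(-W_rev, bins=bins, density=True)  # estimates P_R(-W)
centers = 0.5 * (bins[:-1] + bins[1:])
mask = (pf > 0) & (pr > 0)
sym = np.log(pf[mask] / pr[mask])                      # symmetry function

slope, intercept = np.polyfit(centers[mask], sym, 1)
print(slope, intercept)   # near beta and -beta * delta_F
```

The fitted slope plays the same role as in the tweezers experiment: repeating this at a different temperature (different β) tilts the line.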
Okay, in other words, give me a second because this is not working very well, you can also say that ⟨e^{-β(W - ΔF)}⟩ = 1. Very nice. And now the first consequence that comes out is very simple: it is the second law, and you can see it as follows. First of all, a recap of mathematics: Jensen's inequality. If you have a convex function, for example the exponential function, call it f_c(x), it is very easy to show that the value of the convex function at the average of two points is smaller than or equal to the average of the function's values at those points: f_c(⟨x⟩) ≤ ⟨f_c(x)⟩. This is Jensen's inequality. It is true for any convex function, and in particular we can take the exponential: e^{⟨x⟩} ≤ ⟨e^x⟩. Very simple, all right? Now we apply this inequality to Jarzynski's equality, taking x = -β(W - ΔF). On one side, ⟨e^{-β(W - ΔF)}⟩ = 1; on the other, by Jensen, this is greater than or equal to e^{-β⟨W - ΔF⟩}. So e^{-β⟨W - ΔF⟩} ≤ 1. Now look at the function e^{-x}: for it to be smaller than or equal to one, its argument must be non-negative, so β⟨W - ΔF⟩ ≥ 0. Therefore, in the end, we get that ⟨W⟩ - ΔF must always be greater than or equal to zero.
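Jensen's inequality for the exponential, e^{⟨x⟩} ≤ ⟨e^x⟩, is easy to see numerically; a tiny sketch (the Gaussian sample is an arbitrary choice, any distribution works):

```python
import numpy as np

# Jensen's inequality for the convex exponential: <exp(x)> >= exp(<x>).
# Checked on an arbitrary random sample; the inequality holds for any
# empirical distribution, not just this one.
rng = np.random.default_rng(3)
x = rng.normal(0.5, 2.0, 100_000)
lhs = np.exp(x).mean()   # <exp(x)>
rhs = np.exp(x.mean())   # exp(<x>)
print(lhs >= rhs)        # True
```

Applied with x = -β(W - ΔF), where ⟨e^x⟩ = 1 by Jarzynski, this is exactly the step that yields ⟨W⟩ ≥ ΔF.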
So, applying Jensen's inequality to Jarzynski's equality, we get that the average work minus the free energy change is greater than or equal to zero, which is nothing but the second law: ⟨W⟩ ≥ ΔF. So Jarzynski's equality and Crooks' theorem are, in a way, generalizations of the second law, because they provide more information than the second law and give the second law as a corollary. Just convexity applied to the equality implies the second law, which I discussed before in my lectures; simple math gives you the second law. All right, now I'll introduce something which I believe is more exciting: the extent of second-law "violations". You can find this in some papers by Chris Jarzynski, who called them second-law violations, with quotation marks. So let us recap a little and discuss the shape of the work distribution. When you do a process infinitely slowly, the distribution is sharply peaked at the free energy change. However, when you do the process faster, you are out of equilibrium: you have ΔF here, and typically the distribution will be broader, like this; this axis is W and this is P(W). Sometimes I was writing P_F(W), but now there is no reverse process; I will just discuss one process. This is the average work. The second law states that the average of this distribution is greater than the free energy change. And now, very important, this shaded region here is what is often called the second-law "violations", again with quotation marks. These are events in which the work done on the system is below the free energy change. This can happen: you can do an experiment and find such events.
So first of all, let me formalize this. Close to equilibrium, P(W) approaches a delta function, δ(W - ΔF): the distribution is sharply peaked at the free energy change. Out of equilibrium, the distribution is no longer a delta; what we have in general is that the average work is greater than or equal to ΔF. And now I will ask the following question: what is the probability of the left tail? How likely is it, for instance, to obtain values of the work out here, in this tail? I will call this distance ξ, with ξ ≥ 0. So I would like to know the probability, not the probability density but the probability, that W - ΔF is smaller than or equal to -ξ. Sorry, I made a mess here; let me clean this up. What is the probability that the work minus the free energy change is smaller than or equal to -ξ? This is what is called the extent of second-law violations in some of the works by Jarzynski. I will try to give an answer to this, and there is an elegant way to give at least a bound on this quantity. The elegant way uses just Jarzynski's equality. We proceed as follows. The probability that W - ΔF ≤ -ξ is just the area under the distribution from minus infinity up to this value: the integral from -∞ to ΔF - ξ of dW P(W). This is the definition of the probability that the work is below the free energy by an amount ξ. So far, this is an equality; that is clear.
Now, in this integration region, W ≤ ΔF - ξ, so W - ΔF + ξ is less than or equal to zero; let me mark this region with a star. In this region, W - ΔF + ξ ≤ 0 implies that e^{-β(W - ΔF + ξ)} ≥ 1. You have this inequality in the whole region. Therefore we can bound the integral: the integral from -∞ to ΔF - ξ of dW P(W) is smaller than or equal to the integral from -∞ to ΔF - ξ of dW e^{-β(W - ΔF + ξ)} P(W), because the exponential factor is greater than or equal to one there. So up to here we have an upper bound. And now, ΔF is a number and ξ is a number; the integral is over W, so everything that does not involve W comes out of the integral. This equals e^{β(ΔF - ξ)} times the integral from -∞ to ΔF - ξ of dW e^{-βW} P(W). Let me just write this a bit more neatly, sorry. Excuse me, yes? What was the definition of ξ? Because I supposed it was W - ΔF. No, ξ is a number, a distance: this point here is W = ΔF - ξ, a distance ξ below ΔF. Ah, I see, okay. Okay, thanks. Just a parameter. All right, so we are now at this expression, the integral from -∞ to ΔF - ξ. And now we recognize that the integrand is a positive number, always greater than zero. So we can bound the whole thing: e^{β(ΔF - ξ)} times the integral is smaller than or equal to e^{β(ΔF - ξ)} times the integral from -∞ to +∞ of dW e^{-βW} P(W).
Because the integrand is a distribution times a positive exponential, it is always positive, so extending the integral to the whole real line can only make it larger; I am just comparing the two integrals. So where are we now? What is this remaining integral? It is ∫ dW e^{-βW} P(W); I can call the distribution P_F, for the forward process, although since there is only one process here I do not really need the label; sorry, I was a bit exaggerated with the notation. This integral is nothing but ⟨e^{-βW}⟩ in the forward process, and ⟨e^{-βW}⟩, by Jarzynski's equality, is e^{-βΔF}. So we have e^{β(ΔF - ξ)} e^{-βΔF}: the ΔF factors cancel, and we get e^{-βξ}. And this is a very nice result. We are saying that the probability of a second-law "violation", the probability that the work falls below the free energy change by a given amount, is smaller than or equal to e^{-βξ}. In other words, to recapitulate: the probability that W - ΔF is smaller than or equal to -ξ is smaller than or equal to e^{-βξ}. This means, basically, that the left tail of the work distribution is exponentially suppressed in the thermodynamically forbidden region. Values of the work that are classically forbidden are very unlikely: suppressed at least exponentially in ξ. That is what this result means. So I recapitulate: this is the average, this is the free energy, and the probability of the whole area a distance ξ or more below ΔF is smaller than or equal to e^{-βξ}, right?
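The chain of steps just derived, in one display:

```latex
\begin{aligned}
\Pr\!\left[\, W - \Delta F \le -\xi \,\right]
  &= \int_{-\infty}^{\Delta F - \xi} \mathrm{d}W \; P(W) \\
  &\le \int_{-\infty}^{\Delta F - \xi} \mathrm{d}W \;
       e^{-\beta (W - \Delta F + \xi)}\, P(W)
       \qquad (W - \Delta F + \xi \le 0 \text{ in this range}) \\
  &\le e^{\beta(\Delta F - \xi)}
       \int_{-\infty}^{+\infty} \mathrm{d}W \; e^{-\beta W} P(W)
   \;=\; e^{\beta(\Delta F - \xi)}\, e^{-\beta \Delta F}
   \;=\; e^{-\beta \xi}.
\end{aligned}
```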
I hope this is more or less clear. If not, please ask me any question. This is one of the key consequences of the theorem. For the rest of the lecture, I will generalize a bit what I explained, by discussing what are called detailed and integral fluctuation relations for entropy production. The results I just showed can also be generalized slightly to the total entropy production, S_tot. As you have seen, in isothermal systems the dissipation is W - ΔF; so if I get a theorem for the entropy production and take the particular case of an isothermal system, I recover the fluctuation theorem for the work and the free energy. So: detailed and integral fluctuation relations for entropy production. Again, this is explained very well in the paper by Udo Seifert, the same one I discussed the other day, PRL 2005. And the setup is what I also explained in my second lecture: a non-equilibrium driven Markovian process, in which we obtain trajectories, for example x running from x_0 (sorry, I think someone left their microphone on; there is some noise) to x_t; and the time-reversed trajectory x†, which runs from x†(0) = x_t to x†(t) = x_0. So again, a non-equilibrium driven Markovian process, which includes those described by Fokker-Planck equations and those described by master equations: a broad class of systems. What we can get here is that, for a specific choice of initial and final states, we can use again the so-called mother fluctuation theorem, this one, and show that instead of the dissipation we can put the entropy production. This is more general than what I showed before. In my previous lecture, from yesterday, I put here the dissipation, but I proved everything with path probabilities.
So it is much easier and much more natural to consider the total entropy production, which, remember, is nothing but this: the entropy production up to time t associated with a trajectory x is k_B times the logarithm of the probability of that trajectory divided by the probability, in the time-reversed process, of the corresponding time-reversed trajectory. If you put this into the mother fluctuation theorem, you can very easily prove the theorem for functionals that are even under time reversal; same thing as I explained yesterday. And now, instead of the omega I took yesterday, where one of the cases was the delta function fixing the work to the value w, δ(W[x] - w), you can take something similar but with omega being, for example, the delta function fixing the total entropy production of the trajectory to the value s, plus the analogous reverse term, as I explained yesterday. If you plug this in, what you get is a theorem very similar to yesterday's: the probability in the forward process that the entropy production up to time t equals s, times e^{-s/k_B}. Be careful: s here is a value, a number, a dummy variable you could say, whereas in the functional it is a function of the trajectory. This equals the probability, in the reverse process, that the entropy production equals -s. Same steps as yesterday. This gives the so-called detailed fluctuation theorem for entropy production: the probability density in the forward process that S_tot(t) = s, divided by the density in the backward process that the entropy production equals -s, equals e^{s/k_B}. This is the detailed fluctuation theorem. It is very, very exciting. Sorry for this. Of course, you can consider processes with driving, like the ones I was introducing yesterday and today, in which you have one protocol in the forward process and the time-reversed protocol in the backward process.
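Written out, the trajectory definition and the resulting detailed fluctuation theorem are:

```latex
\Delta S_{\mathrm{tot}}[x]
  \;=\; k_B \,\ln \frac{\mathcal{P}_F[x]}{\mathcal{P}_R[x^{\dagger}]},
\qquad
\frac{P_F\!\left(\Delta S_{\mathrm{tot}}(t) = s\right)}
     {P_R\!\left(\Delta S_{\mathrm{tot}}(t) = -s\right)}
  \;=\; e^{s/k_B},
```

where s on the right is a number (a dummy variable), while ΔS_tot[x] on the left is a functional of the whole trajectory.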
That is driving; but you can also have processes that are out of equilibrium in which the driving is constant. So, for example, you can have something like this: the forward and the backward are the same process. One example: think of a particle on a ring, with a homogeneous velocity field pushing in one direction. Here there is no time-dependent driving: the Hamiltonian is not changing in time; it is a fixed non-equilibrium condition, and the single particle has a greater tendency to move to the right than to the left. Sometimes it will jump backwards, but overall it moves more clockwise than counterclockwise. This is called a non-equilibrium steady state, which people often abbreviate as NESS. In this case, the forward and backward processes are the same. So for a general driven process the reverse protocol is the time reversal of the forward one, λ†(t) = λ(τ - t), while for a non-equilibrium steady state the reverse process equals the forward process, and λ does not change in time, but you are out of equilibrium. In this case, forward and reverse are the same process, and we write the theorem with the same P on both sides: P(S_tot = s) divided by P(S_tot = -s). Very important: same process. There is a question in the chat. Yes, please, would you read it? I can read it to you: would you please explain again how we can determine that the backward trajectory is exactly the forward process happening in reverse, considering the fact that the process is happening in a non-equilibrium steady state? Yes, okay; be careful, because in this question you are mixing two things, trajectory and process. But let me try to explain this with an example, which I am trying to find in my slides. One second.
I think I don't have it in these slides, but let me find a presentation with a good illustration; a good image is worth a thousand words, as we say in Spanish. Okay, let me stop sharing and share something else. This one. So, for instance, sorry, this is from something else; don't be scared by the formulas. If you look at this figure, this is like a roller-coaster potential on a ring. You could put a particle there, let it relax to the minimum, and it would stay trapped there. But you can also apply a torque around the ring. (We don't see the picture. Sorry? We don't see the picture. Oh, sorry. You see it now? No, we see numerical tests, Langevin dynamics, and then it's all blank. Maybe show the window without full screen. Okay. You see it now? Yes.) Okay, so this is a particle on a ring, and on top of the ring potential there is a torque. If the particle is here, it feels the potential but also the torque, and the torque does not change in time. This creates a non-equilibrium steady state, because the torque is pushing the particle clockwise all the time. So there is someone pushing the particle with a net velocity, and it is a non-equilibrium steady state because there is a current. You could have this process without the torque, and that would be equilibrium motion: no current, the particle moving equally to the right and to the left. But if I have a torque, I have a current, and this is a non-equilibrium steady state. And there is no time-dependent change of the potential; the non-equilibrium condition is fixed.
That's why the forward process and the backward process are the same: because the protocol is not changing in time. Okay, I think I can also discuss this on my whiteboard. So in other words, when you have a protocol, okay, sorry, it's not very nice, but okay, this will be lambda of t. So you have different options. One option is that lambda(t) is constant and the system has relaxed: I'm doing nothing, this means equilibrium. Another option is that lambda(t) changes in time following a protocol, for example something like this. This is a general non-equilibrium driven process: I have a parameter in the Hamiltonian that is changing in time, so I'm not in equilibrium. And a third option is that I fix the parameter at a value, but under a fixed non-equilibrium condition: this is the non-equilibrium steady state. And you see, if I reverse this blue line, I get the same thing. So it's the same process forward and backward. However, if I reverse the red line on top, the forward and the reverse are different. That's why we use Pf and Pr there. So I'm going to write a reply to your question; I cannot see the chat. Okay. Great. I'm trying my best to explain this. Okay. So far, I proved the detailed fluctuation theorem for non-equilibrium steady states, which is simpler than the general driven case. It means the following for non-equilibrium steady states. If you run this steady state many times, you get trajectories of the entropy production as a function of time, and you will have trajectories like this. The second law tells you there is a positive drift. And the probability of being at value s at time t is much, much higher than the probability of, for example, reaching minus s. This is much less likely. Okay.
So you will see a lot of trajectories going up, and another one coming here, and another one here. You will see exponentially more trajectories here with respect to here. And this will depend on which level you look at. So this is what the theorem is saying: P(S_tot(t) = s) divided by P(S_tot(t) = -s) equals e to the s over kB. Okay. So the farther apart I put these barriers, the more different the two probabilities will be. And be careful: this is not first passage. It's just the probability to be at this value at this time, not to cross this barrier. Very important. Crossing barriers and first-passage times I will explain in my last lecture, which is part of my recent theory. What it means here is: you are at this value at time t, nothing more. All right. Very good. So this was the detailed fluctuation theorem. The integral fluctuation theorem is just the following: the average of e to the minus S_tot over kB is equal to one. This you can derive as follows. The average is the integral over s from minus infinity to plus infinity of P(S_tot = s) times e to the minus s over kB. Now I use the detailed theorem: P(S_tot = s) e to the minus s over kB equals P(S_tot = -s). So the average becomes the integral from minus infinity to plus infinity of P(S_tot = -s) ds, which is one, because the distribution is normalized. Therefore we have the result: the average of e to the minus S_tot over kB equals one. This is true for non-equilibrium steady states and for general driven Markov processes with time-dependent driving. This is called the integral fluctuation relation for entropy production. Why integral? Because it's obtained from the detailed one by integrating.
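The integration step above can be checked numerically. A quick sketch of my own (not from the lecture), assuming a Gaussian distribution of entropy production with kB set to one: if the variance equals twice the mean, the detailed fluctuation theorem holds exactly for the Gaussian, and the exponential average comes out equal to one even though the mean is positive.

```python
import numpy as np

rng = np.random.default_rng(1)

mu = 2.0  # mean entropy production, in units of kB
# Variance = 2*mu makes P(s)/P(-s) = e^s hold exactly for a Gaussian.
s = rng.normal(mu, np.sqrt(2.0 * mu), size=1_000_000)

print(s.mean())           # close to 2: positive drift, <S_tot> >= 0
print(np.exp(-s).mean())  # close to 1: integral fluctuation theorem
```

The same exponential average with any other variance would not give one; the detailed relation is what pins it there.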
Just like with the integral fluctuation relation for the work, I repeat the same thing with Jensen's inequality. Jensen's inequality implies the second law: the average of S_tot is greater than or equal to zero. Okay. Good. So far, I think I've gone through almost everything I wanted. And just to let you know, S_tot has a positive slope: this is growing in time on average, where the average is over many trajectories. So it's a process that grows. And you should be a bit amazed by this. Imagine you have a process that grows in time: you plot S_tot against time t, and it has excursions but a tendency to grow. Okay. So when you have something that grows with time and you take the negative exponential, e to the minus S_tot over kB, you would expect this to go down, no? Remember that S_tot at time zero equals zero, so e to the minus S_tot over kB at time zero equals one. So this one starts from one, and since S_tot on average grows like this, you would expect its negative exponential to decay. But here this process is very special. S_tot fluctuates and grows, but its negative exponential is, on average, totally flat. It's not changing in time: it is equal to one at all times. You should be a bit surprised, and you should realize that there is something special in this process, in entropy production. Typically, when you have a stochastic process that grows, its negative exponential decreases on average. Typically, but not in this case. And this is something very, very deep, and it has some connections you will see in my last lecture. This is related to something that is called a martingale, as we will show.
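To see this "flat exponential" concretely, here is a small sketch of my own (units of kB = 1, arbitrary bias): a biased random walk where each forward step produces entropy ln(p/q) and each backward step produces minus that, so that per step the expected value of e^(-S) given the past is exactly unchanged. The mean of S_tot grows linearly, while the mean of exp(-S_tot) stays pinned at one at every time, which is the martingale behaviour being described.

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 0.6, 0.4                # forward / backward step probabilities
n_traj, n_steps = 200_000, 20
ds = np.log(p / q)             # entropy produced by a forward step (kB = 1)

steps = rng.random((n_traj, n_steps)) < p        # True = forward step
s_tot = np.where(steps, ds, -ds).cumsum(axis=1)  # S_tot(t) for each trajectory

print(s_tot[:, -1].mean())           # grows: n_steps*(p-q)*ln(p/q) ~ 1.62
print(np.exp(-s_tot[:, -1]).mean())  # stays ~ 1 at every time
```

Per step, E[e^(-dS)] = p*(q/p) + q*(p/q) = 1, which is why the exponential average neither grows nor decays.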
This is my recent theory with my collaborators: this process is very, very special, and it is a martingale process in non-equilibrium steady states. Okay. This will be a very technical lecture at the end, in which I will show you very recent results and experiments as well. But please keep in mind that this is true also for driven processes, not only for steady states. So, more or less, this is it. I wanted just to explore the consequences of these theorems. So I leave the floor for questions, if there are any. Sir, can you explain why, in the beginning, for the experimental proof of the Jarzynski relation, you said the rare fluctuations are important? I didn't get that part. Well, because you need to calculate something like this: the integral over w of P_F(w) times e to the minus beta w. Somehow you see this as an integral, but in reality you have to estimate it from histograms. For example, you will have a sum over bins: the probability to get the work w_i, you're doing a binning, times e to the minus beta w_i. And there is the following issue: when you make this sum, which values of the work will be dominant? When w is very negative, this exponential will be very, very large, you see. Therefore, the negative values of work will be dominant in this average. But the probability distribution itself could be exponentially decaying, right? It could be; it depends on the case, on the physical process. There are many examples. I can send you a reference if you want, where this type of average, if you do it naively, just with regular bins, is highly biased as well.
So even with a very large number of samples, there can be a statistical bias in this average. You just have to be very careful about how you estimate it. And there has been a lot of research on estimating this with proper estimators of the probability distribution; I can send you some papers. It is not an open area of debate right now, but 10 years ago there were a lot of people working on this. So you have to be very careful. Also, keep in mind the following. I told you that you have the Crooks relation, so you have the forward work distribution and the reversed one, and I was saying, okay, very nice, the reverse is cutting the forward here, and the crossing point gives you Delta F. Great. But just realize something: when you do this type of experiment, here is W and here is Delta F, and this will work as long as these two histograms are close enough to overlap. If you do the process very fast, you will have one histogram here and one histogram here: this will be the average of the forward work, and here the backward one, and they don't overlap. So imagine you do an experiment and you have histograms like this. This is a big problem. It means that these theorems are useful, but they have a limitation. If the average work is, for example, 100 kBT and the histogram width is 10 kBT, then in a short experiment you will never see the crossing region. I have suffered this problem myself in my research many times. So you have to use some techniques to extrapolate the histograms and see where they cut. These are universal relations, but they have a limitation for applications: if you do the process too fast, this doesn't work. Plus you can create turbulence in the fluid, et cetera, so it's not so easy. Close to equilibrium, the crossing is easy to see: you are out of equilibrium, but not so far. Okay. Just one last question related to this.
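The bias being warned about is easy to reproduce numerically. A sketch of my own (kBT = 1, Gaussian work chosen so the true Delta F is exactly zero by the Jarzynski relation): the naive estimator -ln<exp(-W)> over a small sample systematically overestimates Delta F, because the dominant rare negative-work values are almost never sampled.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, var = 5.0, 10.0  # Gaussian work with var = 2*mu: Delta F = mu - var/2 = 0

def jarzynski_estimate(n_samples):
    """Naive Jarzynski estimator of Delta F from n work samples (beta = 1)."""
    w = rng.normal(mu, np.sqrt(var), size=n_samples)
    return -np.log(np.exp(-w).mean())

# Typical small-sample estimate: strongly biased above the true value 0.
small = np.median([jarzynski_estimate(50) for _ in range(1000)])
# With vastly more samples the rare negative-work tail is finally reached.
big = jarzynski_estimate(1_000_000)
print(small)  # well above 0: biased
print(big)    # much closer to 0
```

The point is that the bias is not noise: it has a systematic sign, because undersampling the negative-work tail can only make the exponential average too small.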
So, negative entropy production trajectories are very rare, right? So is it okay to say that... It depends on the system. Okay, negative total entropy production is rare, but how rare depends on the system. Imagine I give you a model, say a cycle of three states, and I tell you the rates are as follows: the clockwise rates are 2 inverse seconds and the counterclockwise rates are 1.5 inverse seconds. What is a trajectory of negative entropy? Well, a trajectory of positive entropy will be going with the flow, for example going from one to two and around the cycle, back to one. You have done a cycle and returned to one, so the system entropy change is zero. Sorry, the only contribution you have is from the heat of the three jumps: it will be three times the logarithm of the forward rate divided by the backward rate, 3 ln(2/1.5). And this is positive, right? But now you can have a backward cycle, because these rates, 2 and 1.5, are very close to each other. So you can have a trajectory going the other way around, right? So it's not that rare; it depends strongly on the rates. If you are close to equilibrium, you have a much higher likelihood of seeing negative entropy events, which would be a cycle against the flow, and in that case S_tot will be minus 3 ln(2/1.5) in this cycle. Okay, and the probability of this cycle and of that one is very easy to calculate: it's a Markov chain, so it's just the product of the transition probabilities along the path. All right, so it's not that difficult. What counts is how asymmetric the rates are, how different they are in one direction and in the other. Yeah, I understand, but actually my question is how these trajectories are related to the work values, which are really rare, in the sense that when you calculate work and entropy from the same trajectory, are rare entropy trajectories somehow related to rare work values? Well, you don't need to think so hard; think about this example. Negative entropy corresponds to trajectories that are uncommon, and positive entropy to trajectories that are common. Actually this is given in the formula: the entropy production of a trajectory is kB times the logarithm of the probability of the trajectory divided by the probability of its time reversal. So if you see a typical trajectory, this will be positive, because the probability of its atypical reverse is smaller than that of the typical one. But if you see an atypical trajectory, this ratio will be smaller than one, so it will be negative, okay? Because its time reversal will be more likely. So it's all about how typical a trajectory is with respect to its time reversal. Right, and then of course we relate this to physics, because every jump we relate to heat: we say k_ij divided by k_ji is related to the heat, e to the minus beta q_{i to j}. This is the way we connect to physics, you know. Yeah, so we can connect it to entropy, I get that, but it's not a logical statement to say that high-work trajectories will also be rare; I mean, they will be rare, but not because of this reason. No, no, they will be rare for this reason as well, because what I showed you the other day is that the work
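The numbers in this cycle example can be checked directly. A small sketch of my own: with clockwise rates 2 and counterclockwise rates 1.5, the probability that the next jump is clockwise is 2/3.5 at every state, and the ratio of the forward-cycle probability to the backward-cycle probability reproduces exp(S_cycle/kB) with S_cycle = 3 ln(2/1.5).

```python
import math

k_fwd, k_bwd = 2.0, 1.5  # clockwise / counterclockwise rates (1/s)

# Entropy production of one full clockwise cycle, in units of kB.
s_cycle = 3.0 * math.log(k_fwd / k_bwd)

# Probability that the next jump is clockwise, the same at every state.
p_cw = k_fwd / (k_fwd + k_bwd)
p_fwd_cycle = p_cw ** 3          # three clockwise jumps in a row
p_bwd_cycle = (1.0 - p_cw) ** 3  # three counterclockwise jumps in a row

print(s_cycle)                    # ~ 0.863 kB: less than one kB
print(p_fwd_cycle / p_bwd_cycle)  # equals exp(s_cycle) ~ 2.37
```

Because s_cycle is below one kB here, backward cycles are only a factor of about 2.4 less likely than forward ones: close to equilibrium, negative entropy events really are not that rare.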
for a trajectory, minus the free energy change, divided by the temperature, is the same as S_tot. Okay, in an isothermal system, (W - Delta F)/T equals S_tot. So there is a direct relation between work and irreversibility. So in a sense, for rare trajectories, work and entropy production are both rare events, right? Yeah, okay, but a rare event in the dynamics is also a rare event in the work. If a trajectory is very rare in probability, it implies that you get a rare event in the work, which means negative dissipated work: S_tot is very negative and the work is very small with respect to Delta F. That's the point. You can work with simple examples. For example, the other day I was talking about the optical tweezers that you drag at fixed velocity. You write the equation for the work, and you will see that the rare events are those where the particle is advancing ahead of the trap instead of lagging behind, and these are events where the work is negative: we are extracting work. So the best way to see it is with examples. Try to work out this example and you will see. Okay, so any other question? If not, then we thank Edgar again for his lectures, and we meet next week. Have a nice weekend, and see you on Monday. I will discuss engines on Monday. So let's stop here.
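The dragged-trap example can be worked out in a few lines. A sketch of my own (kBT = 1, friction coefficient 1, arbitrary trap stiffness and speed): an overdamped particle in a harmonic trap moving at constant velocity, with the work computed along each trajectory from dW = -k*v*(x - vt)*dt. The mean work is positive, but a clear minority of trajectories, those where the particle runs ahead of the trap, have negative work.

```python
import numpy as np

rng = np.random.default_rng(4)
k, v, dt, n_steps, n_traj = 1.0, 1.0, 0.01, 500, 5000

# Start equilibrated in the trap (Gaussian with variance kBT/k = 1/k).
x = rng.normal(0.0, np.sqrt(1.0 / k), size=n_traj)
work = np.zeros(n_traj)
for step in range(n_steps):
    trap = v * step * dt
    # Work done on the system by moving the trap center.
    work += -k * v * (x - trap) * dt
    # Overdamped Langevin step: relax toward the trap plus thermal noise.
    x += -k * (x - trap) * dt + rng.normal(0.0, np.sqrt(2.0 * dt), size=n_traj)

print(work.mean())        # positive on average, as the second law requires
print((work < 0).mean())  # small but nonzero fraction of negative-work events
```

These negative-work trajectories are exactly the rare events discussed above: the ones whose time reversal, a particle lagging behind as usual, is the more probable motion.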