Okay, so I have a question. Yes? In the experiments you have shown, in two of the experiments the particles were overdamped. Was the dynamics ever underdamped? Yeah, in general it's underdamped, because particles have mass, but it depends strongly on the sampling time of the trajectory. If you take snapshots of a particle on timescales larger than the relaxation time of the momentum, you can say that the momentum has relaxed and the only non-equilibrium variable is x. When you have this finite sampling, with a Δt larger than the momentum relaxation time, you can neglect the inertial term. This is typically what happens in many experiments: the relaxation time of the momentum is of the order of microseconds, and one samples the position of the particle in milliseconds, so you can neglect the inertial term. That is for the dynamics; for the thermodynamics one has to be careful. If you are in one thermal bath, you can use the overdamped approximation without many risks, but when you have a space-dependent temperature you have to be very careful, and I can send you some references about it. But in the simplest case, an isothermal system, you can use the overdamped approximation and get a good approximation for the heat. Okay, please send me an email and I will send you some references. Thanks. All right. So if you have no other questions, I'll go on with the second lecture. The second lecture is about stochastic entropy. Mainly, what I'd like to discuss today, as I illustrate here, is the second law in stochastic thermodynamics. Okay. So first of all, here is a reminder of what the second law says about macroscopic processes.
So you have here two snapshots, and you know from intuition that this can be a typical process that happens: a glass breaking into many pieces. However, the reverse process doesn't happen spontaneously. It is very unlikely; you've never seen it in your life. So is there a physical quantity that quantifies this fact? This question has a positive answer: it's entropy production. What we say is that in the movie going from left to right, the entropy production has increased, so the entropy of the system plus the environment has increased, whereas going from right to left the entropy production would decrease. How can we see this? We can decompose the entropy production into two terms. One is the entropy of the system, which I call ΔS_sys here. This is related, as you know, to disorder. Typically a physical process will lead to an increase in the disorder of the system; here at the end there is more disorder than at the beginning of the process. It will also lead to dissipation of heat. Dissipation of heat can be interpreted as an increase in the disorder of the environment, by which I mean all the rest that is not this glass. So typically we will see this type of process: an increase in the environment entropy and an increase in the system entropy. And there is a law that says that the total entropy always increases: it is the sum of the system entropy and the environment entropy, which always grows, or is equal to zero in equilibrium. This means that we can make a system more organized or ordered by reducing its entropy, ΔS_sys negative, but we will have to pay a price, which is increasing the entropy of the environment, which means dissipating heat. This is what classical thermodynamics says. Okay, so going back to the classics, here are two very important characters, Sadi Carnot and Rudolf Clausius.
And they stated the second law in the following way. One thing: I highly recommend you to go through this book if you want to refresh your knowledge about the second law; it is by Enrico Fermi and it's called Thermodynamics. What do they say? They say the following. Imagine you do a physical process in which you expand a gas. You have a gas in contact with a thermal bath, and what you do is you take this piston and expand the gas in this way. During this process there takes place a dissipation of heat from the system to the environment, and I will call this Q. Okay. Now imagine you do the same process, the expansion of a gas, but you move the piston infinitely slowly. Even if you do it so, so slowly, there will be a dissipation of heat, which is typically called Q_rev. Well, here I call Q the heat absorbed, so Q will be negative here; Q_rev is the heat absorbed when I do the process infinitely slowly. The second law, in a few words, says: any thermodynamic process dissipates more heat than when you do this process reversibly. When you dissipate more, it means the heat absorbed is more negative; this is what this equation says. In other words: first of all, something that was proved by Clausius is that the heat absorbed in a reversible process only depends on the initial and the final configuration of the system. So it defines a state function, which is called entropy. It depends only on the final and initial state, and I will often write it like this, ΔS_sys. So this is a state function. And what the equation on the left is telling us is the following: the difference between the reversible heat and the actual heat is positive. We can interpret the reversible heat over temperature, as I said, as the system entropy change, and minus the heat divided by temperature as the entropy change of the environment. Here we will assume the environment is always in equilibrium.
So this means that this quantity here, ΔS plus ΔS_env, where ΔS_env is the entropy change of the environment, is always positive. We can interpret this difference, the heat term divided by temperature, as the sum of the entropy change in the system plus the entropy change in the environment. This is called the entropy production, ΔS_tot, and it is always greater than or equal to zero. It is only zero when Q is equal to Q_rev, that is, when you do the process infinitely slowly. All right. So this is the second law; it's written in stone, as you see. This is true for macroscopic processes, very important. Excuse me. Yes? Was the difference between the top and bottom process the speed of expanding the box? Yes, exactly. This is slow, and this is faster, any other speed. So this one is infinitely slow, and this one is at any speed that is finite. Okay. All right. Great. So what does this mean? Here is also a reminder for isothermal processes. If you do this process in a thermal bath, so there is a gas in a thermal bath of temperature T, we have that the entropy production is this formula, which I just explained. And now I will use this relation for the free energy, F = E − TS, where E is the energy of the system, S is the system entropy, and T is the temperature. Just recall that in this type of relation these are system variables: this S is not the environment entropy, it is the system entropy; the environment enters only through T. Just a reminder. So if you put this equation into the first equation, you get this one; I'm just replacing the definition of the free energy. And now I use the first law: I write ΔE as Q plus W, and what I get is this equation. The entropy production is the work minus the free energy difference, divided by the temperature.
So this means that the second law, ΔS_tot ≥ 0, means that the work done in any process is greater than or equal to the free energy difference between the initial and the final state. Okay, very, very simple; this is a classic example. What happens when we consider a non-isothermal process? For example, you have a gas in contact with a hot thermal bath during an expansion, and then we compress the gas while it's in contact with a cold bath. This is the working principle of heat engines, of the engine of a car, for example. This gives us work, because we are expanding the gas in contact with a hot bath, which gives us a lot of energy, and then we are compressing when it's cold, so it takes less energy to compress it. At the end of the day, we get work out of this. So imagine I want to know the entropy production in a cycle, after making a full cycle here. In a cycle, ΔF is zero because F is a state function, and then we just have Q divided by T, but there are two baths, the hot bath and the cold bath: Q_h divided by T_h plus Q_c divided by T_c. The entropy production, which is minus this sum, should be greater than or equal to zero. And if I now use the first law, remember that in a cycle ΔE is zero, so the first law reads zero equals the heat plus the work, and the heat of course is the heat from the hot bath plus the heat from the cold bath. So I use this equation here and I get this condition, which is nothing else but the bound on the efficiency. This one here is the efficiency: the work extracted divided by the heat intake from the hot bath. And the bound is a number that only depends on the temperatures of the baths, not on the details of the system. So this means, in simple words, that the efficiency of a Ferrari, of any engine, of any car, is smaller than or equal to the Carnot efficiency. There is an upper bound for the efficiency of heat engines.
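As a quick worked example of the bound just stated: since no system details enter, the numbers are trivial to check. The bath temperatures below are invented for illustration, not taken from the lecture.

```python
# Carnot bound on the efficiency: eta = W_out / Q_h <= 1 - T_c / T_h.
# Illustrative bath temperatures; any engine between them obeys the bound.
T_h, T_c = 500.0, 300.0
eta_carnot = 1.0 - T_c / T_h
print(eta_carnot)  # 0.4: no engine between these two baths can exceed 40% efficiency
```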
Okay, this is just an introduction, but I will go through this more in one of my next lectures. Another important key point about entropy production is that it is related to irreversibility. So now consider a process at finite velocity: the expansion of a gas at finite velocity. This is called the forward process, the one I'm showing here. And now say I do the process backwards in time; instead of going this way, in the backward process I go this way, backwards. If you do this and you take a snapshot of the process at intermediate times, like I show here in this red square, you will see differences: the distribution of the positions of the particles of the gas is different in the forward and the backward process. So there is a degree of distinguishability of the arrow of time, and there is a relation between irreversibility and entropy production. I say this because what one finds is that when you do a process infinitely slowly, the snapshot that you take in the middle is not very distinguishable between forward and backward, but when you do it very, very fast, there is a lot of heat dissipation and it is much easier to distinguish forward and backward from a snapshot in the middle of the process. So intuitively there is a relation between heat and irreversibility, or entropy production and irreversibility. This is basic knowledge, and this is what I will now try to explain also at the mesoscopic scale. In classical thermodynamics one cannot quantify irreversibility, but in stochastic thermodynamics one can do it, and this is what my lecture will be about today. Okay, so now I enter more into the detailed theory on this. I'll try to explain the second law in stochastic thermodynamics, and mainly I will follow this paper here by Udo Seifert in Physical Review Letters, 2005.
This was a pioneering paper because it is where the notion of stochastic entropy and the second law in mesoscopic systems were introduced. Okay, so in this paper, what he discusses is a simple system described by a 1D overdamped Langevin equation, like the one I was explaining to you yesterday. We have ẋ = μ f(x,t) + √(2D) ξ(t), where μ is the mobility; you can also say that μ is γ⁻¹, the inverse of the friction coefficient. f(x,t) is a force, and the last term is the noise: the square root of twice the diffusion coefficient times a white noise ξ(t). One can describe this process in two ways. One is by generating trajectories with the Langevin equation. The other is in terms of the probability distribution; for instance, you can use the Fokker-Planck equation to describe the probability that the system is at x at time t, given that it was at x₀ at time 0. This is what I will call p(x,t). We know from stochastic processes that for this Langevin equation, the associated Fokker-Planck equation is this one: ∂_t p(x,t) equals, first, a drift term, minus the partial derivative with respect to x of μ f p, and then a diffusion term: D, the diffusion coefficient, times the second derivative of p(x,t) with respect to x. And often we write this as minus the partial derivative with respect to x of J(x,t), where J is called the probability current. This is standard in stochastic processes. So first of all, one would like to define the entropy of the system. In particular, and this is what I'm explaining here, I'd like to know the system entropy along a given trajectory, like the one I'm showing here. What is the system entropy along this trajectory?
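Before turning to entropy, the dynamics just written down can be simulated directly. Here is a minimal Euler-Maruyama sketch of ẋ = μf(x) + √(2D) ξ(t); the harmonic force f(x) = −kx and all parameter values are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama sketch of the overdamped Langevin equation
# dx = mu * f(x) dt + sqrt(2D) dW, with an illustrative harmonic force f(x) = -k*x.
mu, kB, T, k = 1.0, 1.0, 1.0, 1.0
D = mu * kB * T                      # Einstein relation D = mu * kB * T
dt, n_steps = 1e-3, 500_000

noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_steps)
xs = np.empty(n_steps)
x = 0.0
for i in range(n_steps):
    x += mu * (-k * x) * dt + noise[i]   # one integration step
    xs[i] = x

# In equilibrium the histogram of x should be Boltzmann; for this potential
# the stationary variance should approach kB*T/k = 1.
print(np.var(xs[n_steps // 2:]))
```

The second half of the trajectory is used so that the initial transient does not bias the variance estimate.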
I expect that the system entropy will also be a stochastic process and do, for example, something like this. Okay. How can we define it? And here comes the key definition in this field. It is the following. I write here the system entropy at time t: we are asking what is the system entropy for a system that at time t is located at x. It is defined as follows; it's a definition, that's why I write here three equal signs. It is minus the Boltzmann constant times the logarithm of the probability to be at x at time t: s(t) ≡ −k_B ln p(x(t), t). Remember, k_B is the Boltzmann constant. Why is it defined this way? Well, because of what happens when you take averages. So let's say you are at x at time t, but you would like to average this over many trajectories. Here I'm saying: for this trajectory, at time t I am here at x; so this axis is t and this one is x. I'd like to know the average of this stochastic entropy over many trajectories. There will be another trajectory that at this time will be at another position, say x′. So if I take this object, the system entropy, and average over many of these trajectories at a given time, I get this: I integrate over x, weighting with the probability to be at x at time t, the quantity −k_B ln p, which is what I'm putting in this integral. And you realize that this, minus the integral of p log p, is the Gibbs-Shannon entropy: the Shannon entropy is defined as minus p log p integrated over dx. So this definition introduced by Seifert in this paper is consistent with the fact that when you average over many trajectories, you get the Boltzmann constant times the Gibbs-Shannon entropy.
So it's related to the common notion of information: this h is often also called the uncertainty of the random variable x. So this is the first key equation, the stochastic system entropy. As you see, if at time t I am at one value of x, it is different than if at time t I am at another value of x. Having said this, I'd like now to go forward and compute how this entropy changes in time. I have here the definition of the system entropy, which is what I just introduced, and what I do now is take the time derivative, as I show here: the time derivative of the system entropy at time t. Now comes a question that relates to what I explained yesterday: how do we take time derivatives? We can use Stratonovich calculus and take the time derivative as usual. When you have a function A of x and t, its differential is the partial derivative of A(x,t) with respect to t times dt, plus the partial derivative of A(x,t) with respect to x, Stratonovich times dx; these are the Stratonovich rules. Sorry, because I'm writing d/dt: what I have written is dA, and if you want d/dt you just divide by dt. So this is the way we take derivatives in Stratonovich. We do this with −k_B log p and we get: the first term is the partial derivative with respect to t; the derivative of the logarithm is one over p, times the partial of p with respect to t. And the second is the partial of log p with respect to x, so it is one over p times the partial of p with respect to x, Stratonovich times dx. Nothing more than that. Now I'd like to continue from here and jump to here, and to make this jump I will just make use of the Fokker-Planck equation. The Fokker-Planck equation, I told you, is ∂_t p = −∂_x J, where J, simplifying the notation, is the current: μ f p − D ∂_x p. Of course, this f is f(x,t) and this p is p(x,t); I'm just shortening the notation.
Okay. Why am I doing this? I'm doing this because I'd like to replace this term by something related to currents and so on. From this equation here, J = μ f p − D ∂_x p, I can solve for ∂_x p, and it becomes μ f p minus J, all divided by D. Now, D divided by μ is k_B T; I'm using the Einstein relation, D = μ k_B T. So I have replaced this term, ∂_x p, by f p divided by k_B T, minus J divided by D. Now I continue my calculation by replacing these two terms here, and what I get is the next line. I keep the same first term, and for this one I put f p/(k_B T) − J/D. So I have first this one, f/(k_B T) Stratonovich dx, and the second one comes with a minus sign: J/(D p) Stratonovich dx. Right? This is what I get for the instantaneous change of the system entropy. And now I recognize the environment entropy, which I write here in red: I said to you yesterday that this is the heat dissipated divided by the temperature, and the heat is f ∘ dx; this is from yesterday, from Sekimoto. So I realize that this term here, which appears in the derivative of the system entropy, is minus the environment entropy change. So for the system entropy, let me put it in the same color: when I do this, there is a minus here and a minus here, so I have minus the environment entropy plus something else, and this something else will give the total entropy. This is what you can find here: the total entropy is the system entropy plus the environment entropy. So if I take all this and add the environment entropy, those terms cancel and I am left with the first term and the third term. I get k_B times the current over D p, Stratonovich dx, minus k_B times one over p times ∂_t p, times dt.
This we interpret as the total entropy change between t and t + dt. And you must recognize in this formula that it is fluctuating and it can be negative. So the change in the stochastic total entropy, the stochastic entropy production, can be negative. Very important: it can be negative. This is a key insight. So I will take a different color for the total entropy, okay. And now you see: we have the environment entropy in terms of forces and displacements, the system entropy is all this, and the total entropy involves currents, diffusion and probability. Okay. Another important point: you can measure the heat with one trajectory, but you cannot measure the entropy with one trajectory. Why is this? It's because we have here the object p. This p is a wall that you always face when you work with entropy. In order to know the entropy production of a trajectory, you first need to run the process many times, construct p, and then evaluate p along your trajectory. This is a key difference from the heat. Okay, so let me go ahead, and now I ask the key question: do we have a second law? Because I told you this stochastic entropy can be negative, so you should be worried. We claim that there is a second law when we average over many trajectories. So let's now consider the average of this total entropy change over many trajectories. If I go back, I just showed you that we have this formula here. So I take this and average over many trajectories; I average with this probability p(x,t). This is what I'm doing here. Excuse me, professor. Yes? Did you share these notes with us somewhere, on the site? Yes. But I honestly prefer that you take notes while you watch my lectures. I can share them, but honestly, it would be much more useful for you if you try to follow me with your pen. Okay. Yes.
But if the slides are already written... honestly, if I'm just copying, I don't understand what I'm writing. Okay, try to do your best; this is my recommendation, of course. But I'm happy to share the notes with you. Okay. Yes? If you could share, for example, these kinds of notes, where you write during the lectures and we can follow your pace; but if the notes are already written, I mean, there's no way one can copy the notes and understand what's written in them at the same time. I think it depends strongly on the student. Yes. But after 20 years of university, this is what I found out, which is very hard to share. I'm happy to share. Especially when there are a lot of computations involved and one gets... Sure. Anyway, just remember the various calculations. Yes, I fully agree with you. I will share my notes, but just be aware that this will also be on YouTube, so you can also follow it there. I will share the notes with you anyway, but please just follow my lecture live and I'll pass you the notes later. Okay. Thank you. Very good. All right. So let me continue. What I do now is take the average of this object. I have a first term, which is the average of ∂_t p divided by p, and a second term, which is the average of the term that contains the current and dx. First of all, the average of the first term is equal to zero. You can see this easily in the next line: when we do this average, it means an average with respect to the process, so we multiply by p(x,t) and integrate over x. It turns out there is a p in the denominator and a p from the average, so they cancel, and we just have the integral of ∂_t p(x,t) dx. Now you can take the time derivative out of the integral; the integral of p is equal to one, and the derivative of one is zero. So the first term is zero. Be careful: this term is fluctuating, but its average is equal to zero.
So now I come to the second term, which is as follows. We have J divided by D p, Stratonovich dx; I will bring in the dt later, for now I don't. Now I use the Langevin equation. Remember that dx = μ f dt + √(2D) dW. I plug this in here, and I have a first term with μ f dt and a second term with dW. Remember, I'm doing everything in Stratonovich; that's why I have this circle here. But for the sake of calculation, things are easier in Itô, so I use again the same conversion theorem I recalled yesterday, transforming the Stratonovich product into an Itô product plus a term proportional to dt. Sorry, give me a second; I think I made a sign slip here, I will have to check this later. But if you apply this rule to this equation, you finally get: the first term is here, with μ f, now in Itô; then this one in Itô as well, which is √(2D) dW in Itô; and finally there is a dt term that comes from the conversion, involving J, ∂_x p and p squared. All right. So after all this calculation, the average rate of the total entropy, ⟨ṡ_tot⟩, becomes this: you have J divided by D p, then μ f, and then a term with ∂_x p. If you go back to the Fokker-Planck equation, you can check that the total entropy rate becomes, and this is just mathematical manipulation, J divided by D p squared, times the quantity μ f p minus D ∂_x p. Okay.
This is the key result. You have one term here multiplying another term that turns out to be J(x,t) itself: going back to our definition from the Fokker-Planck equation, this is the current, μ f p − D ∂_x p, so we can say this is J(x,t), right? So when we average over many trajectories, we get that the total entropy production rate equals k_B times J squared divided by D p squared. And it turns out that this is J squared, so a positive number; p squared is also a positive number; and D is the diffusion coefficient, which is a positive number. So we get that the entropy production on average is greater than or equal to zero. This is the way we formulate the second law in stochastic thermodynamics: it holds on average, when averaging over many trajectories. In other words, one can also write the average explicitly. Remember, when I do an average of something that depends on x and t, it means that I multiply by p(x,t) and integrate over x. So when I do this average here, I take this, multiply by p(x,t) dx and integrate; when you multiply by p, the p squared in the denominator becomes p, like here. So it turns out that this, k_B times the integral of J squared over D p, is the analytical expression for the average entropy production in a Langevin system. And this equation reveals one thing: it is zero in equilibrium, because it depends on the current through J squared, and in equilibrium there are no currents. So for instance, if you have a two-state model, with state one and state two, which obeys detailed balance, you have no net current between states one and two, and the entropy production in this model will be zero. However, if you have a system with three states, for example here one, two, three, and the rates are not balanced, then for example there is a higher rate to go this way than to go backwards.
This rate is smaller, and then the same here and the same here; and here we have a smaller rate, like this. In this case one has currents. If you look at the current between two and three, the net current is going this way, so we have a clockwise current in this system, and J will be different from zero. So then we will have entropy production. This also relates to the idea that we had before, relating entropy production with irreversibility: as long as you have a system that goes more in one direction than in another, you will have a nonzero current, and this will lead to entropy production. Okay. So an important insight that I really want to highlight in this lecture is that the total entropy along a single trajectory is this formula. What we calculated up to here is the change, ds_tot, between t and t + dt. If we want the entropy up to time t, we have to integrate: it is the integral of ṡ_tot(s) ds from s = 0 to t. As we know, we calculated it before; we got it actually here, this formula, which is the stochastic entropy change along a single trajectory between t and t + dt. So if we integrate over time, we get this one. And what is very important is that this formula can give any number: it can be very negative, very positive, and so on. The only thing we are ensuring with this formulation is that its average at any time is always positive. Okay. This is the way we formulate the second law in stochastic thermodynamics. All right. Up to now I've done this for continuous systems, Langevin dynamics, but we can also extend this to discrete systems. One typical discrete system is a Markov jump process, like the one I have here.
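Before moving to the discrete case, the continuous-state formula from a moment ago, ⟨Ṡ_tot⟩ = k_B ∫ J²/(D p) dx, can be checked numerically in a case where it is also known analytically. The setup below is an illustrative assumption: an overdamped particle in a harmonic trap whose variance is already stationary but whose mean m has not yet relaxed, for which the rate works out to k_B μ k² m² / D.

```python
import numpy as np

# Numerical check of <S_dot_tot> = kB * integral of J(x)^2 / (D p(x)) dx
# for an overdamped particle in a harmonic trap f(x) = -k*x (illustrative numbers).
mu, kB, T, k = 1.0, 1.0, 1.0, 1.0
D = mu * kB * T
m = 0.5                                   # non-relaxed mean of p(x, t)
sigma2 = kB * T / k                       # variance already at its stationary value

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
p = np.exp(-((x - m) ** 2) / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)
J = mu * (-k * x) * p - D * np.gradient(p, x)   # current J = mu*f*p - D * dp/dx

rate = kB * np.sum(J ** 2 / (D * p)) * dx
print(rate)     # close to the analytic kB*mu*k**2*m**2/D = 0.25; zero iff J = 0
```

Since the integrand is J²/(Dp) ≥ 0, the rate is manifestly non-negative and vanishes only when the current vanishes everywhere, which is exactly the equilibrium statement in the text.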
We have discrete states, a thermal bath, and a system that jumps between these states. In general one can have time-dependent rates like this, but the simplest situation is when the rates are time-independent, and then we can look at either equilibrium or non-equilibrium steady states. In equilibrium you typically have this relation, relating the rate to go from state m to state n to the rate to go in the opposite direction, from state n to state m. In equilibrium the rates obey this condition of detailed balance, which implies that the net current between any two states equals zero. You can check yourself: the equilibrium probability times the rate gives the flux per unit time from m to n, and this one the flux from n to m; if you plug in the Boltzmann distribution here, you will see, using this condition, that there is no net current between any two states. However, out of equilibrium, you break this condition. Out of equilibrium you don't have this relation; instead you have a generalized detailed balance condition, which we call local detailed balance. Sorry, I have some trouble writing "balance". This is the condition of local detailed balance, in which we relate the rate to go from m to n and the rate to go from n to m to the heat dissipated in the transition. Here we don't have the energy difference; we have only the heat, and this is a crucial assumption that we take in stochastic thermodynamics. In other words, we say that the transitions are triggered by the environment. When you have minus q divided by T, this is the same as the environment entropy change, so the ratio of rates is the exponential of ΔS_env divided by k_B. This is an assumption that we make in stochastic thermodynamics; we actually define the environment entropy like this, and it will be crucial later to define the entropy production.
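The three-state picture from before can be made concrete. The rates below are hypothetical, chosen so the clockwise cycle is favored; the entropy rate uses the standard Markov-jump expression Σ pᵢ wᵢⱼ ln(wᵢⱼ pᵢ / (wⱼᵢ pⱼ)), which is just the jump version of the formulas in this lecture.

```python
import numpy as np

# Hypothetical 3-state rates (states 0, 1, 2): the clockwise cycle 0->1->2->0 is
# faster than the reverse, so detailed balance is broken.
kB = 1.0
W = np.array([[0.0, 2.0, 0.5],    # W[i, j] = rate i -> j
              [0.5, 0.0, 2.0],
              [2.0, 0.5, 0.0]])

# Steady state: solve dp/dt = L p = 0 together with normalisation sum(p) = 1
L = W.T - np.diag(W.sum(axis=1))
A = np.vstack([L, np.ones(3)])
p = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

# Net current 0 -> 1 and average entropy production rate: both nonzero here
J01 = p[0] * W[0, 1] - p[1] * W[1, 0]
S_dot = sum(p[i] * W[i, j] * np.log(W[i, j] * p[i] / (W[j, i] * p[j]))
            for i in range(3) for j in range(3) if i != j)
print(J01, kB * S_dot)
```

With rates that instead satisfy detailed balance, the same script gives zero current and zero entropy production rate, matching the two-state example in the lecture.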
So let me now explain briefly how we calculate the entropy production for a jump from state x to state x′ in a model like the one I just showed in the sketch. In order to compute this, we will use the same definitions as in the continuous case. Sorry? In the previous slide, can you explain again why this local detailed balance, why is it the environment entropy? So this is an assumption that we make, but it makes a lot of sense in many physical systems. For example, take a harmonic potential: you have a particle, and you ask, how can I jump between this state and this state? This jump is triggered by the bath: the energy that the particle takes is not coming from an external agent but from the bath. So what we say is that when the particle jumps from a low-energy state to a high-energy state, it is because it has absorbed a certain amount of heat from the bath. And if we are here and we jump backwards, doing the reverse transition, we dissipate the same amount of heat but with a minus sign. This is a very standard assumption in stochastic thermodynamics, and this type of dynamics implies this condition here. Okay, I'm just trying to explain this in very simple language without giving you too much math, but this condition reflects the fact that when you have a jump from one state to another and the reverse jump, you dissipate an amount of heat that is minus the heat that you absorb when you jump between state A and state B. So it's just a consistency relation for this type of system. Sir, then shouldn't it be two times this? Or am I saying something wrong? Why two times? Because the rate of the forward jump is proportional to e to the power minus q, and then the rate of the backward jump is proportional to e to the power q.
You have to be a bit careful with this. The rate of the backward jump is not always proportional to e to the plus q. It is tempting to write a formula like that, but for this type of dynamics it does not hold in general, and you can show this in many physical models: it is not true that each rate separately is proportional to an exponential of the heat. What does hold in general is this condition on the ratio of the rates, okay? Excuse me. Yes. But aren't we neglecting the possible presence of work, external work? Yes, but okay, for discrete systems this is what I discussed yesterday. One assumption that we make is the following. When we have two levels, let me call them state zero and state one, and a system that jumps between them, we say that the energy change in this transition is all heat. So here, let's say there is an energy gap between the two states; call it delta E. The energy change in a jump is delta E, right? In general this would equal q plus w, but with the definitions of stochastic thermodynamics we say, sorry, I lost it, give me a second. Okay. We say that when you have a jump between zero and one, the difference of energy delta E is all heat. Whereas when the system does not jump, but we raise the energy of a level, for example we raise a barrier, we take this energy change as work. Okay. Okay, thank you very much. This is because of the definitions we are using. Does it make sense now? All right. So we are assuming that jumps between different states are triggered by the environment and therefore dissipate heat, and that changes in the energy during which the system is not jumping between states are due to an external agent. These are our assumptions.
That's why here we just say, right. So this is a crucial assumption, and you will see why it is so crucial in the next slide. In a discrete system, when we have a jump from x to x prime, we can still define the system entropy as we did before. The change of the system entropy in this jump is the system entropy at the end, which is minus kB log p(x'), minus the system entropy at the beginning, which is minus minus, so plus kB log p(x). So you can write it as kB log [p(x)/p(x')], where p(x') is the probability to be in state x prime. On the other side, the environment entropy change you can write as kB times the logarithm of the ratio of the rate to go from x to x prime and the rate to go from x prime to x. Why is this nice? Because when you sum this plus this, you get the total entropy change in the jump: kB times the logarithm of probability times rate over probability times rate. And this is nothing but the joint probability to go from x to x prime divided by the joint probability to go from x prime to x. This is the formula I am trying to illustrate here: the total entropy change between t and t + dt can be written in terms of joint distributions. It is the Boltzmann constant times the logarithm of the probability to see at time t the state x and at time t + dt the state x prime, divided by the probability of the reverse: at time t the state x prime and at time t + dt the state x. In the sketch, this is the logarithm of the probability to see first a circle and then a square, divided by the probability to see first a square and then a circle, which is the time-reversed trajectory.
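These three bookkeeping formulas for a single jump can be written down directly. A small sketch (my own helper, with kB = 1 and purely illustrative numbers, not values from the lecture):

```python
import math

kB = 1.0  # work in units where Boltzmann's constant is 1

def entropy_changes(p_x, p_xp, k_fwd, k_bwd):
    """Entropy changes for a single jump x -> x'.

    p_x, p_xp : probabilities of states x and x' at the time of the jump
    k_fwd     : rate k(x -> x');  k_bwd : rate k(x' -> x)
    """
    dS_sys = kB * math.log(p_x / p_xp)     # -kB ln p(x') + kB ln p(x)
    dS_env = kB * math.log(k_fwd / k_bwd)  # log-ratio of rates = heat to bath over T
    dS_tot = dS_sys + dS_env               # = kB ln[ p(x) k_fwd / (p(x') k_bwd) ]
    return dS_sys, dS_env, dS_tot

# hypothetical numbers, just to exercise the formula
dS_sys, dS_env, dS_tot = entropy_changes(p_x=0.7, p_xp=0.3, k_fwd=2.0, k_bwd=1.0)
print(dS_sys, dS_env, dS_tot)
```

The sum collapses into a single logarithm of the ratio of the two joint probabilities, p(x) k(x→x') dt over p(x') k(x'→x) dt, which is exactly the circle-then-square over square-then-circle ratio in the sketch.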
The nice thing about this formulation is that you can really see a relation between entropy production and irreversibility, because this is written in terms of probabilities of trajectories and of their time reversals. We will see this later in more detail. Something very important is that the environment entropy changes only at jumps. As I said, when we have a jump from one state to another, there is an environment entropy change. But when the system stays in one state and you change the energy of that state, there is no environment entropy change, it is just work. So, very important: in between jumps the environment entropy does not change. Another thing: in a steady state, where the probabilities do not change in time, the system entropy change is on average zero, as we showed before. And the total entropy production, which I defined before, you can take and sum over the whole trajectory, and what you get is this formula here: the total entropy production is the system entropy change plus the environment entropy change, which is a sum over all the jumps. What I wrote before was for one jump, from x to x prime, but in a trajectory we have many jumps. So if we want to know the environment entropy up to this time, we need to add the environment entropy change in this jump, then in this jump, then in this one, and then in this one. That's why the total environment entropy change is the sum over all the jumps of the logarithm of the ratios of the transition rates.
A nice thing is that later on you can also take averages of these quantities, very much like what I showed for the Langevin system, but now we sum over all possible states, and you can also write it in terms of the currents; this you can show very easily, and you can do it for the total, the environment, and the system entropy, so these are the analogues of the formulas I showed for the Langevin case. And now I come to a very crucial formula, which is what I was trying to explain: the relation between stochastic entropy production and time irreversibility. This is very general; here I show it for discrete systems, but you can extend it to continuous systems in the same way. The way we do this is as follows. We define one object, P[x] — sometimes I use this notation, and sometimes I will write x and x-tilde to denote a trajectory and its time reverse — which is the path probability density to observe a trajectory in the forward protocol: we have a forward protocol lambda, and we look at the likelihood of a given trajectory. And then we look at another distribution, P-tilde[x-tilde], which is the probability to observe the time-reversed trajectory x-tilde under the time-reversed protocol lambda-tilde. Let me give you an example, the one I show here: we have a process going forward in time, changing a control parameter from a value lambda_a to lambda_b, and this produces a given trajectory like the one I am showing. The first object is the probability to see this blue trajectory in this process, where the black line is the protocol running forward in time. The second object is the probability to see the time reverse of this trajectory in the backward process, where the protocol runs in the other direction.
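The average total entropy production rate written in terms of the currents can be evaluated explicitly for a small jump network. A sketch under my own assumptions (a 3-state cycle with a clockwise bias in the rates, kB = 1; the rate values are illustrative, not from the lecture), using the standard current form sigma = (kB/2) Σ (p_m k_mn − p_n k_nm) ln[(p_m k_mn)/(p_n k_nm)], which is non-negative term by term:

```python
import numpy as np

kB = 1.0
# K[m, n] = rate m -> n; the 2.0 vs 0.5 asymmetry drives a clockwise cycle
K = np.array([[0.0, 2.0, 0.5],
              [0.5, 0.0, 2.0],
              [2.0, 0.5, 0.0]])

# master-equation generator: dp/dt = p @ L, with escape rates on the diagonal
L = K - np.diag(K.sum(axis=1))

# steady state = left null vector of L (eigenvector of L.T for eigenvalue 0)
w, v = np.linalg.eig(L.T)
p = np.real(v[:, np.argmin(np.abs(w))])
p /= p.sum()

# average entropy production rate from currents and log-ratios
sigma = 0.0
for m in range(3):
    for n in range(3):
        if m != n:
            sigma += 0.5 * kB * (p[m] * K[m, n] - p[n] * K[n, m]) \
                     * np.log((p[m] * K[m, n]) / (p[n] * K[n, m]))
print(p, sigma)  # sigma > 0: the driven cycle keeps producing entropy
```

If instead the rates satisfied detailed balance with respect to some Boltzmann distribution, every current p_m k_mn − p_n k_nm would vanish in the steady state and sigma would be exactly zero.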
Okay, so now the black line is being run from here to here. As you see, the blue trajectory, which is a typical trajectory for the forward process, is a very atypical trajectory in the backward process. A typical trajectory in the backward process would look like this. This happens far from equilibrium. When we do the process very slowly, in equilibrium, meaning the system can follow the protocol while staying in equilibrium, we won't have this type of hysteresis in the arrow of time: all trajectories will be equally likely forward and backward. We also have to add an assumption, which we check later mathematically, which is this one, and then this leads us to show that the ratio between these path probabilities can be written as follows. This is the ratio between the initial and the final state distributions, because we can factorize the path probability as the initial state times the probability of the rest of the trajectory given that initial state. And this conditional part you can break up in the same way I was breaking things before in terms of environment entropy: you can write all of it as e to the minus beta q. This is an essential assumption in stochastic thermodynamics: this object equals e to the minus beta q, where q is the heat associated with this trajectory. And what about this? This you can write as e to the minus the logarithm of the ratio of the state probabilities, and this is the system entropy. So in the end you write the ratio of the probability of the forward trajectory to the probability of the backward trajectory as the exponential of the total entropy associated with the entire trajectory.
Okay, this is a very important formula, and here I show a simplified notation you can find in many papers and books: P[x] divided by P-tilde[x-tilde] equals e to the S_tot[x] divided by kB. And this is a central result in stochastic thermodynamics: taking logarithms on the left and right, the total entropy production associated with a single trajectory, which I write here as S_tot, is the Boltzmann constant times the logarithm of the probability to see the trajectory divided by the probability, in the backward process, to see its time reversal. What does this mean? It means that in equilibrium, where trajectories and their time reversals are equally likely, this stochastic entropy is equal to zero at all times; in the figure I show here, it stays at zero in equilibrium. Out of equilibrium, the probabilities of trajectories and of their time reversals are generally different, so the entropy production fluctuates and can take any value, as long as its average is positive, and this is what I show here in this trajectory. This is the entropy production along a non-equilibrium process: it can even take negative values, and this is very important. This is a central result in stochastic thermodynamics, and I can give you a simple example so things are not so abstract. A very simple example is here: we have a particle in a linear potential. I think I should be close to finishing, but let me just finish with this example, just one minute.
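You can check the trajectory relation P[x]/P-tilde[x-tilde] = exp(S_tot/kB) exactly in a toy model. The model below is my own illustrative choice (not from the lecture): a biased walk on a ring, forward step with probability q and backward with 1 − q, so the steady state is uniform, the system entropy change vanishes, and each forward step produces kB ln[q/(1−q)] of entropy. The trajectory probabilities can then be counted with a binomial:

```python
import math

kB = 1.0
q, N = 0.7, 10                        # forward-step probability, number of steps
s1 = kB * math.log(q / (1 - q))       # entropy produced by one forward step

def prob_displacement(d, N=N, q=q):
    """P(net displacement d after N steps): binomial over forward steps."""
    k = (N + d) // 2                  # number of forward steps
    if (N + d) % 2 or not 0 <= k <= N:
        return 0.0
    return math.comb(N, k) * q**k * (1 - q)**(N - k)

# All trajectories with net displacement d carry S_tot = d * s1, and the
# relation collapses to P(S = d*s1) / P(S = -d*s1) = exp(d * s1 / kB):
for d in (2, 4, 6):
    ratio = prob_displacement(d) / prob_displacement(-d)
    print(ratio, math.exp(d * s1 / kB))  # the two numbers agree
```

Note that negative values, d < 0, have nonzero probability: entropy-reducing trajectories exist, they are just exponentially less likely than their time reverses, which is precisely the statement of the formula.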
We have a particle in a linear potential. The particle, of course, has a tendency to move more in this direction than in this one, and if you compute the entropy production for this example, this Langevin equation, you can show it takes a very simple expression: it is just proportional to the position of the particle. So when you go down, moving with the net force, you produce entropy; from this formula the entropy is positive. And when you go up, the total, stochastic entropy production is being reduced. These events can happen because the system has fluctuations, and this is quantified as a transient negative stochastic entropy in the dynamics. All right, so I will stop here; I had more to say, but I can explain it in the next lecture, so I prefer to leave room for questions. Any questions? Sir, in that paper, is it just the average value, or do they comment on the distribution of the total entropy? In which paper, sorry? In the slide itself, I mean, the one you were describing. Yes, this paper — well, they discuss the fact that it can take any value, this is discussed there, but distributions are not provided in this paper; it is a very theoretical paper, presented almost in the same way I explained it. However, you can find distributions in other papers. For example, I have one here: this is a Physical Review Letters paper, which came right after that one, and in it they quantify stochastic entropy in an experiment and plot distributions, and in the distributions you can find negative values. I can give you more references if you are interested. And one, maybe basic, question: why is stochastic entropy production something people are interested in?
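The linear-potential example can be simulated directly. A sketch with my own illustrative parameters (overdamped Langevin dynamics, constant force F, kB = 1): for this model the stochastic entropy production over a time t reduces to S_tot = F (x(t) − x(0)) / T, positive when the particle drifts with the force and negative when a fluctuation carries it against the force:

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdamped particle in a linear potential U(x) = -F x:
#   gamma dx/dt = F + sqrt(2 gamma kT) * xi(t)
gamma, kT, F = 1.0, 1.0, 0.5          # friction, temperature, force (illustrative)
dt, n_steps, n_traj = 2e-3, 500, 5000  # total time t = 1.0, many trajectories

x = np.zeros(n_traj)
for _ in range(n_steps):
    # Euler-Maruyama update: deterministic drift plus thermal noise
    x += (F / gamma) * dt + rng.normal(0.0, np.sqrt(2 * kT * dt / gamma),
                                       size=n_traj)

S_tot = F * x / kT                     # entropy production per trajectory (kB = 1)
print(S_tot.mean())                    # positive on average, ~ F**2 * t / (gamma * kT)
print((S_tot < 0).mean())              # a finite fraction is transiently negative
```

The second number is the point of the example: a sizeable fraction of individual trajectories show negative stochastic entropy production at finite times, while the average stays positive, consistent with the second law.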
Okay, this is a very, very big question, I must say, but people are interested because it is a central quantity in thermodynamics. You can see it already here at the beginning of my lecture: entropy production implies bounds and constraints on physical processes. The second law is one of the most universal laws. For instance, if you apply the second law to an isothermal system, it implies a constraint on the amount of work you can extract from a physical process. If you apply it to a non-isothermal system, it implies a bound on the efficiency of a heat engine. Applied to other systems, it implies constraints on their energetics. So it is a universal law with very relevant implications for different physical processes, and people are interested because of its universality; the second law of thermodynamics is probably the most universal, best-known principle in physics. Yeah, I agree with that, but as you have shown us now, it is not really as universal, in the sense that only the average is greater than zero, not the entropy production of every trajectory. So, I was thinking of something experimental — are you saying something along the universality lines? Yes, this is something I will discuss in the next lecture, which is about fluctuation theorems. The fluctuation theorems were a key discovery in the field: it was found that there are also universal laws for the distribution of entropy production. The distribution cannot be arbitrary; there are constraints, you cannot go to minus infinity for free. There are constraints on the probability of getting very negative entropy. This is something relatively new, from the last 30 years, and it has been found in what are called fluctuation theorems.
So I think your question is very important, but I will give you an answer, hopefully, in the next lecture, because there we discuss exactly this: why we look at negative entropy and what new results appear for this negative entropy in this field. Okay, and just one last question. The entropy of the system is given by Q reversible over T for a simple process. How does that sit with stochastic thermodynamics? I mean, can we actually see that it is similar in an average sense? Yes, you are also anticipating something I wanted to say yesterday. Of course, if you use this definition I am showing, it is something totally statistical: this is a probability of a trajectory. You don't need to have physics here, right? You could take a financial system and compute the probability of a stock-market trajectory. But the key point is that you can compute this quantity, this log of P over P-reverse, go to an experiment, and find that it is related to the real heat in the device. For example — I didn't have much more time, but it's good you ask me this question — we did this in a paper, in 2019, with an experimental collaboration. This is an electronic system which has four states, and you can compute this statistical quantity, log P over P-reverse of trajectories, because it is a Markovian process. Here on the right you see experimental time traces of the stochastic entropy. The slope of these curves is the entropy production rate, which, as I was explaining today, is related to the Joule heating: you get that the rate is minus the heat over the temperature, and the heat in this electronic system, the Joule heating, is the intensity of the electrical current times the voltage. This is a system with a DC voltage applied.
So it seems very statistical, very esoteric, but at the end of the day this slope is related — and this stochastic entropy on average is related — to heat in many examples. Okay. Thank you. Okay, thank you very much, Edgar. I think we need a five-minute break, and I'll stop recording.