Shall we start? Okay. This is the second lecture. Please come up. Can you hear me? Okay. So, did you follow what we learned yesterday? During yesterday's lecture there were many questions about stochastic calculus. It takes some time: if you see stochastic calculus for the first time it can be confusing, but time is the best medicine, and some experience and some exercises are necessary to get used to it. So don't worry too much; it is definitely not difficult. Okay. Yesterday I talked about thermodynamics for a Langevin system: the definition of work and heat, and how to evaluate the path probability. Today I will start with thermodynamics for the Markov jump process. A Markov jump process has discrete states, labeled by indices 0, 1, and so on, and the system is in contact with a reservoir at temperature T. Here E_i is the energy level and p_i denotes the probability of each state. With this setup, the system dynamics is described by this master equation, dp_i/dt = sum over j (not equal to i) of [R_ij p_j − R_ji p_i]. Here R_ij means the transition rate from state j to state i, so be careful about the order of the indices; i is not equal to j. Because R_ij is a transition rate, R_ij p_j is the average number of jumps from state j to state i per unit time. So the first term is the influx into state i, and the second term is the outflow from state i. The summation applies to both terms, so we can divide it into two parts. If we look at the second term, the summation applies only to the transition rate, not to the probability p_i, because p_i does not carry the summed index; the sum runs over all states except i.
So this second term — the sum of R_ji over j not equal to i — is the escape rate from state i. If we define the diagonal term of the transition matrix as minus this escape rate, R_ii = −sum over j (not equal to i) of R_ji, then we can write the master equation in the simple form dp_i/dt = sum over all j of R_ij p_j, where the sum now includes j = i. Okay. With this setup we can now define work and heat for the Markov jump process. The average energy of the system is ⟨E⟩ = sum of E_i p_i, and its time derivative has two terms: the first, sum of (dE_i/dt) p_i, is about the energy-level change, and the second, sum of E_i (dp_i/dt), is about the population change. I will show that the first term is the power, the work rate, and the second term is the heat rate. Let me first do the second term. Consider a transition from state i to state j. This transition is induced by heat absorption q_ji, and q_ji equals the energy difference of the two states, E_j − E_i. If we define Δn_ji as the number of jumps from i to j during Δt, then q_ji Δn_ji is the heat absorbed during Δt from transitions between i and j, and summing over all pairs of transitions gives the total heat absorbed during Δt. Is it clear? Okay. Its average value can be written in terms of the transition rate: R_ji p_i is the average number of jumps from i to j per unit time, so multiplying by Δt gives the average number of jumps during Δt. Dividing by Δt and taking the limit Δt → 0 gives the heat rate, so we can write the heat rate as the sum over pairs of q_ji R_ji p_i. Now divide this summation into two parts: a first part and a second part.
If you look at the first summation, the indices i and j are dummy indices, so we can exchange i and j. After exchanging the dummy indices and summing the two parts again, we get the sum over i of E_i times the sum over j (not equal to i) of [R_ij p_j − R_ji p_i], and from the master equation this is nothing but the sum of E_i (dp_i/dt). So the heat rate is given by the population change: in the Markov jump process, heat is associated with population change. That is the meaning of this equality. Okay, no questions? Then let's look at the first part. Assume the system is initially in state i, and now we change the energy level. This is not a transition — the system stays in the same state, but we change the energy level, so after Δt the level has changed to E_i(t + Δt). Because the state does not change, the population does not change during Δt; only the energy level changes. This energy-level change is not induced by heat but by an external agent; it is induced by providing work. The mean work for this change is the energy-level change — the work necessary to shift the level — times the occupation probability p_i. Summing over all states i gives the total mean work, and dividing by Δt and taking the limit Δt → 0 gives the work rate, the power: the sum of (dE_i/dt) p_i. So in the Markov jump process, work is associated with energy-level change. This is the thermodynamic first law in the Markov jump process, the master-equation system: we have found how to define work and heat in this discrete-state jump system.
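As a numerical aside, not from the lecture: the first law just derived, d⟨E⟩/dt = Σ_i (dE_i/dt) p_i + Σ_i E_i (dp_i/dt), can be checked with a small sketch. The two-state system, the constant rates, and the ramped level E_1(t) = t below are all invented for illustration.

```python
import numpy as np

# Hypothetical 2-state system: state 0 has fixed energy 0, state 1 has a
# ramped level E_1(t) = t. R[i, j] (j -> i) are arbitrary constant rates.
R = np.array([[0.0, 1.0],
              [0.5, 0.0]])
np.fill_diagonal(R, -R.sum(axis=0))   # diagonal = minus the escape rate

def energies(t):
    return np.array([0.0, t])

dt, n_steps = 1e-4, 10000
p = np.array([0.5, 0.5])
E0 = energies(0.0) @ p                # initial average energy

W = Q = t = 0.0
for _ in range(n_steps):
    dp = dt * (R @ p)                           # master-equation step
    W += (energies(t + dt) - energies(t)) @ p   # work: energy-level change
    Q += energies(t) @ dp                       # heat: population change
    p = p + dp
    t += dt

# first law: change of the average energy = work + heat
print(abs((energies(t) @ p - E0) - (W + Q)) < 1e-3)  # → True
```

The residual is of order Δt, coming only from the Euler discretization; the two bookkeeping terms are exactly the work and heat rates of the lecture.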
Now let's turn to how to evaluate the path probability in this system. Here is one example of a stochastic trajectory of the Markov jump process: it starts from state x_0; at time t_1 it jumps from x_0 to x_1 and then stays in that state; at time t_2 it jumps from x_1 to x_2, and so on; the final time is τ. This is one stochastic trajectory, and we want to evaluate its path probability. Consider this first: the probability to jump from state i to state j during Δt is R_ji Δt, because R_ji is the transition rate from i to j, and multiplying by Δt gives a jump probability. The escape probability from state i during Δt is the escape rate times Δt, and since I defined the diagonal element R_ii as minus the escape rate, we can write the escape probability as −R_ii Δt using the diagonal term of the transition matrix. The staying probability is then one minus the escape probability, 1 + R_ii Δt, and because Δt is a very small number we can approximately write it with an exponential, exp(R_ii Δt). So we have the staying probability, the escape probability, and the jump probability. Okay. Now let's consider one segment, say this one: it starts at t_{n−1}, the system is in state x_{n−1} until time t_n, and at time t_n a jump occurs from x_{n−1} to x_n. I want to calculate the probability of observing this one path segment. Let me first consider the staying probability during this time: for an infinitesimal time, the staying probability is given by this exponential factor.
So the staying probability for a finite time can also be written using this diagonal term of the transition matrix, as the exponential of R at state x_{n−1} times the waiting time t_n − t_{n−1}. Any questions? No? Okay. Then the probability of observing this one path segment is the staying probability times the jump probability. And what is the probability of observing the whole path? The whole path consists of the individual path segments, so the whole path probability is written as follows: it starts from the initial state x_0, so we need the initial distribution first, and then a product of the segment probabilities — each path segment is given independently, so we can write the probability as a product: first segment, second, third, fourth. Finally one staying interval remains, from the last jump to τ, and that is included as the last staying probability. In this way we can write the path probability for the whole path. Is there any question here? [Question: the summation of R_ji equals what?] Yes — that is a normalization condition. Here, the transition matrix is defined for i not equal to j, and I defined the diagonal term as minus the summation of the transition rates over all states except i. You can think of this as just a definition, but as you said, if we move this term to the left-hand side, the column sum of the rates becomes zero; this is called the normalization condition. More questions? Okay, let me summarize this second part.
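To make the recipe concrete, here is a small sketch, not from the lecture, with invented states, rates, and jump times. It accumulates the log of the path probability: log of the initial distribution, plus R_xx times each waiting time for the staying factors, plus the log of each jump rate (the common Δt factors of the jump probabilities are dropped, so this is a probability density over trajectories).

```python
import numpy as np

# Hypothetical 2-state rate matrix, R[i, j] (i != j): rate for j -> i.
R = np.array([[0.0, 1.0],
              [2.0, 0.0]])
np.fill_diagonal(R, -R.sum(axis=0))   # diagonal = minus the escape rate

p0 = np.array([0.7, 0.3])             # invented initial distribution

# invented trajectory: in state 0 until t = 0.4, jump to 1, stay until tau = 1
states, jump_times, tau = [0, 1], [0.4], 1.0

log_p = np.log(p0[states[0]])
t_prev = 0.0
for k, t_jump in enumerate(jump_times):
    a, b = states[k], states[k + 1]
    log_p += R[a, a] * (t_jump - t_prev)   # staying factor exp(R_aa * wait)
    log_p += np.log(R[b, a])               # jump factor: rate a -> b
    t_prev = t_jump
log_p += R[states[-1], states[-1]] * (tau - t_prev)  # final staying factor

print(log_p)  # → log(0.7) - 2.0*0.4 + log(2.0) - 1.0*0.6
```

Each factor corresponds one-to-one to the terms in the lecture's product: initial distribution, staying exponentials, jump rates, final staying exponential.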
So I talked about how to define work and heat in the Markov jump process — the thermodynamic first law — and about how to calculate the path probability in the Markov jump process. I will use this path probability in the next part, lecture two. Okay, so let me go into the original lecture two; I'll skip the review part. In lecture two I will talk about two things. The first section is about how to define entropy production in a stochastic system. Originally, thermodynamic entropy production is given by the Clausius form, heat divided by temperature. But this was originally defined in the equilibrium, quasi-static limit, right? We have to extend this concept to general non-equilibrium stochastic processes, and this is not a trivial task. So I will talk about how to define entropy production in a general non-equilibrium stochastic system. In the second part I will talk about the thermodynamic second law in a stochastic system. This is important because, no matter how we define entropy production in a stochastic system, due to thermal fluctuations there is always a finite probability to observe negative entropy production. But what we know from the thermodynamic second law is that entropy production should be non-negative — yet here it can be negative. So what is the thermodynamic second law in a small system? The fluctuation theorem answers this question. So in the second part I will talk about what the fluctuation theorem is and what the thermodynamic second law is in a small system. Okay. Before defining entropy production, let us first consider the quantity called irreversibility. I will show you two movies.
When you watch these movies, choose which one is the time-forward movie and which one is the time-reversed movie. It will be very easy. So, first movie, and second. Which one is time-forward and which one is time-reversed? Too difficult? Okay, very easy: the left one is time-forward and the other is time-reversed. We know; it is very easy. But what about this second pair? Can you distinguish which one is time-forward and which is time-reversed? I think it is essentially impossible to distinguish. So why is it easy to distinguish in the first case but difficult in the second? Let me summarize it this way. In the first movie, one direction is the time-forward direction and the other is the time-reversed direction, and the time-forward process is much more probable to take place than the time-reversed process. We call such a process irreversible. In fact, the probability of observing the time-reversed process is so extremely small that in our daily life we never observe it; that is why we can easily distinguish which is which. In the second case, however, if we ignore a very small dissipation, then the time-forward and time-reversed processes are equally probable to occur. We call such a process reversible. So when a process is irreversible, it is easy to distinguish time-forward from time-reversed, but for a reversible process it is difficult — essentially impossible. Inspired by this observation, we can now define the quantity irreversibility. Let's look at this schematic diagram of a stochastic system.
Suppose there is a system, with initial time zero and final time τ. Let z_0 denote the initial state of the system, with initial distribution p_0, and z_τ the final state, with final distribution p_τ. Here the red γ denotes the time-forward stochastic trajectory from the initial to the final state, and the blue γ̃ is the stochastic trajectory of the time-reversed path. Then P[γ] is the time-forward path probability and P̃[γ̃] is the time-reversed path probability. The irreversibility is defined as the logarithmic ratio between the time-forward and time-reversed path probabilities. If the time-forward and time-reversed processes are equally probable, this ratio becomes one, so the irreversibility becomes zero: the irreversibility vanishes when the process is reversible, and it is non-zero when the process is irreversible. So you can already feel that this irreversibility should be related to entropy production, because entropy production is zero for a reversible process and non-zero for an irreversible process, right? Yes — irreversibility is related to entropy production. But how exactly are they related? Okay, so now let's look at the physical meaning of this irreversibility, first for the overdamped Langevin system. Yesterday we learned how to evaluate the time-forward path probability: it starts from the initial state x_0, so we need the initial distribution, and the trajectory weight is the Onsager-Machlup functional. In this way we can write the path probability.
Here, for convenience — you can use any stochastic calculus, but for calculational convenience — I will choose α equal to one half. Then the products become Stratonovich, and the weight takes this form. So this is the time-forward path probability. And what is the time-reversed path probability? Let me first show you this diagram. It shows the positions along the forward path: x denotes a position of the forward path, and x̃ denotes a position on the time-reversed path. This is the initial position of the forward path, then the next position, and the final position of the forward path is x_τ, right? Now look at the time-reversed path: x̃_0, the initial position of the time-reversed path, should be the same as x_τ; the next position in the time-reversed path should match the corresponding forward position; and the final position of the reversed path should be the same as the initial position of the forward path. Matching these together, from this observation we can set up the relation between x̃ and x: x̃(t) = x(τ − t). This is the relation between the time-reversed position and the time-forward position. The time-reversed path probability can then be written as follows: because it starts from the final position of the forward path, we start from the final distribution of the forward path; if there is a protocol λ(t), the protocol should also be changed in the time-reversed way; and we use x̃ instead of x, because x̃ denotes the time-reversed position. Okay, so that's it. Now define t′ as τ − t; then, using this definition together with the relation above, we can rewrite the expression.
Let's look at the time derivative of x̃. From its definition and the relation x̃(t) = x(τ − t), using t′ = τ − t, we get that dx̃/dt at time t equals minus the velocity of the forward path at time t′. I will use this relation to change variables from x̃ to x, term by term, and so on for each term. Now t′ is nothing but the integration variable — it is a dummy variable — so we can rename t′ back to t. Then the time-reversed path probability is written entirely in terms of the forward position and forward time; this is nothing but converting from the x̃ notation to the x notation using the relation above. So this is the time-reversed path probability; after the lecture you can follow it line by line by yourself. Okay, now let's calculate the irreversibility to understand its physical meaning. We know both path probabilities, so we can easily calculate the logarithmic ratio. The first term comes from the ratio between the initial and final distributions. The second term is the logarithmic ratio of the two conditional path probabilities. To calculate that ratio: the identical terms cancel out, and the only remaining part is the difference of the two action terms, this one minus that one. In this difference, the squared terms cancel out; the only remaining part is the cross product, and this product is Stratonovich.
Okay, so now use the equation of motion, the overdamped Langevin equation: we can replace the external force f in terms of ẋ and the noise ξ. Then you see that what remains is exactly the definition of heat — the work done by the heat-bath force — and integrating this heat increment from time zero to τ gives the total heat during the whole process. The important thing, even if you cannot follow all the details, is that this logarithmic ratio of the conditional path probabilities becomes heat divided by the temperature T. This is the Clausius form, so we can call this term the reservoir entropy production. And the first term is the stochastic Shannon entropy change: if we call minus the log of the distribution the stochastic Shannon entropy, the first part can be written as the difference of this Shannon entropy. If we call this difference the system entropy change, then the first term is the system entropy change and the second is the reservoir entropy production. Summing the two, we have the total entropy production. So the important point is that the irreversibility — which is nothing but the logarithmic ratio of two path probabilities — becomes the total entropy production. This is the meaning of irreversibility in the overdamped Langevin system. Okay, then let's look at the second case, the Markov jump process. It has the same meaning; let's go step by step. Here is one stochastic trajectory, and we learned how to write the path probability for it: it starts from the initial distribution, the product gives the probability of each segment, and the last factor is the final staying probability. And what is the time-reversed path probability? Can you turn on the air conditioning? I think they look a little bit hot. Thank you.
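Before moving on, the heat identification for the Langevin case above — đQ = (−γẋ + ξ) ∘ ẋ dt, the work done by the heat-bath force, with the product taken in the Stratonovich (midpoint) sense — can be checked numerically. This is a sketch not from the lecture, with invented parameters and γ = k_B = 1: in a static potential no work is done, so the accumulated heat must equal the energy change. By the equation of motion the bath force −γẋ + ξ equals U′(x), so the midpoint rule makes the check exact.

```python
import numpy as np

rng = np.random.default_rng(2)

# Overdamped particle in a static harmonic potential U(x) = k x^2 / 2:
#   dx = -k x dt + sqrt(2 T) dW       (gamma = k_B = 1, invented parameters)
k, T, dt, n = 1.0, 1.0, 1e-4, 50000
x0 = 2.0
x = x0
Q = 0.0
for _ in range(n):
    noise = np.sqrt(2 * T * dt) * rng.normal()
    x_new = x - k * x * dt + noise
    # heat increment: bath force (= U'(x) by the equation of motion) times dx,
    # with U' evaluated at the midpoint (Stratonovich product)
    Q += k * 0.5 * (x + x_new) * (x_new - x)
    x = x_new

dU = 0.5 * k * x**2 - 0.5 * k * x0**2
print(abs(Q - dU))  # first law with no work: heat equals the energy change
```

With the midpoint product the increments telescope exactly, which is the Stratonovich chain rule at work; an Itô (start-point) product would leave an O(dt) discrepancy.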
Okay, let's say that t — t_0, t_1, and so on — denotes the times along the forward path, and t̃ denotes the times along the reversed path, matched so that the reversed times run backwards through the forward jump times; t̃_0 is actually τ here. It means that the relation between t and t̃ is t̃ = τ − t. Okay, so now let's write the time-reversed path probability. It should start from the final state of the forward path, so it starts from the final distribution of the forward path. This term gives the probability of observing the first reversed segment, the consecutive product gives the probability of each segment, and the final staying probability accounts for the last staying interval. So we can write the time-reversed path probability in this way, and by using the relation between t̃ and t we can change the variable from t̃ to t. Rearranging the terms, the time-reversed path probability is now also written in terms of the forward-path times. You don't need to follow all the details; the point is that we now want to calculate the logarithmic ratio of these two path probabilities. So let's look at what happens. The first term comes from the ratio between the initial and final distributions. The second term comes from the ratio of the transition factors: when we calculate the ratio, the staying probabilities are the same in the forward and reversed expressions, so they cancel out, and the last staying probabilities also cancel out. The only remaining part is each transition rate against its reverse transition rate — only this ratio of rates remains, and this is the result.
Now let's assume the transition rates satisfy the local detailed balance condition, which means that R_ij and R_ji satisfy this equation: because the equilibrium distribution is expressed by the Gibbs factor, the ratio of the transition rates is given by a Boltzmann factor of the energy difference. Using this condition we can calculate the ratio explicitly: it is given by the energy difference divided by the temperature. And as I explained, this energy difference corresponds to the heat absorbed in the jump, so this is actually heat divided by temperature — the same form as the Clausius entropy. So we can identify the second term as the reservoir entropy production, and the first term as the system entropy change; their sum gives the total entropy production. So it tells us that for both Langevin dynamics and the Markov jump process, the irreversibility corresponds to the entropy production we already know. One comment here: I assumed the local detailed balance condition to show this relation clearly, but without local detailed balance we can also show the same thing; I will skip it here. [Question: there is no t dependence of R_ij in the local detailed balance expression, but in the original expression R_ij depends on t — so what is R_ij on the right side?] Okay. Here the t dependence means that the rate depends on time through some time protocol. But the local detailed balance condition is meant instantaneously: with the protocol fixed, we can calculate the equilibrium distribution in that situation. So when the protocol λ is fixed, we can calculate the corresponding equilibrium distribution, and we can call this an instantaneous equilibrium distribution.
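As a quick numerical check, not from the lecture: with rates built to obey local detailed balance (the energy levels, temperature, and the particular symmetric rate choice below are all invented; k_B = 1), the log-ratio of a rate and its reverse reproduces minus the energy difference over T — the Clausius entropy flow per jump.

```python
import numpy as np

# Hypothetical energy levels and temperature (k_B = 1).
E = np.array([0.0, 1.0, 2.5])
T = 1.0
n = len(E)

# One common choice obeying local detailed balance:
#   R[i, j] = exp(-(E_i - E_j) / (2 T))  for i != j,
# so that R[i, j] / R[j, i] = exp(-(E_i - E_j) / T).
R = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            R[i, j] = np.exp(-(E[i] - E[j]) / (2 * T))

# For the jump 0 -> 2: ln(R_20 / R_02) = -(E_2 - E_0)/T, i.e. -q/T,
# minus the absorbed heat over temperature.
print(np.log(R[2, 0] / R[0, 2]))  # → -2.5
```

Any other rates with the same ratio (e.g. Metropolis rates) would give the identical log-ratio; only the ratio is fixed by local detailed balance.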
[Question: so is R_ij in that expression related to the equilibrium picture?] Okay — yes, R_ij is time dependent, but more precisely it is protocol dependent, and the protocol depends on time; of course R_ji has the same protocol dependence. For a fixed protocol we can find the equilibrium distribution, and we call this the instantaneous equilibrium distribution for a fixed λ. We can discuss it further later, I think. [Question: so instantaneously the heat is absorbed or released from the system — this is the meaning, right? Because if we can fix the time dependence of R_ij, we can relate it to the heat exchanged between the system and the reservoir, meaning the heat passes instantly between them. What does instantaneous equilibrium mean?] Okay, let's consider a harmonic potential whose stiffness is a function of time, a(t). At time t_1, this is the potential with stiffness a(t_1), right? So we can calculate the equilibrium distribution when the stiffness is a(t_1). In such a way we can calculate the instantaneous equilibrium distribution: the protocol can have time dependence, but we calculate the instantaneous equilibrium or steady state with the protocol held fixed. Okay, here. [Question: here you have considered only a thermal reservoir, but in an open system there can also be mass exchange — a chemical reservoir or that kind of thing. In the second term, the entropy exchange rate, minus Q over k_B T: how can that term be modified in an open system with mass exchange?] Actually, as I mentioned, without the local detailed balance condition we can show...
...generally that this term satisfies Q over T — but this is only for a thermal reservoir, right? [Question continues: so if there is mass transfer between the system and, as you said yesterday about the open system, a chemical reservoir or something like that, there should be another term for that reservoir. How do we incorporate that term into the entropy exchange rate, the entropy flow rate?] You mean the grand canonical ensemble? [Yes, maybe for an open system that is the situation — when there is a chemical potential, is there a generalized form, can we generalize the thermal reservoir to any kind of reservoir? That is my question.] Okay, so in that case the reservoir particle number also changes, so the reservoir state space also changes. In such a case, I think we need more consideration. I am not sure whether we can show this generally, because the reservoir state space changes as particles are exchanged; in such a case we need more care to derive this relation. Yes. [Question: this might be a related question — in yesterday's lecture we did not yet know that heat is related to this probabilistic concept. At that stage, how could we justify the identification of heat and work?] What do you mean? Your question is how this energy difference is related to heat? [I mean: without knowing that heat is related to these probability ratios, as in yesterday's lecture, how can we justify the identification of heat and work?] Within this assumption, this quantity is given by the energy difference between the two energy levels. That energy is necessary for making the transition: the system should absorb this amount of heat from the reservoir, and that is why the transition is induced. So we can interpret this energy difference as heat.
That is what I explained in the first law of the Markov jump process. Thank you. More questions? Okay. So, the important thing is that for both the Markov jump process and Langevin dynamics, this irreversibility corresponds to the total entropy production. But this value is stochastic: due to thermal fluctuations, sometimes it can be negative. So what is the meaning of the thermodynamic second law? To see it, we need an interesting property of this irreversibility. Let's look at the average of e^(−R). By definition, we can write this average as the sum over all paths of e^(−R) times the forward path probability. From the definition of R, this summand is nothing but the time-reversed path probability, and the sum over all paths of the time-reversed path probability is a normalization condition, so it is simply one. So this average always satisfies ⟨e^(−R)⟩ = 1. Because R is the entropy production, the entropy production satisfies this interesting equality, and this equality is called the fluctuation theorem. Now — thanks to Vipo, in Vipo's first lecture we learned Jensen's inequality: for a convex function f, the average of the function is larger than the function of the average. Because the exponential is a convex function, we can use Jensen's inequality here: ⟨e^(−R)⟩ ≥ e^(−⟨R⟩). Since the left-hand side equals one, e^(−⟨R⟩) is at most one, which means ⟨R⟩ must be non-negative. Because R is the total entropy production, the average value of the total entropy production must be non-negative. This is the meaning of the second law: the stochastic value can be negative, but its average must be non-negative.
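The two statements just derived — ⟨e^(−R)⟩ = 1 and ⟨R⟩ ≥ 0 — can be checked by brute-force sampling on a toy model. This sketch is not the lecture's continuous-time setup: it uses an invented discrete-time two-state Markov chain, where R for a sampled path is the log-ratio of the forward path weight to the time-reversed path weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented column-stochastic transition matrix: M[i, j] = P(j -> i).
M = np.array([[0.9, 0.3],
              [0.1, 0.7]])
p0 = np.array([0.5, 0.5])
n_steps = 5

# Final distribution after n_steps, needed for the reverse path weight.
p_tau = np.linalg.matrix_power(M, n_steps) @ p0

def sample_R():
    """Sample a forward path and return its irreversibility R."""
    x = rng.choice(2, p=p0)
    log_fwd = np.log(p0[x])
    log_rev = 0.0
    for _ in range(n_steps):
        y = rng.choice(2, p=M[:, x])
        log_fwd += np.log(M[y, x])   # forward jump x -> y
        log_rev += np.log(M[x, y])   # reversed jump y -> x
        x = y
    log_rev += np.log(p_tau[x])      # reversed path starts from p_tau
    return log_fwd - log_rev

Rs = np.array([sample_R() for _ in range(40000)])
print(np.mean(np.exp(-Rs)))   # integral fluctuation theorem: close to 1
print(np.mean(Rs) > 0.0)      # second law on average
```

Individual samples of R do come out negative here; only the exponential average is pinned to one, and Jensen's inequality then forces the plain average to be non-negative.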
This is the meaning of the thermodynamic second law in a small stochastic system. So this fluctuation theorem can be regarded as a kind of generalized second law — the second law can be derived from this equation. That is the importance of the fluctuation theorem. But let's look at this form again. There is some freedom in the choice of the denominator. For example, let's define R*, which I will call the dual irreversibility. This dual irreversibility is defined as the logarithmic ratio between the forward path probability and some dual path probability, and this dual path probability can be different from the time-reversed path probability. Actually, we can choose any other path probability as the dual path probability. The only constraint is that the dual path probability should satisfy the normalization condition; this is the only constraint. [Question, partly inaudible: in a Markov jump system the probability of a given path — a given time series of jumps — can be extremely small; if we choose the dual process differently, for example with a different number of jumps, isn't the dual probability assigned to the path very small? Shouldn't it be comparable to the forward one?] Your question is whether, when choosing the dual path probability, it should be comparable to this one? The dual path probability is, of course, a function of γ — these two γ's should be the same.
So, the thing is that we can choose an arbitrary dual path probability, whether it is comparable to the forward one or not. The only constraint is that it should satisfy this normalization condition. As long as the normalization condition is satisfied, we can define the dual irreversibility in this way, and this gamma should be the same on both sides. This is just the definition of the dual irreversibility; it does not need to be the same as the time-reversed one. They are different quantities. By choosing a different dual path probability, we can make a different irreversibility. That's all. Okay, we can discuss it later. So, by defining the dual irreversibility in this way, we can show that this dual irreversibility also satisfies the fluctuation theorem. This is trivial, because the proof only uses the normalization condition. I have a question. Okay. If P star is P, self-dual, what does that mean? R star is always zero, right? You mean this one is always zero? P star is the same as P? Ah, okay. In that case, R star becomes just zero, and because it is identically zero, it trivially satisfies the fluctuation theorem. An interesting fluctuation theorem? Not an interesting one. But anyway, you can choose the same one; you can choose any dual path probability. So this first fluctuation theorem is the fluctuation theorem for entropy production, but by choosing other dual path probabilities, we can make other fluctuation theorems. That's the point. 
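The point that the normalization condition alone guarantees the fluctuation theorem can be checked with a tiny discrete toy (my own illustration, not from the lecture): take a finite set of "paths" with forward probabilities p and any normalized dual probabilities q, define R star as ln(p/q) per path, and the average of e to the minus R star collapses to the sum of q, which is one no matter how q was chosen:

```python
import numpy as np

rng = np.random.default_rng(1)

n_paths = 10
p = rng.random(n_paths); p /= p.sum()   # forward path probabilities
q = rng.random(n_paths); q /= q.sum()   # arbitrary normalized dual probabilities

R_star = np.log(p / q)                  # dual irreversibility per path
print(p @ np.exp(-R_star))              # = sum over paths of q = 1 up to round-off
```

Replacing q by any other normalized vector changes R star path by path, but never the exponential average.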
So, what kind of fluctuation theorems can we make then? As Professor Chu said, if we choose a trivial case, we get a trivial fluctuation theorem, which is not interesting. But if we choose a clever one, then we can probably make a very interesting fluctuation theorem. So how can we choose such a dual process? The first example is the Jarzynski equality. Okay, let me give you a concrete example. This is about the DNA pulling experiment. Here, this is a DNA hairpin. One end of the hairpin is attached to a wall, and the other end is attached to a Brownian particle. The distance from the wall to the center of the particle is denoted by x. And here we apply optical tweezers, which provide a harmonic potential for the Brownian particle, and the center of the trap is denoted by lambda t. This is the DNA pulling experiment setup. Because the optical tweezers provide this harmonic potential, we can write the equation of motion of this particle in this way. Then let's specify a protocol. The protocol, the center of the laser trap, is given in this way: when time is smaller than zero, it is fixed at lambda zero, and when time is larger than zero, it moves linearly in time with constant speed A. As the initial condition, at time equal to zero the system is prepared in the equilibrium distribution at inverse temperature beta. This is an important condition: it must start from the equilibrium distribution. So the initial distribution can be written in this way. This is the equilibrium distribution for lambda equal to lambda zero, z zero is the partition function, and f zero is the free energy. The final distribution of this process does not need to be the equilibrium distribution; it can be any arbitrary non-equilibrium state. So we can write the forward path probability in this way. 
So, because it starts from the equilibrium distribution, we have to put this equilibrium initial distribution here, and this is the conditional path probability for gamma. But here I will choose the dual path probability in this way. If we took the time-reversed path probability, we would have to use the final distribution, right? Instead of this equilibrium distribution. But here I make a different choice. This is not the time-reversed path probability; I choose this type of dual path probability instead, using the equilibrium distribution at the final lambda value. Okay, do you get my point? This is not the time-reversed path probability; it is a different path probability, because this is not the final distribution. So, if we take this dual path probability, then we can calculate the dual irreversibility. From the ratio of the two equilibrium distributions, we get the potential energy change and the free energy change, and then the heat comes from the ratio of the two conditional path probabilities. So this is nothing but the energy change, because this is an overdamped system, and this is the free energy difference. So it can be written in this way, and from the thermodynamic first law, delta E minus Q is W, the work. So in this case the dual irreversibility becomes work minus free energy difference. This is not the total entropy production, but it is still a physically meaningful quantity. Right? Okay. Sorry, can you use the microphone? I cannot hear you. Yes, my question was: this is a DNA pulling experiment, so in this experimental situation, can we really say it is at equilibrium? The pulling would drive the system through changing transient states, not equilibrium states. 
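Written out, the step just described reads as follows (a sketch in my own notation, with U the potential, Q the heat absorbed by the system, and the equilibrium distribution p_eq(x; λ) = e^{−β(U(x,λ)−F(λ))}; the sign convention ln(𝒫/𝒫̃) = −βQ for the ratio of conditional path probabilities matches the lecture's first law ΔE = W + Q):

```latex
R^{*}
 = \ln\frac{p_{\mathrm{eq}}(x_0;\lambda_0)\,\mathcal{P}[\Gamma\,|\,x_0]}
           {p_{\mathrm{eq}}(x_\tau;\lambda_1)\,\tilde{\mathcal{P}}[\tilde\Gamma\,|\,x_\tau]}
 = \beta\bigl[U(x_\tau,\lambda_1)-U(x_0,\lambda_0)\bigr]-\beta\,\Delta F-\beta Q
 = \beta\,(\Delta E - Q)-\beta\,\Delta F
 = \beta\,(W-\Delta F),
```

where ΔE = ΔU because the system is overdamped, and ΔF = F(λ₁) − F(λ₀).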
So, I wonder, for a successful application of this protocol, whether you should pull the DNA at slow velocity. Okay, okay, I understand your point. My point is that only at the initial time, time equal to zero, is the system prepared in an equilibrium state. That is possible, right? If we wait for a long time at this point with lambda fixed at lambda zero, then we can establish the equilibrium distribution in this system. And then, as you said, we pull this Brownian particle using the optical tweezers at fast speed. This process will indeed deviate from the equilibrium distribution; it will be a non-equilibrium situation. That's why I said that the final distribution does not need to be an equilibrium distribution; it can be any arbitrary non-equilibrium state. That's all we need. And that's why I mentioned that, if it were the time-reversed process, we would have to use the final distribution here instead of this equilibrium distribution, right? But this is not the time-reversed path probability; it is a different, dual path probability, and I simply chose this one. That's why I said we have the freedom to choose any path probability. So this process does not need to be a quasi-static process; it can be any arbitrary process. Thank you. That's a very important point for understanding the Jarzynski equality. Okay, so do you understand this point? So if we take this dual path probability, then the dual irreversibility becomes work minus free energy difference. Because the dual irreversibility satisfies the fluctuation theorem, this quantity also satisfies the fluctuation theorem. So now we have the so-called Jarzynski equality. 
So, even though I started from a concrete example, if we take the dual path probability in this form, then we can show the Jarzynski equality in general. An interesting point of the Jarzynski equality is that the work W is a non-equilibrium quantity, while the free energy is an equilibrium quantity. So the Jarzynski equality relates a non-equilibrium quantity to an equilibrium quantity in this way. That is the interesting point of the Jarzynski equality. How many minutes do I have? Seven. Okay, thank you. The second example is the Crooks relation. Okay, let's look at the same process as we saw in the Jarzynski case. This is the DNA pulling process. The initial distribution is prepared as the equilibrium distribution at lambda zero, and the process is now the unfolding process of the DNA, so the final protocol value is lambda one. As I said, at the end the system does not need to be in an equilibrium state. So in this process one, we can write the dual path probability just as in the Jarzynski case. This is the forward path probability: it starts from the equilibrium distribution, this is the conditional path probability, and the protocol changes from lambda zero to lambda one. And this is the dual path probability: instead of the final distribution, we take the equilibrium distribution at lambda one, together with the time-reversed conditional path probability. And here, let P1 of r be the probability of observing the dual irreversibility equal to r. From this meaning, we can write this probability in this way: the delta function picks out the specific trajectories gamma for which the dual irreversibility equals r. That is the meaning of this probability. Now let's consider the second process. This second process is not the time-reversal process of process one. 
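To see the Jarzynski equality at work numerically, here is a minimal overdamped-Langevin sketch of the pulled-trap setup (all parameters are my own choices, not the lecture's): a particle in a harmonic trap whose center is dragged at constant speed, started from equilibrium. For a purely harmonic trap the free energy does not depend on the trap center, so ΔF = 0 and Jarzynski predicts the average of e^{−βW} equal to one, even though the average work is strictly positive:

```python
import numpy as np

rng = np.random.default_rng(2)

k, gamma, kT = 1.0, 1.0, 1.0          # trap stiffness, friction, temperature
v, lam0 = 1.0, 0.0                    # pulling speed, initial trap center
dt, n_steps, n_traj = 1e-3, 2000, 20000

# equilibrium initial condition at lambda = lam0 (Gaussian, variance kT/k)
x = lam0 + rng.normal(0.0, np.sqrt(kT / k), n_traj)
W = np.zeros(n_traj)
lam = lam0
for _ in range(n_steps):
    # work increment: dW = (dU/dlambda) * (dlambda/dt) * dt = -k (x - lam) v dt
    W += -k * (x - lam) * v * dt
    # Euler-Maruyama step of the overdamped Langevin equation
    x += -(k / gamma) * (x - lam) * dt \
         + np.sqrt(2 * kT * dt / gamma) * rng.normal(size=n_traj)
    lam += v * dt

print(np.mean(W))                     # positive: work is dissipated on average
print(np.mean(np.exp(-W / kT)))       # close to 1 (Jarzynski with dF = 0)
```

The second law only bounds the average, ⟨W⟩ ≥ ΔF = 0, while the Jarzynski equality pins down the full exponential average; rare trajectories with W < 0 are what bring it back to one.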
It is a different process. Process two starts from lambda equal to lambda one, with the system prepared in the equilibrium distribution at lambda one. And then we refold, by applying the protocol in the time-reversed way, so the final protocol value is lambda zero. Again, even though it starts from the equilibrium distribution, the final distribution does not need to be an equilibrium distribution; it can be any arbitrary distribution. So for this process two, the dual irreversibility can be written in this way. This is the time-forward path probability: it starts from the equilibrium distribution at lambda one, and this is the conditional probability, with the protocol changing from lambda one to lambda zero. And here we take the dual path probability in this way: instead of using the final distribution, we use the equilibrium distribution at lambda zero, together with the reversed conditional path probability. Now let's consider this probability, P2 of minus r: the probability of observing the dual irreversibility equal to minus r in this second process. By definition, we can write this probability in this way: the delta function picks out the specific trajectories gamma for which this quantity equals minus r. Okay, that is the meaning of this probability. Then, now let's compare this one and this one. You see that they have the same equilibrium distribution here and the same protocol here, so we can make a relation between these two: P1 of gamma is equal to P2 star of gamma tilde. And let's look at this one again, this part and this part. You see that these equilibrium distributions are the same and the protocols are the same, so we can make another relation, between P1 star and P2: P1 star of gamma equals P2 of gamma tilde. With these relations, we can relate the two dual irreversibilities. 
So, this is the definition of R1 star. Okay? By using the first equality, we can change from here to there, and then this is actually minus R2 star of gamma tilde. So between process one and process two, we can calculate this one and this one separately, but they satisfy this relation: R1 star of gamma equals minus R2 star of gamma tilde. It means that if we go to gamma tilde, we get a minus sign in front of the other dual irreversibility. By using this property, we can transform this probability into this one. Using the first relation, we change this into that, and using the other relation, we change this one into this one. Then, from the definition, we can change P1 star into this term. From the delta function, this value must equal r here, and because r is just a constant, we can take this factor out of the summation; the remaining part is nothing but the definition of P2 of minus r. So now we have found the relation between P1 and P2: the ratio of these two probability distributions is equal to e to the r. This is the Crooks relation. So what can we do with this Crooks relation? Okay, let me first look at the meaning of this Crooks relation. Even though I started from a specific example, if the dual satisfies this condition, mathematically speaking an involution condition, then we can show the Crooks relation in general. For the total entropy production, this condition is generally not satisfied, but for a steady-state process, in such a special case, the total entropy production also satisfies the Crooks relation. Okay, so what can we do with this Crooks relation? Let's get back to this example. This is for process one. 
So, as we evaluated in the Jarzynski example, this dual irreversibility gives work minus free energy difference, so this quantity satisfies the Crooks relation. But delta F is just a constant, so it is nothing but a parallel shift, and we can write it in this way: if we measure the probability density of W in process one, and the probability density of minus W in process two, then their ratio is given by this exponential factor. Okay, so using this we can do some interesting things. Let's look at the case when this ratio becomes one. When W equals delta F, the ratio becomes one, right? So if this solid curve is the work probability density for process one, and this dashed line is the distribution of minus W for process two, then at their crossing point the ratio becomes one, so the crossing value of W corresponds to the free energy difference between the two states. In this way we can extract free energy information by measuring the distributions, and people indeed performed this experiment. They performed a DNA unfolding and refolding experiment. The solid curves are the unfolding-process distributions, the dashed curves are the refolding-process distributions of minus W, and the different color codes are different unfolding speeds. You can see that the blue curves cross at some point, the green curves cross at some point, and the red curves cross at some point, and these crossing points coincide with each other. That work value is equal to delta F, and in this way we can determine the free energy difference between the two molecular states. Okay, so time is over now, so it's the right time to stop. Thank you.
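The crossing-point trick can be reproduced in a few lines with a toy process where delta F is known exactly (my own example, not the DNA experiment): an instantaneous stiffness quench k0 to k1 of a harmonic trap. Because the quench is instantaneous, the work is just the potential-energy jump, and delta F = (kT/2) ln(k1/k0) exactly. Histogramming the forward work and the negated reverse work, the two distributions cross near W = delta F, just as in the unfolding and refolding plot:

```python
import numpy as np

rng = np.random.default_rng(3)

kT, k0, k1, n = 1.0, 1.0, 4.0, 500_000
dF = 0.5 * kT * np.log(k1 / k0)                # exact free energy difference

# process 1 (forward): start at equilibrium of k0, quench stiffness k0 -> k1
x_f = rng.normal(0.0, np.sqrt(kT / k0), n)
W_f = 0.5 * (k1 - k0) * x_f**2
# process 2 (reverse): start at equilibrium of k1, quench stiffness k1 -> k0
x_r = rng.normal(0.0, np.sqrt(kT / k1), n)
W_r = 0.5 * (k0 - k1) * x_r**2

bins = np.linspace(0.1, 3.0, 60)
p1, _ = np.histogram(W_f, bins, density=True)  # P1(W), the "unfolding" work
p2, _ = np.histogram(-W_r, bins, density=True) # P2(-W), the "refolding" work

# Crooks: P1(W)/P2(-W) = exp((W - dF)/kT), so the histograms cross at W = dF
i = np.argwhere(np.diff(np.sign(p1 - p2))).ravel()[0]
W_cross = bins[i + 1]                          # edge where the sign flips
print(W_cross, dF)                             # estimated vs exact delta F
```

The same samples also verify the Jarzynski equality for this quench, since the exponential average of the forward work reproduces e to the minus beta delta F.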