This is the second lecture of Andre Barato. There was a change in the program: he will also speak on the first day and on Friday. So you can go on, thanks. OK, thank you. So that's where I stopped last time. There was this three-state model for a single enzyme. The experiment would be: you put the single enzyme in a container of water, and then you have substrate S and product P. The substrate could be ATP (adenosine triphosphate; I guess everybody knows that), and the product would then be ADP plus Pi. The enzyme is constantly consuming the blue thing, S, and producing the red thing, P. The idea is that the concentrations of S and P are fixed, either because the numbers of S and P molecules are so large that whatever the enzyme does cannot significantly alter them, or because some external mechanism keeps feeding S into the solution and taking P out. Both schemes work; the point is that these concentrations are fixed. The three states of the model come from the chemical reaction: E plus S becomes ES, then the enzyme transforms S into P, and then the enzyme releases the product into the solution. That is the forward reaction; the reverse reaction is also possible, although less likely. If I want a representation of this model, there are three possible states. The "+S" and "+P" have to do with the solution, which is the reservoir, and the state of the reservoir does not really matter; we are interested in the state of the system. So the three states are: 1 is E, 2 is ES, 3 is EP. P(t) is the vector of state probabilities, W is the stochastic matrix, and the master equation is dP(t)/dt = W P(t).
I can solve this master equation; it's just a 3 by 3 system. For example, I can find the stationary distribution P^s, which is the eigenvector associated with the eigenvalue 0. That's something I can calculate; it will be a function of the six transition rates. Once I have it, there are two possibilities: the steady state is either equilibrium or non-equilibrium. Equilibrium means that P_i^s w_ij = P_j^s w_ji for every pair of states i, j. For this model there is just a single current. Think about the probability current going in the clockwise direction: from 1 to 2, from 2 to 3, and then from 3 to 1. This current is J_12 = P_1^s w_12 − P_2^s w_21, and it is different from 0 if and only if the steady state is a non-equilibrium one. So if you give me the rates, whatever they are, I can solve this equation and tell you whether the steady state is equilibrium or non-equilibrium. If it's a non-equilibrium steady state, there is a current; if it's an equilibrium steady state, there is no current. Now, what does it mean to have a current here?
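The calculation just described can be sketched numerically. This is a minimal sketch with made-up rate values (the lecture gives no numbers); it builds the 3 by 3 rate matrix W for the enzyme cycle and finds the stationary distribution as the eigenvector of eigenvalue 0.

```python
import numpy as np

# Hypothetical transition rates for the three-state enzyme cycle
# (states 1 = E, 2 = ES, 3 = EP); the values are illustrative only.
w12, w23, w31 = 2.0, 3.0, 1.5   # clockwise (forward) rates
w21, w32, w13 = 0.5, 0.4, 0.3   # anticlockwise (reverse) rates

# Rate matrix W with columns summing to zero, so that dP/dt = W P.
W = np.array([
    [-(w12 + w13), w21,          w31],
    [w12,          -(w21 + w23), w32],
    [w13,          w23,          -(w31 + w32)],
])

# Stationary distribution: the eigenvector of W with eigenvalue 0.
vals, vecs = np.linalg.eig(W)
ps = np.real(vecs[:, np.argmin(np.abs(vals))])
ps /= ps.sum()                    # normalize to a probability vector
assert np.allclose(W @ ps, 0)     # W ps = 0, so dP/dt = 0 in this state
```

The same construction works for any choice of the six rates; only the column-sum-zero structure of W matters.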
Well, to have a current means that it is more likely to run the chemical reaction from left to right: to take the substrate S and release the product P. Having a current means going around the cycle in the clockwise direction, i.e. consuming S and producing P. When I say going from left to right, I'm talking about the reaction equation; going from right to left is the reverse. If I am in equilibrium, the likelihood of consuming the substrate is the same as that of consuming the product: the chances of producing a P or producing an S are equal. Out of equilibrium, I am normally consuming substrate and producing P, so I'm more likely to go from left to right. That's the current. And it's not hard to show, something you can try on your own, that J_12 = J_23 = J_31: they are all the same. There is just a single current in this model, in this cycle, going in the clockwise direction, and the current through each link must be the same because of Kirchhoff's law: there is only one cycle and the current is conserved. You can deduce this from the fact that the sum over j of J_ij is 0, an equation we had before, which must hold in a non-equilibrium steady state. Now let's think about this physically. What does it mean to be in equilibrium, i.e. that going from left to right is as likely as going from right to left?
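The exercise suggested above, that all three link currents coincide at stationarity, is easy to check numerically. This sketch reuses the same illustrative (made-up) rates as before:

```python
import numpy as np

# Illustrative rates for the three-state cycle (not from the lecture)
w12, w23, w31 = 2.0, 3.0, 1.5
w21, w32, w13 = 0.5, 0.4, 0.3

W = np.array([
    [-(w12 + w13), w21,          w31],
    [w12,          -(w21 + w23), w32],
    [w13,          w23,          -(w31 + w32)],
])
vals, vecs = np.linalg.eig(W)
ps = np.real(vecs[:, np.argmin(np.abs(vals))])
ps /= ps.sum()
p1, p2, p3 = ps

# Probability currents along each link of the cycle
j12 = p1 * w12 - p2 * w21
j23 = p2 * w23 - p3 * w32
j31 = p3 * w31 - p1 * w13

# Kirchhoff's law for a single cycle: all three link currents coincide
assert np.isclose(j12, j23) and np.isclose(j23, j31)
```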
The physical condition involves the chemical potential: the thermodynamic force that drives the reaction here is the chemical potential μ. Let's write this down, and let me change the name of the section. There is something very important in stochastic thermodynamics called generalized detailed balance. This is a bit of a strange name: it is sometimes called the generalized detailed balance condition, but it is not really a condition. Rather, it is a postulate of stochastic thermodynamics. If you see stochastic thermodynamics as a phenomenological theory, the first postulate would be the Markov dynamics; the second is that the transition rates fulfill generalized detailed balance, which again is not a mathematical restriction but a relationship between transition rates and physical parameters like chemical potential, temperature, external forces, and so on; and the third would be the definition of entropy, which we are going to discuss here together with generalized detailed balance. What I want to do now is explain what generalized detailed balance is for this particular model of a single enzyme; it won't be hard to imagine what it should be in the general case. The other thing I want to do is to show that the definition of the entropy change of the environment in stochastic thermodynamics is, for this model, consistent with the entropy you learned (or should have learned) in your thermal physics course. That's what I want to do in this section. So let's first think about what it means to be out of equilibrium.
Being out of equilibrium means there is a bigger likelihood of the forward reaction, the one I keep calling going from left to right, than of the reverse one, which would be E plus P going into EP. Say the black one is the forward reaction and the purple one the reverse reaction. If I am in equilibrium, the likelihood of both is the same, meaning there is no current: I go neither in the clockwise nor in the anticlockwise direction in this cycle. So if going from left to right in the chemical reaction is as likely as going from right to left, there is no current in the cycle. Now, what is the physical thing that makes me go from left to right? It is the chemical potential difference, μ being the chemical potential. If μ_S is larger than μ_P, then of course I tend to take up the more energetic molecule, S, and release the less energetic molecule, P, into the solution; so I'm more likely to run the chemical reaction from left to right than the reverse one from right to left. Equilibrium here means μ_S = μ_P. I could imagine a case where μ_P is larger than μ_S, but then it would be strange to call P a product; I would probably rename P as S and S as P. A substrate is typically something you consume, and a product is something you produce. Just to have an idea of numbers, for ATP the relevant quantity is Δμ = μ_S − μ_P.
If the substrate is ATP and the product is ADP plus Pi, then in physiological conditions Δμ is something like 20 k_B T; that's a good order of magnitude for ATP. So if you put an enzyme in a solution with ATP, ADP and Pi at concentrations corresponding, more or less, to what we have in our bodies, Δμ would be about 20 k_B T. And again, k_B is 1 in this course; I'm only writing k_B here because it's convenient. That Δμ is the force that drives you out of equilibrium; Δμ is what produces the current in the cycle. Now, generalized detailed balance is what gives you a relation between Δμ and the w's. The important point is this: Δμ and the chemical potentials are properties of your external reservoir; they are standard thermodynamic parameters, like chemical potential and temperature. The transition rates are kinetic parameters, but the transition rates cannot be anything: they are related to the thermodynamic parameters associated with the reservoir, and the generalized detailed balance condition establishes these relations. It is a postulate of stochastic thermodynamics that connects the transition rates with the thermodynamic parameters of the external reservoir. Remember that in stochastic thermodynamics the system can be out of equilibrium and can be small, but the reservoir is big, cannot be small, and must be in equilibrium.
And the relationship is this. Think about the cycle E → ES → EP → E. There are two ways of drawing it; here I am not drawing the chemical reaction, just the states, so imagine this cycle in the clockwise direction, calling the states 1, 2 and 3. Take the product of the transition rates around the cycle, w12 w23 w31, and the product of the transition rates for the reverse cycle, w21 w32 w13. The first is the product of rates for going through the cycle 1, 2, 3, 1; the second for going through 1, 3, 2, 1. The ratio (w12 w23 w31)/(w21 w32 w13) must equal e^{β Δμ}. This is what connects transition rates with the thermodynamic parameters of the reservoir: this is generalized detailed balance. It is a very important relation. It's typically called a condition, but it's not really a condition; there is a more general way of writing it, but it is not a mathematical restriction on your stochastic process. It is rather a physical interpretation of transition rates, a connection between thermodynamic properties of your reservoir and the transition rates of your system. As long as the transition rates fulfill this relation, they can otherwise be anything; there are lots of kinetic parameters that can influence transition rates.
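One way to see the relation in action is to construct rates that obey it by design. In this sketch the affinity and the reverse rates are made-up values, and splitting the affinity equally over the three links is an arbitrary choice, not something the lecture prescribes:

```python
import numpy as np

beta_dmu = 2.0   # hypothetical affinity beta * (mu_S - mu_P) of the cycle

# Pick the reverse rates freely, then tilt the forward ones so that
# (w12 w23 w31)/(w21 w32 w13) = exp(beta * dmu). How the affinity is
# split over the links (equal thirds here) is a free kinetic choice.
w21, w32, w13 = 0.5, 0.4, 0.3
w12 = w21 * np.exp(beta_dmu / 3)
w23 = w32 * np.exp(beta_dmu / 3)
w31 = w13 * np.exp(beta_dmu / 3)

ratio = (w12 * w23 * w31) / (w21 * w32 * w13)
assert np.isclose(np.log(ratio), beta_dmu)  # generalized detailed balance
```

Different enzymes in the same solution would share the same beta_dmu but could use completely different splits and prefactors.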
For example, I can have different enzymes, but if they are operating in the same solution, Δμ is determined by the solution, by the concentrations of P and S. So for two different enzymes, the w's will be constrained by the same relation, but they will differ because of kinetic parameters, which can depend on many different things. And to be clear about the equilibrium case: if I have detailed balance, then w_ij / w_ji = e^{−β (E_j − E_i)}, where the E's are energies. That is what we call detailed balance; that would be the case of equilibrium. And clearly, if I have detailed balance, it is very easy to show from this relation that (w12 w23 w31)/(w21 w32 w13) = 1. If the ratio of the transition rates is just the exponential of an energy difference, then when I complete the full cycle, starting at 1 and going back to 1, nothing is left: I get e^0, because it is always an energy difference. If it is not always an energy difference, I get something nonzero. So again, generalized detailed balance and detailed balance are different things. Detailed balance means that for any cycle in the system, the product of the forward rates divided by the product of the backward rates gives 1. Generalized detailed balance is the relationship between transition rates and the thermodynamic parameters. Now, if you have more cycles, each cycle will have its own thermodynamic force; they might have the same force, it depends.
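The cycle-product-equals-one statement can be verified with energy-based rates. The state energies below are invented for illustration, and the symmetric Arrhenius-like split of each rate is just one convenient choice consistent with detailed balance:

```python
import numpy as np

# Detailed balance: w_ij / w_ji = exp(-beta * (E_j - E_i)) for some
# state energies E_i (hypothetical values). A symmetric split of each
# rate is one simple way to realize this.
beta = 1.0
E = np.array([0.0, 1.2, 0.7])   # made-up energies of states 1, 2, 3

def w(i, j):
    """One rate choice satisfying detailed balance."""
    return np.exp(-beta * (E[j] - E[i]) / 2)

forward = w(0, 1) * w(1, 2) * w(2, 0)
backward = w(1, 0) * w(2, 1) * w(0, 2)

# Around any closed cycle the energy differences cancel: ratio is e^0 = 1
assert np.isclose(forward / backward, 1.0)
```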
But this will be true for all cycles in your network: the cycle ratio is always e to the power of some thermodynamic force. This quantity, Δμ or β Δμ, is typically called a force or thermodynamic force; another name is affinity. These are the physical parameters that drive you out of equilibrium. If Δμ is 0, the system is in equilibrium; if Δμ is different from 0, the system is out of equilibrium. The main difference in general models is that they have many, many different cycles, not just the single cycle of this very simple model. If I think about a system with 10 enzymes, for example, which is still a very small system, or with 100 enzymes, the number of cycles becomes really complicated. But this rule stays true: for each cycle, in equilibrium the ratio of rate products is 1, and out of equilibrium it is given by e to the power of the affinity associated with that cycle. OK, now let's think about entropy. I discussed generalized detailed balance for the three-state model; now I want to discuss entropy. So this is section 6: entropy change of the medium. By medium I mean the external medium, the external reservoir. Say I go through the cycle, through this sequence of transitions: I start at E + S, go to ES, then to EP, and then to E + P. What is the entropy change of the medium?
What I'm going to do is calculate this entropy change of the medium using standard thermodynamics, because the medium is in equilibrium, so standard thermodynamics is valid for it. Then I'm going to express this change of entropy of the medium in terms of transition rates, because what I really want in stochastic thermodynamics is a formula for the entropy change of the medium in terms of the transition rates. The point is: if I postulate generalized detailed balance, then this entropy change of the medium is consistent with standard thermodynamics. That's what I'm going to demonstrate here, which is very rarely demonstrated in the literature; very few papers do it. Now, remember entropy in standard thermodynamics: dS = (1/T) dU + (P/T) dV − (μ/T) dN. I hope you have seen this formula; otherwise it will be hard to follow. S is a function of U, V and N, and this means that ∂S/∂U = 1/T, ∂S/∂V = P/T, and ∂S/∂N = −μ/T. In our case we have two particle numbers, so we have to write −(μ_S/T) dN_S − (μ_P/T) dN_P. Again, this is a standard formula from thermodynamics; if you remember your thermodynamics course, you saw this kind of formula. So let's think about the cycle above: what is its ΔS? For this cycle, going from E back to E, taking up a substrate and releasing a P, of course dU, the energy change, is 0.
The dV term is also 0, so I don't care about that. So ΔS = −(μ_S/T) ΔN_S − (μ_P/T) ΔN_P. Now ΔN_S = −1, because I took an S out of the solution, and ΔN_P = +1, because I added a P to it. So my ΔS, and that's the ΔS of the medium, is simply (μ_S − μ_P)/T = β(μ_S − μ_P), remembering that k_B = 1 here. Again, the medium is in equilibrium, so this formula is true: the system is not in equilibrium, but the medium is, and standard thermodynamics must hold for the medium; I can just use the standard formula. So what I conclude from standard thermodynamics is that if I go through the cycle, the entropy change of the medium must be β Δμ. But at the same time, from generalized detailed balance, for the cycle 1 → 2 → 3 → 1 (1 is E, 2 is ES, 3 is EP) we had (w12 w23 w31)/(w21 w32 w13) = e^{β Δμ}. As you can see, the logarithm of this is exactly the ΔS of the medium. So if I do this cycle from 1 back to 1, the associated ΔS_M = ln[(w12 w23 w31)/(w21 w32 w13)]. That is something not commonly done in the literature: justifying the formula for the entropy change of the medium in terms of the transition rates. And that is the formula for the entropy change.
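The consistency just argued can be checked in a few lines. The reservoir parameters below are hypothetical, and the rates are built to satisfy generalized detailed balance for that choice:

```python
import numpy as np

beta, mu_S, mu_P = 1.0, 2.5, 0.5   # made-up reservoir parameters (k_B = 1)

# Rates built to satisfy generalized detailed balance for this delta mu;
# the equal split of the affinity over the links is an arbitrary choice.
w21, w32, w13 = 0.5, 0.4, 0.3
tilt = np.exp(beta * (mu_S - mu_P) / 3)
w12, w23, w31 = w21 * tilt, w32 * tilt, w13 * tilt

# Entropy change of the medium over one clockwise cycle 1 -> 2 -> 3 -> 1,
# computed from the rates...
dS_medium = np.log((w12 * w23 * w31) / (w21 * w32 * w13))

# ...agrees with the standard-thermodynamics result beta * (mu_S - mu_P)
assert np.isclose(dS_medium, beta * (mu_S - mu_P))
```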
In general, if I go from state i to state j, the associated entropy change of the medium is ΔS_M,ij = ln(w_ij / w_ji). Again, that formula is postulated in stochastic thermodynamics, but the reason behind it is that you assume generalized detailed balance is true, which is itself a postulate of the theory. And this is something you can verify in experiments: you can in principle measure these transition rates in different kinds of experiments and check whether the postulate holds, and it does. In chemistry, people have something like this, or not exactly like this but something that leads to it, the law of mass action; it is something observed in chemical reactions with enzymes, but also elsewhere, and it is always true. So if you assume generalized detailed balance, then this definition of the entropy change of the medium, a very important formula in stochastic thermodynamics for going from state i to state j, is consistent with the standard definition of entropy in thermodynamics. That's what we did: I started with the standard definition of entropy in thermodynamics, not even in stochastic thermodynamics, knowing the reservoir is in equilibrium, so that must be the entropy change; then by using generalized detailed balance we arrived at the conclusion that ΔS_M must equal ln(w_ij / w_ji). So I did not really assume this equation; I derived it, and it is consistent with the standard definition of entropy.
If you've read anything about stochastic thermodynamics, you have seen that the rate of entropy production is σ = Σ_{ij} p_i^s w_ij ln(w_ij / w_ji); that's the steady-state case. Of course this formula comes from the previous one: for each transition i → j the entropy change is ln(w_ij / w_ji), and to get the average rate I average over all transitions, weighting each by the respective stationary probability p_i^s and rate. That is the rate of entropy production in the stationary state. For the particular model here, if I calculate σ, it is simply β Δμ multiplied by the current J. Remember that J = J_12, the current between 1 and 2, which equals J_23 and J_31. It's not very hard to go from the first formula to this one; they are equivalent. This second one is the more physical formula: the rate of entropy production is the entropy change associated with a cycle multiplied by the net rate at which cycles are completed, which is J. And by entropy change I mean the entropy change of the medium. So that's the definition of entropy in stochastic thermodynamics. Again, this is a formula you have probably seen somewhere, and what I did here is give a physical justification for it, showing that it is in complete agreement with the standard definition of entropy in standard thermodynamics. What is really needed for ln(w_ij / w_ji) to be a physical entropy is the assumption that generalized detailed balance holds, which again is a postulate of the theory, typically called the generalized detailed balance condition.
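The equivalence of the two formulas for σ, left as an exercise below, can be confirmed numerically. The affinity and reverse rates here are again made-up values obeying generalized detailed balance:

```python
import numpy as np

# Rates obeying generalized detailed balance with affinity beta * dmu
beta_dmu = 2.0
w21, w32, w13 = 0.5, 0.4, 0.3
tilt = np.exp(beta_dmu / 3)
w12, w23, w31 = w21 * tilt, w32 * tilt, w13 * tilt

W = np.array([
    [-(w12 + w13), w21,          w31],
    [w12,          -(w21 + w23), w32],
    [w13,          w23,          -(w31 + w32)],
])
vals, vecs = np.linalg.eig(W)
ps = np.real(vecs[:, np.argmin(np.abs(vals))])
ps /= ps.sum()

rates = {(0, 1): w12, (1, 0): w21, (1, 2): w23,
         (2, 1): w32, (2, 0): w31, (0, 2): w13}

# sigma = sum_{ij} p_i w_ij ln(w_ij / w_ji)  (steady-state formula)
sigma = sum(ps[i] * wij * np.log(wij / rates[(j, i)])
            for (i, j), wij in rates.items())

# The physical form: sigma = beta * dmu * J, with J the cycle current
J = ps[0] * w12 - ps[1] * w21
assert np.isclose(sigma, beta_dmu * J)
assert sigma >= 0
```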
It's not really a condition; it is more like a postulate, a relationship between transition rates and thermodynamic parameters. All right, so that's the formula for the average entropy production. Now I want to move towards the fluctuation theorem, and for that I will have to talk about stochastic trajectories. I'm not sure I will be able to finish this in this lecture, but let's at least start with stochastic trajectories. Does anybody want to ask questions about generalized detailed balance before we start? — Yes, I have one. I got lost when you wrote the second equation for the entropy production, the more physical one, the last equation you wrote. Could you repeat what J was? — J was p_1^s w_12 − p_2^s w_21. That is J, which equals J_23 and J_31; they are all the same. — Thank you. — I would have to demonstrate how to go from the first formula to the second, but that is probably good to leave as an exercise; you should try. All you have to assume is generalized detailed balance: if you assume it is true, you should be able to go from the equation above to the equation below. — Sorry, one more question. I just want to make sure I understand: the current J can have either sign, right, in non-equilibrium? — It can have either sign, yes. — So the rate of entropy production could be positive or negative? — No: if J is negative, then Δμ is also negative.
So J can only be negative if Δμ is also negative; they are correlated, and σ is always greater than or equal to 0, that's for sure. The reason I don't talk about a negative J is just that when you have two chemical species in a solution like this, calling one the substrate typically means its chemical potential is larger than the other one's; if I were to think about a negative Δμ, I should probably swap the names of P and S. But if you want to think about the negative case, everything works the same way; it doesn't really change much. OK, so now stochastic trajectories. The first thing I'm going to do is discretize time, to make our life simpler. So let's write the master equation in discrete time. The master equation is dP/dt = W P. If I discretize time with a step δ, the size of my time step, I can write this equation as [P(t + δ) − P(t)]/δ = W P(t), i.e. P(t + δ) = (𝟙 + δ W) P(t), with 𝟙 the identity matrix. In the limit δ → 0 I recover continuous time. The reason I'm discretizing time is that stochastic trajectories in discrete time are simpler than in continuous time. But if I prove something to be true for discrete time, then it is immediately true for continuous time; discrete time is in a sense the more general case. So if a proof works for discrete time, then for sure it must hold for continuous time.
Assuming the limit is well behaved, which will be the case here anyway. So we are going to do stochastic trajectories in discrete time just because they are simpler, and the rationale is: if I prove, for example, that the fluctuation theorem holds for discrete-time trajectories, then it must hold in continuous time. I'll call this matrix M = 𝟙 + δ W. For the three-state model it looks like

M = [ 1 − δ r1    δ w21       δ w31
      δ w12      1 − δ r2     δ w32
      δ w13      δ w23       1 − δ r3 ],

where r_i is the total escape rate out of state i (for example r1 = w12 + w13). Now remember that for the matrix W the sum of the elements in a column is 0; for M, if I sum the elements in a column, I get 1. That is also called a stochastic matrix, but a stochastic matrix for discrete time. And if I look at the transposed matrix, M^T(i, j) is the transition probability from i to j. Before, w_ij was a transition rate, with dimension of inverse time; M^T(i, j) is really a transition probability, so it is dimensionless. As long as I make δ small enough, this is consistent: δ must be small enough that all diagonal elements of M stay positive; they cannot be negative. So the biggest δ I can take is the one that keeps all diagonal elements positive, and when thinking about the continuous limit I imagine some sort of small-δ limit.
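The construction of M and its two defining properties (columns summing to one, shared stationary state with W) can be sketched directly, again with illustrative rate values:

```python
import numpy as np

# Continuous-time rate matrix W for the three-state model (made-up rates)
w12, w23, w31 = 2.0, 3.0, 1.5
w21, w32, w13 = 0.5, 0.4, 0.3
W = np.array([
    [-(w12 + w13), w21,          w31],
    [w12,          -(w21 + w23), w32],
    [w13,          w23,          -(w31 + w32)],
])

# Discrete-time stochastic matrix M = 1 + delta * W; delta must be small
# enough that every diagonal element of M stays positive.
delta = 0.1
M = np.eye(3) + delta * W
assert np.all(np.diag(M) > 0)
assert np.allclose(M.sum(axis=0), 1.0)   # columns of M sum to one

# M and W share the same stationary distribution: M ps = ps iff W ps = 0
vals, vecs = np.linalg.eig(W)
ps = np.real(vecs[:, np.argmin(np.abs(vals))])
ps /= ps.sum()
assert np.allclose(M @ ps, ps)
```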
And that's just an equivalent description. The elements I'm going to use to construct a stochastic trajectory in discrete time are these transition probabilities M^T(i, j). Now let's imagine a stochastic trajectory in discrete time. The matrix changed a little, but it's pretty much the same story as before: the maximal eigenvalue of this matrix is no longer 0, it is 1, and the same kind of property holds, with all other eigenvalues smaller in absolute value. The rationale is the same as I explained before for the continuous-time matrix: there is some sort of exponentially decaying part, and starting from some initial probability you relax to the stationary one. By the way, the stationary probability of this matrix is the same as that of the continuous-time one, if I build it like that, and so on and so forth. OK, so we have discretized time; now we'll think about the stochastic trajectory. A big point in stochastic thermodynamics is that things like entropy, heat and work are all defined as functionals of a stochastic trajectory; that's why I'm talking about stochastic trajectories here. After that I want to define entropy as a functional of the stochastic trajectory, and after that I'm going to prove the fluctuation theorem. That's the order of things. So what is a stochastic trajectory? It's just a sequence of states: γ = (x_0, x_1, x_2, …, x_N). It has N jumps, and the total time associated with this trajectory is δ multiplied by N.
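A trajectory of this kind can be sampled directly from the discrete-time transition probabilities. The matrix below is an illustrative stochastic matrix (rows of M^T sum to one), not one derived from particular enzyme rates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete-time jump probabilities: MT[i, j] is the
# probability of jumping from state i to state j in one step of size delta.
MT = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.70, 0.20],
    [0.25, 0.05, 0.70],
])
p0 = np.array([1.0, 0.0, 0.0])   # start in state 1 (= E) with certainty

def sample_trajectory(n_steps):
    """Draw gamma = (x0, x1, ..., xN) and its probability P(gamma)."""
    x = rng.choice(3, p=p0)
    gamma, prob = [x], p0[x]
    for _ in range(n_steps):
        x_next = rng.choice(3, p=MT[x])
        prob *= MT[x, x_next]     # multiply the jump probabilities
        gamma.append(x_next)
        x = x_next
    return gamma, prob

gamma, prob = sample_trajectory(10)
assert len(gamma) == 11 and 0 < prob <= 1
```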
So the total time is fixed. Now, the x_n's. So remember, I have states; I'm calling the states i, j, and they run over 1, 2, up to Omega, OK? That's my system; that's the notation I was using. So I have Omega states. For example, for the enzyme model, I had 1, 2, and 3 as the states, right? And so these x_n's are states, OK? So x_0 could be anything from 1 to Omega, x_1 also, and so on and so forth, OK? Again, if you want to think physically about the stochastic trajectory: if you think about the molecular motor, it could simply be the position of the motor, and, you know, you might also have to account for conformational changes in the motor. If you think about the colloidal particle, it's simply the trajectory of the particle in space. If you think about the enzyme that I was talking about, it would be just a sequence of states, right? It would go from E to ES to EP; maybe I stay in the same state, and so on and so forth, OK? So that's the stochastic trajectory. The x's here are just states, OK? They can be i, j, or whatever. All right. Now, we can also think about the probability of a stochastic trajectory. That's very simple. It's just the initial probability, let's say P(x_0), then the transition probability, which is M transpose — remember, this is the transition probability from i to j. So I have M transpose from x_0 to x_1, then M transpose from x_1 to x_2, up to M transpose from x_{N-1} to x_N, OK? In other words,

    P(gamma) = P(x_0) * prod_{n=0}^{N-1} M^T(x_n, x_{n+1})

OK? That's the formula for the probability of a stochastic trajectory. Maybe next lecture I will talk about what happens in continuous time. But the reason I'm using discrete time is that if I were to use continuous time, I would have this sort of contribution from the transition rates from one state to another.
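The trajectory and its probability can be sketched in a few lines; a minimal sketch, assuming a made-up column-stochastic matrix M and trajectory length (the column M[:, i] holds the jump probabilities out of state i, since M^T(i, j) = M[j, i]):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete-time stochastic matrix (columns sum to 1); stands in for M = I + delta*W
M = np.array([[0.90, 0.05, 0.10],
              [0.07, 0.90, 0.05],
              [0.03, 0.05, 0.85]])

p0 = np.array([1.0, 0.0, 0.0])     # start in state 0 with certainty

def sample_trajectory(M, p0, N, rng):
    """Draw gamma = (x_0, ..., x_N): N jumps of the discrete-time chain."""
    x = [rng.choice(len(p0), p=p0)]
    for _ in range(N):
        x.append(rng.choice(M.shape[0], p=M[:, x[-1]]))  # column x[-1]: probs out of x[-1]
    return x

def trajectory_probability(M, p0, x):
    """P(gamma) = P(x_0) * prod_n M^T(x_n, x_{n+1})."""
    prob = p0[x[0]]
    for a, b in zip(x[:-1], x[1:]):
        prob *= M[b, a]            # M^T[a, b] == M[b, a]: probability a -> b
    return prob

gamma = sample_trajectory(M, p0, N=10, rng=rng)
print(gamma, trajectory_probability(M, p0, gamma))
```

Summing `trajectory_probability` over all trajectories of a fixed length gives 1, since it is a normalized path measure.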
But I would also have contributions from the waiting times, OK? There are no waiting times when I discretize time. That's why it's much simpler. Now, let's think about something else first: the reverse trajectory. The reverse trajectory simply means that you start in the state x_N, then you go to the state x_{N-1}, and so on, until you reach x_1 and finish in the state x_0, OK? That's the reversed gamma. And the probability of gamma reversed is simply going to be

    P(gamma reversed) = P(x_N) * prod_{n=0}^{N-1} M^T(x_{n+1}, x_n)

OK? I guess everybody can accept that: I have the probability of going from x_N to x_{N-1}, and so on and so forth. Now, you know, in general, in stochastic dynamics these transition probabilities can depend on time, OK? But for simplicity here I'm assuming they do not depend on time. M^T could depend on time, which would mean that for the continuous-time process my transition rates depend on time — my w_ij's would also depend on time, OK? Things change a little bit if there is a time dependence. But I'm assuming that the transition rates, or the transition probabilities, do not depend on time, OK? OK, so that's the probability of the trajectory and that's the probability of the reverse trajectory. And now I want to think about definitions of entropy in terms of stochastic trajectories. Well, let's create a new section for that. So basically, I have defined what the trajectory is. One thing I should say is the following. If you think about this matrix here — that's the matrix for the case of the three-state model — if delta is very small, then of course the diagonal elements are much larger than the off-diagonal ones, OK?
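The reverse-trajectory probability has the same product structure; a minimal sketch, assuming the same made-up matrix as before and using the distribution after N jumps as the starting distribution of the reversed path:

```python
import numpy as np
from itertools import product

M = np.array([[0.90, 0.05, 0.10],
              [0.07, 0.90, 0.05],
              [0.03, 0.05, 0.85]])   # columns sum to 1
p0 = np.array([0.5, 0.3, 0.2])

def path_probability(M, p, x):
    """P of a path x under initial distribution p: p(x_0) * prod_n M^T(x_n, x_{n+1})."""
    prob = p[x[0]]
    for a, b in zip(x[:-1], x[1:]):
        prob *= M[b, a]
    return prob

def reverse_probability(M, p_final, x):
    """P(gamma reversed): start from x_N with p_final and jump backwards through x."""
    return path_probability(M, p_final, x[::-1])

N = 3
pN = np.linalg.matrix_power(M, N) @ p0          # distribution after N jumps

# Reversed paths form a normalized path ensemble of their own: the sum is 1
total = sum(reverse_probability(M, pN, list(g)) for g in product(range(3), repeat=N + 1))
print(total)
```

This normalization of the reversed-path measure is exactly what makes the fluctuation-theorem derivation below a one-liner.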
So these three numbers here, the diagonal ones, if delta is small, are going to be close to 1, while the off-diagonal elements are going to be close to 0. So what does this mean for a trajectory? It means that it's much more likely that I do not change state — that when I go from x_0 to x_1 I stay in the same state — than that I change state. So typically it would take me a certain number of jumps to change state, OK? We are going to talk about this later. But again, when I go from one state to another, from x_0 to x_1 to x_2, the more likely situation is that I do not change state, OK? After some number of jumps, I'm going to change state, all right? And so, in order to see a substantial number of changes of state, I have to take this N to be very large, OK? That's simply going to be the case, right? If delta is very small, then I need a very large N to make a finite time. Let's say I want my time to be 10 seconds; then if delta is small, I need a very large number N to arrive at 10 seconds, all right? OK, so that's the probability of a trajectory and that's the probability of a reverse trajectory. Those are the two formulas that we will need going forward. OK, so now we want to talk about entropy as a functional of the stochastic trajectory, OK? So my definition of entropy was that the delta s of the environment associated with a jump from i to j was equal to ln(w_ij / w_ji), OK? And that, of course, is going to be consistent with ln(M^T_ij / M^T_ji), OK? If i is equal to j, that's just 0, OK? If i and j are the same state — which, again, if I look at the stochastic trajectory, as I told you, when I go from x_0 to x_1 I mostly stay in the same state — the entropy change when I do such a jump in the stochastic trajectory is simply going to be 0.
So if i is equal to j: there is no such thing as w_ii, OK, there is no such thing as a transition rate from a state to itself. But when I do discrete time, then I can think about the transition probability of staying in the same state, OK? Those are slightly different issues. If i is different from j, of course, the two expressions are going to be the same, right? One would be delta*w_ij and the other one delta*w_ji, and the deltas cancel, so they give the same ratio, OK? So the delta s of the environment associated with a jump from i to j is simply going to be the log of either w_ij over w_ji, or of the transition probability from i to j divided by the transition probability from j to i, OK? Which comes from the transpose of the matrix M. Now, if I want to write the delta s of the environment of the whole trajectory gamma, that's simply going to be

    delta s_m(gamma) = sum_{n=0}^{N-1} ln[ M^T(x_n, x_{n+1}) / M^T(x_{n+1}, x_n) ]

OK? So that's the definition of the entropy change. And I wrote "environment"; I should keep the notation as before, which was "medium". Sorry. So that's the entropy change of the external medium. So again, I want to think about the stochastic trajectory; for each trajectory I have this quantity, OK? And, of course, this entropy change of the medium can be negative. While the average one — the average rate of entropy production that we discussed before — was always positive, this one here can be negative, OK? For example, if you think about the single enzyme: if my trajectory were a cycle not in the clockwise but in the anticlockwise direction, then the entropy change associated with that particular cycle would most likely be negative, OK? So for a particular trajectory, the delta s_m can be negative, all right? Now, that is the entropy change of the environment. What's the entropy change of the system?
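This sum along a trajectory is easy to evaluate; a minimal sketch with made-up rates, checking that the delta cancels (the ratio of probabilities equals the ratio of rates) and that reversing a cycle flips the sign of the medium entropy change:

```python
import numpy as np

# M built as I + delta*W from hypothetical rates, so ln(M^T_ij / M^T_ji) = ln(w_ij / w_ji)
w = np.array([[0.0, 2.0, 0.5],
              [1.0, 0.0, 3.0],
              [0.7, 0.2, 0.0]])    # w[i, j]: rate from state i to state j
delta = 0.01
M = np.eye(3) + delta * (w.T - np.diag(w.sum(axis=1)))

def delta_s_medium(M, x):
    """Sum of ln[M^T(x_n, x_{n+1}) / M^T(x_{n+1}, x_n)]; steps with x_{n+1} == x_n add 0."""
    s = 0.0
    for a, b in zip(x[:-1], x[1:]):
        if a != b:
            s += np.log(M[b, a] / M[a, b])
    return s

# A forward cycle 0 -> 1 -> 2 -> 0 and its time reverse have opposite entropy changes,
# so one of the two is negative: delta s_m of a single trajectory can be negative.
cycle = [0, 1, 2, 0]
print(delta_s_medium(M, cycle), delta_s_medium(M, cycle[::-1]))
```

For the cycle, the result equals ln of the product of forward rates over the product of backward rates, independent of delta.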
Well, the entropy change of the system associated with gamma is simply going to be defined as

    delta s_sys(gamma) = -ln P(x_N) + ln P(x_0)

all right? And think about what happens if I take the average of delta s of the system. Average here simply means a sum: the average of something that depends on gamma is the sum over all gamma of that quantity times P(gamma), P(gamma) being the probability of the trajectory gamma, OK? Let's just make this clear: the brackets mean sum over all gamma of delta s_sys(gamma) times P(gamma). So I'm defining my entropy change of the system in this way, and all I want to say is that this is simply the Shannon entropy, OK? If I take the average of this definition, it is going to give me

    < delta s_sys > = - sum_{x_N} P(x_N) ln P(x_N) + sum_{x_0} P(x_0) ln P(x_0)

So my point is that this definition here is consistent with the Shannon entropy, OK? And I hope you have heard about the Shannon entropy in your life. But, I mean, the system is an out-of-equilibrium system, and this is pretty much the most natural definition of entropy you can imagine for an out-of-equilibrium system. I cannot define entropy in the thermodynamic sense that I had before, like dS = (1/T) dU and so on and so forth. That doesn't work anymore; my system is out of equilibrium. And so my definition of the entropy of the system is that it is just the Shannon entropy, OK? And so if I define the trajectory entropy like this and take the average of the trajectory entropy, what I get is the Shannon entropy at the end minus the Shannon entropy in the beginning. So it might be convenient to write it as that difference, OK? If you don't know it, the definition of the Shannon entropy is this thing here, OK?
That first term would be the Shannon entropy at the end, all right? And the other one is the Shannon entropy in the beginning. OK, so now I have my entropy change of the medium, I have my entropy change of the system, and I also have the equations for the probability of a forward stochastic trajectory and the probability of the reverse stochastic trajectory, OK? Now, the total entropy change as a function of the trajectory gamma is going to be the delta s of the medium as a function of gamma plus the delta s of the system as a function of gamma, OK? OK, so again: let's call the forward-trajectory probability equation 1, the reverse one equation 2, the medium entropy equation 3, and the system entropy equation 4, OK? If you put all of them together, it's not very hard to show that

    delta s_tot(gamma) = ln[ P(gamma) / P(gamma reversed) ]

That is a really, really important formula in stochastic thermodynamics. And again: I start with these definitions of entropy. The entropy of the medium is the one that I justified with generalized detailed balance. The entropy of the system is just the Shannon entropy of the system, OK? And with these definitions, what I find is that the total entropy change — tot means total — the change of the entropy of the medium plus the change of the entropy of the system, is given by the log of the probability of the forward trajectory divided by the probability of the reverse trajectory. And now from this equation we can derive the fluctuation theorem, which is the following. Let's say I want to calculate the average of e to the power of minus delta s total, right? That's going to be equal to the sum over all trajectories gamma of the probability of the trajectory gamma times e to the power of minus delta s total of gamma, right?
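The identity delta s_tot(gamma) = ln[P(gamma)/P(gamma reversed)] can be checked by brute force on a small chain; a minimal sketch, assuming a made-up matrix and initial distribution, enumerating every trajectory of a short length:

```python
import numpy as np
from itertools import product

M = np.array([[0.90, 0.05, 0.10],
              [0.07, 0.90, 0.05],
              [0.03, 0.05, 0.85]])   # columns sum to 1
p0 = np.array([0.5, 0.3, 0.2])
N = 3
pN = np.linalg.matrix_power(M, N) @ p0   # distribution after N jumps

def path_prob(p, x):
    """P of a path x with initial distribution p."""
    prob = p[x[0]]
    for a, b in zip(x[:-1], x[1:]):
        prob *= M[b, a]
    return prob

def delta_s_tot(x):
    """Medium part plus system part, straight from the definitions."""
    s_m = sum(np.log(M[b, a] / M[a, b]) for a, b in zip(x[:-1], x[1:]) if a != b)
    s_sys = -np.log(pN[x[-1]]) + np.log(p0[x[0]])
    return s_m + s_sys

# Check delta_s_tot(gamma) == ln[ P(gamma) / P(gamma reversed) ] on every trajectory
ok = all(
    np.isclose(delta_s_tot(list(g)),
               np.log(path_prob(p0, list(g)) / path_prob(pN, list(g)[::-1])))
    for g in product(range(3), repeat=N + 1)
)
print(ok)
```

The steps where the state does not change contribute a factor of 1 to both path probabilities, which is why they drop out of the medium entropy.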
Now if I use the formula above, that's going to give me the sum over gamma of P(gamma) times P(gamma reversed) over P(gamma). This P(gamma) cancels with that one. And of course, summing over all gamma is the same as summing over all gamma reversed: for a trajectory gamma there is exactly one corresponding reverse trajectory, right? Which means that summing over gamma or summing over gamma reversed is the same. And this thing is a probability, so it must be normalized to 1. So basically,

    < e^{- delta s_tot} > = 1

OK? That is the very famous fluctuation theorem. It takes different names — the Jarzynski equality is a related one. I'm going to discuss the physical interpretation of this thing next lecture. But a main point about the fluctuation theorem is that it's a generalization of the second law of thermodynamics. What's the second law of thermodynamics? It's that the average of delta s total must be larger than or equal to 0, OK? Now, the average of e to the power of minus delta s total, which equals 1, must be larger than or equal to e to the power of minus the average of delta s total, OK? This is called the Jensen inequality, and it is always true; it's something you can demonstrate. So from this inequality, the Jensen inequality, together with the fluctuation theorem — this thing here, 1 together with 2 — I get that the average delta s total is greater than or equal to 0. So the fluctuation theorem is actually a generalization of the second law of thermodynamics, OK? It's a stronger statement than the second law of thermodynamics. Not only must the average entropy change be positive: if I calculate the average of the exponential of minus the entropy change, that must be equal to 1, OK? This is not an inequality; it's an equality.
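The one-line proof can be replayed numerically; a minimal sketch, assuming the same made-up matrix, that computes the average of exp(-delta s_tot) by exact enumeration and also checks the Jensen-inequality consequence:

```python
import numpy as np
from itertools import product

M = np.array([[0.90, 0.05, 0.10],
              [0.07, 0.90, 0.05],
              [0.03, 0.05, 0.85]])   # columns sum to 1
p0 = np.array([0.5, 0.3, 0.2])
N = 4
pN = np.linalg.matrix_power(M, N) @ p0

def path_prob(p, x):
    prob = p[x[0]]
    for a, b in zip(x[:-1], x[1:]):
        prob *= M[b, a]
    return prob

# <exp(-ds_tot)> = sum_gamma P(gamma) * P(gamma_rev)/P(gamma) = sum_gamma P(gamma_rev) = 1
avg = sum(path_prob(pN, list(g)[::-1]) for g in product(range(3), repeat=N + 1))
print(avg)   # fluctuation theorem: should equal 1

# Jensen: 1 = <exp(-ds_tot)> >= exp(-<ds_tot>), hence <ds_tot> >= 0 (second law)
mean_ds = sum(
    path_prob(p0, list(g)) * np.log(path_prob(p0, list(g)) / path_prob(pN, list(g)[::-1]))
    for g in product(range(3), repeat=N + 1)
)
print(mean_ds >= 0)
```

Note that the cancellation in the first sum is exactly the one-line derivation from the lecture: the forward probability drops out, leaving a normalized sum over reversed paths.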
It's a much stronger statement, all right? And this can be expressed in terms of the probability distribution of the entropy; I'm going to discuss a little bit about that next lecture. I guess I can finish my lecture here. And, you know, do try to understand this definition, OK? The derivation of the fluctuation theorem is really one line; it's pretty straightforward. And this is the result that really made stochastic thermodynamics — that's where the field was born. This was obtained in the mid-90s, in different versions. And that's the very famous fluctuation theorem. All right. Thank you very much, André. Is there some question? Yes. Maybe come here; we will listen better. Just present yourself. OK, so I'm Sandeep, and I'm quite new in this topic; this is not my topic. But I wanted to know: why should one care about the entropy production rate? What information does it carry, in general, for a system? — I mean, if you think about the single enzyme, it tells you how much chemical work you are doing. First, it tells you whether you are out of equilibrium or not, OK? If you are out of equilibrium, your entropy production rate is not zero; if you are in equilibrium, it is zero. And in general, it tells you how much energy you are dissipating, or how much free energy you are consuming. For many things we want to do, we have to dissipate energy. And you can think about a lot of things in these terms: if, for example, you build an engine, you want the engine to not dissipate too much energy, OK? You want to reach a very high efficiency, so that all the heat you take from the hot reservoir is transformed into work — that would mean that the entropy production rate is small, in a sense. So the average entropy production rate is something that quantifies how much free energy you are consuming.
And in many situations there are things that you can only do if you consume energy — in biology, for example, lots of things require consuming energy. But in many situations it's not clear why you have to consume that energy, or in which sense, for example, the energy consumption would be optimized. Ideally, you want to be able to do whatever particular task you are doing with the minimal energy consumption, let's say. And if you want to analyze this kind of question, you have to look at the entropy production rate. I guess that would be my answer. — André, I saw also that there are many questions in the chat. Can you read them? — I cannot see the chat... now I can try. OK: "Can you repeat the idea when delta is small?" So "delta is small" simply means that — I don't know if this question was asked when I was talking about the transitions — one thing is that small delta corresponds to the continuous-time limit, OK? I discretized the time. And the other thing I said is that if I look at this matrix here, when delta is small it is more likely that I stay in the same state in a jump, right? 1 minus delta r_1 is going to be close to 1; the off-diagonal elements of my matrix are going to be close to 0 and the diagonal elements close to 1. Which means that if I am in a certain state, it's more likely that I stay in that state than that I jump out of that state. OK, that's what I said about delta; I hope that answers the question. — "Since the probability of the reverse stochastic trajectory has the same structure as that of the forward stochastic trajectory, is there a time symmetry happening?" — I mean, I would say that having an entropy production that is positive means that there is some breaking of time symmetry, in a sense, right? But the structure itself is not sufficient; the structure comes from the Markov property.
Yeah, but I don't know if I fully understand the question about a time symmetry happening. — I think the question is related to the fact that the forward and backward probabilities look similar. — Ah, sure, they look similar. So if they look similar, does it mean that there is a time symmetry? Well, M is not a symmetric matrix, and that's a different story from a symmetry of the trajectories anyway. I mean, both the forward-trajectory and the reverse-trajectory probabilities can be written in terms of a product of elements of the transpose of the matrix M, because it's just a Markov process. That's one thing. The other thing is time symmetry, which typically people would identify with having no entropy production, right? That's what people would call time symmetric. But those are different things. The fact that the expression for the probability of the forward trajectory and the expression for the probability of the reverse trajectory have the same structure, if you will, is just because it's a Markov process. That's it, nothing else — I think, in the sense that you are asking. Does that answer you, people in the chat? — Yes. OK. And the people who asked whether the notes will be on the website: yes, we will put a PDF version of this lecture of André on the website. — Lazar, you will do it? — I can just send it. — Erika will do it. — I will do it, don't worry. — OK. And there is a question. — Yeah, I was just saying that we will do that. — Yes, that is a simple question. Present yourself first. — OK. Hello, I'm Jose. My question is: in what cases can you give a thermodynamic interpretation for the Shannon entropy of the system? Here you relate the change in entropy with the change in the Shannon entropy, no? — Yeah. I mean, you can do that always, sure. It's just that...
So the system is not an equilibrium thing, OK? So I cannot use, for example, the relation

    dS = (1/T) dU + (p/T) dV - (mu/T) dN

For the system, that's simply not true; it doesn't work. It's a non-equilibrium system, and this relation is only true in equilibrium, OK? And so, you know, you want some definition of entropy for the system. You might try another one, which is not the Shannon entropy. Maybe it works, maybe not. And by "it works" I mean the following: why do we define entropy? What's the point of entropy in thermodynamics, if you will? The reason behind it is to define something that separates the things that you can do from the things that you cannot do, OK? The fact that the entropy change is larger than 0 tells you, even in ordinary thermodynamics, the stuff that you can do from the stuff that you cannot do. For example: think about taking the elevator. Instead of exercising to lose weight, you just take the elevator, OK? You would like to transform gravitational energy into burned calories with efficiency 1. That's not possible. The first law, conservation of energy, would tell you, OK, you can do that. But by the second law it does not work; it's impossible, OK? So the reason we define entropy is that we come up with a quantity for which there is an inequality, and that inequality tells you the things that you can do from the things that you cannot do. And that does happen if I define the system entropy as the Shannon entropy: I do get something like that. So that's the reason for this definition; I would say that's the justification for it. But what I would call a thermodynamic entropy is something like the relation above, and since it's a non-equilibrium system, that equation is simply not true anymore, OK?
And that's just the definition that we use. And it's a definition that makes sense, in the sense that we end up with a quantity that can tell us the things that we can do from the things that we cannot do. That's how I would explain it, if that answers your question. — OK, OK. Thank you. — Is there another question? I must insist: also for the next talks this week and next week, you can ask all the questions that you want. You don't need to be shy. If the question sounds stupid, it's not a problem; it's normal to have questions. So, does somebody have a question in the chat? Yes: "Is there any stochastic thermodynamic generalization of the third law of thermodynamics?" — Not really. No, for the third law, not really. I will go for a no on this one. Maybe people have thought about that, but the third law, not really. — Is that because the zero-temperature limit is in conflict with the assumption that the system is stochastic, or something? — I don't know; I never thought about it. But I mean, the system is stochastic, but you can always take a zero-temperature limit, even in a normal thermodynamic system. It's a good question. I never thought about the third law in stochastic thermodynamics. I don't think stochasticity is the obstacle; you might be able to do something like that, but I don't know. To be honest, I never liked the third law very much. But even if the system is stochastic, I can at least take a limit: the temperature is there, it's related to the transition rates, and I could, in principle, try to take some limit of T going to zero, and I would go to my ground state, whatever it is. So that should be possible to do; whether you get something like the third law or not, I never thought about it. But it's something that hasn't been discussed much — maybe you write a paper about it in the future. I don't know. — So we have maybe a last question in the room. — Hi, my name is Dana.
And I was just wondering if the internal entropy production is interesting in any way. You kind of leave it out, right? — Which entropy production? — The internal one. You have the one of the medium, but what about the small mesoscopic system? That also has one. — Yes. So in a steady state, that's just zero, right? I didn't really show that, but if you think about the steady state — which is what I was talking about most of the time here — it's just going to be zero. So for a steady state it's just zero. If it's not a steady state, then it might be a relevant contribution that will show up in your total entropy change. — OK, thank you. — And Edgar said that someone wrote a paper about the third law, which I was not aware of, but I will read it. So in the next lecture of André, if you want, you can start by asking a question. Because now, you know, you probably must work tonight to understand all of this, because this is the basis of stochastic thermodynamics; in a sense, it's important. So I advise you to work a little tonight, and then tomorrow, at the beginning of the lecture, you can ask some questions if you don't understand. And the questions can be naive; that will be fine, no problem. So thanks, André. Now we will go to the next session, which is the hands-on session. Thank you. Thanks, André. Have a good afternoon. Thank you. Bye. All right, now we should continue with our hands-on session. Let me just stop this.