Okay, continue, play. Good. We started this lecture on thermodynamic uncertainty relations with a different approach, the blackboard approach, okay? But today I'm trying something different: we are going to use slides, and they're also all going to be uploaded to the Slack channel. You can then give me feedback on the ways that you prefer to learn and so on. So, we had this master equation, and I'm using David's notation: this K encodes the transition rates between states, so we can keep track of them. Some things to emphasize again from yesterday: we're interested in non-equilibrium steady states, and if we are interested in them, we are using multiple heat baths. If not, we are good to go with one bath, a particular reservoir, a heat reservoir, okay? So from yesterday, what we did was: given this setup, which we can describe by a master equation, a system evolving under a master equation, we can define a trajectory, right? So now I'm going to go over there, I think it's okay, I'm going to show it like that. Okay, so a trajectory is basically something that encodes the set of successive states in the discrete state space that the system visits. Initially, let's say the system is set to state x_0 at time t_0, for example t = 0. The coupling to the heat bath induces some stochastic dynamics over the system that causes the system to make a jump from one state to another. Is everything okay? Okay, perfect. So we say that, starting from this x_0, it then jumps to x_1 at t_1. But we also discussed that there are, for example, different effects of different heat baths.
When you have multiple heat baths, you need to account for which coupling each jump is due to. So when you are defining a trajectory in the most general form (if you have one heat bath you don't have to do this, of course, because you know why the system is making the transitions), you encode the trajectory in a way that lets you keep track of the heat bath that induces each transition between some t_{j-1} and t_j, okay? So everyone is clear with this definition of a trajectory? Perfect. One thing that we would like to express is the total number of transitions from one state to another that is induced by the coupling to some heat bath, okay? Yesterday we wrote it in this way: we are using this Kronecker delta to keep track of the jumps along the trajectory, okay? So this is a quantity that encodes the total number of transitions from one state to another, induced by some heat bath. Now, if you recall our discussion from yesterday when we started the lecture, we talked about fluctuation theorems and we talked about uncertainty relations. Fluctuation theorems constrain the statistics of the entropy production, or some relevant random variable that quantifies the thermodynamic behavior of your system. But thermodynamic uncertainty relations make use of random variables in the form of entropy production and also currents. And up until this point, we didn't define mathematically what a current means, right? So what we're doing right now is to define this current.
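The bookkeeping described above can be sketched in code. This is a minimal illustration, not anything from the lecture slides: the 3-state rate matrix and the time horizon are made-up choices, and the simulation is a standard Gillespie scheme, with the Kronecker-delta-style transition count n[x, y] built from the recorded trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state rate matrix (columns sum to zero); K[x, y] is the
# rate for the jump y -> x. The numbers are made up for illustration.
K = np.array([[-3.0,  1.0,  2.0],
              [ 2.0, -2.0,  1.0],
              [ 1.0,  1.0, -3.0]])

def sample_trajectory(K, x0, t_max):
    """Gillespie simulation: return the list of (jump time, state) visited."""
    x, t = x0, 0.0
    traj = [(t, x)]
    while True:
        escape = -K[x, x]                    # total escape rate from state x
        t += rng.exponential(1.0 / escape)   # exponential waiting time in x
        if t > t_max:
            break
        rates = K[:, x].copy()
        rates[x] = 0.0                       # candidate destinations x' != x
        x = rng.choice(len(rates), p=rates / rates.sum())
        traj.append((t, x))
    return traj

traj = sample_trajectory(K, x0=0, t_max=50.0)

# n[x, y]: total number of observed jumps y -> x, i.e. the Kronecker-delta
# sum over successive trajectory states from the lecture.
n = np.zeros_like(K)
for (_, y), (_, x) in zip(traj, traj[1:]):
    n[x, y] += 1
```

With multiple baths one would keep one such count n^nu per bath; here a single bath is assumed for brevity.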
So you're basically taking this term that counts the total number of transitions and multiplying it by an antisymmetric increment function, which says that a general stochastic accumulated current is a weighted sum of the number of transitions that you make from one state to another. In a more summarized form, I like this word "hop", hopping between states, okay? So this general stochastic accumulated current is a sum of weighted hops. This is crucial. As you see, this is a pretty general definition of a current, right? It can be anything that satisfies this: if you have an antisymmetric increment function that can be related to some thermodynamic quantity, then you're good to go, you can write down some general current. For example, one example that we gave yesterday was this: this is the energy ascribed to state x, this is the energy ascribed to state y, and if you put this into the formula over here, what you get is the net heat flow into the heat baths, okay? Or you can write something simpler: say you're starting from some state x_0, jumping to another one through some bath nu_j. If you just use an increment of one, then you're not really weighting anything; it's not a weighted hop, you're just counting the total number of transitions between states. Sorry, two things. First of all, would this be the energy flux from x to y, or from y to x? Because in the definition, say x is the starting state. Oh, you're right, that's a good point. Yes, exactly, thank you very much. Second thing: this dependence on nu_j is not clear to me.
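The two increment-function choices just mentioned can be made concrete in a few lines. This is a sketch with made-up data: the jump list and the state energies are invented, and the sign convention for the heat increment (d(x, y) = E_y - E_x, energy released to the bath on a jump y -> x) is exactly the point the student's question raised, so treat it as one convention, not the only one.

```python
# Accumulated current J = sum over jumps of d(x_j, x_{j-1}) for an
# antisymmetric increment function d(x, y) = -d(y, x).
jumps = [(0, 1), (1, 2), (2, 0), (0, 1), (1, 0)]   # (from y, to x) pairs
E = {0: 0.0, 1: 1.5, 2: 0.7}                        # made-up state energies

def accumulated_current(jumps, d):
    """Sum the increment d(destination, origin) over every recorded jump."""
    return sum(d(x, y) for (y, x) in jumps)

# Choice 1: weight +1 for 0 -> 1, -1 for 1 -> 0, else 0: this just counts
# the net number of 0 -> 1 hops (the "unweighted" current from the slide).
d_count = lambda x, y: (1 if (y, x) == (0, 1) else 0) \
                     - (1 if (y, x) == (1, 0) else 0)

# Choice 2: d(x, y) = E_y - E_x accumulates the net energy the system
# releases, i.e. (in this convention) the net heat flow into the bath(s).
d_heat = lambda x, y: E[y] - E[x]

print(accumulated_current(jumps, d_count))
print(accumulated_current(jumps, d_heat))
```

Since the made-up trajectory is a closed loop (it starts and ends in state 0), the heat current sums to zero here, which is a quick sanity check of the antisymmetry.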
Because on the right side, on the left side, you have something that just depends on nu. Okay, yeah. I mean, I think it's just notation; I think this expression is clean, but I can ask David if it's... So it's a kind of strange notation. She's actually using notation that you will see in the literature. What they basically mean to say is that there's a delta function saying: only consider those transitions which happen to be the ones mediated by that specific bath, in this case nu. I guess you could put the nu-dependence in the n instead of the d, because I think you actually don't need it in the d as long as it's in the n. But it's a conventional way of writing; they duplicated it. There is sloppiness in the literature, but that's all it means, as long as it's clear what we are doing. I also have a question. Thinking about the current as this dynamic: because with n you're considering the number of transitions, should the d impose a rate, so to say? Is the d taking into account the rate of this total number of transitions between one state and another? We could say that. Watch me roar, okay? Put that on a T-shirt, yeah. I think this tree of life is much cooler than that, but in any case, by the way, I've got really cool T-shirts. So, here is my increment function. And this is three, because it's three steps. And that's it, that's all our current is. The nu is saying that that particular set of times that David did it, that was due to the reservoir he named turkey. There are other times where it's due to the reservoir called fish, and some where it's due to the reservoir called chocolate. I'm being a little bit silly, but I hope you get the idea. So that's all that a current is, nothing more.
That's the accumulated current: in that total amount of time that it took me to act like a buffoon, it was three steps. And the d is saying what actual current we're measuring; we could be measuring a different type of property along that sum by projection, okay? Yeah, just like this one. If you just take this one, then you recover the net number of transitions between states. But if you use this other one and put it inside the expression, you will see that it's going to be the net heat flow. And actually, one thing to think about, I think it's really not trivial, I actually didn't see it the first time: as an exercise, think about whether you can write the entropy production itself as a current, just by using a suitable increment function. I'm not going to tell you what the increment function is, but I can give the spoiler that, yes, you can write it down as a current. So all interesting things are currents, basically. Okay, maybe most of the interesting things are. Your second example is not symmetric in x and y, right? Anti-symmetric, sorry. It's not anti-symmetric; I mean, if you exchange x and y, that is symmetric. So, okay, think about how we actually describe these transition rates, for example in the master equation. We had these conditions imposed on the transition rate from i to j and from j to i. So it's implicitly anti-symmetric in that sense, but in this given form, it's not. Yeah, okay. But now I think we understand what a current is, right? David gave a great presentation of what a current is. It will be on YouTube forever. Okay, perfect. I'm continuing. If you have any questions, you can just interrupt. Do you have questions? Yes. Maybe you said this before, but I missed it.
So if I have a generic K, how do I know how many baths there are? Oh yeah, it was in yesterday's lecture. So when you write something like this, these are the transition rates, right? The probability to make a jump from state y to x per unit time. Now, when you have multiple heat baths, I think the question can be refined in the following way: if you're given a master equation just in this form, you don't actually know whether there are multiple heat baths. But if you want to have multiple heat baths, you can write the transition rate in the following form. What are the algebraic conditions on K? So let this be your transition rate matrix; put some numbers in it, okay? This one encodes the effect of the coupling to heat bath nu = 1. Take that and sum it with the transition rate matrix of the next bath. Yeah, this is clear. So is it that each single-bath K^nu satisfies detailed balance, whereas the total K_xy does not? Exactly, okay. And you write this for all of them, exactly. What about the minimum number of heat baths? No, of course not; as I told you, this is just a mathematical construction that you use to describe a physical system. For example, if you want to describe your physical system as being in a non-equilibrium steady state, then you know that you must use multiple heat baths. And the reason is that you have something like this, right? If you have only one heat bath, there's only so much you can do: the temperature imposed by this one heat bath is going to be the temperature that is fixed on the system itself.
But if you have multiple heat baths that are fixed at different thermodynamic potentials, for example different temperatures, then the system can go out of equilibrium and still find itself a steady state. But I'm not sure if I'm understanding your question beyond that point. Is this the p superscript nu that Gilger wrote? What she wrote there is shorthand for what the stationary state would be if there were only the reservoir nu. So that's the Boltzmann distribution for that reservoir: reservoir nu will have its own temperature, its own chemical potential, and so on. What she is getting at here is that if you have different reservoirs which have different equilibrium distributions p^nu, then p^nu would be the stationary state if the rate matrix were only K^nu. But if you have multiple K^nu being combined, as in the equation she wrote, what that means is that none of those p^nu is actually a stationary state of your summed rate matrix. And can I make just a comment on it? When he says a stationary state due to one heat bath, we're always thinking about the equilibrium one, because if you have one heat bath you cannot have a non-equilibrium stationary state. When you have multiple heat baths, you can have a non-equilibrium stationary state. So, for example, I have two friends, and they each want to impose their own, whatever the analog is, equilibrium steady state; but because they have different equilibrium steady states, I get a non-equilibrium steady state, okay? So I tried to capture this by saying yesterday that conflict is why we have so many interesting things in life, because everyone is stretching and trying to impose their own thing, okay? And the question that you asked also included an element asking what is the minimal number of heat baths, for example, when you are given a rate matrix?
Well, if I give you some transition rate matrix, just for mathematical fun, you can of course decompose it, rewrite it as a sum of different rate matrices, but they will describe different physical systems. If you write it just as it is, one matrix, it's not going to be representing some non-equilibrium behavior; for a non-equilibrium steady state, you need at least two different rate matrices that encode the behavior of two heat baths fixed at different temperatures. You can have, say, ten heat baths fixed at the same temperature with the same chemical potential; well, it's not going to make a difference, you're not going to be in a non-equilibrium steady state. Where do the temperatures and the chemical potentials of the different heat baths appear here? When you write the equilibrium stationary state? Yes, in the different Ks. So the temperature, yes, and the Hamiltonian term, of course. Maybe. Yeah, please. Two heat baths at different temperatures, beta and beta prime, okay? Then what we have here is local detailed balance. I'm actually going to do it this way: beta sub A, beta sub B. So K^A of (x, y), times the Boltzmann distribution, with the Hamiltonian of whatever your system is; I guess this would be at y. This is writing it all out explicitly, and if I get this right, it will be amazing. Okay, so this is for one of them, that's for bath A. So this is an equilibrium: if the only reservoir were A, the system would just settle, the rate matrix would just be K^A, the actual rate matrix, and the stationary state would be this equilibrium. However, we've also got another reservoir, B. So B has got its own rate matrix, which obeys its analogous equation.
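The blackboard relation being written out can be summarized in symbols. One common convention is assumed here (K^A_{xy} is the rate for the jump y to x, Z_A the partition function at inverse temperature beta_A):

```latex
K^{A}_{xy}\,\frac{e^{-\beta_A E_y}}{Z_A}
  \;=\;
K^{A}_{yx}\,\frac{e^{-\beta_A E_x}}{Z_A}
\qquad\Longleftrightarrow\qquad
\frac{K^{A}_{xy}}{K^{A}_{yx}} \;=\; e^{-\beta_A\,(E_x - E_y)} ,
```

and the same with beta_B for bath B. Each K^nu separately balances against its own Boltzmann distribution, but the sum K = K^A + K^B in general balances against neither.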
Same Hamiltonian, but the Boltzmann distribution is now different, okay? So then, as Gilger wrote down in these equations, the actual rate matrix that governs the true distribution is the sum of them. And the stationary state of this is not that Boltzmann distribution, it's not this Boltzmann distribution, it's not a sum of them. It's actually a funky kind of thing where you've got probability currents cycling all the way through. In equilibrium you don't have that in the stationary state, but here you do, because exactly as she was saying, it's all about conflict: the two of them are pushing in different directions. Okay, does that help, people? And it's the same thing whether it's chemical potentials or little pieces of bread you're getting from a reservoir: anything the reservoir gives into the system that's exchanged is, in general, going to have its own term in the sum of the rate matrices. Can I make another comment connecting this? So we talked about this more algebraic form, and we can also talk about it in the form of matrix algebra, okay? In a baby sense of matrix algebra, nothing serious. When we have something like that, how do we find, for example, the steady state when we are given a transition rate matrix and the initial probability distribution over states? Sorry? Exactly, how do we solve the master equation? But if I give you just the transition rate matrix and I want to find the stationary state, it is linear algebra. Come on. Sorry, what? Yeah. We solve the equation: transition matrix times probability vector equals zero. Yeah, exactly, but how you do it: there is something you can take a look at called the Perron-Frobenius theorem, okay? I see heads nodding, so you know that, yeah?
Okay, so you have an irreducible, aperiodic Markov chain, and in the kinds of situations we're dealing with in the stochastic thermodynamics of jump processes, these are fairly mild assumptions. So we say it's irreducible and aperiodic, and then we can invoke the Perron-Frobenius theorem, a mathematical result which tells us that when you have something like this, if you satisfy the conditions I just listed, then you will have a unique steady state that you relax to as time t goes to infinity, okay? And how you find it is, exactly as you said, by computing the right eigenvector of this transition rate matrix. You can do it for both the equilibrium scenario and the non-equilibrium scenario. So, just a second: when you find the steady-state distribution for the equilibrium scenario and the non-equilibrium scenario separately, there must be something that tells you the difference between these two scenarios, because you are using the same mathematical tool, right? As David said, in this situation what we are checking is whether there are non-vanishing fluxes in the system, currents, and the way that we identify these non-vanishing fluxes is entropy production, okay? Use Perron-Frobenius, obtain your steady state; you still want to understand if there is a difference between an equilibrium steady state and a non-equilibrium steady state, and what you do is compute the entropy production, EP. If EP is zero at your steady state, then you are in an equilibrium steady state. And if it's not, then you're in a NESS, okay? So this is how we use matrices, because we are going to use them when we are computing these steady states and so on; it's actually in one of your homeworks, okay?
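The whole pipeline just described, build per-bath rate matrices obeying local detailed balance, sum them, find the stationary state as the null right-eigenvector, then compute EP to tell equilibrium from NESS, can be sketched numerically. Everything concrete here is a made-up illustration: the energies, the temperatures, the square-root parametrization of local detailed balance, and the choice of which bath couples which transitions (the NESS needs the baths to drive different transitions, otherwise the cycle affinity can cancel).

```python
import numpy as np

def bath_rate_matrix(E, beta, pairs, gamma=1.0):
    """One bath's rate matrix; K[x, y] is the rate for the jump y -> x.
    Only the listed pairs are coupled, and the bath obeys detailed balance
    w.r.t. its own Boltzmann distribution exp(-beta * E)."""
    K = np.zeros((len(E), len(E)))
    for x, y in pairs:
        K[x, y] = gamma * np.exp(-beta * (E[x] - E[y]) / 2.0)
        K[y, x] = gamma * np.exp(-beta * (E[y] - E[x]) / 2.0)
    K -= np.diag(K.sum(axis=0))              # make columns sum to zero
    return K

def stationary_state(K):
    """Perron-Frobenius: the unique null right-eigenvector, normalized."""
    w, V = np.linalg.eig(K)
    p = np.abs(np.real(V[:, np.argmin(np.abs(w))]))
    return p / p.sum()

def ep_rate(K, p):
    """Schnakenberg entropy production rate; zero iff all fluxes balance."""
    s = 0.0
    for x in range(len(p)):
        for y in range(x):
            if K[x, y] > 0 and K[y, x] > 0:
                jf, jb = K[x, y] * p[y], K[y, x] * p[x]
                s += (jf - jb) * np.log(jf / jb)
    return s

E = np.array([0.0, 1.0, 0.4])                # made-up state energies

# Bath A (cold) couples 0<->1 and 1<->2; bath B (hot) couples 0<->2.
K_A = bath_rate_matrix(E, beta=2.0, pairs=[(1, 0), (2, 1)])
K_B = bath_rate_matrix(E, beta=0.5, pairs=[(2, 0)])

# Same couplings, but both baths at the same temperature: equilibrium.
K_eq = K_A + bath_rate_matrix(E, beta=2.0, pairs=[(2, 0)])
K_ness = K_A + K_B

sigma_eq = ep_rate(K_eq, stationary_state(K_eq))        # ~ 0: equilibrium
sigma_ness = ep_rate(K_ness, stationary_state(K_ness))  # > 0: NESS
print(sigma_eq, sigma_ness)
```

The EP test is exactly the diagnostic from the lecture: the same eigenvector computation is used in both cases, and only the entropy production rate distinguishes the equilibrium steady state from the NESS.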
So this is how we use them, and this is how we can check if it's equilibrium or non-equilibrium, yes? For the matrices K_xy, we said the columns sum to zero, right? But I thought Perron-Frobenius was for stochastic matrices, where... No, no, this is the difference between the continuous-time Markov chain and the discrete-time Markov chain description. When you have a continuous-time Markov chain and you write this transition rate matrix, what you have is something like this. But when you describe the time evolution, what you're doing is not actually taking this matrix and multiplying it with p(0), the probability distribution over the initial state; you take the exponential of this matrix, yeah? And by using the ordered exponentials that he mentioned, you can take this CTMC and rewrite it in terms of a discrete-time Markov chain, under appropriate conditions and so on. The exponential satisfies the required properties. Gilger has several times pointed out that EP is equal to zero if and only if you are at equilibrium. Can anybody actually prove that? Let me help. Exactly so. But wait, someone wants to explain, yeah? Go for it. Bingo, exactly. Whereas if you're at a non-equilibrium stationary state, that ratio is not going to be one for the stationary p and the rates. Okay. All good, great. Yes, great, thank you. So all I was saying was, remember all this material up here: if we have multiple rate matrices that have different stationary states, then the stationary state of the overall system, whose rate matrix is the sum of the rate matrices, is not going to obey that relation. That relation is local detailed balance.
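The CTMC-to-DTMC point raised in that question can be checked directly: exponentiating a rate matrix (columns summing to zero) yields a column-stochastic matrix, i.e. a valid discrete-time transition matrix for that time step. A minimal sketch, with a made-up 2-state rate matrix and the exponential computed via eigendecomposition (fine here since this K is diagonalizable):

```python
import numpy as np

# Made-up 2-state rate matrix (columns sum to zero); K[x, y] is the
# rate for the jump y -> x.
K = np.array([[-1.0,  2.0],
              [ 1.0, -2.0]])

def propagator(K, t):
    """exp(K t) via eigendecomposition: the DTMC transition matrix
    corresponding to evolving the CTMC for a finite time step t."""
    w, V = np.linalg.eig(K * t)
    return np.real(V @ np.diag(np.exp(w)) @ np.linalg.inv(V))

T = propagator(K, 0.5)
print(T.sum(axis=0))          # each column sums to 1: column-stochastic

p0 = np.array([1.0, 0.0])
p_late = propagator(K, 50.0) @ p0
print(p_late)                 # relaxes toward the stationary state [2/3, 1/3]
```

For this K the stationary state solves -p0 + 2 p1 = 0, giving [2/3, 1/3], and the long-time propagator lands there from any start, which is the Perron-Frobenius relaxation statement in miniature.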
So if there were a nu up here, then the stationary state of just that reservoir nu would obey this, but that's not what you're going to have as the stationary state of your global system where you sum up the rate matrices, okay? Okay, so if you're ready, we can proceed. That was a great discussion, by the way; I'm really happy right now, because I feel you're more connected to the class, and that makes me super happy. This is my way of saying thank you. Okay, now: thermodynamic uncertainty relations. We talked yesterday about how we are looking for universal bounds that let us say something about resource constraints and physically realizable processes or phenomena, okay? Thermodynamic uncertainty relations are a set of bounds that say something really nice about that. Let's say, for example, you have an observable quantifying the output of a biochemical process, say an enzymatic process. By the way, I'm going to use "observable" and "current" interchangeably, okay? They can mean different things; not every observable is a current in the mathematical form that we gave, but for this lecture there is no harm in treating them as the same, at least at this stage, okay? So let's say that I want to say something about, for example, the net rate of produced or consumed enzymes or substrates in a chemical reaction, okay?
And I want to be as precise as possible. But it's not just me, as an external agent, a researcher, or Guji, who wants to do this: biological systems themselves seem to want to do this. For example, remember this current picture, the trajectory picture that David created in real time? Currents are things like this: they jiggle around, they go like this and like that. Think of a current as, for example, a record of the number of substrates that you are producing or consuming over time. So, you're a biological organism, okay? You're far from equilibrium, but you have some stability, right? This is expected, because if you don't have this kind of stability, then you're probably going to die, or your mesoscopic system, the cellular system, will stop operating. So you want some certainty in your behavior. You want some precision. You want, for example, your average rate of consumed or produced enzymes or substrates to actually look like that. You want smoothness, so that you can sustain your biological behavior, okay? This is the idea. What thermodynamic uncertainty relations do is take that concept of precision and try to see if you can bound it by the energetic cost of sustaining that kind of biochemical process. Can we find a fundamental relation between the thermodynamic, energetic cost of sustaining this biochemical process and the precision associated with it? Again, the main point is that I want to be as precise as possible, but to be more precise, I'm going to pay some thermodynamic cost, which is the entropy production, okay?
So, about the first formula that you see here: one important thing is that currents fluctuate. One source of uncertainty comes from the fact that when you have a trajectory, again as David described, you can jump like this, like that; you're basically zigzagging. But there's another source of uncertainty, and this is just for intuition: the jump times at which you're making these transitions, okay? If you remember, when we encoded the trajectory we also included these little t terms, okay? So there are mainly two sources of uncertainty: the state transitions have their own uncertainty, this position-wise trajectory, and the jump times also have uncertainty. This is why we have uncertainty, okay? The way that we define precision is the term on the left-hand side: it's the variance-to-mean-squared ratio of the current that you're computing in your system. And on the right is the inverse of the average entropy production that you incur throughout this biochemical process, where you're having these reactions, for example. So this is the first TUR, derived in 2015, and it was, again, derived in the context of biochemical reactions, by using a tool that is actually not so familiar, so we are not going to talk about how that one is derived. We're going to talk about how this one is derived. Again, we have the precision term on the left-hand side, but now we are changing some conditions and we are deriving a different TUR. In one slide, I'm also going to talk about why we have different TURs; I talked about it yesterday, and I'm going to emphasize it.
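The original TUR can be checked in closed form on the simplest model where all three quantities are known analytically: a biased continuous-time random walk. This is a sketch in units where k_B = 1; the forward/backward rates and the observation time are made-up, and the closed-form long-time statistics used below are the standard ones for this model (mean current (kp - km) t, variance (kp + km) t, mean entropy production (kp - km) ln(kp/km) t).

```python
import numpy as np

# Check Var(J)/<J>^2 >= 2/<Sigma> (k_B = 1) for a biased random walk
# with made-up forward rate kp and backward rate km.
results = []
for kp, km in [(2.0, 1.0), (5.0, 0.1), (1.0, 0.9)]:
    t = 100.0                                  # arbitrary observation time
    mean_J = (kp - km) * t                     # mean accumulated current
    var_J = (kp + km) * t                      # current variance
    sigma = (kp - km) * np.log(kp / km) * t    # mean entropy production
    precision = var_J / mean_J**2              # variance-to-mean-squared
    results.append(precision >= 2.0 / sigma)
print(results)
```

Note the last pair is close to equilibrium: there the bound is nearly saturated, which is the regime where the TUR is tightest.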
This one also goes by the name of the generalized thermodynamic uncertainty relation, but we are going to derive it, as they did, from a joint fluctuation theorem of currents and entropy production. It is called the fluctuation theorem uncertainty relation. Now, I told you yesterday... okay, exactly, perfect, I'm coming to this, thank you, Francesco. Francesco is a name that I really like because of my mentors. So, there's a question: he asked how universal these TURs are, and what physical conditions we are imposing on our system when we want these TURs to be applicable. We're going to talk about this in this slide and the subsequent two slides, okay? I told you yesterday that thermodynamic uncertainty relations are a subfield of ongoing research. And I was serious when I said there are something like 50 thermodynamic uncertainty relations. We have those different uncertainty relations precisely for the reason that Francesco mentioned: there are different physical conditions that we impose on the physical systems that we are describing, okay? And these different conditions that we impose, constraints (I like this word a lot because it's so familiar from classical mechanics), are actually what allow us to do physics, right? In this sense as well, different constraints allow the existence of different flavors of thermodynamic uncertainty relations. So based on that, here's a timeline of what happened.
So I told you that in 2015 they derived this first, original thermodynamic uncertainty relation, and then right after that, others used a joint fluctuation theorem of entropy production and currents, okay? And I told you something about this distinction between "observable" and "current". The difference, on an intuitive level: take first-passage times, for example. Do you know, at least on an intuitive level, what a first-passage time is? Okay, perfect. Basically, in this kind of scenario, you're accumulating a current and you would like to say something about the statistics of the first time you reach a certain value or threshold of this current, okay? So in this situation the first-passage time is an observable, and we care about it because many stochastic processes have really interesting results about first-passage times, okay? So you can derive TURs for first-passage times, for instance. Also, the equations that we saw up until this point are just for single currents, right? But if you have multiple currents in your system contributing to the entropy production that you want to account for, then this becomes a covariance matrix, okay? If you take the diagonal, you get the variances of the individual currents, so this basically boils down to the previous one. And this is another Sigma, which reduces to the good old average entropy production in the case of a non-equilibrium steady state. This slide is not important in detail; it's only important because I want to show you that there are so many TURs, corresponding to so many different scenarios, okay? And so again, I'm always going to jump back to Francesco's question, okay?
We will see in one slide, in two minutes, that the TURs derived up until this point are really applicable to systems evolving under strict conditions, with strict conditions also imposed on the initial probability distributions. For example, you need to start at a non-equilibrium steady state. It's a strict condition; but is it possible to derive a TUR for systems that start evolving from an arbitrary initial state? It is, and they did it. Did I put the reference? Okay, I will send the reference; Liu, I guess. I'm going to send all of these references to the Slack channel. And people have also derived TURs from field theory and so on, so this is how the field is going right now. But one more thing, because we talked about statistical inference: how many people know about the Cramér-Rao bound, at least on an intuitive level? Sorry about that. Okay, you don't have to know it formally; you won't need it for this course. It's just that when I first learned about the TURs, I was thinking to myself: they always have this weird form, and why do we define precision as a variance-to-mean-squared ratio? I mean, yes, it makes sense when you see it; it's a measure of the dispersion of the random variable, its precision, and so on. But why this mathematical form? I think the thing that justifies it is a proof that comes from statistical inference, using the Cramér-Rao bound, and I wanted to mention this. If I have time, I want to discuss it for one minute; I have one slide devoted to it, without going into mathematical details. Because remember what we did in the previous lectures: what David did was to take E. T.
Jaynes's works from statistical inference and derive the Boltzmann distribution without making any kind of underlying thermodynamic assumption, right? So clever people took another tool of statistical inference, the Cramér-Rao bound, and then derived the thermodynamic uncertainty relations from it. Okay, I'm gonna talk about it in just one slide. I want to. Okay, now, we are gonna start deriving this uncertainty relation, the FTUR, by using a refined form of a fluctuation relation, okay? So let's start by recalling this one. David introduced you to this, right? This is an integral fluctuation theorem. If you take Jensen's inequality and apply it to this, you will see that the mean of this random variable R satisfies this. One thing I need to say is that this is applicable to random variables that satisfy a, I think, fairly mild condition that is described in Esposito and Van den Broeck: you must be able to describe the random variable as the log ratio of probabilities, as in the entropy production case, or, yeah, the little δs quantities in Esposito and Van den Broeck. Then you can basically derive this IFT. And for such a random variable you also satisfy what is called a detailed fluctuation theorem, which in turn implies the IFT, okay? The DFT is basically informing you about how exponentially more probable it is, when you run the movie forwards, to see an increase in entropy, compared to running the movie backwards and seeing the corresponding decrease in entropy. Sorry, just for completeness: here R would be the entropy production in units of k_B, right? The reason that I put this here in this general form is that it can be applied to any random variable R you wish that satisfies this kind of condition.
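As a quick numerical illustration of the IFT and Jensen's inequality (my own sketch, not from the slides): a Gaussian R satisfies the detailed fluctuation theorem P(R)/P(−R) = e^R exactly when its variance is twice its mean, and then the integral fluctuation theorem ⟨e^{−R}⟩ = 1 follows:

```python
import math, random

rng = random.Random(1)
mu = 1.5                     # illustrative mean entropy production (units of k_B)
sigma = math.sqrt(2 * mu)    # a Gaussian obeys P(R)/P(-R) = e^R only if Var(R) = 2<R>
samples = [rng.gauss(mu, sigma) for _ in range(200_000)]

ift = sum(math.exp(-r) for r in samples) / len(samples)
mean_R = sum(samples) / len(samples)
print(ift, mean_R)  # <e^{-R}> is close to 1; <R> is positive, as Jensen demands
```

The first printed number converges to 1 (the IFT), and the second is strictly positive, which is exactly what Jensen's inequality extracts from the IFT.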
So in Esposito and Van den Broeck, when they give a derivation of the detailed fluctuation theorem for the probability distribution characterizing the entropy production as a random variable by itself, they're not just coming up with it out of nowhere. They actually start from this formula, and then, I mean, it's a mathematical proof of maybe five or six steps. But yeah, it's Esposito and Van den Broeck, and that's how they arrive at this kind of proof. But instead of thinking only about the stochastic entropy production, you can think of another random variable that can be written as the log ratio of the forward and reverse probabilities defined over the forward and reverse trajectories, and you can still write this integral fluctuation theorem and detailed fluctuation theorem. So the reason that I started to write the IFT and DFT in terms of a random variable R is that I want to emphasize that it doesn't always have to be the entropy production, but for us it is the entropy production, because that's what we are interested in. So this is remarkable, in the sense that it is applicable to any kind of initial distribution. It is applicable to any kind of physical setting, arbitrarily far from equilibrium. Okay, this is as universal as you can get. This is your universal result. Okay, now what we want to do is to consider a specific setup where you have a system, an isolated system, in an equilibrium state. Okay, so here is the assumption. I'm going to be really incredibly attentive in emphasizing the assumptions. You are starting in an equilibrium state, and you are being driven to another state. So be careful about the initial state. So, going back to Francesco. Okay, in this lecture we are always going back to Francesco's question.
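For the record, the step from the detailed to the integral fluctuation theorem (a standard one-line argument, consistent with the Esposito and Van den Broeck construction mentioned above) can be written as:

```latex
% Detailed fluctuation theorem for a random variable R defined as a
% log-ratio of forward and reverse path probabilities:
%   P(R) / \tilde{P}(-R) = e^{R}
% The integral fluctuation theorem follows in one line:
\langle e^{-R} \rangle
  = \int P(R)\, e^{-R}\, \mathrm{d}R
  = \int \tilde{P}(-R)\, \mathrm{d}R
  = 1,
% and Jensen's inequality then gives
e^{-\langle R \rangle} \le \langle e^{-R} \rangle = 1
  \quad\Longrightarrow\quad \langle R \rangle \ge 0.
```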
We need to be super, super aware of the assumptions that we are making. And here one assumption is that your initial state is an equilibrium state. You're driven to another state by varying an external control parameter. For example, Carlos yesterday said: what can this external control parameter be? An external force, an applied magnetic field, okay? So you're using this external control parameter, lambda. In stochastic thermodynamics, this is the mathematical definition of a driving protocol. It's basically a function of time, this λ(t), because this magnetic field, this external force that you're exerting on the system, might be time dependent, or it might be time independent. So these are different functions that describe mathematically the way you manipulate the system externally, okay? So you have an external control parameter evolving under this driving protocol. And now, another assumption, the second assumption that we are making: imagine the system being driven through a time-reversed protocol that satisfies this, okay? So we know that the difference of sigma is, of course, the difference between the entropy of the final state and the initial state. And what is this change in entropy going to tell us about? Entropy production, right? So we are going to compute the entropy production, and we are going to compute this precision term, under these assumptions, okay? So, first step of the derivation: always be aware of the assumptions. Are we clear with the assumptions? Okay, perfect. Now, this is the gist of the fluctuation theorem that we are going to use, because let's also take a look at this.
In all of the fluctuation theorems that we saw so far, you have this probability distribution defined over the forward trajectory, and, when you run the movie backwards, you have this backward probability distribution, defined in David's class and also given in more detailed form in Massimiliano Esposito's paper on this. You have this backward probability of observing the corresponding decrease in the random variable, or the entropy change, when you run the movie backwards, okay? But what we are going to see in the next slide is a different form of this, and it's a different form because, if you assume that you have a time-reversed protocol of this kind, then you are assuming that you have time symmetry, okay? So the way that you run the movie forwards and the way that you run it backwards, externally, are going to be identical, okay? So: symmetry. Initial distribution, driving protocol, and how you run the system. And in this case we have symmetry, and it's going to make things much easier. So I'm not gonna go through its derivation, but it's not hard to accept it intuitively. If you have this kind of setup, then you have what we call a strong detailed fluctuation theorem. All the fluctuation theorems that we have seen so far are just plain, I won't say boring, detailed fluctuation theorems. This is a strong form, because you want some more from these guys, a strong form of fluctuation theorem that we call the strong DFT, okay? This is the standard form, and this is the one that we are using in this proof of a thermodynamic uncertainty relation, okay? So now, I just want to remind you of something from yesterday. This is a TUR, right? Yeah. Okay. This form was the 2015 nonequilibrium steady state one, okay?
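A sketch of what the "strong" form amounts to, as I understand the board notation (Σ is the entropy production):

```latex
% Ordinary DFT (forward process vs. time-reversed process):
\frac{P(\Sigma)}{\tilde{P}(-\Sigma)} = e^{\Sigma / k_B}
% With a time-symmetric protocol, \tilde{\lambda}(t) = \lambda(t),
% and identical initial and final distributions, the reverse process is
% statistically the same as the forward one, so \tilde{P} = P and we
% get the strong DFT -- a single distribution constrains itself:
\frac{P(\Sigma)}{P(-\Sigma)} = e^{\Sigma / k_B}
```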
We said that you can apply this thermodynamic uncertainty relation to physical systems that are in a nonequilibrium steady state, with a time-independent driving protocol, so that they have a constant average entropy production rate, okay? So this is strict, because most of the interesting things in the world are being driven far from equilibrium by time-dependent driving, okay? So when you have time-dependent driving, as in this one, you can get a formula like this, the FTUR, but there are still assumptions that you need to satisfy. You need, for example, to satisfy this kind of condition, let me go back there. Okay, here's a question for you. Can you tell me a suitable setup where you can satisfy this condition and this condition? Thank you, steady states themselves. They actually satisfy it trivially, right? Mathematically speaking. But also, if you have periodic driving, you can satisfy this. And in physics, we are really interested in periodic driving. Even in Hamiltonian systems there is, I don't know if you know it, but let me get you excited, this is not a part of the course, a really interesting part of dynamical systems theory called Floquet theory, which describes dynamical systems that have a time-dependent, sorry, periodic Hamiltonian, okay? You use it in semiconductors, you use it to understand biological systems, you use it in quantum mechanics, and so on and so forth. So that's why we are interested in a time-dependent driving protocol that is periodic. So we are going to be using this fluctuation theorem for systems that are, for example, periodically driven and satisfy this kind of condition: you have time-reversal symmetry, and one more thing, your initial and final states are the same.
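A minimal numerical check of the 2015 steady-state TUR (my own sketch, not from the slides; the rates and times are illustrative) using a biased random walk, for which each net jump dissipates ln(k₊/k₋) in units of k_B:

```python
import math, random

# Biased continuous-time random walk in a nonequilibrium steady state:
# jump right with rate k_plus, left with rate k_minus (illustrative values).
k_plus, k_minus, t_final = 3.0, 1.0, 20.0

def net_current(seed):
    """Net number of right-minus-left jumps up to t_final (Gillespie-style)."""
    rng = random.Random(seed)
    t, j = 0.0, 0
    while True:
        t += rng.expovariate(k_plus + k_minus)
        if t > t_final:
            return j
        j += 1 if rng.random() < k_plus / (k_plus + k_minus) else -1

samples = [net_current(s) for s in range(5000)]
mean_J = sum(samples) / len(samples)
var_J = sum((j - mean_J) ** 2 for j in samples) / len(samples)

# Each net jump dissipates ln(k_plus/k_minus) in units of k_B, so:
sigma = mean_J * math.log(k_plus / k_minus)   # average entropy production

precision = var_J / mean_J**2
bound = 2.0 / sigma   # 2015 TUR: Var(J)/<J>^2 >= 2 k_B / <Sigma>  (k_B = 1)
print(precision, bound)
```

For this model the product of precision and entropy production is analytically (k₊ + k₋) ln(k₊/k₋) / (k₊ − k₋), which is always at least 2, so the printed precision sits just above the printed bound.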
So let me write it down, because it's the most important thing, and I didn't want to put it in the slides directly, because I think it's something to just talk about, and then look into your eyes and ask if we understand it. These are the conditions for the fluctuation theorem uncertainty relation. No, big T, of course, okay. These are the same thing actually, but yeah, this one has a tilde, okay? So, just taking one step back. We want to constrain the statistics, I mean, we want to understand the behavior of the entropy production under some constraints, and we want to be able to talk about the precision of the currents, or the precision of whatever we are interested in in the system. But how are we going to do that? Okay, someone gives you a system, you model it by a master equation, everything's fine and so on and so forth, but you also want to be able to characterize this thermodynamic process itself in more detail, to be able to say something actually useful about how you can bound the precision of a current that is flowing in the system. And to be able to do that, to come up with useful mathematical results that describe the physical situation precisely, you need to be able to say something really exact about the conditions that allow you to use any given thermodynamic uncertainty relation, which takes us back to Francesco's point. When we are talking about bounds and resources in information theory, in computer science, and in physics, specifically stochastic thermodynamics, one thing that we must always, always do is make sure that we are aware of the assumptions, so that we can make sure the mathematical bounds we're deriving are not useless, that they're physically meaningful.
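The board conditions aren't fully captured in the transcript, but as I understand them, the two requirements read (T is the final time or driving period, p the state distribution):

```latex
% Conditions for the fluctuation-theorem-based uncertainty relation:
\tilde{\lambda}(t) = \lambda(T - t) = \lambda(t)
  % time-symmetric driving protocol (the tilde is the reversed protocol)
\qquad
p_0(x) = p_T(x)
  % identical initial and final distributions (e.g. periodic driving)
```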
And that's why these bounds, for example, the thermodynamic uncertainty relations, have been verified in many different experiments, ranging from the realm of quantum physics to biophysics and soft matter. They are verified experimentally. So what we're seeing in class today are not just mathematical results, but serious physical results. If you want to bound precision, you're paying with dissipation. And this is of interest to practical matters too, to engineers, because we want to design more and more efficient computers, we want to understand biological systems, we want to understand so many interesting things. You want to understand dynamical evolution, and when a system is dynamically evolving, there is always a thermodynamic, energetic cost associated with it. It literally tells you what is physically allowable or not. Okay, so this is one point. Now, are we taking a break, or am I going on? Yes. Okay. [Student:] So the system is in equilibrium. Then there is a driving control parameter, dependent on time, that makes the system go from state, let's say A to B, or x to y. What is lambda tilde? [Instructor:] It is the reverse protocol, the way that you externally manipulate the system when you run the movie backwards. [Student:] So, for you to be able to do that, there's an implicit assumption... [Instructor:] Exactly, this is why I was able to write this. The probability distributions describing your initial state and the final state, they must be identical. [Student:] And then it says: imagine the system being driven through a time-reversed protocol. So I am imagining now that I am starting at T and then I'm going to zero. What I am measuring is the entropy production between the state at zero and then at T. [Instructor:] Exactly. [Student:] And after that, I said that the time reversal is symmetric. [Instructor:] Exactly. Okay. So the average, this is the point, exactly.
The main point is that you're measuring the average entropy production during the time evolution. Even though the initial and final states are identical, and they might both be equilibrium states, what you're measuring is the average entropy production incurred throughout this thermodynamic process, where you drive the system out of equilibrium. Perfect. Yeah, I mean, if you want, yeah, we can take five minutes. Okay. So, five minutes break. Perfect. Yeah.