OK, so we talked about the assumptions, and now we are going to proceed with them. One thing I want to emphasize before we use a mathematical trick: up until this point in the course, I think, the derivations were smooth. When David gave the derivation of his results on the KL divergence and the mismatch cost, for example, we didn't really have to pull any mathematical tricks. This is the first time in the course that we are going to work through a proof, and we chose this one because it gives you an understanding of how we make proofs in stochastic thermodynamics. So we are going to walk through the proof together. It is, I think, the most comfortable proof of any of the TURs if you don't know statistical inference, and I didn't know statistical inference when I started learning about the TUR. So I hope we will make it through.

So yes, we talked about time reversal symmetry. What we want to do now is come up with a mathematical trick that captures it. We are considering the good old strong fluctuation theorem, P(σ) = e^σ P(−σ). Equivalently: when we deal with fluctuation theorems, we treat the entropy production as a random variable that can take both negative and positive values. Since we have time reversal symmetry, we are asking: what if we come up with a mathematical trick that allows us to consider only the positive values? We need to make a connection between the two pictures. Based on this expression, we are going to try to construct a probability distribution that is transformed, but statistically equivalent to the original one, the probability distribution that describes the behavior of the entropy production as a random variable.

Here is how. We know the distribution is normalized: the integral of P(σ) from minus infinity to infinity equals one. Split that integral into two components, one going from minus infinity to zero and the other from zero to infinity; the sum is still one. The piece from minus infinity to zero describes the behavior of P(−σ) for positive σ, and by the strong fluctuation theorem we can write P(−σ) = e^{−σ} P(σ). So I take that expression and insert it into the negative branch, and when I do, we can write what you see below.
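In symbols (a reconstruction of the slide from the spoken description, with the entropy production σ measured in units of k_B so that the strong fluctuation theorem reads P(σ)/P(−σ) = e^σ):

$$1 = \int_{-\infty}^{\infty} P(\sigma)\,d\sigma = \int_{0}^{\infty} P(\sigma)\,d\sigma + \int_{0}^{\infty} P(-\sigma)\,d\sigma = \int_{0}^{\infty} \left(1 + e^{-\sigma}\right) P(\sigma)\,d\sigma.$$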
But actually I'm cheating here, because the way we wrote it now only considers the positive values of the entropy production: the integral runs over all of them, from zero up. This is what I'm getting at. With the steps we have taken so far, we are manipulating the probability distribution that originally characterizes the entropy production, trying to obtain a new, modified probability distribution that characterizes the same entropy production. You can see how to write it on the screen. Do you see it? Sorry, Ben? What? From zero to infinity, the limits. Oh, yes, of course, sorry, thank you very much. Exactly so. That is how you get to the expression on the screen, and it allows us to define a new transformed probability distribution, Q(σ), equal to (1 + e^{−σ}) P(σ), and it integrates to one.

So: you took your random variable, the entropy production, which can take values from minus infinity to infinity, together with the probability distribution that characterizes it. You modified it, and you are now treating it as a random variable that takes values between zero and infinity, with the probability distribution Q we just defined, which lets you investigate the statistics of this random variable in a way that is equivalent to the original distribution P(σ). What we did here is just mathematical manipulation, nothing else; physically you are describing the same behavior. The definition is summarized below.
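For reference, the transformed distribution described above (same reconstruction caveat):

$$Q(\sigma) \equiv \left(1 + e^{-\sigma}\right) P(\sigma), \qquad \sigma \in [0, \infty), \qquad \int_{0}^{\infty} Q(\sigma)\,d\sigma = 1.$$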
Let's say we define this equivalent random variable. One thing that is important to us in thermodynamic uncertainty relations is the average entropy production, right? So let's compute the average entropy production in terms of this modified probability distribution, Q(σ). I am going to walk you through the steps; you all know how to write the average entropy production. Is there a question? No, OK. Maybe I should actually use the board here, sorry about that, my mistake. One thing that is dangerous is that once you have done something a hundred or a thousand times, as I had to a year or more ago, it looks trivial to you, when it is not actually trivial.

So take the average and split the integral into two, and let me remind you of one thing. When we didn't have the σ term in the integrand, we didn't have to account for any minus sign coming from the fact that we are integrating over the values from minus infinity to zero: the random variable can take negative values there, but the probability distribution itself is just a probability distribution, so there is no negative-valued behavior to track. Can we do the same thing here? No. We have a σ term now, so we are not going to be able to do the exact same thing: we need to account for the negative behavior, shown by the fact that we are taking the integral from minus infinity to zero. The way to express things is now a bit different. We still have the identity for the integral from zero to infinity, but because σ takes negative values on the other branch, do you see the difference? There is a minus sign.

So what we did is replicate the steps from before. But when we carried them out before, we did not have to account for the negative values the entropy production can take, and now we do, because in computing the mean there is a σ multiplying each probability, and on the branch from minus infinity to zero it is negative. Does this make sense? Great. And of course, what we always want to do is go back to the transformed random variable described by the new distribution Q(σ). So, algebra: just put Q(σ) in and you recover the expression on the screen. The function that appears is the hyperbolic tangent. Do we care about it? Not really, but mathematics. Did we recover the expression? Yes, we did, so everything is good. I will make these slides available on Slack.

Of course, in the same way you can compute things like the second moment, and one of the coolest things is this: if you compute the nth moment and n is even, the moment computed with respect to P is the same as the moment computed with respect to Q. When there is no bare σ term, only σ to an even power, the average with respect to P equals the average with respect to Q. The identities are collected below, together with a quick numerical check.
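Collected in symbols (reconstructed; the hyperbolic tangent form is the one referred to on the slides). Splitting the mean and substituting the strong fluctuation theorem gives

$$\langle \sigma \rangle_P = \int_{0}^{\infty} \sigma \left(1 - e^{-\sigma}\right) P(\sigma)\,d\sigma = \int_{0}^{\infty} \sigma \,\frac{1 - e^{-\sigma}}{1 + e^{-\sigma}}\, Q(\sigma)\,d\sigma = \left\langle \sigma \tanh(\sigma/2) \right\rangle_Q,$$

while for even n the sign never flips, so

$$\langle \sigma^n \rangle_P = \int_{0}^{\infty} \sigma^n \left(1 + e^{-\sigma}\right) P(\sigma)\,d\sigma = \langle \sigma^n \rangle_Q.$$

A minimal numerical sanity check of these identities. The Gaussian test distribution is my choice for illustration, not from the lecture; a Gaussian with mean μ and variance 2μ satisfies the strong fluctuation theorem exactly:

```python
import numpy as np

# Toy check of the P -> Q trick. A Gaussian with mean mu and variance
# 2*mu satisfies P(sigma)/P(-sigma) = exp(sigma) exactly.
mu = 1.5
s = np.linspace(-60.0, 60.0, 1_200_001)        # truncated real line
ds = s[1] - s[0]
P = np.exp(-((s - mu) ** 2) / (4 * mu)) / np.sqrt(4 * np.pi * mu)

pos = s >= 0                                   # restrict to sigma >= 0
Q = (1 + np.exp(-s[pos])) * P[pos]             # Q = (1 + e^-sigma) P

print((Q * ds).sum())                          # ~1.0 : Q is normalized
print((s * P * ds).sum())                      # 1.5  : <sigma>_P
print((s[pos] * np.tanh(s[pos] / 2) * Q * ds).sum())  # 1.5 : <sigma tanh(sigma/2)>_Q
print((s ** 2 * P * ds).sum())                 # 5.25 : <sigma^2>_P
print((s[pos] ** 2 * Q * ds).sum())            # 5.25 : <sigma^2>_Q (even moment)
```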
So now we have one more thing, and then things are going to get more familiar. This one is not super easy to prove. Again, the gist of this lecture, when we present this kind of derivation, is not to insist that you should know how to derive the joint strong fluctuation theorem we are about to see, but to show how fluctuation theorems are used to derive different bounds in thermodynamics, in this case thermodynamic uncertainty relations. So, remember the strong fluctuation theorem we had? It involves one random variable, the entropy production. There is a class of complementary fluctuation theorems that describe the statistics of joint random variables, and in our situation the joint random variables of interest come in pairs, mostly the entropy production and another current.

And the reason we are interested in this is, again: how do we characterize non-equilibrium behavior, a non-equilibrium steady state, to start with? Gulce, we have another question, from Francesco. Francesco, yes. Why should the average over Q be any better than the average over P? Why do you prefer the average over Q to the average over P? Well, when I started this derivation I told you there were two things: a mathematical trick, and, as its second part, an intuition. We are using time reversal symmetry, which allows us to speak about the statistics of the entropy production and the values it can take. The entropy production is a random variable that can take both negative and positive values, and we want to treat it this way because there is time reversal symmetry, because it is interesting, and mainly because it works in the proof. I take it in the same spirit as Shannon's typical set: sometimes you prove things a certain way because it works, with an intuition behind it, of course. Shannon had an intuition about typical behavior; we have an intuition about how the behavior of this random variable should look. So what we are doing is modifying the probability distribution so that the entropy production random variable can take only positive values. Does this answer your question, Francesco? OK. Thank you, Francesco. Did I already talk about this? OK, send me a message on the Slack channel if not. Do you have an insight on this, David? You don't? OK, he doesn't; we can talk about it later.

So yes, we talked about the strong fluctuation theorem, and I emphasized that now we are going to take care of this. We had non-vanishing currents to characterize non-equilibrium behavior, and non-vanishing currents generate entropy production. I also keep saying that in stochastic thermodynamics, the core thing you are interested in is not entropy production as a thermodynamic quantity by itself, but the statistics: the probability distributions, the shapes of those distributions, how they behave and how they do not. That is what we care about. So the reason people came up with joint strong fluctuation theorems was to understand the behavior of the probability distributions characterizing the joint behavior of currents and the entropy production that results from those currents.

The proofs of the TURs: I don't know of any one of them, maybe you do, that is actually intuitive. They tend to be really messy. This is by far one of the simplest, cleanest ones. In particular, the original ones, and many of the subsequent ones, use what is called level 2.5 large deviation theory, and it is a horrible mess; you can see how Gulce is reacting to it. We are not going to talk about any of that. There is a phrase sometimes applied to certain scientists: they are called algebraic terrorists, able to come in, just spew this stuff, and blow everybody away. And that is what these proofs almost are.
It is amazing: you go through pages and pages of algebra, using all kinds of mathematics you will not have encountered before, to finally, at the end of the day, magically, who knows what the intuition was, end up with these things like TURs. This is one of the simplest. Believe it or not, there it is. Thank you very much, actually, because if I had said something like that myself it would have been risky; he is the important person in the room here, exactly. So yes, David is right. They are all messy, not really likeable. One of the most beautiful ones, I think, is the derivation from statistical inference, from the Cramér-Rao bound, because there you treat the fluctuations in the entropy production as if they were errors in your statistical estimator. Even without inserting any thermodynamics, you can find the thermodynamic uncertainty relation. And the way this precision term made sense to me for the first time, after months of working on TURs, was the realization that this variance term is sort of justified by the Cramér-Rao bound itself. Sometimes, to get at the main old idea, the profundity underlying something, taking one class or two classes is not enough; it may not even work after six months on it. Sometimes someone out there, in an ongoing research project, really needs to come up with something, like the Cramér-Rao bound here, that renders things more intuitive and profound. So don't expect to understand everything in this class, and don't expect to understand most things about the TURs. Expect to understand the main idea and some of the points I am going to make at the end of the lecture; that's all. How many minutes do I have? Oh, I cannot see. A quarter, 15 minutes. OK, Max, I can do it, we can do it.

So now, believe me when I tell you that these joint fluctuation theorems hold. I am going to post to Slack the article that derives the strong fluctuation theorem and the joint strong fluctuation theorem, but for now believe me when I tell you that this is true; it is a fluctuation theorem that works perfectly. What you are doing now is running the movie forward, but keeping track not only of the entropy production, but also of the currents associated with that entropy production. And then, by the same logic as the other fluctuation theorems, it is always exponentially more probable that, going in the forward direction, you see this increase in the entropy production together with this value of the current, and the exponent is given in terms of the entropy production. This is satisfied if you make these assumptions on the slide. This is exactly why we have these conditions for the FTUR. I told you that the FTUR works for any physical system that satisfies these conditions, and the reason we can list those conditions at all is that the derivation of the FTUR invokes a fluctuation theorem that imposes them; the statement is written out below. So, we jumped back.
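Written out (a reconstruction; the lecture uses φ, sometimes j, for the current paired with the entropy production σ), the joint strong fluctuation theorem reads

$$\frac{P(\sigma, \phi)}{P(-\sigma, -\phi)} = e^{\sigma}.$$

Marginalizing numerator and denominator over φ (the factor e^σ does not depend on φ) recovers the single-variable strong fluctuation theorem P(σ) = e^σ P(−σ).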
So anyway, if you satisfy these conditions, one thing you know is that if you run the trajectory backwards and you are trying to understand how the entropy production behaves, you need to put this minus sign in front of the entropy production value that you obtain when you run the movie forwards. If you satisfy these conditions, you satisfy this, and the same holds for the currents. That is not so surprising because, as we discussed but did not prove (it is a really nice exercise for you to do), entropy production is itself a current, so you can just write this and it is automatically satisfied.

So now, what we are going to do: I am speeding up, but that is fine, I think, because we have everything we need up to this point. Remember, we introduced this new, modified probability distribution Q, describing the same behavior as the entropy production, but we introduced this Q for one random variable only. It was the entropy production, yes? Now do the same thing for the joint probability distribution; I have the steps carried out for you here. I have a typo on this slide: sometimes we use φ, sometimes we use j, and these φ's are j's; I am going to fix them before I upload the slides to Slack. So we are defining a new probability distribution, and what we are doing is carrying out exactly the same steps that we just did for the single-variable Q distribution characterizing the entropy production, now for the joint random variables. And if you remember, for the mean we had this hyperbolic tangent term: the mean taken with respect to P transforms into the mean of the variable times tanh(σ/2) with respect to Q, and the second moment with respect to P is the same as the second moment with respect to Q. Same thing over here, except now we are doing it for the currents instead of the entropy production. Just look: this one is equal to this one; there is an implicit φ over here, and yes, it is over here, sorry, I did not even see it. They are identical.

Now, one thing: what is the form of the TUR that I erased? It includes the variance of the current, right? The mean square of the current and the average entropy production. By this formulation we can compute the average entropy production and the second moment of the current, and from those the variance of the current. So we have all the materials, the ingredients we need to see whether we can write down the TUR. Now, what we have to do next involves a big jump: there are a lot of boring mathematical steps that we are not going to carry out, and I am giving them to you as an exercise. Two things about exercises: if someone gives you something as an exercise, either it is really easy and boring, or it is really not easy but still sort of boring; otherwise they would give it to you as a question or a problem. This is the second type of exercise. So, we listed all the quantities we need; the transformed distribution and the moment identities are collected below.
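Concretely (same reconstruction caveat as before), define

$$Q(\sigma, \phi) \equiv \left(1 + e^{-\sigma}\right) P(\sigma, \phi), \qquad \sigma \ge 0,$$

and the same splitting-and-substitution steps as in the single-variable case give

$$\langle \phi \rangle_P = \left\langle \phi \tanh(\sigma/2) \right\rangle_Q, \qquad \left\langle \phi^2 \right\rangle_P = \left\langle \phi^2 \right\rangle_Q, \qquad \langle \sigma \rangle_P = \left\langle \sigma \tanh(\sigma/2) \right\rangle_Q.$$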
The second moment, the mean square, the variance, the mean of the entropy production, the mean of the current, right? One thing to always keep in mind is that we want a thermodynamic uncertainty relation: we want to bound the variance, or rather the variance-to-mean-square ratio. So, and David, please feel free to chime in any time you want to comment on how proofs are done, you are basically playing around to see whatever you can bound. You are always looking for a bound. And one thing you can do, since the variance is expressed mathematically in terms of the mean square and the second moment, is to take the mean, square it, and bound it by this product of terms. How do you do that? Do you know what this is called? Exactly so, thank you, Carlos: the Cauchy-Schwarz inequality. So you bound this one, and then, after that, you bound the term over here, and that is your exercise: you need to show that this step is possible. Now that I have about eight minutes left we are not going to talk about it, but you can do this, and I am going to send a proof in any case, let's say on Sunday, so that it is realistic. And once you do this, it is a mathematically marvelous thing: you try to come up with bounds, and if you are lucky you find something really succinct and pretty, just like this TUR.

So I told you there are two exercises. That was the boring, hard one; this is the boring, easy one. Once you have the bound, you can use all the ingredients presented during this lecture, collected on this slide, to arrive at this term over here. You know how to write the hyperbolic tangent; we did it. Then there are basically just two mathematical operations that you need to carry out carefully, and you get this one. What is it? It's a TUR. That is how you derive a thermodynamic uncertainty relation. The full chain of inequalities is sketched below.

So now let me review what we have done so far and what is important for you to learn, because there is so much material that it would be crazy to expect you to take all of it with you; it is not realistic. So what is important? Zero: the difference between non-equilibrium and equilibrium in terms of currents. Know how to write down a current, how to express the current mathematically. I am saying this as the TA, not myself. Then know what the assumptions are for the TURs, for example the non-equilibrium steady state, the conditions for the FTUR, and so on, and understand why these assumptions matter. And then, I will send the complete slides and also the derivations on the Slack channel: understand how you use fluctuation theorems as resource constraints, as expressions that very formally describe resource constraints, to derive different sets of resource constraints. That is what we are doing here. Always think in terms of resource constraints. Those are the take-home messages. That's all.
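The chain assembled in symbols, with a caveat: the final form below is my reconstruction, assuming the target is the fluctuation-theorem uncertainty relation (FTUR) of the Hasegawa and Van Vu type, which matches the ingredients used in the lecture. The Cauchy-Schwarz step is

$$\langle \phi \rangle_P^2 = \left\langle \phi \tanh(\sigma/2) \right\rangle_Q^2 \le \left\langle \phi^2 \right\rangle_Q \left\langle \tanh^2(\sigma/2) \right\rangle_Q = \left\langle \phi^2 \right\rangle_P \left\langle \tanh^2(\sigma/2) \right\rangle_Q.$$

The hard exercise is the lemma

$$\left\langle \tanh^2(\sigma/2) \right\rangle_Q \le \tanh\!\left(\tfrac{1}{2} \left\langle \sigma \tanh(\sigma/2) \right\rangle_Q\right) = \tanh\!\left(\tfrac{1}{2} \langle \sigma \rangle_P\right),$$

and the easy exercise is combining the two with ⟨φ²⟩_P = Var(φ) + ⟨φ⟩_P²:

$$\frac{\langle \phi \rangle^2}{\mathrm{Var}(\phi) + \langle \phi \rangle^2} \le \tanh\!\left(\frac{\langle \sigma \rangle}{2}\right) \quad\Longleftrightarrow\quad \frac{\mathrm{Var}(\phi)}{\langle \phi \rangle^2} \ge \frac{2}{e^{\langle \sigma \rangle} - 1}.$$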
One more thing, and then I am going to finish. This is something you are not responsible for in the exam or in this course. We discussed that there is another method of deriving TURs; it might or might not come as a surprise. What I mean is a derivation of a TUR from statistical inference. It was done in 2020, and then again in 2021, and we are in 2022: this is ongoing research, OK? In the first five years or so that people worked on TURs, I think they also did not know why we work with this kind of precision term, the variance-to-mean-square ratio. Where is it coming from? But there is a derivation of TURs from the Cramér-Rao bound which basically tells you, and you should probably Google it by yourself, where this precision term comes from. There is an interesting history of uncertainty relations that can be derived from the Cramér-Rao bound, and it shows how statistical inference can be useful. The good old Heisenberg uncertainty relation, for example, can be derived from the Cramér-Rao bound, even though it is really different from thermodynamic uncertainty relations. And I have been thinking that it is really unfortunate to have thermodynamic uncertainty relations named as uncertainty relations, because they do not present uncertainty in the sense that Heisenberg's uncertainty relations do, right? But still, using statistical inference you can derive both. So this is the part I am not going to talk about, but I am going to upload it to Slack, and if you have a problem with the intuition, we can discuss it. This is out of the course, not in the course material; it is a surprise. But if you want to understand more about resource constraints, how we use them, and how we derive uncertainty relations, we can talk about them. So that's it. Thanks.

Can I make a comment? Of course. Oh, you are going to tell me whether I did badly or not, right? From yesterday? No, of course not. The comment is about the Cauchy-Schwarz inequality and the inequalities on the differences. I just want to say that if the integral were in terms of P instead of Q, it would not be at all obvious that you could make these kinds of inequalities. Exactly. In terms of Q it looks like a measure; in terms of P it would have been a really ugly integral. Exactly, just a comment. Exactly: this is a perfect comment on why we use some mathematical tricks, because they are useful precisely for making the proof go through, and it is exactly as you saw it. OK, thank you very much. Thank you. See you at 4 p.m.