Okay, so this is where we left off, right? We talked about this heat generated. It's unfortunately written as delta S, but that's how it was derived in the year 2000 paper, and we're keeping that notation because we're going to study that paper. So David described it. Okay, we talked about degrees of freedom, we talked about the main assumptions, and we introduced the mathematical basis. So now, a few more things about the forward process, as we did for the derivation of the thermodynamic relations, and the backward process, or reverse process. And then we're ready for a derivation of the detailed fluctuation theorem from the Hamiltonian formalism. Now, this is something that does not make me the happiest person on earth, but we're going to be looking at the statistics of this distribution, okay? Normally what we do is look at the statistics underlying the marginal probability distribution of one relevant thermodynamic quantity, such as the entropy production, right? That's what the fluctuation theorems were about, or of multiple random variables, as we did for the strong joint fluctuation theorems. Again, in the most recent lecture we said, oh, we can actually derive the FT for the joint statistics of the entropy production and the currents. Now we're going to attempt, and hopefully succeed at, a derivation of a fluctuation theorem that considers this conditional probability, okay? It's basically a joint probability distribution, conditioned in the following way. Let me not try to say it in one sentence, or it's going to be super messy, okay? So let's remember the picture. This is the initial microstate, okay? Your dynamical evolution starts from this initial state. We're only considering the SOI; these are some of the assumptions. There's the trajectory, and then you stop evolving. This is time t = 0, and this is time t = tau, okay? And this is what we denote by this bold z vector, okay?
This is the final microstate, z_B; that's the final state you end up in. And what this joint distribution is defined over is the following: this process takes you to z_B while generating this delta S of heat. I think it's fine to call it the heat generated rather than the entropy generated, so that we keep the distinction between how we defined the entropy production last week and this heat, which is the change in the expected Hamiltonian of the heat bath, okay? So basically, it's a joint distribution over z_B, the final state, and delta S, the heat generated, conditioned on starting from an initial state z_A in your phase space, okay? So we're not just interested in, for example, P of delta S, or P of sigma, the entropy production, as we did last week. Now you might ask: why? What's your problem? This is really ugly, right? Why do we do this? Well, in the paper, I swear it's written in the paper, so it's not my lack of experience and so on: Chris Jarzynski, the person who derived this, writes that sentence explicitly. This is quite a peculiar form of probability distribution, and there is no reason other than the fact that it is useful that we use this distribution to derive a fluctuation theorem. He literally says it's peculiar, and that we use it just because it's useful. Again, it's a mathematical tool that doesn't exactly satisfy our intuitive mathematical preferences, but it works, okay? We're going to see it. And on top of that, what happened in that year 2000 paper? Well, you can manipulate this a bit algebraically, and you can derive the detailed fluctuation theorem for the entropy production just based on this. That will be a homework question, okay? So, yeah, that's what we're going to be talking about.
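For concreteness, the peculiar object and the theorem it leads to can be written out as follows; the exact symbols here are my reconstruction of the lecture's notation (z for SOI microstates, the asterisk for momentum reversal, plus and minus for the forward and reverse processes):

```latex
% Joint distribution over the final SOI microstate z_B and the generated
% heat \Delta S, conditioned on the initial SOI microstate z_A:
P^{+}\!\left(z_B,\, \Delta S \,\middle|\, z_A\right)

% The detailed fluctuation theorem to be derived relates it to the
% reverse process, with momenta reversed (the asterisk):
\frac{P^{+}\!\left(z_B,\, \Delta S \,\middle|\, z_A\right)}
     {P^{-}\!\left(z_A^{*},\, -\Delta S \,\middle|\, z_B^{*}\right)}
  \;=\; e^{\Delta S}
```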
And again, we defined the thermodynamic process and the driving protocols, right? As we did for the fluctuation theorems and the TURs, let's recall that we have a forward thermodynamic process. This is basically how you establish or break contact with the thermal reservoirs as your system dynamically evolves, and also how lambda, this driving protocol, behaves as the system evolves: how you manipulate the system while it evolves, okay? So this is the arbitrary forward process, which I'm going to denote pi plus. Right now I'm just fixing notation so that we'll be able to understand what's going on in the next few slides, okay? So in this process that I just described, you start from this z_A and you run the movie forward; we denote it by pi plus, the forward process. And it has a time-reversed counterpart, which is the reverse process. In the reverse process, you know how you established or broke thermal contact between the system and the reservoirs; you still do it, but you take the time reversal of it, okay? So you still establish or break contact, but in time-reversed order. And the same goes for how you manipulate the system, for example by applying an external magnetic field, okay? Okay, so this is why I asked if you remember the general, universal form of a detailed fluctuation theorem. Again, why do I keep erasing it? I have no idea. This was the thing we used, right? The strong joint version of a fluctuation theorem, in the derivation of the TUR. We should know it by heart. Okay, so he derives another fluctuation theorem, but now considering this peculiar probability distribution that I just introduced. And we're going to try to imitate that, okay?
So basically, it tells you the same kind of thing. What does it tell you? Do we understand it? Can someone describe it in one or two sentences? The probability of going from z_A to z_B while producing some delta S, divided by the probability of going backwards while producing minus delta S, is e to the delta S. Okay, that's perfect. And delta S is not the entropy production as we had last week, but more like the heat generated, okay? One thing I realize I didn't put here as part of the notation: when you take the time reversal, you reverse the momentum coordinates. The notation I just pointed out, this asterisk that we're going to be using, means that you take this collection of coordinates and reverse the momenta, in this process pi minus that we just introduced, which is the reverse process we described. Okay, is everything fine? How are you doing? Okay, yes, the star? The asterisk I just described, this one, right? It's basically notation we introduce so that we don't have to say, every time, that in the time-reversed process we reverse the momenta. So we introduce this z vector in this way, right? Okay, so this is the star, or asterisk, or whatever you want to call it. Yes? Could you develop and explain a bit better what he answered? I did not catch the idea. Okay, so to do it even better: you were here in the classes last week, so I'm going to start from there, okay? This is the familiar one that we had, the one that David showed, the universal form that was derived, I suppose, by Esposito and Van den Broeck. One thing that I should actually talk about, thank you very much for asking this question. Let's talk about it, and then I'm going to emphasize the assumptions that we're making while deriving this kind of thing.
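Since the asterisk notation does a lot of work in what follows, here is a tiny sketch of it in code; this is a minimal illustration of my own, not anything from the paper. The map keeps positions, flips momenta, and is its own inverse:

```python
import numpy as np

def time_reverse(z):
    """The 'asterisk' map: keep the positions, flip the sign of the momenta.

    A phase-space point z is stored as (q_1, ..., q_n, p_1, ..., p_n).
    """
    q, p = np.split(np.asarray(z, dtype=float), 2)
    return np.concatenate([q, -p])

z = np.array([1.0, 2.0, 0.5, -0.3])           # (q1, q2, p1, p2)
z_star = time_reverse(z)                      # positions kept, momenta flipped
assert np.allclose(time_reverse(z_star), z)   # applying * twice is the identity
```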
I think that would be a great start, but yeah, okay. Could you tell me what this tells you? So it relates the probability and the time-reversed probability through the entropy production sigma. Okay, so the intuitive way we described it was: you have this dynamical process, this evolution running forward in time, something like this, right? And when you go from here to there, you're generating some entropy production, sigma, positive-valued, okay? Well, depending on the sign of sigma it might change, but let's say it's positive. And now we run the movie backwards, okay? So, just for mathematical consistency, you can go back to Esposito and Van den Broeck. What you do is compute the probability of observing a corresponding decrease in entropy over the same dynamical process, but now run backwards, okay? If you compute that probability and take the ratio, what you get is that it's exponentially more probable to see an increase in entropy when running the movie forwards than to see a decrease in entropy when running the movie backwards, okay? Same logic over here. Now, the thermal reservoir, yeah. Do you want to explain it again, maybe? Okay, yeah, so how do I best do this? One way to derive it, you can find in Takahiro Sagawa's PhD thesis, which was turned into a book; I think it's chapter four of that book where he goes through this. Basically, and I'm not sure I can do this on the fly: this is the probability of a heat flow, an entropy flow to the reservoirs, given that you start in state z_A. Then you do the time-reversed process, and you ask: what is the probability of achieving the negative of it?
So in other words, this is the probability that, if you change all your engines to run backwards in time, you actually see the negative process. Can I just ask something? I actually told you in this lecture what it is, I defined it, but just as a question: how do you actually time-reverse the thermodynamic process? Okay, yeah, so basically you visit the same successive set of states, but drawn in reverse, and you also establish or break thermal contact with the reservoirs; if you established or broke thermal contact, you still do it, but starting from the time-reversed distribution, okay? Just wanted to check, okay, great. So this is the key: we're talking about trajectory-level entropy. Remember that at the trajectory level, what's called the stochastic entropy of any state z, for any distribution, is just minus the logarithm of its probability. So if you take this formula, you can clear the z_A and the z_B by multiplying by P of z_A up here and dividing by P of z_B down there. Then you take the log of both sides, let me do it this way. What you can do with that logarithm is, if you add and subtract these two terms, they clear out the conditionals. What you instead get is a joint: P of Q and z_A over P tilde of minus Q and z_B, and what the ratio now equals is the entropy production rather than the heat flow. This all comes from the fact that, sorry, this is the change in the stochastic entropy: that is my definition of delta S here, and then the EP is equal to delta S minus Q. So this is the change in the stochastic entropy, and the EP is that change in entropy minus the heat flow. You plug it in here and you basically get that formula over there. This one. But Chris Jarzynski missed a few tricks, frankly, in his paper.
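A sketch of the manipulation just described, with symbols reconstructed from the lecture (s is the stochastic entropy, Q the heat flow, sigma the entropy production; sign conventions follow the lecture's "EP equals delta s minus Q"):

```latex
% Trajectory-level (stochastic) entropy of a state z under distribution p:
s(z) \equiv -\ln p(z),
\qquad
\Delta s \;=\; s(z_B) - s(z_A) \;=\; \ln\frac{p(z_A)}{p(z_B)}

% Clear the conditionals: multiply the numerator by p(z_A) and the
% denominator by p(z_B) to form joint distributions,
\frac{P(Q,\, z_A)}{\tilde{P}(-Q,\, z_B)}
  \;=\; \frac{P(Q \mid z_A)\; p(z_A)}{\tilde{P}(-Q \mid z_B)\; p(z_B)} ,

% so the log-ratio picks up exactly \Delta s, turning the heat-level
% statement into one for the entropy production:
\sigma \;=\; \Delta s - Q .
```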
I'm sure of this, because you'll notice he still has an extra Z on the left-hand side of the conditioning bar. It was the first paper, and it's an important paper, but with that Z on the left-hand side one can't really just do this directly. It turns out there are other mathematical tricks, and I'll show some tomorrow in the context of quantum processes and density matrices, by which we can actually get rid of that extra Z on the left-hand side of the conditioning bar. Does that help? Okay. Helping is not the same as it being completely fixed, but it's more understandable now. Okay, great, thanks. Okay, just for historical reasons: the fluctuation theorems derived in the 90s just came with the name "fluctuation theorems", okay? They weren't called what we always call them now, detailed fluctuation theorems, DFTs, and so on. This is the paper, Jarzynski 2000, that derived the detailed fluctuation theorem. And I think, because he says so in the paper as well, it is the first paper that actually uses the word "detailed" for a fluctuation theorem, and it's exactly for one of the reasons that David mentioned: you have dependence on the initial and final states. So it reminds you of that very beautiful relation between initial and final states that we kept discussing last week, the detailed balance condition, right? So he started calling these detailed fluctuation theorems, and since then, even though researchers have derived so many different detailed fluctuation theorems, this is why we call something a DFT, okay? Okay, so now, one more thing before we start taking integrals, okay? Remember, this was the Hamiltonian that we wrote down: the internal system dynamics, which also includes this driving term here; if it has time dependence, that is due to the driving term itself, okay?
And then this is the reservoir contribution to the overall Hamiltonian, and this is the interaction Hamiltonian, okay? So one thing to take care of is that we are using Hamiltonian dynamics: reversible dynamics, or invertible dynamics, if you like. Again, this asterisk corresponds to the fact that, given the overall coordinates, you reverse the momenta, because this is how you mathematically define a forward process and its reverse process, okay? For all the systems we are considering, we assume time-reversal symmetry. It might be a bit tricky once you apply magnetic fields and so on, but there are still lots of systems that satisfy this. You have Hamiltonian dynamics of this kind, you're good to go, okay? Another reminder: this encodes a trajectory, for better notation. And, as you knew, this heat term that we're introducing is the change in the average Hamiltonian, the mean Hamiltonian; but I think that's not a big deal in this framework. One more thing. We kept emphasizing the definition of the thermodynamic processes, and we also defined, as formally as possible, what a driving protocol is. So again, this is the driving protocol for the forward process: the forward thermodynamic process is encoded in this set of functions, functions of time, okay? And this is the one that we use for the reverse process, the reverse driving protocol, okay? And in the following slides, when we're taking integrals, we're going to start using these hats. That is just to emphasize that, if you start from this initial state z_A in your phase space, all of these trajectories, all the collections of points you're labeling, are functions of the initial state z_A.
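Putting those pieces on one slide-style line; the symbols here are my reconstruction of the lecture's notation (x for SOI coordinates, y_i for reservoir coordinates, lambda for the driving protocol):

```latex
% Total Hamiltonian: driven SOI, reservoirs, and interaction term
H(z; \lambda_t) \;=\; H_{S}\!\left(x; \lambda_t\right)
  \;+\; \sum_i H_{R_i}(y_i)
  \;+\; h_{\mathrm{int}}(x, y)

% Time-reversal symmetry assumed for every term:
H(z^{*}; \lambda) = H(z; \lambda), \qquad z^{*} = (q, -p)

% The reverse driving protocol runs the forward one backwards in time:
\lambda^{-}(t) = \lambda^{+}(\tau - t), \qquad 0 \le t \le \tau
```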
This is something we use just to keep track of things mathematically, for formal reasons, no other reason, okay? And is this plus, or the SOI... let's see, no. Because if it were only the Hamiltonian of the system itself, the SOI, that described the changes in the states, the updates of the system's states, then yes. But we also have an interaction term, right? So we don't know, mathematically, exactly how the SOI coordinates evolve on their own; it's mostly not deterministic unless we make an assumption, like David said. Okay, so we talked about this, and remember that the pluses and minuses that we're going to use just indicate whether we're taking the forward protocol or the reverse protocol, okay? Apologies. Okay, step by step. What we wanted to do was the following, right? We want to talk about the statistics of this joint distribution, conditioned on the initial microstate, okay? Again, for no other reason than that it works formally and we're able to derive a fluctuation theorem. So we need to write down, mathematically, this joint distribution conditioned on z_A. One thing you can always do is make things complicated, okay? So here's the thing, let's try to keep track of this. This is the probability, described by the joint distribution, of reaching z_B from the point z_A with the process generating this heat, or Chris Jarzynski's own "entropy", delta S, right? Okay, so we know what it means. We used Kronecker delta functions before, in the first and second lectures on the TURs, to count the total number of transitions, so this kind of object is not new to us. We're basically doing the same thing here, except with Dirac delta functions, since the phase space is continuous: you introduce a Dirac delta function to encode the fact that you reach this state z_B, okay?
You reach it as the final state at time tau: you start running your thermodynamic process from the initial state z_A, and you reach a final state at tau. So this is just saying, again in a complicated, formal way, that z_B is your final state; that's the first Dirac delta, yes? Are we integrating over possible trajectories or over phase-space positions? I just didn't understand the dy times P of y. Wait, I didn't hear the second part. Are we integrating over possible trajectories? All right, so it's like this: you're integrating over dy, and we define y as the collection of these points, so it's almost like a path integral over trajectories. These are the... okay, let me go back to this slide and show you. This y, this is how we encode a trajectory. Of course, yeah, it's clear, right? It's basically the collection of the phase-space points of the reservoirs you're considering. So a concise way of saying it is that you're integrating over the reservoirs, or integrating out the reservoir effects, okay? So we talked about the first Dirac delta, this one, and we talked about what we're integrating over. The second Dirac delta is basically a twin of the first: it encodes, for this probability distribution, the fact that when you go from the initial state z_A to the final state z_B, you generate this delta S of heat, okay? Okay, now, one other thing we can do. Again, why not make things complicated if you want to make a proof, right? By the way, this holds all throughout this derivation; I also did the most recent derivation of this fluctuation theorem and of some uncertainty relations in the previous lecture.
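In formulas, the two Dirac deltas and the integral over reservoir initial conditions described above look roughly like this; this is a hedged reconstruction of the slide, with the bar denoting the hatted, z_A-dependent trajectory and p(y) the initial reservoir distribution:

```latex
P^{+}\!\left(z_B,\, \Delta S \,\middle|\, z_A\right)
 \;=\; \int \mathrm{d}y \;\, p(y)\;
   \delta\!\left( z_B - \bar{z}_{\tau}(z_A, y) \right)\,
   \delta\!\left( \Delta S - \Delta S(z_A, y) \right)

% \bar{z}_{\tau}(z_A, y): SOI component of the final phase point reached
% by running the forward Hamiltonian flow from (z_A, y) for a time \tau;
% \Delta S(z_A, y): the heat generated along that same deterministic path.
```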
If you have concerns about the derivations, you can go to David, because he's incredibly experienced; he knows, I think, better than me, better than us, how messy proofs can be, sometimes for no reason. This is, I think, a great example of that, okay? So please let me know if you feel lost at any point in time, okay? Okay, so one other thing we can do: instead of taking this integral over the collection of reservoir phase points, ranging from y_1 up to y_N, we can take the integral over the trajectory. Now, the way you do that, okay, let's again try to keep track of the first Dirac delta. What you see, and there should be a hat over here, is that you now need to encode the information that you start from the initial state z_A and go to the final state z_B if you run the forward dynamical process through the trajectory gamma. And this one is basically the same thing. And remember the definition of gamma: what we're doing is replacing some components, and gamma is something that encodes all of these phase-space points of the whole universe, SOI plus reservoirs. Okay, oopsie, where am I? Yeah, this one. Okay, so now, one other thing we'd like to do is introduce another character, because why not? Now we want to talk about this gamma prime. Can you read this to me, what does it mean? Gamma prime, this is at tau. When we introduce this, we say: don't mind the hats, they're basically something we introduce to ensure that all of the trajectories are functions of some initial state that we know about, okay? So what this says is: at time tau you arrive at a final state, and that is your gamma prime, the final state encoded in the phase space of the SOI. I hated it. I hated it when I read that. I'm sorry, yeah. Oh, I'm sorry, Chris. Okay, yeah.
Gamma prime is just a single phase-space position. Yeah, thank you, thank you. So that's just saying that gamma prime is the final position, and it's a function of where you start. That's all it's saying. As opposed to the one up above: that integral, right there, that's over an entire trajectory. The capital one? Which one? It doesn't, yeah, okay. I think I could be clearer with that; you're right. Let me think about it: do I use it anywhere? I think we can just get rid of it; it's cleaner to get rid of it. Yeah, thank you. Let me change it before I post it to Slack; note to self. Cool, thank you very much. Okay. I think this is the way that I use it, right? Yes. Let me just check the other ones as well. Yes, we can safely take it like that; there is nothing that would hurt us in the previous slides or the upcoming slides. Yes, let's take it like that. Yeah. Is it gamma or t there? No, no, the gamma is there. No, I mean, can you write again the correct way? Oh, no, no, no, I'm sorry, that was a note for myself. Sorry, sorry. Yeah. Okay. I should have done that. Yes, exactly. Yeah, sorry. I couldn't see any paper or anything like that; this is just because he mentioned that there is a capital-T dependence. Should we have it? No, we don't have to have it. Okay, now I'm looking at your faces, because we have one more page left and then we're good. But, yeah. Yes? Yes. Or no. How do you feel about it? It's uglier than CTMCs, right, in this form? Exactly. Please, yes. So, what Chris is doing in this paper: do you folks have access to the paper, by the way? I can send it to Slack. Yeah, yeah. I was kind of assuming that people would already have read it, and that this would just be explaining what it is.
But, let's take two minutes. So, here's the situation. You start, let's say, with the SOI in one specific state, and the bath starts in its Boltzmann distribution. So the SOI is right here, but the bath can be anywhere in that distribution. You take a sample of the bath according to the Boltzmann distribution, and then this right here gives you your entire system, and you evolve to somewhere where the bath is over there and the system is here, very schematically speaking. That whole thing is a trajectory, gamma. What Chris then does, he says: okay, let me run the reverse process in the following strange way. I'm going to take this point, z_B, which was generated under the forward trajectory, but I'm going to evaluate its probability as though it were drawn from the exact same Boltzmann distribution as at time zero. Because of the interaction term between the system of interest and the bath, the bath is no longer distributed according to the Boltzmann distribution at the end of the process; it has evolved somehow or other. Chris is saying: let me counterfactually consider a reverse process where the bath is in the Boltzmann distribution again, but I'm evaluating it for this z_B, which in fact was not generated by the Boltzmann distribution; it was generated by the forward process. Ignore that, run the whole thing backwards, and that's Chris's P tilde. So it's a strange beast that he's actually evaluating there. The way that it becomes crucial that we've actually got Hamiltonian dynamics is that when he does these integrals, these probability distributions right here: because it's Hamiltonian dynamics, the Jacobian for changing the probability density function from going forward to going backward is one. Can you keep it there? Okay, let's remember this, because we're going to use it in the third step of the derivation, going from the third to the fourth step, okay?
Yeah, so this delta S right here is going to reflect the change: it's going to be the Boltzmann distribution evaluated at z_B minus the Boltzmann distribution evaluated at z_A. That's what causes this delta S when you do the coordinate transformation. Whereas when you go from the P of y to the P of y prime, for the actual trajectories themselves the Jacobian is one; that's the only thing that happens. So this entire term here is just due to this weird move with the Boltzmann distribution at the end of the process being evaluated, counterfactually, at z_B; for the actual trajectories themselves, the Jacobian is just one. Okay, does that help people? The reason for even describing this is that it's not the easiest paper in the world to read, but if you keep these kinds of things in mind about exactly how he generates the reverse process, that will be very, very useful as you read through the paper. Okay, while we're at it: David described how we get this term over here, right? What we're doing is taking this step over here and writing it in terms of, not P of y, but P of y prime. This is the final-state description, okay? We're collecting the points at the final state, okay? So I think it should be easy; we're not changing anything else, right? Take this Dirac delta just like that, just like that, just like that. The only change is this guy: we write P of y in terms of P of y prime, and that's why we get an e to the delta S term over here. Five minutes, four minutes, five minutes? Well, we're into the coffee break. Really? Okay, I'm going to be super fast. I will upload the slides to the Slack channel, but yeah, exactly, these are just the last two steps of the previous slide. So one more observation: just because of what David told you, Liouville's theorem, the Jacobian is unity, you can make this change of variables.
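Both facts David just used, phase-space volume preservation (Liouville's theorem, unit Jacobian) and time-reversal symmetry, are easy to check numerically on a toy Hamiltonian system. This is purely an illustration of my own, with a leapfrog-integrated pendulum, not anything from the paper:

```python
import numpy as np

def leapfrog(q, p, dt, n_steps, dV):
    """Symplectic (leapfrog) integration of H = p^2/2 + V(q)."""
    p = p - 0.5 * dt * dV(q)                  # opening half kick
    for _ in range(n_steps - 1):
        q = q + dt * p                        # drift
        p = p - dt * dV(q)                    # full kick
    q = q + dt * p
    p = p - 0.5 * dt * dV(q)                  # closing half kick
    return q, p

dV = lambda q: np.sin(q)                      # pendulum: V(q) = -cos(q)

def flow(z):
    """Time-tau Hamiltonian flow map z(0) -> z(tau)."""
    q, p = leapfrog(z[0], z[1], dt=0.01, n_steps=500, dV=dV)
    return np.array([q, p])

def star(z):
    """The asterisk: keep position, flip momentum."""
    return np.array([z[0], -z[1]])

z0 = np.array([0.7, -0.3])

# 1) Liouville: the Jacobian determinant of the flow map is 1, so
#    probability densities transform trivially forward vs. backward.
eps = 1e-6
J = np.column_stack([(flow(z0 + eps * e) - flow(z0 - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
assert abs(np.linalg.det(J) - 1.0) < 1e-6

# 2) Time-reversal symmetry: flip momenta, evolve forward again, flip
#    back, and you recover the starting point ("run the movie backwards").
assert np.allclose(star(flow(star(flow(z0)))), z0, atol=1e-8)
```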
If you don't believe me: this, again, is the time reversal, where you change the sign of the momenta in the generalized coordinates; this is the final state, the gamma prime; and this is the other kind of time reversal we described, in the sense that you're not changing the sign of the momenta but literally reversing the dynamical process in time. So it's an interplay of just reversing things. If you carry out that operation, you will see that this is actually the same. It means that your initial distribution and final distribution have this interesting connection, okay? And by using that, you rewrite this integral that we see in the second step. I will put everything on the Slack channel, okay? And then what you do is take integrals, okay? You're just taking integrals, nothing else, nothing profound. Once you understand how you reverse things, this is just mathematical, algebraic work, okay? And by doing it, you get the first detailed fluctuation theorem ever, okay? So one more thing, okay? This is something that I'm going to upload to the Slack channel as well; maybe David, who has great insight on this topic, will discuss it. People have actually formalized dissipation in terms of information-theoretic quantities, the mutual information between the system and the bath, okay? And this is something that gives you insight into the irreversibility of the SOI, as we discussed. It basically has information-theoretic roots, okay? So: Slack channel, that's all. Yeah, thank you. Thank you.