Thanks. Recording in progress. Okay, so in the last hour, at least six people asked me about homework, grading, and the final exam, so here's what's going on. We posted the solutions to homework one on the Slack channel, so you can all compare and contrast. Homework two and the final: we're gonna have the final on Friday, and we're going to post the homework two solutions on Thursday. So it's a bit tight, but here is some information about that. Homework two is gonna include three questions. It's not gonna be super hard, it's not gonna be super easy; it's gonna be like homework one, but more familiar and more conceptual. I'm not gonna ask about full counting statistics; the reason I put it in homework one was just to see which tools you'd use. In the final, there are gonna be four questions. Please remember the times where I pointed something out and told you, "I'm not speaking as Gilger now, I'm speaking as your TA, and you should know this by heart," because I'm going to ask questions about that. Expect questions similar to the homeworks. Only one question is gonna be different; I think it might require some craft, but it's not gonna be the highest-marked question, so you can feel comfortable with that. And you can interrupt me or message me any time. I don't care if it's 3 a.m. and you have a question, just send me a message and I will reply, okay? So don't worry, everything's gonna be great. So was there a question from the chat that I should respond to?
Yeah, so there's a question in the chat. Just to make sure I got the right interpretation: is the quantum measurement a particular way to open our system, since without it the dynamics is purely Liouville? We haven't gotten to anything about open systems explicitly yet. It's just a handy formalization of what a measurement means, rather than talking about eigenstates and eigenvalues and things like that. So it's an alternative. The first set of slides hopefully conveyed the lesson that talking in terms of quantum measurement operators is exactly the same as what people have already seen in terms of eigenvalues, eigenstates, and the Born rule, just reformulated mathematically. So definitely, without the measurement, the dynamics is Liouville? Without the measurement, the dynamics is unitary, yes. Okay, ready for some more rock and roll. Let me see if I can get this thing out of my way. Okay, so given all that, we're going back to this. Remember, this is what I presented at the very beginning of today's lecture, and what Gilger went through in detail yesterday: we have a system of interest and a bath, and at the end of the day we're only paying attention to the system of interest. We don't get to see the bath. That's in essence the physical distinction between the two. And we have to worry about how the system of interest evolves. In quantum mechanics, for us to be able to say anything interesting here, rather than just an overall unitary dynamics, we must talk about doing measurements that are only on the system of interest, not on the bath. Everything I've presented so far treats the measurement operator as a measurement on the state of the whole system. But here we have a joint system, so the measurement, taking what I've shown so far literally, would be a measurement on the joint state of the system of interest and the bath.
But we want to have a measurement that's only applied to the system of interest, where you marginalize out over the bath. How do we do that? This is a very, very crucial idea: you do it with what are called partial traces. So let's say that we have two systems, A and B, the system of interest and the bath. The joint basis states are tensor products, a_i ⊗ b_j, where the a_i are the states of the first system and the b_j are the states of the second. Because we're uncertain of the joint state, we also have a density operator, but it's a density operator over the joint states of the two systems: rho_AB. It's not just going to be a rho_A and a rho_B separately; there's a rho_AB. But as I was just alluding to, we want to say what happens if we measure just system A. The way you do this, a convenient shorthand, is what's called a partial trace. Basically, you do what the name sounds like: you don't do a full trace over A and B, you instead trace out just system B. Okay, rather than going through the formal definition of a partial trace, it might be easiest to just work through an example. So if A and B are both spins, so each can have two possible states, let's say that they are entangled in what's sometimes called a Bell state: a superposition of both spins up and both spins down, each with amplitude one over root two. That is not a product density matrix, but it is a pure state, and the way to confirm that is that the trace of rho squared equals one. But let's say that instead you have a classical mixture: with probability one half both spins are up, and with probability one half both are down. That one is mixed; the trace of rho squared is equal to one half. Oh, it's not working again, damn it. Oh, come on guys, this is not fun anymore.
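The purity check described above, computing Tr(ρ²), can be sketched in a few lines of numpy. This is a minimal illustration of the two-spin example, not code from the lecture:

```python
# Minimal sketch: purity Tr(rho^2) distinguishes pure from mixed two-spin states.
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Product pure state |up,up><up,up|
uu = np.kron(up, up)
rho_product = np.outer(uu, uu)

# Bell state (|up,up> + |down,down>)/sqrt(2): entangled, but still pure
bell = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
rho_bell = np.outer(bell, bell)

# Classical mixture: up-up or down-down, each with probability 1/2
rho_mixed = 0.5 * np.outer(np.kron(up, up), np.kron(up, up)) \
          + 0.5 * np.outer(np.kron(down, down), np.kron(down, down))

def purity(rho):
    # Tr(rho^2) = 1 for a pure state, < 1 for a mixed state
    return np.trace(rho @ rho).real

print(purity(rho_product))  # 1.0
print(purity(rho_bell))     # 1.0
print(purity(rho_mixed))    # 0.5
```

Note that the Bell state and the classical mixture have identical diagonal entries; only the off-diagonal coherences, and hence the purity, tell them apart.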
Okay, so I'm gonna make the tensor product symbol implicit. Yeah, okay, good. So that was an illustration of partial traces, and of taking a full trace of a joint density matrix. You can also use density operators to define what's called the von Neumann entropy, which is given by minus the trace of rho log rho. Of course the Shannon entropy is minus the sum over i of p_i log p_i, but this is what turns out to be appropriate more generally when we have density matrices. Here are some of the properties of the von Neumann entropy. If your density matrix is diagonal in a particular basis with entries p_i, then it reduces to the classical Shannon entropy, as one might hope. If rho is a pure state, then its von Neumann entropy is equal to zero, just as the classical entropy of a deterministic distribution is zero. And using the von Neumann entropy, you can define a lot of analogs of the classical information-theoretic quantities. For example, think about the relative entropy, the Kullback-Leibler divergence. The standard Shannon-style definition is the sum over i of p_i log of p_i divided by q_i. We have no idea what that division might mean for density operators, so instead you rewrite it as a difference of two trace terms: an entropy and a cross-entropy. And then you can use this to define the quantum mutual information: it's just the relative entropy between the full joint density matrix and the product of the two marginal density matrices. In all of quantum computation and so on, these are very central concepts. Okay, so now let's return to the case of two spins and complete that example of partial traces. Recall that the partial trace of a joint system is given by the formula up here. Let's consider the case where rho_AB is a pure state, so its entropy is equal to zero.
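The two quantities just defined, the von Neumann entropy and the relative entropy written as a difference of two trace terms, can be sketched numerically. This is a hedged numpy illustration (the matrix logarithm is taken in the eigenbasis, so sigma must be full rank for the relative entropy):

```python
# Sketch: von Neumann entropy S(rho) = -Tr(rho log rho) and quantum relative
# entropy D(rho||sigma) = Tr(rho log rho) - Tr(rho log sigma).
import numpy as np

def von_neumann_entropy(rho):
    # Diagonalize and apply -sum_i p_i log p_i to the eigenvalues,
    # dropping numerical zeros (0 log 0 = 0 by convention).
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def herm_log(rho):
    # Matrix log of a full-rank Hermitian matrix via its eigendecomposition
    w, v = np.linalg.eigh(rho)
    return v @ np.diag(np.log(w)) @ v.conj().T

def relative_entropy(rho, sigma):
    # D(rho||sigma) written as a difference of trace terms, as in the lecture
    return float(np.trace(rho @ (herm_log(rho) - herm_log(sigma))).real)

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure state: entropy 0
mixed = np.eye(2) / 2                       # maximally mixed: entropy log 2
print(von_neumann_entropy(pure))
print(von_neumann_entropy(mixed))
```

The quantum mutual information mentioned above would then just be `relative_entropy(rho_AB, np.kron(rho_A, rho_B))` for a full-rank joint state.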
Then recall we also had that right there. The entropy of rho_A in this case is equal to log two; it's a mixed state. So now this is something very, very funny. In classical physics, in classical Shannon entropy, the entropy of a subsystem is upper bounded by the entropy of the full system: if I've got a joint distribution P_ij, the entropy of the marginal P_i is upper bounded by the entropy of P_ij. Quantum mechanically, that is not necessarily true. Here's a pure state, so its entropy is zero. We take its partial trace, which is the analog of marginalizing down to a subsystem, and now the von Neumann entropy has gone up. That's a purely quantum mechanical phenomenon. Okay, good. Now we're gonna be talking about open quantum systems; I'm sort of spiraling around, getting closer and closer to this. Is everybody with me so far? I expect I'm starting to get into waters that fewer people have had the pleasure of swimming in before today, so let me know if I'm starting to go too fast. Okay, so we have a system of interest and what in quantum thermodynamics we call a bath, but what in many other aspects of quantum information processing is called the environment. And just like we were doing yesterday with Gilger, we're gonna assume that the initial state is a product: the initial joint density matrix is gonna be a density matrix over A, tensor product with a density matrix over B. It turns out that in quantum mechanics, without loss of generality, all of the results will be the same if you take the density matrix for B to be a pure state in a larger state space. You can always play that trick in quantum mechanics; there's no real classical analog. It's called the purification theorem if you want to look it up. Here I'm just going to use it.
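The "purely quantum" effect just described, a pure joint state whose marginal is maximally mixed, can be checked directly. A minimal numpy sketch (the partial trace is implemented by reshaping and tracing out the B indices):

```python
# Sketch: the Bell state is pure (joint entropy 0), yet its partial trace
# over B is maximally mixed (entropy log 2). Classically, a marginal can
# never be more uncertain than the joint distribution.
import numpy as np

def partial_trace_B(rho_AB, dA, dB):
    # Reshape to (dA, dB, dA, dB) and trace over the two B indices
    return np.trace(rho_AB.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

bell = np.zeros(4)
bell[0] = bell[3] = 1.0 / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
rho_AB = np.outer(bell, bell)
rho_A = partial_trace_B(rho_AB, 2, 2)

print(entropy(rho_AB))   # ~0: joint state is pure
print(entropy(rho_A))    # ~log 2: marginal is maximally mixed
```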
We're gonna jump over the proof of the purification theorem and simply invoke it to say that sigma_B is just gonna be a pure state, |b_0⟩⟨b_0|. But for the system of interest, I'm gonna write the density matrix that way. So combining, this is the density matrix of the full system. Question? Okay, I have a question from the previous slide, where you compute the entropy of the pure state and the mixed state. Since the entropy of rho_A is greater than the entropy of the global state, what interpretation do we have for those results? Sorry, I didn't understand. I mean the interpretation, because the entropy of the mixed state is greater than the entropy of the pure state. I'm sorry, I'm not understanding. So, because rho is diagonal in a basis with entries p_i, if you'll notice, rho is diagonal in the basis |00⟩, |11⟩, with a one half and a one half for the two probabilities. So when you look at the sum of p_i log p_i, you're just gonna get log two: one half log two plus one half log two. Okay, but what I'm asking is the interpretation, like physically, what's the meaning of the result? Oh, what is the meaning of the result? It means that if you have an apparatus that can only look at the state of system A, and you measure it and ask what's the entropy of the result of this measurement apparatus, you're gonna get log two. If instead you've got an apparatus that can look at both A and B simultaneously and you ask what's the entropy of what I'm gonna get there, because that's a pure state you're always gonna get the exact same value, so you get an entropy of zero. So think about it; it's really just the normal EPR paradox kind of thing. You have two spins that are separated and entangled with one another. They're in a pure joint state. So if I can measure the state of both of them, well, there's only one possible outcome, so I know it's got an entropy of zero.
But if I can only measure the state of one of them, I don't know what it is; it's gonna depend on the state of the other one. So in that case, again, entropy of log two. Hey, just a comment. I think this implies that conditional entropy can't really be defined in quantum mechanics, in the sense that it would be negative, right? It can be negative, let's see, yes, I think it can be, yep. Okay, so anyway, moving right along; I'm unfortunately probably gonna have to move relatively fast through this end part. So there's a question: hence entropy is no longer extensive? Okay, so extensivity, it's an interesting question. The way it's normally interpreted is not about partial traces but about expanding your system, and these results don't by themselves say whether it's extensive or not. But in general, you could have entanglements that mean it would not be extensive. You could imagine that kind of thing in general, but I haven't in any sense proved that. We're going the other way, from a full system down, rather than building up to a bigger and bigger full system. Good question, good question. Okay, so anyway, getting back to this open quantum system scenario, which from our perspective is the case of a finite bath. We start with a product of two density matrices. We're invoking some voodoo, some magic of quantum mechanics, to say without loss of generality that even if in one particular basis it's a Gibbs density matrix, if I look in a bigger state space, I can view it as being in a single eigenstate of that bigger space. So, without loss of generality, that's what we're invoking. Therefore the initial density matrix is given by this, and the partial trace gives us what we would want: it's actually sigma_A, because we've got a product density matrix to start with. So it's sigma_A, which is just the initial density matrix of system A. We now have unitary dynamics.
So that's the analog of the Hamiltonian, phase-space-preserving dynamics that Gilger was presenting yesterday: unitary dynamics across the joint system AB, according to some unitary operator which can vary in time, U(t), like if the Hamiltonian is changing in time. So for the density operator of just the system of interest, you'll have to work through the algebra later on; I'm afraid I've run out of time and won't be able to work through the linear algebra today. But if we're just looking at the density operator of the system of interest, we take the partial trace: we start with rho_AB(0), we hit it with the unitary operator, remember several slides ago we saw how to evolve a density matrix, and that's what we're doing here, and then we take a partial trace at the end. And this is what you get if you just work through the linear algebra. Then, pulling it all together, doing some more linear algebra: if you define these operators E_k, one for every possible state of the environment, what you end up with is that the partial trace at time t down to system A is given by this sum. These E's you're seeing here, and I'm sorry, as I said, I don't have time to work through the algebra, those are called Kraus operators, or quantum operations. This is the analog: remember the formula for how you evolve the density matrix of a joint system. Here we've got a very, very similar formula, but because we've got this extra environment and we're doing the partial trace, we've got in essence a sum over these different operators rather than a single unitary. That's the effect of the coupling with the environment. And all the details of that coupling, all the details of the interaction Hamiltonian across the entire process, are buried in these E operators.
That sum is over the states of the environment. Okay, let's see. Yeah, so here I'm pointing out the analogs: just like with a joint system, where the dynamics is just the unitary, here we instead have, in essence, a sum in which these different Kraus operators act like the unitary does there. And just like unitaries, which by definition satisfy U dagger U equals the identity, you have a similar condition on the Kraus operators: the sum over k of E_k dagger E_k equals the identity. It becomes a sum because there are multiple possible states of the environment. It started in one particular state, b_0, but after the interaction it can be in any one of its states, in some particular eigenbasis, and each one of them gets an index k. Okay? These maps built from the E_k's are called completely positive, trace-preserving maps; they preserve traces and norms, that's what this is saying right here. Okay? Details are in the book by Nielsen and Chuang, which I don't know if I stole the PDF of or if it's legitimate, but you can get it online one way or another. In any case, now I'm going to try briefly to use everything we just presented to derive an integral fluctuation theorem for quantum mechanical systems using this finite bath approach, because that's in essence what I just worked through; what they call the environment in quantum information processing, I'm just calling the bath. Okay? We have to be careful though about what measurements mean and how they occur. In the normal integral fluctuation theorems, if you recall the Jarzynski version of the detailed fluctuation theorem, we have all these different z's at different times, z_A and z_B, where you might know the state of the system. In essence, we have to pay attention to what the analogs might be in quantum mechanics.
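The construction just described, Kraus operators E_k = ⟨b_k|U|b_0⟩ extracted from a joint unitary with the environment starting in |b_0⟩, can be sketched and checked numerically. The unitary here is a random stand-in, not the lecture's:

```python
# Sketch: Kraus operators from a joint unitary on system ⊗ environment.
# We check trace preservation, sum_k E_k† E_k = I, and that
# rho_A(t) = sum_k E_k rho_A(0) E_k† matches the partial trace directly.
import numpy as np

dA, dB = 2, 2
rng = np.random.default_rng(0)

# Random joint unitary via QR of a complex Gaussian matrix
M = rng.normal(size=(dA*dB, dA*dB)) + 1j * rng.normal(size=(dA*dB, dA*dB))
U, _ = np.linalg.qr(M)

# With the ordering |i>_A ⊗ |j>_B, E_k[i, j] = <i, k| U |j, 0>
U4 = U.reshape(dA, dB, dA, dB)
kraus = [U4[:, k, :, 0] for k in range(dB)]

# Completeness condition: sum_k E_k† E_k = I_A
S = sum(E.conj().T @ E for E in kraus)
assert np.allclose(S, np.eye(dA))

# Evolve a system state through the channel
rho_A0 = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
rho_At = sum(E @ rho_A0 @ E.conj().T for E in kraus)

# Cross-check: evolve the joint state and partial-trace over B directly
e0 = np.zeros(dB); e0[0] = 1.0
rho_joint = U @ np.kron(rho_A0, np.outer(e0, e0)) @ U.conj().T
rho_check = np.trace(rho_joint.reshape(dA, dB, dA, dB), axis1=1, axis2=3)
assert np.allclose(rho_At, rho_check)
```

All details of the system-environment coupling are indeed buried in the `kraus` list; nothing else about `U` is needed to evolve `rho_A0`.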
And because of quantum mechanics, every time you do a measurement, you're hitting it with another one of these measurement operators; it's a very non-trivial thing. In classical physics, implicitly, in everything Gilger presented yesterday, when you do a measurement you're not changing the state of the system. But quantum mechanically, you do change the density matrix of the system when you do a measurement. So we have to be very much more careful about the kinds of measurements that we do. Okay? So this is what we just derived. Now we're gonna modify all that just slightly to do quantum thermodynamics, as so. We've got a Hamiltonian of the exact same form I had before, a joint Hamiltonian, which is gonna determine the interaction between the density matrices of the system of interest and the bath. It's given by a sum of three terms: one depends only on the system of interest, one only on the bath, and the last one is an interaction Hamiltonian. And for simplicity, I'm looking at the case of a single finite bath, as I mentioned at the beginning of today. So rather than invoking the purification theorem, just like in Jarzynski's derivation of the detailed fluctuation theorem, we're gonna say that the initial density matrix of the bath is a Gibbs state according to the Hamiltonian of just the bath. Okay, that's like the Boltzmann distribution we were seeing yesterday. Just like yesterday, and just like in the quantum information processing setup, we're going to have the initial density matrix be a product of density matrices. And again, the analog of the Hamiltonian dynamics is gonna be unitary dynamics, according to a unitary operator; I'm just gonna have time go from zero to one, okay? Now, there are many, many different ways to get fluctuation theorems in quantum mechanics. It's still ongoing research, frankly a controversial issue.
I'm just gonna be showing what's called a two-time measurement approach, where we're actually going to have measurement operators that look at the state of the system at two separate times, at t equals zero and at t equals one; so to speak, simultaneously looking at both. Sorry, David, so you are assuming that the interaction at time zero is zero? Well, that's the initial condition when you start. Sorry? The initial condition is that they are independent, yes, at t equals zero. But once the interaction is on, these are non-equilibrium states, right? Yeah, in this whole thing there's no notion of equilibrium, because there's no infinite external bath. You could imagine these things getting to a stationary state, where the density matrix isn't changing, and you can then imagine worrying about things like whether there's probability current flowing in just the system of interest, but the joint system is always going to be evolving according to a unitary. The system of interest can do funky things like getting to what you might want to call an equilibrium, but the joint system is always just evolving according to a unitary. So it's always invertible dynamics. So actually I think that means that you cannot even have a stationary density matrix of the joint system unless it starts that way. Right, you can't have convergence, because the unitary dynamics is invertible: you can't have two different initial density matrices that go to the same ending one. Not in a closed system; that's what unitary dynamics is, it's invertible, just like Hamiltonian dynamics. Just like yesterday with the joint system of interest and bath in Jarzynski's setup: you can't have anything like a stationary state that things evolve to, unless you start in that stationary state, because the dynamics is invertible.
And if two different states could go to the same ending state, the dynamics being invertible, I couldn't figure out where I came from. So everything can always be distinguished. Okay. All right, away we go. This is going to be strange. It's going to take hours and hours, not necessarily before your final exam, but at some point, when you want to really start to grapple with what's going on in quantum mechanics in general, but certainly in quantum thermodynamics, to fully understand this weird kind of, prestidigitation is the word, the weird kind of magic that I will be performing right now to get these quantum integral fluctuation theorems, okay? Away we go. As I mentioned, we're going to have two measurements, at t equals zero and t equals one. The measurement operator at t equals zero is a projection onto the joint state at that particular time, where the x_A(0) basis diagonalizes the initial density matrix of the system of interest, and where the bath basis states diagonalize the Hamiltonian of the bath, and therefore, because we're in a Gibbs state, they diagonalize the Gibbs density matrix of the bath, okay? I'm just stipulating this at this particular point. This is almost a mathematical trick that will result in an actual physical prediction; at this point, it's a mathematical trick. Okay, then at t equals one, we're still going to be using the exact same basis for the bath, for the environment. Recall that the Hamiltonian of the bath doesn't change with time, whereas the Hamiltonian of the system of interest can, so we know the bath is going to be diagonalized by the exact same basis at the end as at the beginning; but because the density matrix of the system of interest is changing, I'm going to use a basis at time one that diagonalizes its ending density matrix. So this is very, very funky.
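The setup just described, a system state tensored with a Gibbs state of the bath as the initial product density matrix, can be written down concretely. The operators below are hypothetical stand-ins chosen for illustration, not the lecture's:

```python
# Sketch: initial product state rho_AB(0) = sigma_A ⊗ exp(-beta H_B)/Z,
# as used in the two-time-measurement setup. H_B and sigma_A are
# illustrative stand-ins, written in their own diagonal bases.
import numpy as np

beta = 1.0
E_B = np.array([0.0, 1.0, 2.0])            # hypothetical bath energy levels

# Gibbs density matrix of the bath: exp(-beta H_B) / Z
w = np.exp(-beta * E_B)
gibbs_B = np.diag(w / w.sum())

sigma_A = np.diag([0.8, 0.2])              # hypothetical initial system state
rho_AB0 = np.kron(sigma_A, gibbs_B)        # product initial condition

# Sanity checks: unit trace, and the partial trace over B returns sigma_A,
# exactly as stated for a product density matrix
assert np.isclose(np.trace(rho_AB0), 1.0)
rho_A = np.trace(rho_AB0.reshape(2, 3, 2, 3), axis1=1, axis2=3)
assert np.allclose(rho_A, sigma_A)
```

Because both factors are diagonal here, the t = 0 projective measurement basis is simply the product of the two eigenbases, which is exactly the stipulation made on the slide.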
I'm actually going to be using two measurements where the measurement, the actual physical process, collapsing the wave packet so to speak, at t equals one is going to be a function of the particular unitary. This means that if I'm setting up my experimental apparatus and measuring at t equals zero and t equals one, I'm going to be making sure that the measurement I make at t equals one varies depending on some properties of the actual underlying dynamics of the process. It's not like I've got a fixed measurement that I'm applying to find out something about the process; I'm having to exploit information about the process to even define what that measurement at t equals one is. This is very strange kind of stuff. This is not my own work; this is actually, believe it or not, standard in the literature, but I just want to use it to show you how we can get the integral fluctuation theorems. Okay, so just be warned. All right, let's see where we go. Okay, yes, so I just emphasized that point right there. All right, so here's the Hamiltonian. As I mentioned, the Hamiltonian of the bath is not changing in time, whereas the Hamiltonian of the system of interest we allow to change. And the two-time measurement provides us these values here; those are what come out of these measurements at these two times. That means we have all these values down here: x_A(0), x_A(1), x_B(0), and x_B(1). Question? So I'm thinking of everything in terms of the classical correspondences, okay, in terms of the accessible and inaccessible degrees of freedom. So, okay, the question is, there are four of these, a quadruple of measurements. I understand the first three: x_A(0), x_A(1), x_B(0). I don't understand the x_B(1), because I just thought that, oh, these are inaccessible. It's the exact same thing.
No, I understand them mathematically. I just thought that we don't care about it; it's inaccessible degrees of freedom. That will come out when we look at the partial traces to get how the entropy of just the system is changing. You are correct, this is a weird thing. The measurement is going over both A and B, but at the end of the day we are only going to be looking at the entropy of system A, and at the expected energy of system B. Wow. Yeah, but notice that in the actual classical physics, there's not even a notion of the measurement. That's right. There's "inaccessible," which you and I, but nobody else here, know what that means; we talked about it, yes, that's got to do with the re-initialization and so on. But there's nothing saying when we do the measurements, or even where the measurement is. Okay, because the one thing that actually seemed peculiar to me about the x_B(1) measurement is that I understand, for example, integrating out the contribution from the bath, these inaccessible degrees of freedom, in the classical case, but now how do you sort of...? Because we have a probability distribution over it once we do the measurements. The measurement is how we reduce things to classical physics, where we can actually now look at things like expected values of the Hamiltonian. Okay, okay. All right. So we have a question in the chat: just to make sure, if the measurement is done on the system of interest, how are you getting probabilities for the bath? Still, the measurement, as Gilger was just emphasizing, is actually done on both systems. Yeah, exactly. And at time equals zero, remember, we're using projection operator measurements.
And at time equals zero, the projection operator measurements are defined in this basis: x_A(0), because that's what diagonalizes sigma_A, and the x_B Gibbs basis, because that diagonalizes the density matrix of system B. The exact same basis diagonalizes system B at t equals one, because its Hamiltonian is not changing; if you diagonalize a Hamiltonian, you're diagonalizing its Gibbs state. But because system A's Hamiltonian is changing, we're using a different measurement operator, different projections, to measure its state at time one. The crucial thing is that what comes out of this is these four values: we know x_A and x_B both at time zero and at time one. So there is another question: well then, after the measurement, the system of interest and bath collapse, and then both the system of interest and bath evolve again? Well, you'll see; let me go a few more slides. But yes, there's a unitary between t equals zero and t equals one, that's correct. The crucial thing is that because we have these four values, we also know these four probabilities: the probability of system A being in particular states at time zero and at time one. We can do the partial traces to get these probability distributions, and now, based upon this measurement, we know the actual values at which we are evaluating them. This measurement is playing the exact same role of getting down to trajectory-level quantities as in classical physics. We've got the distribution by doing the partial traces, just like in classical physics, and we're evaluating the distribution at a particular point, just like in the trajectory-level analysis behind the detailed fluctuation theorems.
Remember that everything there, the probability distribution over things like entropy production, is a distribution over different trajectories, and it's all based upon things like the trajectory-level entropy, which is just minus log of p_i. So we had some particular distribution p over the system of interest and we were evaluating it at one particular point, depending on which trajectory we're on. Here it's a similar thing: the trajectory we're on is, in the quantum mechanical scenario, replaced by the values of these measurements. So those can be viewed as the specification of the trajectory, so to speak, and they are what comes out of the measurement. These values come out of the two-time measurements; they're the analog of the trajectory, and we're going to feed them into probability distributions, just like the probability distributions we were seeing yesterday, okay? So as I say, we've now got the probabilities of system A at the beginning and the end, and we also have the bath Hamiltonian values at the beginning and the end. Okay, so as the person mentioned in the chat, we've got the unitary dynamics, we've got all these definitions of what the different basis states are, and the two-time measurements are giving us what we want. So the heat flow, just like yesterday, for this particular trajectory, for this particular set of four measurements, is going to be H_B(x_B(1)) minus H_B(x_B(0)). And sorry, where's the typo? For the delta Q. Oh yeah, that should be a one rather than a zero; yep, typo, exactly as Gilger just said. And this should be minus log of p_1(x_A(1)) minus the corresponding term at time zero. Okay, so we've now defined what the EP is for the analog of a trajectory, quantum mechanically, by using a measurement operator, something that was at most implicit when we were doing things in the classical statistical physics formulation. Okay.
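The trajectory-level quantities just defined, delta Q = H_B(x_B(1)) − H_B(x_B(0)) and the entropy change built from minus-log probabilities, are enough to check the integral fluctuation theorem ⟨exp(−σ)⟩ = 1 numerically. All operators below are hypothetical stand-ins; the point is only that the identity holds for any such setup:

```python
# Sketch: two-time-measurement check of <exp(-sigma)> = 1, where a
# "trajectory" is the quadruple (x_A(0), x_B(0), x_A(1), x_B(1)) and
# sigma = [log p_0(x_A(0)) - log p_1(x_A(1))] + beta * [H_B(x_B(1)) - H_B(x_B(0))].
import numpy as np

rng = np.random.default_rng(1)
dA, dB = 2, 3
beta = 0.7

p = np.array([0.8, 0.2])                 # eigenvalues of sigma_A (t=0 basis)
E = np.array([0.0, 0.5, 1.3])            # bath energy levels (fixed basis)
g = np.exp(-beta * E); g /= g.sum()      # Gibbs probabilities of the bath

# Random joint unitary for the t=0 -> t=1 dynamics
M = rng.normal(size=(dA*dB, dA*dB)) + 1j * rng.normal(size=(dA*dB, dA*dB))
U, _ = np.linalg.qr(M)

# rho_AB(1) = U (sigma_A ⊗ gibbs_B) U†, then partial trace over B
rho0 = np.kron(np.diag(p), np.diag(g))
rho1 = U @ rho0 @ U.conj().T
rho_A1 = np.trace(rho1.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

# t=1 measurement basis for A: the eigenbasis of rho_A(1), outcome probs q_k
q, V = np.linalg.eigh(rho_A1)

# Transition probabilities |<a~_k, b_l| U |a_i, b_j>|^2
W = np.kron(V.conj().T, np.eye(dB)) @ U
T = (np.abs(W)**2).reshape(dA, dB, dA, dB)   # T[k, l, i, j]

# Average exp(-sigma) over all trajectories (i, j) -> (k, l)
avg = 0.0
for i in range(dA):
    for j in range(dB):
        for k in range(dA):
            for l in range(dB):
                prob = p[i] * g[j] * T[k, l, i, j]
                sigma = (np.log(p[i]) - np.log(q[k])) + beta * (E[l] - E[j])
                avg += prob * np.exp(-sigma)

print(avg)   # ≈ 1.0: the integral fluctuation theorem
```

The identity holds exactly (up to floating point) for any unitary and any full-rank initial state, which is the whole content of the quantum IFT in this two-time-measurement formulation.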
This should be somewhat confusing, because frankly it is a little bit. No. I think you said no because it's not true, but yeah. Okay. So this should be a little bit funky, but nonetheless you should be able to follow the math when you sit down and go through the algebra a little later on. Okay. Is everybody comfortable enough that I continue, or should I go back over some points? Silence is assent. So, okay. Well, maybe there is a little bit of confusion on what x_A and x_B mean, because, so, they are operators? No, no, they are the values of the measurements. They are the values of the measurement, but then you are also using them as coordinates of the system. And those are the values that come out. Okay. A measurement is always going to give you a random value, okay? The Born rule and so on. And we're just saying that we're doing measurements, this is a mathematical thing, at t equals zero and t equals one of the joint system. This is how we're defining a trajectory. It's just like the classical case, where the trajectories were trajectories of the joint system, and then we saw what the implications of the distribution over trajectories of the joint system were for how the entropy production of the system of interest behaves. Here, the analog of those different trajectories of the joint system is the different quadruples of the initial and ending values of the joint measurements of the states of the system and of the environment. Okay. So it's basically taking the classical physics quantities and thinking about what they would mean in a quantum mechanical formulation. I tried to reply, but maybe you should respond: the claim was that it doesn't make sense that we're also making measurements of the bath, but I'm not saying this.
We're just translating, yeah, exactly. So, like, it doesn't make sense to make measurements on the bath, because it always evolves independently of the SOI? But that's not true, right? That's not true, because the interaction Hamiltonian also affects it. Yeah, yeah, yeah, look right here. Yeah. So that's the whole point. If the bath were evolving independently of the SOI, there would be nothing to be done. But he also raised another point that I was thinking about five minutes ago. Then you convinced me like 75%, so I didn't say anything, but yeah. I think, for example, when you come from a classical point of view, when you think in terms of defining the bath as a composition of degrees of freedom that are inaccessible to you, then it really doesn't make sense, as Colin says, to make measurements on the bath from that perspective. The measurements are just like in the classical formulation. We can look at trajectories of the joint system of interest and the bath. At the end of the day, as engineers, we're only going to be able to access the system of interest thermodynamically, but the measurements are the mathematics needed to actually derive the fluctuation theorems, the IFTs. And yesterday we were saying, well, we've got some trajectory over the system of interest and the bath, some joint trajectory, and there it is, and that didn't bother us at all. And here, when we're translating that into quantum mechanics, we're simply saying, well, that joint trajectory is instead this quadruple of values. So it's kind of related to that Zen tree-falling-in-the-forest business. The measurement is almost like a mathematical convenience; it's a way for us to be able to say what the joint trajectory of the system is.
Because at the end of the day, we're still only interested in what we're going to be calling the entropy production, which is the change in the entropy of the system of interest minus the change of the expected Hamiltonian of the bath by itself. That's the quantity that the engineer can actually access. We are going to be figuring out the properties of the distribution over that delta S by considering trajectories, which in this domain are actually quadruples of these measurement values. Nobody should be comfortable. If you're comfortable, then, well, there's a cliche that if you think you know what probability means, you don't. If you're comfortable right now, then you're not understanding. Yeah, I'm uncomfortable, like, very much. Good. Yeah, exactly. The one thing that comforts me is to try not to go from yesterday to today, like classical to quantum, but to reverse it, from today to yesterday, from quantum to classical. You could do that as well. Yeah. Because this realm is always more comprehensive, so. Yeah, so we're not playing favorites when we define the measurement process, which is defining trajectories. We're only playing favorites when we then actually say what the entropy production is. But the measurements were agnostic between the system of interest and the bath. Just like we have no problem talking about a joint trajectory in classical dynamics where, at the end of the day, you're only going to be looking at the marginal distribution of that down to a system of interest, you can still define the dynamics of a joint probability distribution over the system of interest and other things. Okay, I just want to remind you of something.
Right now we are doing finite baths in the Hamiltonian formalism, and one of the things that followed from yesterday was the emphasis on the fact that we don't right now consider baths as idealized, huge, infinite reservoirs; we are considering them as finite quantum systems, so we don't make the kinds of underlying assumptions that come with the infinite-bath formalism. I don't know who the questioner is, but you should feel uncomfortable, because that's what I was emphasizing from the beginning. All of today, just like yesterday, is about a finite bath, nothing infinite. Yeah, otherwise it wouldn't make sense to talk about this kind of measurement. So let me go on now. Okay, so let's see. Okay, so we've got the value of the EP from the two-time measurement. Notice, though, that this requires us to know those probability distributions, which are the same probability distributions that we assumed we knew before. These are the probability distributions being evaluated at this quadruple of values, which we got out of our two-time measurement. The probability values, the probability distribution, we get by just doing the traces in the normal way. So this is the EP on a trajectory, and the joint probability distribution is given by, this is basically Bayes' rule: the joint distribution at time zero times the probability at time one conditioned on the state at time zero. The first factor, we know, is going to be the distribution over the state of the system times a Gibbs distribution over the state of the bath. And the second one is just going to be given by the fact that we've got this underlying unitary over the joint space that's evolving the initial state.
So you've got, this is the initial state, you hit it with the unitary to get you up to state one, and then you take the projection with these values of the joint state at time one, and this is just applying Bayes' rule to give you the joint distribution over those values of the quadruple. Okay, unfortunately the fonts didn't, I don't think they came out quite clean, but the important point is to notice there's a conditioning bar right here. This first line is just Bayes' rule; it's got nothing to do with physics, with quantum mechanics. The quantum mechanics is coming down there. Okay, everybody good with that? So here's the actual definition of the reverse process. Question? Sorry? Yes. There's the same typo. If that's the question, there's the same typographic error. Okay, yep. Okay? All right, so we now have to define the reverse process, and we do that essentially the same way that Gilger described Chris doing it yesterday. For the reverse process, we start with the system in its ending state after the forward process, with the probability distribution which is given by the forward process, and we evolve according to the time reversal of the unitary. I'm not going to talk today about what it means to have a time-reversal operator in quantum physics. It's an anti-unitary operator, and you have to know a little bit of what's called Wigner's Theorem to understand it. At this point, just sort of take my word for it.
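The "Bayes' rule plus Born rule" structure of the forward joint distribution can be sketched numerically (an illustrative toy, not the lecture's slides; the 4-level joint space, the distribution `p0_joint`, and the random unitary are all placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy joint space: 2-level system of interest x 2-level bath = 4 basis
# states |x>. All names here are illustrative.
dim = 4

# Some normalized initial distribution over the joint basis states.
p0_joint = np.array([0.5, 0.2, 0.2, 0.1])

# A random unitary for the joint evolution (QR of a random complex matrix).
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(M)

# Born rule: P(x1 | x0) = |<x1| U |x0>|^2.
def p_transition(x1, x0):
    return abs(U[x1, x0]) ** 2

# Bayes' rule (no physics in this line): P(x0, x1) = P(x0) * P(x1 | x0).
def p_joint(x0, x1):
    return p0_joint[x0] * p_transition(x1, x0)

# Unitarity makes each conditional distribution normalized, so the joint
# distribution over outcome pairs sums to one.
total = sum(p_joint(x0, x1) for x0 in range(dim) for x1 in range(dim))
print(total)   # 1.0 up to floating point
```

The conditioning bar in the lecture's formula is exactly the `p_transition` factor; everything else is classical probability.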
And then what you're going to have is, if you have a forward trajectory that went from that initial state to that final state, we're going to say that the probability distribution of the reverse one, the conditional one, is given by all of this. From that definition, from the fact that you're going backwards with the anti-unitary, basically if you look at the joint distribution going forward divided by the joint distribution going backward, you're just going to be getting the ratio, why did that not fix? Those A's and B's should have all been lower case, my apologies, I'm not quite sure why that didn't happen. You're just going to be getting the ratio of the Boltzmann distribution terms, which involve the Hamiltonians, and of the logs of the initial state and the final state. Here it's actually in the numerator, because this is not the negative EP; it would be the negative EP if you were to flip it around. This is just e to the EP. Okay, and as I say, I've got more typos here, my apologies. And then we're almost to the end. Once we're given this, recall that in what Gilger presented yesterday there was the exact same thing: there was an e-to-the-EP term that came out, purely due to the fact that in the reverse process you are starting at a state that is given by the forward process, but you're evaluating it under a Gibbs distribution, which is the beginning one of the reverse process. Okay, so why did that not come out? Anyway, plugging that formula in, what we're going to be getting here is just the integral fluctuation theorem. If we then take this, multiply through by the probability of the EP, and integrate over all values, in the usual way of going from a DFT to an IFT, we get a conventional integral fluctuation theorem. So the crucial thing is that this very, very weird reverse process, which is the P star, actually doesn't occur.
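The "forward over backward gives e to the EP" claim can be checked numerically on a toy model (illustrative only; a two-level system and two-level bath with a random joint unitary, and with the unitary's adjoint standing in for the full anti-unitary time reversal, since everything here is diagonal in the measured basis):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Toy forward process: 2-level system of interest A, 2-level finite bath B.
beta = 1.5
HB = np.array([0.0, 1.0])                         # bath energy levels
gibbs = np.exp(-beta * HB); gibbs /= gibbs.sum()  # initial Gibbs state of B
pA0 = np.array([0.8, 0.2])                        # initial distribution of A
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)                            # random joint unitary
idx = lambda a, b: 2 * a + b                      # joint basis index |a, b>

# Forward probability of a quadruple: Bayes' rule plus the Born rule.
def p_forward(a0, b0, a1, b1):
    return pA0[a0] * gibbs[b0] * abs(U[idx(a1, b1), idx(a0, b0)]) ** 2

# Time-1 marginal of A (the P_1 appearing in the EP).
pA1 = np.zeros(2)
for a0, b0, a1, b1 in product(range(2), repeat=4):
    pA1[a1] += p_forward(a0, b0, a1, b1)

# Reverse process: start from pA1 x Gibbs, evolve with U^dagger.
Udag = U.conj().T
def p_reverse(a1, b1, a0, b0):
    return pA1[a1] * gibbs[b1] * abs(Udag[idx(a0, b0), idx(a1, b1)]) ** 2

# Detailed fluctuation theorem on one quadruple: P_F / P_R = e^{sigma}.
a0, b0, a1, b1 = 0, 0, 1, 1
sigma = -np.log(pA1[a1]) + np.log(pA0[a0]) + beta * (HB[b1] - HB[b0])
ratio = p_forward(a0, b0, a1, b1) / p_reverse(a1, b1, a0, b0)
print(ratio, np.exp(sigma))   # equal up to floating point
```

The Born-rule factors cancel in the ratio, leaving exactly the ratio of Boltzmann terms and of the initial and final state probabilities, i.e. e to the EP.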
So DFTs always generically have this reverse-process probability in the denominator. That is not going to be meaningful unless you do an experiment where you run this reverse process. If you're only interested in the forward process, you can't be using a DFT. You've got to instead convert it to an IFT, which involves integrating out over all possible outcomes of the reverse process, so that you're only considering the outcomes of the forward process. And in IFTs, this EP always refers just to the forward process. Okay, so I apologize, there was a lot there at the end. If this were a three-week course rather than a two-week course, I would have let this push on into tomorrow's lectures to go over this a little more carefully. But the details are there in the slides. I'll be posting them to the Slack channel very soon, and this way we'll be able to get into doing a little bit of computer science tomorrow. But the important point for today was that you can take everything that Chris Jarzynski did, as channeled through Gilger yesterday, for the classical domain, and when you just use standard quantum information processing, with partial traces on density operators rather than marginalizations on probability distributions, you can do essentially the same kind of thing, where you've got this weird two-time measurement, which is the analog of knowing what the trajectory is. And you can derive an integral fluctuation theorem which has to do with the EP, which is the change of the von Neumann entropy of the state of the system of interest, subtracting from that the change in the expected energy of the actual bath. Same stuff goes through, okay? Any remaining questions? People should be moderately confused. If you're completely confused, please ask questions. If you think you know it all, well, it's one of two possibilities. Either you do, low probability, but possible. Or, no, that actually means you don't understand. Okay?
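The integral fluctuation theorem itself, with no reverse process appearing anywhere, can also be verified on a toy model (again purely illustrative: a two-level system of interest, a two-level finite bath, and a random joint unitary):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# Toy setup: 2-level system of interest A, 2-level finite bath B.
beta = 1.5
HB = np.array([0.0, 1.0])
gibbs = np.exp(-beta * HB); gibbs /= gibbs.sum()  # Gibbs state of the bath
pA0 = np.array([0.8, 0.2])                        # initial distribution of A
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)                            # random joint unitary
idx = lambda a, b: 2 * a + b

def p_quad(a0, b0, a1, b1):
    """Probability of one measurement quadruple (the 'trajectory')."""
    return pA0[a0] * gibbs[b0] * abs(U[idx(a1, b1), idx(a0, b0)]) ** 2

# Time-1 marginal of A.
pA1 = np.zeros(2)
for a0, b0, a1, b1 in product(range(2), repeat=4):
    pA1[a1] += p_quad(a0, b0, a1, b1)

# Integral fluctuation theorem: <e^{-sigma}> over all quadruples = 1,
# with sigma the per-trajectory EP and no reverse process in sight.
avg = 0.0
for a0, b0, a1, b1 in product(range(2), repeat=4):
    sigma = -np.log(pA1[a1]) + np.log(pA0[a0]) + beta * (HB[b1] - HB[b0])
    avg += p_quad(a0, b0, a1, b1) * np.exp(-sigma)

print(avg)   # 1.0 up to floating point, for any joint unitary
```

The average comes out to exactly one for any joint unitary, which is the point: the weird reverse process was only scaffolding for the derivation, and everything in the final statement refers to the forward process alone.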
That's a good question from Zoom. Yeah, I wanted to ask... I don't see it. I'm not sure, the question is... Okay. So this is one of the advantages of using the measurement operator formalism rather than talking in terms of an operator with eigenvalues and eigenstates, where you project down to one of those eigenstates. You can define two-time measurements; we don't have to say what they are physically. At the end of the day, we were only using this as a mathematical process to derive that formula. And one way to think about this experimentally is that one person can measure the state of the system at the beginning, somebody else can measure it at the end, so there's no transfer of information between them, and that's going to be defining your quadruple. And then we can have a third person who looks at what values of the EP come out of it. You can be more careful about this. You can show that it's consistent with the first law of thermodynamics, that this EP is actually dissipated work that cannot be recovered. But that's one way to think about it. This is all done in the two-time measurement literature. There is a paper, which I think was sent around, by Ueda, Funo, Sagawa, and some others that came out a couple of years ago on quantum integral fluctuation theorems, the two-time measurement approach. So it's done in that particular paper. And also, actually, Gonzalo, not Edgar, has done things related to this. Okay? So as I say, I'll post these soon. I would also look at Nielsen, and there's another paper by Ueda et al. I'll post these after fixing the typos. Can I just ask something logistical? Okay. So this is the foundations of our reality and so on, so forth, but we are not going to have it in our homeworks, okay? This is just, yeah, I just wanted to tell you this. But know how to turn traces into sums.
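"Turning traces into sums" is just the observation that, for a projective measurement onto a basis state, the trace in the Born rule collapses to a single diagonal element, i.e. an ordinary sum with one surviving term. A tiny sketch (the density matrix values are illustrative):

```python
import numpy as np

# For a projector Pi_x = |x><x| onto a basis state, the Born-rule
# probability Tr[Pi_x rho] equals the diagonal element <x|rho|x>.
rho = np.array([[0.6, 0.2 + 0.1j],
                [0.2 - 0.1j, 0.4]])   # a toy 2x2 density matrix

x = 0
ket = np.zeros(2, dtype=complex); ket[x] = 1.0
Pi = np.outer(ket, ket.conj())        # projector |x><x|

p_trace = np.trace(Pi @ rho).real     # trace form
p_sum = rho[x, x].real                # the same number as a sum / entry
print(p_trace, p_sum)                 # both 0.6
```
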
No, but with your heart, that's it. But can you talk about tomorrow's lecture and the other days? Yeah, yeah, yeah. Good point, yes. So some people have been asking about this. As you've probably noticed, this is the first time this course is being given, and so on and so forth, so we haven't necessarily timed things out so we'll be able to spend as much time on the computer science part of it as would have been ideal. Tomorrow, though, we're going to jump into that. So to begin with, Gilger will be presenting the simplest computational machine in the Chomsky hierarchy, though there are other ones that people consider in computer science. Those are called deterministic finite automata, and maybe also probabilistic finite automata. I'm presenting some of the computer science theory. You've probably heard about questions like, does P equal NP? That involves Turing machines. The analogous questions for deterministic finite automata have actually all been answered, and for that reason, computer scientists don't find them as interesting these days compared to Turing machines. But for our purposes, where we're just trying to figure out the stochastic thermodynamics of computer science systems, that makes them actually ideal, because all the computer science issues, or a lot of the computer science issues, have already been figured out. Then tomorrow afternoon, I'll start to present some stuff on Turing machines. Algorithmic information theory, the most profound philosophy that humanity has actually established to date. The only philosophy, I would say, which in its own way reflects the lessons of Everett, which says: screw you, humans, and what you think truth actually is. Here are some proofs. Screw your notion of what reality really is. Good. You change. Reality won't. Anyway, getting off the soapbox and the philosophizing. That's Turing machines and algorithmic information theory.
That'll be the second half of tomorrow and then the beginning of the last day, Thursday. And then, hopefully, time allowing, by the end of Thursday, I'll show a little bit about some work going on now applying stochastic thermodynamics to computational machines, which are actually formalized in terms of finite automata. So to give you a very, very quick idea, the system of interest is going to be your computational machine, and the set of inputs it gets is going to be generated by the bath. And then the hazing ritual, a.k.a. exams, on Friday. Can I hype people? Am I allowed to hype people? I should excite them. If you're interested in computational complexity, building up to all of these things, Thursday, I think, is when one of the things that David will present comes in. I think it's fair to say that it's the first time that you will see a result in stochastic thermodynamics, or in any other thermodynamics, actually, that can be related to computational complexity in computer science terms. So if you want to witness a first in history, yeah, you will like Thursday. Okay, this was just for hyping, but it's true. Yeah, I don't know if I'll be able to actually get to that particular result, hopefully. Okay? All right, thanks, everybody.