... about quantum thermodynamics. And that is certainly a pleasure and a privilege, but given that I'm not entirely sure that the term quantum thermodynamics is correct, or that I agree with that formulation, I'd rather talk about a slightly rephrased title: non-equilibrium thermodynamics of quantum processes, because I believe this is more faithful to what we actually mean when we discuss quantum thermodynamics. Of course, "quantum thermodynamics" is a shortcut in notation, so to say, and then it's fine to use it. This series of three lectures should be understood as an opening invitation to get your hands dirty in this field, with the set of issues that we are going to go through together, and to start wondering about the thermodynamic implications of quantum dynamics. OK, so, to give you the time to get used to my accent, let me spend a couple of words about the place I come from, or at least the place where I work, which also provides some motivation, so to say, for being interested in questions of a thermodynamic nature. So, we're here in Belfast. First of all, I need to check: can you hear me OK at the back? OK, I usually shout, so I don't know if I need that microphone, but anyway. So Belfast is there, this red spot; I hope it's visible to everyone. The green area all around the red one is what is called Northern Ireland. Northern Ireland is, geographically speaking, Ireland. Politically speaking, it is the UK, but it depends who you are talking to, right? And, despite what the present political climate claims, the UK is in Europe. This is a statement, and there it stays. As far as I'm concerned, that's where it is, and that's where it remains. Now, in Belfast, my university is Queen's. So the university where I work is Queen's University, and this is the building.
And Queen's was the house of a few quite well-known physicists. Some of them you might have heard of; some are probably new to you. Larmor, of the Larmor frequency, I'm sure you have heard of him: he was a professor there for some time. Then there are these two, who are probably not so well-known, at least not to this audience: Harry Massey and David Bates. They were very well-known computational atomic physicists, and in particular David Bates basically single-handedly founded the department that I'm working in now. And then there is this very old picture that puts together both Massey and Bates: Bates is here, and, yes, this is a young Massey here. Finally, there is probably the most well-known person, or one of the most well-known, coming from Belfast in link with Queen's, which is John Bell. Bell was a student at Queen's. He never actually worked at Queen's, except for a very brief stint as a lab technician. Then his line manager, as they would say right now, his supervisor, recognized his potential, recommended him for a PhD, and he moved on. But Bell is from Belfast, actually from a few hundred meters from the university; that's where his birth house is. This is an uncommonly good day in Belfast: this is the campus of the university. And on the campus you find this thing, a blue plaque named after Bell, put there by the UK Institute of Physics. And if you have nothing better to do on the 4th of November each year, we celebrate Bell on that day: institutionally, that's John Bell Day. Why the 4th of November? Because that was the day when he submitted his big paper. And the Royal Irish Academy, which is basically the equivalent of the Royal Society in Ireland, in collaboration with Queen's, celebrates John Bell every year.
So if you fancy a little bit of physics discussion, some good beer, and bad weather, just come along on the 4th of November to celebrate the big man himself. Yet he's not the only big figure that we have from that place: Lord Kelvin. Lord Kelvin was from Belfast. He was born in Belfast in 1824, and his family moved to Scotland when he was about eight years old, so around 1832. And if you come and see Belfast, say for John Bell Day, then you can enjoy a kneel in front of the statue that is in the Botanic Gardens at the back of the university. So basically the university is here, and this is the statue of Kelvin. And somehow the geographical proximity of Kelvin to my office, it's a few hundred meters away, and the fact that Bell was born in Belfast, put these two things together, and then you start understanding why my group is so interested in working at the interface between quantum mechanics and thermodynamics. So to say, it's all the fault of these two persons that you have to suffer my bad English and my slides. And I have a lot to live up to: Martin gave a fantastic lecture on the blackboard. I'm going to use my slides heavily, but at some point we are going to stop and go through a few calculations together on the blackboard, hopefully without any mistake from my side. Now, why thermodynamics? And why should we be interested in exploring the implications that thermodynamics has for quantum dynamics, or, vice versa, how quantum mechanics can influence the development of a thermodynamic framework for microscopic processes? The motivations are various, and somehow variegated. Historically, thermodynamics was born because of the Napoleonic Wars: Carnot wanted to understand why the French army got defeated.
And he thought he could pinpoint, in the inefficiency of the French weapons, a reason, or at least one of the reasons, for the Napoleonic army to be defeated. So, if you want, thermodynamics historically had a very hands-on sort of origin and motivation: these people wanted to build better weapons and realized that what they knew about the steam engine could have helped. And then they developed a framework that was so important for the development of the Industrial Revolution. So: very technology-oriented motivations for the development of a fundamental framework that has been in use until now in many, many different contexts, and that has evolved in time in quite an ample breadth of ways. Rather than focusing on the various facets that the evolution of thermodynamics has undergone so far, in these three lectures, starting from today, we are going to go through the framework for non-equilibrium quantum processes, so the framework for non-equilibrium thermodynamics at the quantum level. Why? Because of what happens when you deal with a microscopic system. These days the hype linked to quantum technologies is growing, and the interest in them is growing as well. When you are interested in the development of a framework for quantum technology, you are interested in kicking your systems considerably out of equilibrium. You want to process information, and by doing that you want to subject your computational register to transformations that change and manipulate its quantum mechanical state. So to say, you are really interested in genuinely non-equilibrium processes, and therefore this is the natural scenario to consider when trying to build an interface between thermodynamics and quantum dynamics. The second motivation, or at least the second direction along which one can go, is more fundamental.
And it is linked to the following question: are the concepts of work, heat, and entropy that we use in macroscopic, standard thermodynamics still valid? These are the ones that you have seen by opening the book by Callen, or the book by Zemansky. I studied on Zemansky; my thermodynamics course was based on that book. You open those books and you find definitions of what work, heat, and entropy are in a macroscopic thermodynamic sense. Are these concepts still valid, or do we need to reformulate them when we are interested in genuinely quantum mechanical processes, when the transformations that we are used to implementing in a thermodynamic sense are implemented on microscopic systems? And, digging even more into the foundations: Martin has introduced the concept of quantum resource and resource theory, and I think you are going into the resource theory of coherence towards the end of your lectures. So, can we use what we know of resource theory, what we know about quantum coherences and quantum correlations, to understand and expand the domain of thermodynamics, and make use of them to design better, thermodynamically inspired devices for the processing of information? These are, somehow, the three directions along which you might want to go when you address the thermodynamics of quantum processes. As with any lectures, this is a highly subjective presentation of a field, so just take it as my own perspective. To me, the study of the interplay between thermodynamics and quantum processes is interesting for two reasons. On one hand, it provides a very natural framework for the assessment of what we call the quantum-to-classical transition.
That is, the process according to which any quantum system loses its quantumness, its quantum nature, and becomes, basically, much more mundane: describable through the laws of classical physics. Now, this diagram here should be taken as highly qualitative, which is why I don't have anything clearly specified along the axes. What I want to illustrate with it is the common belief that, if you look at the quantumness in a given system, and I'm being deliberately fuzzy here, so I'm not defining what I mean by quantumness: take it as your preferred indicator of deviation from classical dynamics, for instance the amount of quantum correlations that you have in a given register. On the horizontal axis we have an equally loosely defined concept of complexity, which you can take as the size of your device, the number of elements that pertain to it, the number of particles that you have in your box, or the mass of the system. What is commonly believed is that if you increase the degree of complexity of your system, then instilling quantumness into it, or maintaining the quantum features of such a system, becomes a much more difficult task. We might agree or not about that; I think it's a highly case-dependent problem. But, without claiming any canonical nature for this plot, just take it as it is. Now, thermodynamics is an inherently many-body theory. I'm going to contradict this statement in a minute by addressing the dynamics, from the thermodynamic viewpoint, of a single particle. But if you think of thermodynamics, what you have in mind is a gas, or a piece of material across which you want to let some heat flow: macroscopic systems, complex systems in a very loose sense. So it's a perfect scenario to adopt if you're interested in precisely this point, in characterizing how quantum features are lost due to the growing complexity of your device.
More technologically oriented: as I said at the beginning, thermodynamics somehow originated to improve the working principles of weapons and then machines. So one can take inspiration from that and dream of building thermodynamic machines, cycles, and engines that might make use of elements of quantumness to boost their performance. Now, whether or not this is the case, we still don't know; there are steps towards that that are currently being taken. So this is just to give you an idea that both at the foundational, fundamental level and at the more technology-oriented level, there are open questions when you address the interplay between quantum mechanics and thermodynamics. And that probably makes the study of this field worthwhile. Now, that was a long introduction to something that is structured as follows; this is a rough schedule of the discussion that we are going through in these three lectures. I will start with the redefinition, or the introduction, of concepts like work, and in the second lecture heat, when you deal with an explicitly non-equilibrium framework for thermodynamics and when the process, the transformation that you are dealing with, is inherently quantum. The second lecture, so roughly speaking tomorrow, if I stop talking too much about motivations and so on, will deal with the Landauer principle and how the Landauer principle can be intertwined with open quantum system dynamics. Thirdly, and this is where the crux of the discussion is, we are going to talk about irreversibility and entropy production. We will stay here within closed quantum systems, but we are also going to visit the open-system dynamics framework. And if I have enough time, I'll go to a very brief discussion of the interplay between the theory of quantum correlations, the framework for the characterization of quantum coherences that maybe Martin will talk about, and thermodynamics.
So this is a rough schedule; don't take it as a contract. I'm very likely not going to stick to this detailed plan; I'll be happy if we address irreversibility at some point in this discussion. So, without further ado, we start with the introduction of the non-equilibrium thermodynamic framework at the quantum level. And the objective here, the goal, is to get to what are called fluctuation theorems. Now, before we start warming up and discussing the really technical aspects of the framework, is there any question at this point? Anything unclear? Anything you want to ask? Okay, great. Feel free to stop me anytime; let's see how it goes. So, I recommend you have a look at this paper. If you want to get a flavor of the possible discrepancies, the differences between quantum formulations of non-equilibrium thermodynamics and classical ones, this paper provides a very nice introduction. Although I have taken the abstract and the title from the arXiv version of the paper, I think it's faithful to the published version, which appeared in PRE ten years ago now. And the title illustrates exactly, in a sentence, what the crux of the message is: fluctuation theorems, work is not an observable, full stop. This is some form of death sentence, right? Don't even think about the possibility of going into the lab and devising an experiment that directly provides you the amount of work performed on a given system as, for instance, the eigenvalue of a given observable. That's not the case. According to this paper, you have to be a little bit more inventive and devise smarter ways of assessing how much work you are doing on a system, or how much work a system is doing, when you address work from a quantum mechanical viewpoint. It appears somewhere already in the first column of the paper.
Yeah, already in the first column. Also, the fact that you have "observable" there somehow suggests the framework that you are addressing. Okay, so let's see what these authors had in mind when stating this. How am I going to quantify the amount of work performed on a system that evolves unitarily? This is the framework for now, and it will be until the end of this discussion and probably for a bit of tomorrow's lecture. I have a quantum system, and I implement a transformation on it that lets it evolve unitarily. Then you might wonder: okay, I'm changing the energy of this system; how much work am I doing on it? Or how much work can I extract from the system itself? This paper provides you with an operational way of actually going into it and providing an answer. So, let's start with a familiar scenario. You have your favorite quantum system, characterized by its Hamiltonian H_i. A long time before the start of your experiment, you have put it in contact with a bath at a given temperature, and you waited long enough for the system to thermalize with the bath itself. This t smaller than zero means that I have done this as a preparation stage, before the actual experiment starts. At t equal to zero, which is when the experiment starts, you detach your system from the bath. Your system is now in a thermal state at the temperature of the bath; it was at equilibrium with the bath, and I've detached it, assuming that the detaching process didn't alter the equilibrium state of the system itself. And what I do is perform a measurement on it. I ask myself: what is the probability that I find the system, at this instant of time, in the nth energy eigenstate of this Hamiltonian H_i?
So I have a very well-defined Hamiltonian, and I'm asking what the probability is that my system is in the nth energy eigenstate of this Hamiltonian immediately after I detached it from contact with the environment. This probability I'm going to call p_n(0): the zero here is because this is the instant of time t equal to zero, and the n is because I'm asking for the probability of finding the system in the nth energy eigenstate. Now, so far this is all very passive, right? You observe. We now dig into the active bit of the protocol. I'm now taking a wrench and changing something in the system: I'm changing the Hamiltonian of the system itself. So I have a parameter, in principle a time-dependent parameter, that I decide to change in time. And I do it given that my system is now completely isolated from its environment: besides the bath, there is no external world, so the system is completely isolated. From this step onward there will be no environment whatsoever. So the dynamics induced by my modification of the parameter characterizing the energy of the system is a unitary evolution. Fair enough. My system evolves in time according to this time evolution operator U, for a time tau that I decide arbitrarily. At the instant of time tau I have the end of my protocol, the end of my experiment, which I conclude in the following manner. As I said, I've changed the Hamiltonian of my system from H_i to H_f; H_f is the final Hamiltonian. And now I do again what I did at t equal to zero: I measure. I project onto one of the energy eigenstates of the final Hamiltonian H_f, and I ask how much is the probability, and this is a conditional probability, p(m|n)?
This is the conditional probability to find the system at time t equal to tau in the mth energy eigenstate of the final Hamiltonian, given that at t equal to zero the system was found in the nth energy eigenstate of the initial Hamiltonian. Does it make sense? It's more difficult to explain than to write down; it's easier to understand in symbols than in words. Are we okay with that? I measured, at t equal to zero, the system in the nth energy eigenstate of the initial Hamiltonian; the system evolved; then I measure the energy again, finding the mth eigenstate, and this time I have a different set of eigenstates because my Hamiltonian has changed. This is the conditional probability that I can get. Fair enough? Now, a question. Suppose that I repeat this very same experiment a zillion times, and for now let's concentrate on the first step only. So I do it many, many times: will I always get the same eigenvalue if I measure the energy at t equal to zero? Obviously not, because the initial state of my system was a thermal state. So these probabilities will follow a Boltzmann distribution: I have a thermal state, that's what I get. Every time I repeat the experiment, I get a different energy outcome, and the probability to get the nth energy eigenstate will follow the Boltzmann distribution. So I have an element of thermal randomness in this experiment that is of an entirely classical nature; no quantum mechanics is involved at this point. Then again, now I have my zillion copies of the same experiment, I'm repeating it many, many times, and I perform my second measurement. By the way, this protocol is usually called the two-measurement protocol, and you understand why: you are doing two measurements. So, will I find the system all the time in the same energy eigenstate at the end of the protocol?
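The two-measurement protocol just described is easy to simulate numerically. Below is a minimal sketch for a single qubit, under illustrative assumptions that are mine, not the lecture's: hbar = 1, beta = 1, a sudden quench from H_i = sigma_z to H_f = sigma_x, with free evolution under H_f for a time tau after the quench.

```python
import numpy as np

# Sketch of the two-measurement protocol for a single qubit.
# Assumed (not from the lecture): hbar = 1, beta = 1, sudden quench
# from H_i = sigma_z to H_f = sigma_x, then evolution under H_f.
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
beta, tau = 1.0, 1.0
H_i, H_f = sz, sx

e_n, ket_n = np.linalg.eigh(H_i)   # initial energies e_n, eigenstates |n> as columns
e_m, ket_m = np.linalg.eigh(H_f)   # final energies e'_m, eigenstates |m> as columns

# First measurement at t = 0: outcomes are Boltzmann distributed, p_n(0)
p0 = np.exp(-beta * e_n)
p0 /= p0.sum()

# Unitary evolution for a time tau (generated by H_f after the sudden quench)
U = ket_m @ np.diag(np.exp(-1j * e_m * tau)) @ ket_m.conj().T

# Second measurement at t = tau: conditional probabilities p(m|n) = |<m|U|n>|^2
p_cond = np.abs(ket_m.conj().T @ U @ ket_n) ** 2

print(p0)                  # thermal (classical) randomness
print(p_cond.sum(axis=0))  # each column sums to 1: a proper conditional distribution
```

The two sources of randomness in the lecture appear as the two arrays: `p0` is the classical thermal one, `p_cond` the quantum mechanical one.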
No, because my system has evolved quantum mechanically, and we know that if I perform a measurement in quantum mechanics, the outcome is only probabilistic. So I have a second element of randomness in this process, which is, somehow, enforced by quantum mechanics. If you repeat your experiment the zillion times that I mentioned, what you end up with is a distribution of values. It's not a single-valued experiment; you have a distribution of values: many values of p(m|n), many values of p_n(0). So what do you do with this distribution? You build the work probability distribution. And you immediately find out that, according to this framework, work becomes a stochastic variable. How much is the work that you are doing in this process? Well, the system was isolated, and I've changed its energy, so this change in energy must all go into work. What is the change in energy? It will be the difference between the energy of the final eigenstate that I find the system in, minus the energy of the initial eigenstate that I found the system in when performing the first measurement. Does it make sense? Yeah? Please. [Question from the audience.] Okay, very good question, if I heard it properly; there is a bit of noise down here. No, I think I have heard it. The question is about the cost of actually wrenching my system, of changing it. We are going to ignore that cost for this discussion. You are absolutely right: I should account for all possible sources of work done on the system, or all possible channels for the work that I'm doing on the system itself. The framework that I'm going to use at this stage, set by that paper, is such that basically all the thermodynamically relevant changes in energy, the energetics, are all in this energy difference.
In principle, I should reformulate things in a way that includes the actual agent that changes the Hamiltonian. I might go further and say: I'm performing two measurements here; is this cost-free? If I perform a measurement, how much does it cost thermodynamically? This is a perfectly legitimate question that has been addressed: there are papers working out, or trying to work out, the cost of general measurements, both projective and generalized. But that is a much more involved framework that I want to stay away from for now. I understand it's not a very satisfactory answer, but let's go step by step, if you allow me. Any other question? Costless at this stage, say. Okay, so as I said, we have this bunch of numbers. What are we going to do with them? I want to organize them into a probability distribution made out of delta peaks. These delta peaks are centered precisely at the amount of work that I'm doing by changing the energy of the system from e_n to e'_m; these are the two outcomes of my two measurements. How tall is each delta peak in this distribution? The height of each peak is given by this product: the probability p_n(0) to find the system in the nth energy eigenstate at t equal to zero, times the conditional probability p(m|n) at time tau. W is the work that I'm doing. [Question: is it the average of H_f?] No, no, it's simply defined as the pointlike difference between e'_m and e_n. We are excluding all other possible costs, and we are excluding any environment, so the change in energy of my system all goes into the work that you are doing on the system, or that the system does. [Question: doesn't this allow a sort of perpetual motion machine?] Aha, correct.
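Continuing the same hypothetical qubit quench as before (my choice, not the lecture's: H_i = sigma_z, H_f = sigma_x, beta = 1), the following sketch assembles the delta-peaked work distribution P_F(W) from p_n(0) and p(m|n), and checks it against the Jarzynski equality, ⟨e^{-βW}⟩ = Z_τ/Z_0, one of the fluctuation theorems these lectures are heading towards.

```python
import numpy as np

# Hypothetical qubit quench (illustrative, not from the lecture)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
beta, tau = 1.0, 1.0

e_n, ket_n = np.linalg.eigh(sz)   # initial eigenvalues e_n, eigenstates |n>
e_m, ket_m = np.linalg.eigh(sx)   # final eigenvalues e'_m, eigenstates |m>
U = ket_m @ np.diag(np.exp(-1j * e_m * tau)) @ ket_m.conj().T  # evolution under H_f

p0 = np.exp(-beta * e_n)
p0 /= p0.sum()                                    # p_n(0), Boltzmann distributed
p_cond = np.abs(ket_m.conj().T @ U @ ket_n) ** 2  # p(m|n) = |<m|U|n>|^2

# P_F(W): delta peaks at W = e'_m - e_n, each of height p(m|n) p_n(0)
peaks = {}
for n in range(2):
    for m in range(2):
        W = round(float(e_m[m] - e_n[n]), 12)
        peaks[W] = peaks.get(W, 0.0) + p_cond[m, n] * p0[n]

# The heights form a normalized distribution satisfying the Jarzynski
# equality <exp(-beta W)> = Z_tau / Z_0
avg_exp = sum(h * np.exp(-beta * W) for W, h in peaks.items())
Z0, Ztau = np.exp(-beta * e_n).sum(), np.exp(-beta * e_m).sum()
print(abs(avg_exp - Ztau / Z0))  # ~ 0
```

Note that the equality holds here even though the quench is sudden, i.e. maximally far from quasi-static, which previews the point made later that fluctuation theorems do not care how the transformation is implemented.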
I don't think you can build a perpetual motion machine from that, but, of course, erasure should be there: the reset step has to be included, and that costs, of course. I'm not resetting at this point. So, the question was whether, if all measurements in this setting are cost-free, one can build basically a perpetual-motion-like device through that. And the answer is that I think you escape from it by putting into your framework the necessity of resetting your machine. At this stage we are not focusing on that; the cost of resetting will be introduced tomorrow. No, no, this is a single transformation. And even if you want to re-use the machine after the first transformation, then you start anew and proceed from there: the starting assumption is that the initial state of the system is a thermal state, so I have to re-initialize my system into such a thermal state. Okay. So we now have a probability distribution, and we are now sort of agreeing on the fact that work becomes a stochastic variable, distributed according to this probability distribution. Now, there is a label here, F; the presence of this label will be clarified in a bit, but it basically stands for forward. This is the process according to which I'm changing the Hamiltonian of the system from H_i to H_f. At some point, possibly tomorrow, given the time (12.30, right? I have to stop at 12.30, am I right? 12.45? Oh wow, okay, so we have time), we are going to introduce a label B, which will stand for the backward process, and that will be the process that takes the Hamiltonian of the system from H_f to H_i. It's the time-reversed process. Now, given my training, if you give me a probability...
[Question: a fully Hamiltonian process, completely reversed?] Fully Hamiltonian process, completely reversed. But, so you can understand what I'm going to clarify: take two different sets of experiments, one where I initially give you H_i and you are asked to change it into H_f, and one, a completely different set of processes, where I initially give you H_f and you change it all the way down to H_i. Mind you, I've not been prescriptive about how you change the Hamiltonian of the system. I didn't say anything about, first of all, how quickly you are changing this Hamiltonian. It might be extremely slow in time, so that you are very close to the paradigm of quasi-static processes, which is what you read about in standard thermodynamics books: you have to be quasi-static to be able to define thermodynamically meaningful quantities, heat and work, at every instant of time. Or you can be very quick: you are impatient, like I usually am, so you change the energy very quickly, non-quasi-statically. I didn't put any constraint on that, so you, or actually Fabrizio here, are free to choose any way of implementing such a transformation. This, of course, has consequences, but we are going to see that for things like fluctuation theorems the way you implement the transformation itself is actually highly immaterial, so to say. Now, what I was saying is that if you give me a probability distribution, given my background, I'm naturally inclined to take the Fourier transform of it and look at what is called its characteristic function. So let me introduce the characteristic function for the work distribution, which is what I'm going to call chi from now on; u is the conjugate variable for work. This is defined as the Fourier transform, the complex Fourier transform, of my probability distribution for work. Now, why am I making it more complicated than it already was?
Why am I introducing this characteristic function? Because sometimes it's easier to work with the characteristic function than with the probability distribution itself. There are problems that are more easily addressed by means of the characteristic function than the probability distribution, and vice versa. So it's, again, a very case-dependent matter. But now, and this is where I want to do the first calculation of this set of lectures, let's try and work out together this nice expression here, which is the general form that the characteristic function of the work distribution takes when you implement precisely the protocol that I have explained so far. Just to remind you: U is my time evolution operator, and U dagger is obviously its Hermitian conjugate. Lambda is the parameter that I'm changing in time, the quantity that basically defines the process itself; lambda goes from lambda naught to lambda tau. And h of lambda naught, let me grab some chalk here, you can assume is your H_i, while h of lambda tau is your final Hamiltonian H_f. My handwriting is notoriously bad; people complain every time. Is it okay? Can you read it? Whenever you don't understand what I'm writing, it's my fault, and you tell me, okay? Just to warn you, this is my x; when you don't understand what I write, that's very likely an x. So, what is rho_G here? Rho_G of lambda naught is my initial thermal state, the Gibbs state: the initial state of my system, which you can write like that in terms of the initial Hamiltonian of the system itself. And Z will be used to label the partition function: the trace of e to the minus beta h of lambda naught. The last parameter that I need to define is beta; beta is one over kT, the inverse temperature of the bath. Okay, so I've thrown this expression on the slide.
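Before going through the blackboard derivation, the expression on the slide can be cross-checked numerically. The sketch below evaluates the trace form chi_F(u) = Tr[U† e^{iuH_f} U e^{-iuH_i} rho_G(lambda_0)] for the same hypothetical qubit quench used earlier (my illustrative choice: H_i = sigma_z, H_f = sigma_x, beta = 1), and compares it with the direct sum over two-measurement outcomes.

```python
import numpy as np

# Same hypothetical qubit quench as in the earlier sketches
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
beta, tau, u = 1.0, 1.0, 0.7   # u is the conjugate variable to work

e_n, V_i = np.linalg.eigh(sz)
e_m, V_f = np.linalg.eigh(sx)
U = V_f @ np.diag(np.exp(-1j * e_m * tau)) @ V_f.conj().T

# Gibbs state rho_G(lambda_0) = exp(-beta H_i) / Z_0
rho_G = V_i @ np.diag(np.exp(-beta * e_n)) @ V_i.conj().T
rho_G /= np.trace(rho_G).real

# exp(iu H_f) and exp(-iu H_i) via the eigendecompositions
exp_f = V_f @ np.diag(np.exp(1j * u * e_m)) @ V_f.conj().T
exp_i = V_i @ np.diag(np.exp(-1j * u * e_n)) @ V_i.conj().T

# The trace expression from the slide
chi_trace = np.trace(U.conj().T @ exp_f @ U @ exp_i @ rho_G)

# The same quantity as an explicit sum over two-measurement outcomes
p0 = np.exp(-beta * e_n) / np.exp(-beta * e_n).sum()
p_cond = np.abs(V_f.conj().T @ U @ V_i) ** 2
chi_sum = sum(p_cond[m, n] * p0[n] * np.exp(1j * u * (e_m[m] - e_n[n]))
              for n in range(2) for m in range(2))

print(abs(chi_trace - chi_sum))  # ~ 0
```

The agreement of the two numbers is exactly what the blackboard calculation that follows establishes in general.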
Let's see how we get it, starting from the actual definition of the work probability distribution that I've given you here. Again, you will have to suffer my handwriting; let's see how it goes. First of all, let's write down what the probability distribution for the forward process is. As I said, this is a sum over n and m, the two labels for the initial and final energy eigenvalues of my measurements, of p(m|n) at time tau, times p_n(0) at time zero, times my delta peak, centered at the difference between the final energy e'_m and the initial energy e_n. Okay, that makes sense. Now, what is this? We define the initial Hamiltonian H_i as having eigenvalues e_n and eigenstates |n>, and I'm going to say that H_f, the final Hamiltonian, has energy eigenvalues e'_m and energy eigenstates |m>. Right. So what is p_n(0) in this notation? p_n(0) is e to the minus beta e_n over Z of lambda naught. And let me set this notation: the partition function Z of lambda naught associated with the initial state of my system I call Z_0, and the partition function associated with lambda tau I call Z_tau. So p_n(0), easy peasy, that's what we have: Boltzmann distributed, you see. Now, these are two different sets of eigenstates, which is why I call the final eigenvalues e'_m. If the two Hamiltonians commute it becomes very, very boring; you can do it, of course, but it becomes extremely boring. I'm assuming that these two sets of eigenstates are different, so the two Hamiltonians don't commute. Okay, what is p(m|n) at time tau? Well, let's follow the prescription. This is the probability to find the system in |m>, given that it was in |n> at time t equal to zero. So: I found the system in |n>, and I've implemented my process, my transformation.
So I've implemented my time-evolution operator U until time tau, and then at that time I've performed my second measurement, projecting onto the state |m'>. So the relevant amplitude is <m'|U(tau)|n>, and this must be taken in square modulus to get my probability, okay? So I'm now going to put these two guys into my probability distribution for work, and I'm also using the definition of the characteristic function of the work distribution. Are you okay if I delete these two guys? Is it clear what H_i and H_f are? Okay, so I'm going to get rid of them, and I write explicitly that chi_F(u) is equal to the integral over all possible values of the work of my forward probability distribution for work, times e^{iuW}. [In answer to a question:] No, I'm not; and this is actually a crucial point. I'm going to stress the fact that, in principle, your process does not take you to equilibrium, and this has very, very strong implications at the thermodynamic level, okay? So by no means am I at equilibrium. Okay, so I now put my definition of the probability distribution of work into this integral, using those two definitions. I get a sum over n and m' of an integral over the work W of e^{-beta E_n}/Z_0, times this guy, the square modulus |<m'|U(tau)|n>|^2. Then I have a delta of W minus E'_{m'} plus E_n, and then the final guy, e^{iuW}, which is enforced by the fact that I want to do this Fourier transform. Okay, now I use the nice property of the Dirac delta under integration. This gives me the sum over n and m' of e^{-beta E_n}/Z_0, and now I'm going to split this square modulus into one term and its complex conjugate, right?
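As a sanity check of the ingredients just defined, here is a minimal numerical sketch for a two-level system. The Hamiltonians, the inverse temperature and the unitary below are toy choices of mine, not anything from the lecture; the point is only that p_n(0) is Boltzmann distributed and that the conditional probabilities p(m'|n) = |<m'|U(tau)|n>|^2 sum to one over m', as any set of transition probabilities must.

```python
import numpy as np

# Toy non-commuting pair, as assumed in the lecture: H_i = sigma_z, H_f = sigma_x.
H_i = np.array([[1.0, 0.0], [0.0, -1.0]])   # initial Hamiltonian (sigma_z)
H_f = np.array([[0.0, 1.0], [1.0, 0.0]])    # final Hamiltonian (sigma_x)
beta = 0.5                                  # inverse temperature 1/kT (arbitrary)

E_n, kets_n = np.linalg.eigh(H_i)           # eigenvalues E_n, eigenstates |n>
E_m, kets_m = np.linalg.eigh(H_f)           # eigenvalues E'_m, eigenstates |m'>

p_n = np.exp(-beta * E_n)
p_n /= p_n.sum()                            # Boltzmann weights e^{-beta E_n}/Z_0

# Any unitary will do for the bookkeeping; take a simple rotation as U(tau).
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Conditional probabilities p(m'|n) = |<m'|U(tau)|n>|^2
amp = kets_m.conj().T @ U @ kets_n          # amp[m, n] = <m'|U|n>
p_mn = np.abs(amp) ** 2

assert np.isclose(p_n.sum(), 1.0)           # initial probabilities normalized
assert np.allclose(p_mn.sum(axis=0), 1.0)   # each column sums to one (unitarity)
```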
So I'm writing this guy as <m'|U(tau)|n> times <n|U^dagger(tau)|m'>. And here, in the exponential, I replace W with E'_{m'} minus E_n, so e^{iu(E'_{m'} - E_n)}. Okay, now we are basically done, because what follows is simply a rearrangement of things, a trivial rearrangement, okay? Everyone okay with that? I've just opened up the definition of the conditional probability and used the property of the Dirac delta under integration. So now I'm doing the sum, and the sum is over m' and n; I can take this guy here and this guy up there, and they are c-numbers, so I can play with them as I want, right? So I have <m'|U(tau)|n>, then e^{-beta E_n - iu E_n}/Z_0, then <n|U^dagger(tau)|m'>, and then e^{iu E'_{m'}}. Did I forget any term? I think I didn't; should be okay, yeah? Now, guys, I can do another trick. I'm doing it because I can, because I'm doing it on the blackboard; if this were an undergraduate lecture, people would have killed me, so forgive me for that. What I'm doing is taking e^{-iu E_n}, which is a c-number, and plugging it here, next to |n>. I can do that; it is perfectly legitimate. So I just move it past my time-evolution operator U, rewrite e^{-iu E_n}|n> here, and delete it from there. And I remind myself that this object |n> is an eigenstate of my initial Hamiltonian, with associated eigenvalue E_n, yeah? So I can reinterpret these two guys together as e^{-iu H_i}|n>. Makes sense, yeah? Okay.
So now, continuing in this direction, I have a sum over n and m' of <m'|U(tau) e^{-iu H_i}|n>, times e^{-beta E_n}/Z_0, times <n|U^dagger(tau)|m'>, times e^{iu E'_{m'}}, and I'm now doing the very same thing I did here, this time on these two terms, okay? So I take this object here, U^dagger(tau)|m'> e^{iu E'_{m'}}, and I remind myself again that |m'> is an eigenstate of the final Hamiltonian with eigenvalue E'_{m'}. So I can take these two guys and reinterpret them as e^{iu H_f}|m'>. Makes sense? I can do something else, even more unfriendly for you who are writing. I have a double sum over n and m'; this guy does not depend on n, and this guy does not depend on n either. The only terms that depend on n are these three, so let me move the summation inside and isolate those three guys together. Why? Because the sum over n of |n> e^{-beta E_n}/Z_0 <n| is simply the spectral decomposition of my initial state, rho_G(lambda_0); I want to follow the same notation that is there. This is my initial thermal state, nothing fancier than that. So now I'm getting something that looks a little less unfriendly: I end up with a sum over m' of <m'| U(tau) e^{-iu H_i} rho_G(lambda_0) U^dagger(tau) e^{iu H_f} |m'>. And this is a trace, nothing else, right? I'm taking the trace of this object. Makes sense? And this is a representation of my characteristic function of the work distribution. So I think this is a good point to stop.
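The whole chain of manipulations can be checked numerically. Below is a sketch for a toy qubit; the Hamiltonians, beta, the value of u and the rotation standing in for U(tau) are all arbitrary choices of mine, not from the lecture. The point is that the double-sum definition of chi_F(u) and the trace formula derived on the board give the same number, to machine precision.

```python
import numpy as np

# Toy, made-up example: H_i = sigma_z, H_f = sigma_x (non-commuting).
H_i = np.array([[1.0, 0.0], [0.0, -1.0]])
H_f = np.array([[0.0, 1.0], [1.0, 0.0]])
beta, theta, u = 0.5, 0.7, 1.3

E_n, V_i = np.linalg.eigh(H_i)              # E_n, columns of V_i are |n>
E_m, V_f = np.linalg.eigh(H_f)              # E'_m, columns of V_f are |m'>
Z0 = np.exp(-beta * E_n).sum()              # partition function Z(lambda_0)
rho_G = V_i @ np.diag(np.exp(-beta * E_n) / Z0) @ V_i.conj().T

U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # stand-in for U(tau)

# chi_F(u) from the two-point-measurement double sum over n and m'
amp = V_f.conj().T @ U @ V_i                # amp[m, n] = <m'|U|n>
chi_sum = sum(abs(amp[m, n]) ** 2 * np.exp(-beta * E_n[n]) / Z0
              * np.exp(1j * u * (E_m[m] - E_n[n]))
              for n in range(2) for m in range(2))

# chi_F(u) from the trace formula derived on the board
exp_i = V_i @ np.diag(np.exp(-1j * u * E_n)) @ V_i.conj().T   # e^{-iu H_i}
exp_f = V_f @ np.diag(np.exp(+1j * u * E_m)) @ V_f.conj().T   # e^{+iu H_f}
chi_tr = np.trace(U @ exp_i @ rho_G @ U.conj().T @ exp_f)

assert np.isclose(chi_sum, chi_tr)          # same number, both ways
```

Nothing here depends on the system being a qubit; the same check goes through in any finite dimension, which is one way to convince yourself that the rearrangements above were indeed trivial.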
Tomorrow we will go back to this object and see what sort of use we make of it, and we will extend the framework from closed-system dynamics, that is, from unitary processes, all the way down to open-system dynamics, characterizing, in the very same way we did for work, the distribution of a new stochastic variable, which will be heat. And that will give us the chance to illustrate a bit the interplay between open-system dynamics and thermodynamics in light of Landauer's principle. Okay, any questions? Over lunch; thank you so much.