Did you try the exercises? And does everybody have the collection of exercises? Because I realized that on the web page... well, I guess everybody has seen this: on the web page, under the slides, you have the link to the lesson one slides, but there is also the outline, and in the outline you can also find some references.

The course is based on three reviews. The first is a review that we wrote in Nature Physics, and it is more of an overview of the whole topic. The second, which you can download from the arXiv, is more or less the course, except for information flows, which is only briefly discussed in that paper. And the third one, on information flows in nanomachines, is a review but a pedagogical one. All three papers try to be pedagogical, I mean easy to read, or at least self-contained; well, the whole thing is not easy to read. The part on information flows, which I think is interesting, is more advanced, but it is something that people are now using a lot to understand molecular motors in biology, and nanomachines, to try to interpret those machines as Maxwell demons. This is what we will study next week, and it is based on that paper. So with these three papers everything is covered. Then you have the references, organized by chapter. If you are interested in information theory, the best book is Cover and Thomas, Elements of Information Theory; it is a great book, with a lot of examples and exercises, and very intuitive. Then there are the references on Maxwell demons and so on, and then the original papers, let's say, of each topic, if you want to go to the sources.

And then you have the exercises. I guess you are doing the exercises, but Leah and I didn't get any... well, we got some questions. You can ask: I am usually here in the afternoon, and Leah is also here, and you can write by email, or ask in the coffee break, or whatever you like. But please try the exercises. On Thursday evening or afternoon, or Friday, you will have a discussion session with Leah, but you should try the exercises before Leah, or one of you, solves them on the blackboard. There are a lot of exercises based on the Szilard engine. Remember, the Szilard engine is the box where you insert the piston, you measure, and you move the piston all the way out in an expansion. And it is very interesting to see what happens if you have an error in the measurement.

Before that, another thing that is fun, if you want to play: Leah and I were in a project with Natalia R.S. in Oxford on nanomachines, where we were trying to interpret nanomachines as Maxwell demons, and one of the people in the group, Brandon, made this game. You can experience what being a Maxwell demon means, and it is not easy: you can open the door, but it is really hard to time it well. I did it: the blue ones are very slow, so you have to wait very long to move them, but I managed this morning to put all the reds on one side, except one. So you can play with it.

Okay, and before going on, a couple of notes on yesterday. Yesterday was just a brief account of information theory, an introduction. First, a mistake that Edgar noticed, about the name I gave to this probability of error.
Remember, I think it was: the real distribution is P and you guess Q, or something like that; what is the probability of guessing P if Q is the real one? And I said that this is asymptotically 2^{-N D(P||Q)}. This is called Stein's lemma; I stated it without the name, so: it is Stein's lemma. You can find the rigorous formulation and everything in Cover's book, if somebody is interested. But it is a very interesting application; this is why the Kullback-Leibler divergence is so important.

And this 2, yeah, sorry: this is something you have to get used to. When we write all the things we wrote yesterday, like -sum_x P(x) log P(x), and the relative entropy as sum_x P(x) log [P(x)/Q(x)], you can measure all these quantities, mutual information, entropy, relative entropy, in bits if the log is base 2, in nats if the log is the natural log, or in joules per kelvin if you multiply by the Boltzmann constant. In Stein's lemma, information theory uses bits, and that is why the 2 appears in 2^{-N D(P||Q)}: we should specify the base of the logarithm, otherwise the formula would look different.
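To make Stein's lemma concrete, here is a small numerical sketch in Python (a minimal demo with made-up coin biases; the "guess P" region is taken, for simplicity, to be empirical frequencies close to P): it computes the exact error probability under Q and compares its decay rate with D(P||Q).

```python
import math

# Hypothesis test between two biased coins: the true distribution is Q,
# and we wrongly guess P when the empirical head frequency falls near p.
# Stein's lemma: this error decays like 2^(-N * D(P||Q)).
p, q = 0.5, 0.8          # made-up biases for the demo

def kl_bits(p, q):
    """D(P||Q) for Bernoulli distributions, in bits (log base 2)."""
    return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

def error_prob(n, delta=0.02):
    """Exact probability, under Q, of guessing P: the chance that the
    empirical head frequency k/n lands within delta of p."""
    return sum(math.comb(n, k) * q**k * (1 - q)**(n - k)
               for k in range(n + 1) if abs(k / n - p) <= delta)

D = kl_bits(p, q)
for n in (50, 200, 800):
    b = error_prob(n)
    print(f"N={n:4d}  error={b:.3e}  -log2(error)/N={-math.log2(b)/n:.3f}  D(P||Q)={D:.3f}")
```

The measured rate approaches D(P||Q) from below as the acceptance window delta shrinks, since the decay is governed by the point of the window closest to Q.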
Edgar has very nice work on how the Kullback-Leibler divergence is related to entropy production: when you apply it with P the probability of something in the forward process and Q the same probability in the backward process, the Kullback-Leibler divergence is related to the entropy production. It is a measure of irreversibility. The Kullback-Leibler divergence is a measure of distinguishability, so if you apply it to forward and backward processes, it is a measure of irreversibility, a measure of the arrow of time. This was the first time the arrow of time could be quantified, and there is a lot of interesting literature on that. This is something we have worked on for many years, and now people are interested in measuring irreversibility in complex systems. But we don't know why. Okay.

The other note is something that I discussed with Matteo. Entropy is defined like that, no? The entropy of a discrete random variable is H = -sum_x P(x) log P(x), a discrete sum. Each P(x) is a number between 0 and 1, dimensionless, and the log of something dimensionless is dimensionless, so everything is fine. When you have a continuous variable, you replace the sum by an integral, and the P's are no longer probabilities but densities, and a density has units. For instance, if x is the energy of a particle, the probability density rho(x) has units of energy^{-1}; if x is a position, of position^{-1}. So rho has dimensions, and this is the problem: rho(x) dx is dimensionless, because the units of dx cancel those of rho, but the log of rho is not invariant. If I change units, if I express rho in inverse joules or in inverse electron-volts, I pick up a factor, and that factor becomes an additive constant in the entropy. So the quantity h = -integral dx rho(x) log rho(x) is not so well defined; it is usually called the differential entropy. Also, P in the discrete case is between 0 and 1, but rho can be anything, because it has units; the only requirement on rho is that it is positive. So the differential entropy can even be negative.

And remember the interpretation of the discrete entropy: the number of bits needed to describe x. For the differential entropy there is a similar interpretation: h + n is the number of bits needed to describe x up to a precision 2^{-n}. If, instead of a decimal expansion, you describe x by a binary expansion, asking for n digits means asking for a precision 2^{-n}, and then you need h + n bits. Things are much easier for discrete variables; for continuous variables you have all these technical problems. So these are the two notes on yesterday's lesson. Any questions?

[On the relation between the forward-backward divergence and entropy production:] Yeah, this is a topic for another course, let's say, but one can prove the following; maybe we will prove it. You have a process, a cycle or a machine or whatever, even in a stationary regime, and you pick one observable x, anything. You run the process forward and backward, you pick a time, you run the experiment many times, and you make a histogram of x in the forward process and in the backward process. Then k times the Kullback-Leibler divergence between those two distributions is smaller than the entropy production in the forward process. If the process is reversible, the entropy production is zero and this divergence is zero too. Sometimes it is an equality and sometimes a strict inequality, and we know when. So if you observe reversibility in one observable, maybe there is still some hidden entropy production. In equilibrium, of course, both are zero. And this holds for any observable at any time; actually, x can even be a functional observable, one that depends on the whole trajectory rather than on a single time. If x is the work, this is an equality, which is related to Crooks' theorem and so on. But okay, this is beyond the course.

[On what n is:] n is just a number. If you want to describe x with n binary digits, that is your precision, and then you need h + n bits. Well, h + n could be less than n; we are talking about averages, how many bits you need on average. If the distribution is uniform, it is clear that the number of bits is n, because you need to describe all the digits, and h is zero in this case. Suppose x is a number between zero and one with a uniform distribution: rho(x) = 1 if x is between zero and one, and zero elsewhere. Then h = 0, because it is the log of one, and to describe this random number with a precision of n binary digits you need, of course, n digits, consistent with h + n = n.
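A quick numerical check of this "h + n" interpretation (a sketch with the uniform density and one made-up non-uniform density, rho(x) = 2x on [0,1], whose differential entropy turns out to be negative): quantize x into 2^n equal bins and compare the Shannon entropy of the quantized variable with h + n.

```python
import math

def differential_entropy(pdf, a=0.0, b=1.0, m=100000):
    """h = -integral of rho * log2(rho), midpoint rule, in bits."""
    w = (b - a) / m
    h = 0.0
    for i in range(m):
        r = pdf(a + (i + 0.5) * w)
        if r > 0:
            h -= r * math.log2(r) * w
    return h

def quantized_entropy(pdf, n_bits, a=0.0, b=1.0):
    """Shannon entropy (bits) of x quantized into 2^n_bits equal bins;
    bin masses approximated as midpoint density times bin width."""
    m = 2 ** n_bits
    w = (b - a) / m
    H = 0.0
    for i in range(m):
        pbin = pdf(a + (i + 0.5) * w) * w
        if pbin > 0:
            H -= pbin * math.log2(pbin)
    return H

for name, pdf in (("uniform", lambda x: 1.0), ("ramp 2x", lambda x: 2.0 * x)):
    h = differential_entropy(pdf)
    for n in (4, 8, 12):
        print(f"{name:8s} n={n:2d}  H_quantized={quantized_entropy(pdf, n):8.4f}  h+n={h+n:8.4f}")
```

For the uniform density h = 0 and the quantized entropy is exactly n; for the ramp, h is about -0.28 bits, and the quantized entropy settles at n - 0.28: fewer bits than the uniform case, as claimed.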
But suppose the density is not uniform; suppose it looks like this, so that these numbers are more likely. Then you can exploit that and describe the number x with fewer digits on average; this density has less entropy, negative differential entropy actually. And it depends on the units as well, because the value of the density depends on the units: if you use other units, the number is different. So this is just to point out that information theory for continuous variables is more tedious, because there are a lot of technical issues.

I mean, information theory is fun by itself, because it tells you a lot about how information is transmitted, stored, and so on. By the way, something that I didn't say yesterday but that I think is important: when I started to learn information theory, I was kind of disappointed. I was maybe 18 or so, and I thought it was a mathematical theory of how we think, or of language, of how we express thoughts, things like that. And then you see that it is just a theory of random variables. But that was the great move, and it is in the first paragraph of Shannon's paper: we start this theory by neglecting semantics, by neglecting the meaning of the messages. To make a mathematical theory of communication, we have to neglect the meaning and think only of the capacity of a string of bits to transmit information, not of the content of that information. This is why a completely random sequence has more information than a text like "to be or not to be, that is the question". For us, "to be or not to be, that is the question" carries more information than random numbers; but for information theory it is the other way around, because natural language has a lot of redundancy, and information theory does not care about the meaning. Okay, so that's enough for information.

Now we go to thermodynamics. And by the way, on the web page, under yesterday's slides, Tuesday, you also have notes; oops, this is not my computer; these are handwritten notes, and the notes for lesson two and lesson three are on the web page under Thursday. But the notes are just notes: if you want something more rigorous, let's say, go to the papers.

Okay. So, thermodynamics. We are going to study some properties of thermodynamics, but we are going to focus only on systems that are in contact with a thermal bath. You have a thermal bath at a given temperature, and the energy exchange between the thermal bath and the system is what we call heat.
In these situations, where you have a single thermal bath and an external agent, everything is very clear, as I am going to show you now. So you also have an external agent, and the energy exchange between the external agent and the system is the work. We are going to study heat and work and give some expressions, first for general systems, and then we will consider the particular case of discrete systems, which are much simpler.

I will call x the microscopic state of the system. Although we are dealing with classical systems, it is not so difficult to generalize most of the results to quantum systems. And x could even be the state of one of those simple models that you are studying in the exercises, where you have plus-one, minus-one variables. You can have discrete classical models as well: instead of spins, for instance, you can have a bistable system, a particle that can only sit in one of the two wells of a double-well potential. So do not think that discreteness means quantum; you can have classical systems with discrete states.

So, you have a microscopic state x, and you have an external agent that can manipulate some parameter lambda of the Hamiltonian. The Hamiltonian depends on x and on the parameter, and because of the external agent it is time dependent: H(x, lambda_t). This is the Hamiltonian of my system. And then I will assume that we have a probability distribution rho(x, t) over the microscopic states of the system; we can call this the probabilistic state of the system.

[Is lambda the time?] No, lambda is the parameter; t is the time. The idea is that you have a system and you change something: think of the Carnot cycle, where you move the piston; think of the Szilard engine, where you move the piston like that; or think of a field that you can switch on and off, or modulate, or whatever. Lambda could be anything. In fact, this morning I saw that you were using a lambda that adds a field, lambda times an observable; that is a particular case, but lambda can be anything.

For this probabilistic state, you have different theories that tell you how it evolves in time. Maybe some of you know the Fokker-Planck equation, for a system in contact with a thermal bath. If you have a discrete system, it could be a master equation. If it is a quantum system, the state could be a density matrix, and you would have something called the Lindblad equation, which you probably know. So we have an open system, a system in contact with a thermal bath, and it will have a stochastic evolution ruled by one of these equations. We will assume that we can, in principle, know this evolution. So, let's put: Fokker-Planck, master equation, Lindblad, et cetera. Okay.
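As a minimal illustration of one of these evolution equations (a sketch with arbitrary energies and rates, and units with k_B = 1): a two-state master equation whose rates satisfy detailed balance, so that the probabilistic state relaxes to the Gibbs distribution.

```python
import math

# Two-state system, energies E0 < E1, bath at temperature T (k_B = 1).
# Detailed balance fixes the ratio of the jump rates:
#   r01 / r10 = exp(-(E1 - E0) / T)
E0, E1, T = 0.0, 1.0, 0.5              # arbitrary demo values
r10 = 1.0                               # rate for 1 -> 0; sets the time scale
r01 = r10 * math.exp(-(E1 - E0) / T)    # rate for 0 -> 1

p0, dt = 1.0, 1e-3                      # start with all probability in state 0
for step in range(10001):
    if step % 2500 == 0:
        print(f"t = {step*dt:5.2f}   p0 = {p0:.4f}")
    # master equation: dp0/dt = r10 * p1 - r01 * p0
    p0 += dt * (r10 * (1.0 - p0) - r01 * p0)

Z = math.exp(-E0 / T) + math.exp(-E1 / T)
print(f"Gibbs:      p0 = {math.exp(-E0/T)/Z:.4f}")
```

If the rates violated detailed balance, the same dynamics would relax to a non-Gibbs steady state, which is exactly what the fluctuation-dissipation requirement discussed in a moment rules out.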
So, how can we calculate heat and work here? First, let's compute the energy. I call it the energy because, since the Hamiltonian depends on time, the energy is not a constant of motion; it is the average of H over x: E(t) is the integral of H(x, lambda_t) times rho(x, t). Now, if I calculate how the energy changes in time, I can put the derivative inside the integral and I get two terms. The first term is the partial derivative of H with respect to lambda, times lambda-dot; let me put it like that. The second term is the Hamiltonian times the derivative of rho with respect to t. So the energy changes because of two things: the action of the external agent, which modifies lambda, and the evolution of rho itself. One can check that the first term is work and the second is heat: the first is the energy that the external agent introduces into the system, and the second is heat.

Can we prove this? Well, for me the best way to understand it is to consider a quench. A quench is just this: lambda is constant, and at some time you change it instantaneously to some other value. In this case, what is the work? Since the change is instantaneous, rho does not change, and there is no time for the system to exchange energy with the heat bath; so the increment of energy due to this change is work, and it is essentially the first term. After that, suppose the system was in equilibrium, or in a stationary state, happy there, and suddenly you changed lambda. Then you have a relaxation, and in this relaxation the system is losing or absorbing energy, and this is heat, because the external agent only acted at the quench; during the relaxation the agent does nothing. So you see, the work is concentrated at the quench, and all the energy exchanged during the relaxation is heat. If you apply the formula, you see that during the relaxation the system is in contact with the thermal bath and rho is changing while the Hamiltonian is constant; at the quench it is the other way around: rho is constant, and the contribution to the energy comes from the action of the agent. Okay?

So, everything per unit of time, we can write it like that, and this is the first law of thermodynamics. And of course you can integrate it over a process and so on.
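Written out in the notation above, the split is

E(t) = \int dx\, H(x,\lambda_t)\,\rho(x,t),
\qquad
\frac{dE}{dt}
= \dot{\lambda}_t \int dx\, \frac{\partial H}{\partial \lambda}\,\rho(x,t)
\;+\; \int dx\, H(x,\lambda_t)\,\frac{\partial \rho}{\partial t}
\;=\; \dot{W} + \dot{Q},

so the first law, dE/dt = W-dot + Q-dot, holds rate by rate.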
Also, to convince yourself that these really are work and heat, it is interesting to consider a quasi-static process. In a quasi-static process, lambda-dot is very small. I do not want to write "lambda-dot close to zero" carelessly, because lambda-dot has units; as I always tell my students, if something has units you cannot say it is much smaller than one. Let's just say lambda-dot is small. And what happens in the quasi-static limit if a system is in contact with a thermal bath? Anybody know what the consequence is? The system is in equilibrium at every stage of the process.

In general, rho(x, t) is very complicated: as I said, you need to solve the Fokker-Planck equation, the Lindblad equation, the master equation, and so on. But you know one thing: the steady state is equilibrium, and the equilibrium is the canonical distribution. We call this the instantaneous equilibrium, because if you freeze lambda, keeping it constant, the system reaches this state, no? And if the process is very, very slow, the system can be in equilibrium all the time. If the process is fast, like the quench, it is the opposite: you are in equilibrium and suddenly you depart from equilibrium. But if lambda-dot is small, you can have this: rho(x, t) is approximately rho^eq(x; lambda_t), the Gibbs state, the thermal state, whatever you like.

This is for systems in contact with a single thermal bath; of course, if you have several thermal baths at different temperatures, things are more complicated, but I want to look at this case. You can also have what are called generalized Gibbs states, exponentials of minus beta H plus other conserved quantities, when there is more than one constant of motion; thermal states are based on the assumption that the only constant of motion is the energy.

Actually, when you write down the Fokker-Planck equation, you force the equation to reproduce the Boltzmann distribution at long times, and this is the fluctuation-dissipation relation. If you are describing a system, you have to make sure that your theory, your equation, has thermal equilibrium as its steady state. In the case of Langevin equations, you have to impose this fluctuation-dissipation relation; in the case of master equations, you impose detailed balance, which ensures that the steady state is equilibrium; in the Lindblad equation, you also have to impose a form of detailed balance. And if your equation does not reproduce the thermal state in the long-time limit, then it is wrong: with it you could build a perpetuum mobile of the second kind. So this is a requirement on any theory you have for describing a physical system: if there is only a single thermal bath, in the limit of large t the steady state must be thermal. The steady state of the Fokker-Planck equation must be the equilibrium state. And yes, equilibrium is a particular case of a steady state; when we want to say that a steady state is not equilibrium, we say non-equilibrium steady state.
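For a master equation with jump rates r_{x -> x'}, the detailed-balance condition just mentioned can be written as

\frac{r_{x\to x'}}{r_{x'\to x}} = e^{-\beta\,[H(x',\lambda) - H(x,\lambda)]}
\quad\Longleftrightarrow\quad
r_{x\to x'}\,\rho^{\rm eq}(x;\lambda) = r_{x'\to x}\,\rho^{\rm eq}(x';\lambda),
\qquad
\rho^{\rm eq}(x;\lambda) = \frac{e^{-\beta H(x,\lambda)}}{Z(\lambda)},

so in the Gibbs state every pair of states carries zero net probability current, and the Gibbs state is therefore stationary.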
Okay, so if you put this instantaneous equilibrium into the work, the average is now taken over the exponential, and you can use the trick that, I guess, Isaac also used, to prove that the average of this observable is the partial derivative of log Z. You have the derivative of H with respect to lambda times the exponential, and you can write this as the derivative of the exponential: if you differentiate the exponential with respect to lambda, you get the exponential again, evaluated at lambda_t, times minus beta times the derivative of H with respect to lambda; so you divide by beta and put a minus sign, and the minus beta cancels. You take the lambda-derivative out of the integral, and you recover the definition of the partition function. So the work per unit of time, the power actually, is minus kT times lambda-dot times the derivative with respect to lambda of log Z, with the natural log in this case, and minus kT log Z is what we call the free energy in statistical mechanics. So the power is lambda-dot times the derivative of the free energy with respect to lambda, okay?

And now, if you integrate over a process, W equals the integral of dt times the power, between some initial and final time. You put this in and you see that it is an exact differential: the derivative of F with respect to lambda, times the derivative of lambda with respect to t, is the total time derivative of F(lambda_t). So the work is F at the end minus F at the beginning: W equals Delta F, in a quasi-static and isothermal process; isothermal because, being quasi-static, the system is always at temperature T. And this is a result from thermodynamics that maybe you remember: the work is not an exact differential, you cannot write W as the difference between something at the end and something at the beginning, it depends on the path; but if the process is quasi-static and isothermal, then the work is just a difference of free energies, okay?

And the second law, if you like, is that W is greater than or equal to Delta F. One can prove this from the Fokker-Planck equation, or from any of these equations: the work is always bigger than Delta F, and it is only equal to Delta F in the reversible, quasi-static case. W is the work that the external agent puts into the system, so the extracted work is minus W, and this is telling you that minus Delta F is the maximum work that you can extract in a process when Delta F is negative. When you go from high free energy to low free energy, you can extract the difference as work, and if you do it reversibly, in the quasi-static limit, you extract exactly that amount of energy. This is why it is called free energy: it is the energy that you can convert into work. But always isothermally; if the process is not isothermal, things are more complicated. But this is just an illustration, yeah?
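In one chain, the calculation is

\dot{W} = \dot{\lambda}_t \int dx\, \frac{\partial H}{\partial\lambda}\,
\frac{e^{-\beta H(x,\lambda_t)}}{Z(\lambda_t)}
= -\,\frac{\dot{\lambda}_t}{\beta}\,\frac{\partial_\lambda Z(\lambda_t)}{Z(\lambda_t)}
= \dot{\lambda}_t\,\partial_\lambda F(\lambda_t),
\qquad
F(\lambda) \equiv -kT\,\ln Z(\lambda),

and, integrating over the process,

W = \int_{t_i}^{t_f} dt\,\dot{W}(t)
= \int_{t_i}^{t_f} dt\,\frac{d}{dt}F(\lambda_t)
= F(\lambda_{t_f}) - F(\lambda_{t_i}) = \Delta F.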
[How slow does the process have to be to count as quasi-static?] Very good question. It depends on the time scales. In an expansion of a gas, what do you think sets how fast you can move the piston? It is related to the relaxation time of the system. Every system described by a Fokker-Planck equation has a relaxation time, usually related to the eigenvalues: the Fokker-Planck equation has an eigenvalue equal to zero, whose eigenvector is the stationary probability, and the next eigenvalue tells you how fast the system relaxes. The change of lambda must be much slower than that relaxation.

This will be important, because in information devices there are huge time-scale separations. For instance, when you have a double-well potential, and this is interesting because we will discuss it tomorrow, you have a time scale for the relaxation within one well, which could be milliseconds or seconds, say, for a Brownian particle, depending on the particle. And then you have the time scale to jump over the barrier, and the jump is necessary for a global relaxation; this could be of the order of millennia, or the age of the universe. So you have a huge difference in time scales. When I say lambda-dot is small, I mean, in the simple case where there is none of this strange behavior, that lambda must change much more slowly than the relaxation processes in the system, and this is what allows you to be in equilibrium all the time.

[Do we need the Gibbs state to extract work?] No, no: if you go from a Gibbs state with high free energy to a Gibbs state with low free energy, of course you can get work. It is true that you cannot extract work by unitary evolution: in quantum mechanics, if the system is isolated and evolves unitarily, you cannot extract work from these states; they are called passive states, and it is because you cannot extract work in a Hamiltonian evolution. But in contact with thermal baths, you can extract work.

I mean, the simplest case: suppose a particle in a potential like that, and you go from this potential to this one; you extract work, because the particle, unless it sits exactly at the vertex, at the minimum, has some energy. Let's do it like that: your particle is here, this is an optical trap, and you change the intensity to this one; the particle loses this energy, and this is the work. Another matter is how, in an optical trap, you actually use this work; I mean, nobody knows how to extract it, but in principle it is work. And this is like an expansion. With Edgar we also built a Brownian cycle where, instead of a gas, we used a Brownian particle: the expansion is opening the potential and the compression is closing the potential, and we did this with optical traps in the laboratory. So yes: from passive states you cannot take energy using unitary evolution in quantum mechanics. [Isn't that related to the ergotropy?] Yeah, some people call it something like that, but I am not familiar with it; I think that is when you have different thermal baths, something like that. I don't know. Okay.
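To make the optical-trap example quantitative (a minimal sketch, with notation not used on the board): for a particle equilibrated in a harmonic trap H = kappa x^2 / 2 at temperature T, a sudden change of stiffness kappa_1 -> kappa_2 costs an average work

\langle W\rangle
= \Big\langle \tfrac{1}{2}\kappa_2 x^2 - \tfrac{1}{2}\kappa_1 x^2 \Big\rangle_{\rm eq}
= \frac{\kappa_2 - \kappa_1}{2}\,\langle x^2\rangle_{\rm eq}
= \frac{\kappa_2 - \kappa_1}{2}\,\frac{kT}{\kappa_1},

using equipartition, ⟨x²⟩ = kT/kappa_1. Opening the trap (kappa_2 < kappa_1) gives ⟨W⟩ < 0, work extracted on average, like an expansion; closing it costs work, like a compression.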
So, free energy is very important, and one of the things we want to do is to apply all this to what we talked about on the first day: the Maxwell demon, the Szilard engine, and so on; processes where information is important. And in processes where information is important, the main point is that they are out of equilibrium. Why?

This matters because information thermodynamics is really statistical mechanics applied to things that can store information or can manipulate information. And what is an information device? Maybe this reflection belongs at the end of the course, but it is good to advance it now. Think of a hard drive, or of DNA, which is also a system that stores information: what do you think is the most important characteristic of those devices? Reliability? And what does reliable mean in terms of physical properties? States that don't change: you need states with long lifetimes. But equilibrium is long-lived; equilibrium is eternal. You need something else. You need long-lived states, but more than one of them, because you need the 0 and 1, or G, T, A, C in the case of DNA. So you need systems with several distinct states, and those states must be long-lived.

And what else? I would add something more. Okay, you have 0 and 1 in a hard drive, G, T, A, C in DNA, but you also need to manipulate the zeros and ones of a bit: you need some mechanism to go from one state to the other. They have to be stable, but, by some mechanism, you must be able to switch between them. This happens in DNA mutations, although mutations are not so good. In DNA you also need something else: all the chemical machinery that manipulates DNA for replication, transcription, and so on must be such that the four bases, G, A, C, T, look similar to the machinery, because otherwise you would need one machinery for G, another machinery for A, and so on for all four. So the states must have some similarity. And the same for zeros and ones, which are just states of magnetic domains.

So: you need distinct states with long lifetimes, such that by manipulating some parameter you can jump from one to the other, and they must be somehow equivalent, they must be similar. And the minimal model for that is the double-well potential. Actually, if you think about this idea, what you really need is a symmetry breaking. You know what a spontaneous symmetry breaking is: the Hamiltonian is symmetric, but the system breaks the symmetry. Here the Hamiltonian is this potential, and the potential is symmetric. If you have a bunch of particles, you will see this symmetric density; but if you have just one particle, or the barrier is very large and you start with particles on the left, they will remain on the left. This system does not equilibrate; this system breaks the symmetry. It is not the symmetry breaking you are used to, which relies on many degrees of freedom, as in the Ising model, but the idea is that you need this kind of symmetry breaking. And this is, at least for me, the main characteristic of information devices.
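A rough way to see this time-scale separation (a sketch with made-up parameters, k_B = 1): compare the intra-well relaxation time of an overdamped Brownian particle in the double well U(x) = a x^4 - b x^2 with the Kramers escape time over the barrier, which grows exponentially with the barrier height.

```python
import math

# Double well U(x) = a*x**4 - b*x**2, overdamped particle with friction gamma.
# Minima at x = +/- sqrt(b/(2a)), barrier at x = 0, height dU = b^2/(4a).
# Curvatures: U''(x_min) = 4b,  |U''(0)| = 2b.
a, b, gamma = 1.0, 4.0, 1.0            # made-up demo parameters
dU = b**2 / (4 * a)
curv_min, curv_max = 4 * b, 2 * b

tau_rel = gamma / curv_min             # relaxation time inside one well
for kT in (1.0, 0.5, 0.2, 0.1):
    # Kramers escape time over the barrier (overdamped limit)
    tau_esc = 2 * math.pi * gamma / math.sqrt(curv_min * curv_max) * math.exp(dU / kT)
    print(f"kT={kT:4.2f}  tau_rel={tau_rel:.3f}  tau_esc={tau_esc:.3e}  ratio={tau_esc/tau_rel:.2e}")
```

Already at kT = 0.1 the ratio is of order 10^18: any protocol slower than tau_rel but faster than tau_esc is quasi-static within each well while never equilibrating globally, which is exactly the regime described next.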
So, to summarize: thermodynamics of information is thermodynamics applied to this type of Hamiltonian, this type of device. And in these devices you never completely equilibrate the system; you never reach a global equilibrium. You reach a state quasi-statically only in the sense of relaxation within each well, not globally, because the time to jump between wells may be the age of the universe; you can never reach that stationarity. Now we have two hugely separated time scales: one is the evolution within a well, and the other is the jumps. And if your process is in between, it is quasi-static with respect to each well, but it is not able to equilibrate the system globally.

This is what happens, for instance, in this sequence; this is an example. You have a barrier like that, and you lower this well. You know that in true equilibrium particles would have more probability to be here, because the energy is lower here than here. But of course, if you come from here, the system cannot relax; it would take the age of the universe, because you need the jumps. So the state remains like that: one half, one half. Think that you come from here, you lower this well, and then this is a state which is out of equilibrium, because in the Gibbs state this peak should be larger. It is local equilibrium, but not global equilibrium. And this is one of the features of information devices: you have non-equilibrium states like this one. So all the previous results are not valid; we have to modify them. The thermodynamics of information is precisely a theory of this type of state.

And the state can even depend on the information that we have about the system. If it is a single particle, and we measure and the particle is on the left, then suddenly we have a kind of collapse of the probability density, and this is again non-equilibrium. So you see that the state of these devices depends on the history: it depends on where it comes from, on the measurements that we do, on the information. They are mostly locally in equilibrium, but the global state depends on a lot of things: it depends on the information that we have, on whether we have measured or not, et cetera.

We would like to have a theory like the one from the beginning of the class, but for this type of state, and this is provided by something called the non-equilibrium free energy, which is only valid for systems in contact with thermal baths. So we still have this scheme, the thermal bath; we can generalize it, but let's focus on this scheme, because this scheme covers the Szilard engine as well, with the work done by the external agent. But now we don't have the identity between work and free-energy difference. Actually, the system is no longer in a Gibbs state, so we don't even have a partition function; we don't even have a free energy. But even though we don't have all these things, we can still define a free energy for non-equilibrium states. We use a script letter for it, and it is defined for a rho and for a Hamiltonian: it is a functional, a number that you get when you give me a probabilistic state of the system, like this one, and a Hamiltonian, like this one. (Here we don't include the kinetic energy: in these examples of wells and Brownian particles it does not play any role, because it is always in equilibrium.)
And I can calculate this free energy. What is a free energy? Well, remember that in thermodynamics the free energy is E minus TS. So this is going to be the energy minus T times the Shannon entropy: F(rho, H) = <H>_rho - T S(rho), with S(rho) = -k sum_x rho(x) ln rho(x). And this is the non-equilibrium free energy.

Some properties. If rho is the equilibrium state, then this is F, the equilibrium free energy, minus kT ln Z: the non-equilibrium free energy applied to equilibrium states is the usual one. Another property, which is nice: if the system is not in equilibrium, the non-equilibrium free energy is equal to the equilibrium free energy plus kT times the Kullback-Leibler distance between rho and the equilibrium state. And since the Kullback-Leibler divergence is always positive, this means the non-equilibrium free energy is always bigger than or equal to the equilibrium free energy.

And the final property, or rather the main result, the one that is going to allow us to use the non-equilibrium free energy to obtain the energetics of processes where information is important, is this: in a process, the work needed to complete the process is always bigger than the increment of non-equilibrium free energy. So the non-equilibrium free energy has, for non-equilibrium states, the same meaning as the equilibrium free energy: it tells you how much work you can extract if you go from high to low, and how much work you need to put into the system to go from low free energy to high free energy. From low to high, Delta F is positive, so you need to put in work; remember that work is positive when you put it into the system. In a motor, when you extract work, W is negative, and Delta F must, of course, be negative as well, so you need to go from high to low.

[What about a partition function for non-equilibrium states?] No, the partition function is only defined for Gibbs states. [But for non-equilibrium states we can consider the existence of some operator, say H0, such that the non-equilibrium state equals the exponential of minus beta H0 over Z0, where Z0 is the trace; here H0 is not the Hamiltonian of the system, just an operator defining the state.] Yeah, in principle you can write any rho as an exponential of minus beta times something, divided by a normalization, and you can play with that, taking logs, and maybe find some expression for the non-equilibrium free energy. But in principle this has nothing to do with the non-equilibrium free energy; maybe for some states you can do it, I don't know, I'm not sure. [The student notes that for some non-equilibrium states the trace of the exponential of minus beta H0 would equal the partition function.] Well, let's discuss this later, because it is a very technical point.

Okay, so, do you want the proof of that, or are you happy? This is the main tool of the thermodynamics of information: just applying this inequality you can explain the Szilard engine, everything. This is what we are going to use on Friday, to understand the Szilard engine and so on. Yeah.
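A one-minute numerical check of these properties on a two-level system (a sketch with arbitrary energies, and k_B = 1 so entropies are in nats):

```python
import math

kT = 1.0                       # temperature (k_B = 1)
E = [0.0, 1.0]                 # energies of a two-state system (demo values)

Z = sum(math.exp(-e / kT) for e in E)
rho_eq = [math.exp(-e / kT) / Z for e in E]
F_eq = -kT * math.log(Z)

def F_noneq(rho):
    """Non-equilibrium free energy F(rho, H) = <H>_rho - T*S(rho), k_B = 1."""
    energy = sum(p * e for p, e in zip(rho, E))
    entropy = -sum(p * math.log(p) for p in rho if p > 0)
    return energy - kT * entropy

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) in nats."""
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

rho = [0.5, 0.5]               # a non-equilibrium state: equal populations
print(f"F(rho)        = {F_noneq(rho):+.6f}")
print(f"F_eq + kT*D   = {F_eq + kT * kl(rho, rho_eq):+.6f}")   # identical
print(f"F(rho_eq)     = {F_noneq(rho_eq):+.6f}")               # equals F_eq
print(f"F_eq          = {F_eq:+.6f}")                          # the minimum
```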
This non-equilibrium free energy, I think, appeared in the literature for the first time in the 70s, in a very strange book that I never found; the title is something like the drops that come out from a faucet, or something like that, but I never found it. It is cited as the first place where this notion appeared. Then it was used in the 90s and in the 2000s, and not only for the thermodynamics of information; people have used it for a lot of different things, and in stochastic thermodynamics it is widely used. And for active systems, in principle, you can use it as well, although... tomorrow, briefly, because today the plan was to finish this lesson, but I will finish tomorrow. Tomorrow we will see molecular motors, and we will see that what most active systems have is the consumption of some fuel, like ATP or something like that. In this case the problem is more complicated; you can still do this, but it is more related to the entropy production in the environment, in the fuel. So tomorrow maybe we can discuss that.

[Is there an equivalent of a quasi-static process where you reach the equality?] Okay, I'm glad you asked this question, because nobody asked for the proof. The rigorous proof of this is actually spread over different papers; I think there is a very rigorous proof in one of the reviews I showed you, and Van den Broeck is also supposed to have a proof, but I don't know it. I will show you, not the rigorous proof, but, and this picture is in our Nature Physics review, an idea of why this is so.
To derive this, we are thinking of a process, quasi-static or not, let's discuss that in a moment, that connects two states, where Delta F is the final non-equilibrium free energy minus the initial one. To make it easier, let's discuss the case where the initial state is non-equilibrium and the final state is equilibrium, because it is easier. The idea: we represent the Hamiltonian, in this case a potential like that, and this is the state, which is clearly out of equilibrium, because it is not the exponential of minus the potential.

If you simply let the system relax, it goes to equilibrium, more or less the Boltzmann state, the Gibbs state corresponding to this potential. If you do that, what is the work and what is the heat? You just let the system relax without doing anything, so the work is zero, and there is only dissipation of heat, or the contrary, sometimes the system absorbs heat. So if you do that, you don't get any work. But what is the best you can do if you want to extract work from this state? Supposedly it has some capacity to provide energy to you. It is easy to see that the best protocol is the following. First, modify the potential, instantaneously, in such a way that the state is the equilibrium state of the new potential; you go here instantaneously. It is a kind of cheap trick: you start out of equilibrium, and after the quench the system is in equilibrium, so from now on you are in the green part of the plot, which is equilibrium, and you can use equilibrium thermodynamics, what we derived at the beginning of the class. Then you quasi-statically go from here back to here.

If you do that, I have to prove the result; it is so easy that I can prove it here if I have five minutes. There are two steps. One is the instantaneous quench; quench means something very fast. Since it is instantaneous, there is no heat, and the only work comes from the difference of energy. This is H_0, the original Hamiltonian, and this is H_rho, which must be such that rho, my initial state, is its equilibrium state: rho = exp(-beta H_rho), which means H_rho = -kT ln rho (choosing the additive constant so that Z_rho = 1). By the way, this is a nasty notation: rho is the initial state and rho_0 is the final one, because the final state is the equilibrium of H_0. Maybe such a Hamiltonian is impossible to implement, but we are just trying to make a general statement. So let's say we start with a rho; the idea is, first, we instantaneously change our Hamiltonian from H_0 to H_rho = -kT ln rho, and with this trick the state is in equilibrium; and then, quasi-statically, we go from H_rho back to H_0 again. That's it.

So what is the work? In the quench, I change from H_0 to H_rho while the state stays rho: initially I have an energy which is the average of H_0 over rho, and finally the average of H_rho over rho. And in the quasi-static step, from H_rho to H_0, everything is in equilibrium, so I can apply what we derived before: the work is the difference of the equilibrium free energies.
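Since this goes fast on the blackboard, here is the whole bookkeeping, the one-line calculation, written out:

W_{\rm quench} = \langle H_\rho - H_0\rangle_\rho,
\qquad
W_{\rm q.s.} = F_0 - F_\rho,
\qquad
F_\rho = \langle H_\rho\rangle_\rho - T S(\rho),
\;\;
F_0 = \langle H_0\rangle_{\rho_0} - T S(\rho_0),

so, summing,

W = \langle H_\rho - H_0\rangle_\rho + F_0 - \langle H_\rho\rangle_\rho + T S(\rho)
= F_0 - \big[\langle H_0\rangle_\rho - T S(\rho)\big]
= \mathcal{F}(\rho_0, H_0) - \mathcal{F}(\rho, H_0) = \Delta\mathcal{F}.

(With the choice Z_rho = 1 one even has F_rho = 0, since ⟨H_rho⟩_rho = -kT⟨ln rho⟩_rho = T S(rho).)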
So now sum the total work. F_0 is the average of H_0 over rho_0 minus T times the Shannon entropy of rho_0, because the final state is equilibrium; F_rho is the average of H_rho over rho minus T S(rho); and to these I add the quench work, the average of H_rho minus H_0 over rho. Things cancel: this cancels with this, and I am left with F_0 minus the combination "average of H_0 over rho minus T S(rho)". That combination is the initial non-equilibrium free energy, and it is non-equilibrium because, you see, I am averaging H_0 over rho, which is not its equilibrium distribution; this is the important part. And F_0, the equilibrium free energy, is the final non-equilibrium free energy, because the final state is equilibrium. So the work is Delta F. Sorry, it was a bit fast, but if somebody has problems, it is a one-line calculation.

Then two more things. First, we had derived the inequality, and I have shown you a process that achieves the equality: a protocol that goes from here to here doing the minimal work Delta F, or extracting the maximal work, if you like. And this process is reversible. And you say: my god, reversible, but everything is out of equilibrium! Well, it is operationally reversible, in the sense that if you run the protocol backwards, the backward protocol is a quasi-static change from this potential to this potential followed by a rapid quench back, the forward and the backward processes are indistinguishable, even though the states are non-equilibrium. So this is also important: the equality is reached when the process is operationally reversible, which means that the state of the system in the forward process and in the backward process are indistinguishable.

And to end, you can say: my god, so everything that works in equilibrium now works out of equilibrium, that's great! No, because of course there is something very strange here, which is the quench. I mean, it is like cheating: you start out of equilibrium, but immediately you are in equilibrium; you are out of equilibrium just for a millisecond. So you could say that this cannot be applied to a real system, because it uses that special initial state. But this is very important for information devices, because in information devices you have that huge separation of time scales, so you can have non-equilibrium states with very long lifetimes, which are essentially stationary, like the ones we have seen in the two wells. A state can be completely stable and still out of equilibrium, and then you can do this kind of process. So this scenario, although it seems very artificial, is something that you can actually implement in information devices. I don't know if this answers your question, but it is more honest.

And that's it. So, sorry, no time for questions; if you have questions, I will be around and we can discuss whatever you like, and do the exercises, and Leah will also be around. This afternoon I have to...