Hello everyone, I'm going to give today's lecture. It will be about the thermodynamic cost of measurement and erasure, and it will be split into three short sections. First I will very briefly go through the second law in the presence of information, just to give you the general picture of the two routes we follow to incorporate information into the second law. Then I will talk about memories and how to treat a memory in a thermodynamic sense. Finally, I will explain what measurement and erasure do when we apply the second law in this context.

So, basically, if we don't have information — if we just have a physical system — the second law in terms of free energy says that the work you do on your system is greater than or equal to the change of free energy of your system. This is the second law in terms of free energy, without information. Then, as you have seen, if we incorporate information — without caring how it is physically done — we just say: I have a physical system, I have some information about it, can I do better? And indeed, as you saw before, the work you have to do on your system to change its free energy is now bounded by that change of free energy minus an additional term, kT times the mutual information gained from the measurement. So this is a first way to change the second law by taking information into account. Here the mutual information really has the meaning of the information we acquire about the system, which we can use, for instance, to extract work when we change the state of the system.

A second way is to apply the second law not only to the system. In the first route we still have just a system interacting with the bath, and we take information into account as if it were known only by an external observer. Now, instead, we keep the system and the bath but add an additional system, the memory — basically the state of the demon, if you want; in this case the state of the memory is more or less equivalent. And we say: we want to apply the second law to this whole composite system. That is what we are going to do now; but first, to be able to do it, I will tell you more about memories.

So, the difference between the two routes: in the first case we don't think of the demon as a physical system; we think of the demon as an agent which has information, not attached to any physical system. We just say we have a system interacting with the bath, and to change its free energy by some amount we have to provide a work greater than or equal to that change of free energy minus an information term — because now we have some information, we don't have to pay as much, since we can use what we know about the system. That is the first way to incorporate information in the second law. The other way is to enlarge the system by incorporating the demon, that is, the memory.
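Written compactly — with ΔF_X the free-energy change of the system and I the mutual information acquired in the measurement, the notation used in the rest of the lecture — the two statements are:

\[
W \ge \Delta F_X \qquad \text{(no information)},
\]
\[
W \ge \Delta F_X - kT\, I \qquad \text{(feedback using the measurement outcome)}.
\]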
And now we can apply the standard second law, the original one, to this full composite system. Let me quickly open my slides again, sorry. OK, so these are the quantities. Here, just because I always feel it can be misleading to use an H — even if the typography is not quite the same — I am denoting the Shannon entropy by S; it only differs from the thermodynamic entropy by the Boltzmann constant k_B. It is just so we don't confuse the entropy with the Hamiltonian, with the energy. So this was a recap of the definitions.

And this is what I was telling you: to incorporate information in the second law, either we write a new version of the second law that contains information explicitly, or we apply the original second law to the full compound of system, bath and memory. So this is where we were.

Yes — can they exchange heat, the system and the memory? It doesn't have to be heat; it can also be work. These are two systems interacting. If you interact with the bath, the energy exchanged is only heat; but if you interact with another physical system, like the memory, what you exchange between system and memory can be heat or work, it can be anything. Does that answer your question?

So now, how can we think of a memory in a thermodynamic sense? What is a memory? A memory is a physical system that can adopt a certain number of meso- or macroscopic states and stay in those states for a relatively long period of time. This is what we want from a memory: we want to store information, and this information should not simply be the equilibrium state of the system — otherwise it is not very useful. What we want is information that remains stored for a long time, so that we can trust the memory, but that is not the global equilibrium; it is only a local equilibrium. For instance, in a double-well potential, encoding the zero means being in equilibrium only with respect to one harmonic well, but not with respect to the full double-well potential. That is one way to implement a memory.

This implies ergodicity breaking, because it means the system remains in one part of phase space long enough that you can make use of the memory. If I represent it in phase space, you split the full phase space into different regions, and each region encodes one informational state. So what is an informational state? It is a non-equilibrium state which is in local equilibrium and which can depend on history and on our knowledge. For instance, take a physical phase space with a double-well potential: we can say that the left side of the potential encodes the position "left" and the right side encodes the position "right". Then I have two informational states, even though the particle has a continuum of positions available; I am just coarse-graining the phase space into informational states. In the example on the slide, my informational states would be these four possibilities.
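So, to fix the notation explicitly — S for the Shannon entropy, F for the non-equilibrium free energy, I for the mutual information, and Γ_m for the phase-space region of the informational state m:

\[
S(X) = -\sum_x p(x)\,\ln p(x), \qquad
F(X) = \langle E \rangle - kT\, S(X), \qquad
I(X;M) = S(X) + S(M) - S(X,M),
\]
\[
p_m = \int_{\Gamma_m} dx\; p(x) \qquad \text{(probability of the informational state } m\text{)}.
\]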
To each informational state I can associate a probability. But we should remember that these informational states are themselves made of many different microstates. So we can look at the partition function of a given informational state: for instance, if my memory is in zero, the partition function of the memory in that state is this expression with m = 0. And then the free energy associated to a given informational state has the corresponding definition.

Yes? Yes, basically an informational state is this: you have your phase space, you divide it in a way that seems natural to you and that leads to local equilibria, and from this division you obtain a certain number of informational states. Each one is an ensemble of phase-space points: this region, for instance, is associated to the state m, which can be one, or zero — you can call it whatever you want — and all of this region is associated to the same quantities that we are going to use for the memory. Does that answer your question? Yes: I have an ensemble of microstates, I gather some of them together to build one informational state, another set of microstates to build another informational state, and so on; I am splitting the full ensemble, in this case into four informational states.

So what we want is to go from the full phase space to our informational states. When we compute the partition function of a specific informational state, we have to integrate over all the microstates inside the corresponding phase-space region. What we are really interested in is this probability p_m, and this is what constitutes our memory.

You mean this probability p_m? Well, it depends on whether the system is in equilibrium or not, because it doesn't have to be. We can write the informational state of the memory simply like that: some probability for each of the possible informational states of the memory. You can also write it in terms of phase space, which looks like this: the distribution of the microstate, given that it lies in the phase-space region associated to the specific informational state. And then — maybe this answers your question — what is important is this: take this example, where the probability of being in each informational state matches the equilibrium probability. If I suddenly change the potential, I do not change the distribution right away, but the equilibrium probability does change, you see. So my informational states still have probability one half here and one half there, but this no longer corresponds to the equilibrium weight, the phase-space volume, associated to each of the two states. And in the second situation — a typical example — you can think of a memory that is in the state one, say, but one is not the state your memory would end up in if you waited an infinite amount of time with this potential. This is why these informational states need to live long enough that we can do something with the memory.
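In formulas, the quantities attached to one informational state m — with Γ_m the corresponding phase-space region and local equilibrium assumed inside each region — are:

\[
Z_m = \int_{\Gamma_m} dx\; e^{-\beta E(x)}, \qquad
F_m = -kT \ln Z_m,
\]
and the phase-space distribution of the memory reads
\[
p(x) = p_m\,\frac{e^{-\beta E(x)}}{Z_m} = p_m\, e^{\beta\,[F_m - E(x)]} \qquad \text{for } x \in \Gamma_m .
\]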
Sorry, I didn't understand — so we cannot have the informational state as a global equilibrium? Right: it is not a global equilibrium, only a local equilibrium. This is why, for instance, a potential like this one would be a very bad idea — it actually doesn't fit the definition. If I take this potential and say that all of this region is associated to the state one and only this half to the state zero, that would not be a good informational state, because if I prepare the one, with some probability of being in this region, very quickly the system will also spread into the zero region. So I cannot use my memory; I immediately lose the information, if you want. Does that answer your question? So it is not a global equilibrium state. Is there an equilibrium? Yes, there is a local equilibrium, if that is what you mean: if you look only at this harmonic well, forgetting about the full potential, then yes, you can be in equilibrium there.

Yes, yes — and this is exactly what you want, because you want a memory that depends on what you told it before; you want it to remember some information. Well, you can compute how much time it takes to escape and explore the whole potential, but this time should be long: what you want is that the memory stays where you put it as long as possible, long enough to use it for something useful.

You can even think of the Szilard engine: you have a physical box, all positions are available, and you insert a barrier somewhere. Once the barrier is there, I can say that these are the informational states: this side is m = 0 and this side is m = 1. Because of the barrier, once the particle is trapped on one of the two sides, as long as the barrier is high enough, you are safe; you can treat this as a memory. And you see that all the microstates — the individual positions on one side — are mapped onto a single informational state.

OK, is this clear so far? So now: what is the non-equilibrium free energy of a memory? The way to compute it is to note that you can split the non-equilibrium free energy of the memory into two parts: the average of the free energies associated to the informational states, minus kT times a term which is the informational entropy. For instance, with two informational states, one and zero, this term is just minus p_0 ln p_0 minus p_1 ln p_1. So this second piece is the information associated with the informational states themselves.
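In formulas, the split just described is — with S(M) the Shannon entropy of the informational states, a compact way of writing what is on the slide:

\[
F(M) = \sum_m p_m F_m - kT\, S(M), \qquad
S(M) = -\sum_m p_m \ln p_m ,
\]
so for two informational states 0 and 1, \(S(M) = -p_0\ln p_0 - p_1\ln p_1\).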
We can derive this starting from the definition. The non-equilibrium free energy of the memory can be written as a sum over all the microstates of the energy of that microstate plus kT times the log of its probability, weighted by that probability. Then we use the definition I gave you before for the phase-space distribution of the memory: we have this p(x), and we substitute it back in. What happens is that the energy terms cancel, and you see that nothing depends on x anymore, so I can split the full phase space into the regions associated with each informational state. I get a sum over the informational states, integrating only over the region associated with each one, of p_m times F_m plus kT ln p_m. Grouping terms, this gives the average free energy of the informational states, plus the piece that becomes minus kT times the entropy of the memory — exactly the two terms above.

Now, what happens if we have a symmetric memory? In the case of the double-well potential, a symmetric memory is one where the two wells are identical. If the memory is symmetric, all the F_m are the same constant, and then the change of free energy in an operation is just kT times the entropy of the memory initially minus the entropy of the memory after the operation: ΔF(M) = kT [S(M) − S(M′)]. This holds only for a symmetric memory. And what is interesting — say we go from a memory in the state M to a memory in the state M′ — is this: if I am restoring the memory to its initial state, so I am lowering its entropy, then the first term is bigger than the second and this quantity is positive. Whereas if I am using the memory to do something useful on my system, I go from a memory that is initially in a pure state — entropy zero — to some positive entropy, and overall my ΔF is negative.

I think there was a question. Yes — that would be the case if you waited an infinite amount of time, yes. But you can still have a symmetric memory in which you prepare, say, the state 0 and it stays like that for some time; symmetric just means that the phase space has the same size for each informational state. Yes, equal size on each side. In the case of the Szilard engine, a symmetric memory corresponds to putting the barrier in the middle.

OK, any questions? So this is basically how to treat a memory. Now we can look at what happens when we do a measurement. For this I am going back to the point I mentioned at the beginning: we look at the system and the measurement apparatus together and apply the second law to them as a whole. You start with your system and your observer; initially they are uncoupled and uncorrelated.
So it means that the free energy of system plus memory is just the free energy of one plus the free energy of the other. And since they are also uncorrelated, the energy of each of them is simply an additive quantity. Then you do the measurement. The goal of the measurement is really to correlate the two, but we don't want to change the state of the system, because the system is what we want to gain information about. So the system we don't touch; we change the state of the memory according to the state of the system. In doing that, we create correlations between the two. And then we decouple them — there is no coupling term left in the Hamiltonian; system and memory are uncoupled at the end of the measurement.

So what happens? Initially, the probability of being in the informational state m and the system microstate x is just the product of the individual probabilities, because they are uncorrelated. As I was saying, the energy is additive; the same holds for the entropy, because there are no correlations; and as a result the free energy is additive too. But once we do the measurement, there are correlations: now the probability of (m, x) is the probability of the system being in the microstate x times the conditional probability of the memory being in m given that the system is in x. Of course, for an ideal measurement this conditional probability should be one, because you want perfect correlation between the state of the memory and the state of the system. The energies are still additive — this is because we decoupled the two, so in the Hamiltonian there is one term for the system and one for the memory. But the global entropy is no longer additive: the mutual information appears. Of course, creating this mutual information was the whole goal of the measurement. As a result, the global free energy is the free energy of the measurement apparatus plus the free energy of the system plus an additional term proportional to the mutual information. And remember that the state of the system has not changed, so its free energy here is exactly the same as before: the whole goal of the measurement operation was to act on the memory, not on the system.

Now we ask: how much work do I have to put in to carry out this measurement process? I just have to look at the change in the total free energy. We can do the derivation together, it's quite simple. The change in the free energy of the compound — I am calling it X and M — is the final value minus the initial one. The final one, we said, is the free energy of the memory after the measurement, plus the free energy of the system, plus kT times the mutual information between X and the memory once they are correlated. The initial one was just the additive sum F_X plus F_M. The system terms cancel, and we are left with the change in the free energy of the memory plus kT times the mutual information between the two.
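In symbols, the bookkeeping just described reads — schematically, with M the memory before and M′ the memory after the measurement:

before: \(p(x,m) = p(x)\,p(m)\), so \(F(X,M) = F(X) + F(M)\);
after: \(p(x,m) = p(x)\,p(m|x)\), so \(S(X,M') = S(X) + S(M') - I(X;M')\) and
\[
F(X,M') = F(X) + F(M') + kT\, I(X;M'),
\]
and therefore
\[
W_{\rm meas} \;\ge\; \Delta F_{\rm tot} \;=\; \Delta F(M) + kT\, I(X;M') .
\]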
OK — and be careful, this is only the minimum; here I could have written an inequality. At least, to do this operation you will have to give to the global system — system plus memory — this amount of work. And here the mutual information appears as the reduction of the entropy due to the measurement, the entropy of the joint system.

Yes, this term here? This is the free energy of system and memory, both together, the final one, after the measurement. Yes. The point is that you start with the memory — if everything is ideal, in a textbook world — in, say, the state 0. Then what we want is that the state of the memory correlates with the state of the system. Say the system has probability one half to be in 1 and one half to be in 0; then we want the memory, which was 0 initially, to end up in 1 with probability one half and in 0 with probability one half. So this state M′ is the state of the memory once it encodes the information about the system. And yes, that is why we say "with probability one half this or that": on a single realization you will have, for instance, x equal to 1 and then m equal to 1, if the correlation is good.

Can the system be measured without altering it? Yes, this is the most ideal scenario, and indeed in the classical case it is possible. And yes, on the blackboard I made a mistake there.

OK, one more question — you are asking about this term? About the entropy of the system, after the measurement? The thing is that your system is not changed by the measurement, so you can still say that the entropy of your system is the same from beginning to end. But now we have some information about it. So if we somehow forget about the memory and just keep this information, you can always call this the new entropy of your system — but still, the state of the system has not changed, you see; it is just that you gained some information about it. It is more correct to think that the entropy of the full system — system plus memory — has decreased, because you correlated one with the other. Is there another question?

OK. So, as I was telling you, this is just a minimum: the work we will have to perform on the system and the measuring apparatus will be at least equal to this quantity. So now, what if we include the erasure in the process? Because, as we said before, if we just do the measurement and stop there, it seems very good: we gained some information, we can do things with it and everything. But it is not very fair, because we still have to put the state of the memory back to its initial state, so that we can start again — for instance, use this memory again to do another measurement.
So of course, once you have done the measurement, you can use this mutual information to apply a feedback loop: to do an operation on your system conditioned on the output of your measurement, that is, on the state of the memory. That would be this step here. This feedback does not affect the memory at all: you just use the state of the memory, but you don't touch it. And then we need to do the erasure: we take the memory and put it back in the initial state it had at the beginning.

If we look at the free energy at this instant in time, after the feedback, it is just the sum of the two individual free energies, because we have used the mutual information to implement the feedback; at this point there is no correlation left. The erasure then acts on the memory, and we end up with a free energy that is again additive: the free energy of the system — the same as before the erasure — plus the initial free energy of the memory. And you see, it makes sense: this quantity here is the same as the initial one; we just went back to the initial configuration of the memory. In doing that, the total change of free energy is minus the free-energy change of the memory — minus, because now we start from the state the memory reached after the measurement and go back to the initial one.

What we can do now is to sum the work I had to put in to do the measurement plus the work I had to put in to do the erasure. If you sum these two quantities, you obtain exactly kT times the mutual information. So this is the cost of the whole measurement-plus-erasure protocol. And it is very nice, and it makes sense, because this quantity is exactly the maximum amount of work I can gain by using the information I obtained from the measurement. At best, I will have to give this much to measure and erase, and I will be able to extract from my system at most this much in the best situation; if I don't do everything right, I extract less.

Yes? So, the feedback: now you have a memory and a system that are correlated, so you basically know the state of your system, and what you want is, based on this knowledge, to extract energy from the system. It is the same as in the Szilard engine. There, the feedback would be this: you have your barrier here, your particle there; if you have a very good measurement, you know your particle is here. Then the feedback is: if my particle is on this side, I let the barrier move this way, I do this expansion, and during the expansion I can extract work. And if the particle is on the other side, I let the barrier move the other way, and I again extract some work. So this is the feedback: it depends on the measurement outcome. And this is the step in which you can extract work from the global system of system plus memory, if you want. At maximum it can be this amount, because you start from a state that has this total free energy, and — well, you don't want to change the free energy of your system, although we can relax this condition later, as you will see.
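Collecting the budget of the whole protocol as described so far — a schematic summary; the extraction bound for the feedback step is derived more carefully in a moment:

\[
W_{\rm meas} \ge \Delta F(M) + kT\, I, \qquad
W_{\rm eras} \ge F(M) - F(M') = -\,\Delta F(M),
\]
\[
W_{\rm meas} + W_{\rm eras} \;\ge\; kT\, I ,
\]
which is exactly the maximum work \(kT\, I\) that the feedback can extract from the correlations.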
And at least we don't want to change the state of the memory, because we want to extract energy from the system itself, not from the memory; so that part stays constant. So what can you do in order to lower the total free energy? Under this condition, the only thing you can do is to lower the mutual-information term — that part I can always take and extract.

Yes, it will coincide, that's true — when we do this expansion we also act on the particle. If I say that the feedback — here, this is the case in which I only want to extract work from the mutual information and not from the system itself — OK, I agree with you, but it will not change anything in this case, because the energy is going to be the same. Maybe the next slide will help.

On the next slide, what I write is that I apply the second law only to the feedback step. So: the work I do on the system has to be greater than the change of free energy during the feedback, and this equals the change of free energy of the system itself — which is also what you were saying — plus the change of free energy of the memory, which is zero, plus this mutual-information term. If I am using all the information I have about the system, it means that at the end the mutual information between the system and the memory — and here, you see, I allow the state of the system to change, which is why it carries a prime — should be equal to zero, if I am trying to extract as much as possible. And then I obtain that the work I have to perform on the system is the change of free energy of X minus kT times the mutual information. So here we take into account that X can be modified. During the measurement X never changes — there, also, X was not changing — here I am looking only at the feedback itself.

No, not the erasure. It's just that, if I come back to the beginning: we had a system and we wanted to incorporate information in the second law. Either we look at information as some external thing that I have, or I take the compound of the memory plus my system, and then I first do a physical measurement and then use this measurement to do a feedback. And it is this feedback step that I want to look at, because in the end what I want to say is: I have a system, I have some information about it — the measurement has already been done, that is what is implied — and I ask how much work I can extract, or how much work I have to perform on the system, to change its free energy by a given quantity, by ΔF of the system.

OK, let me write it — well, this is what is written here, yes. You have your system and memory, you do this measurement, and then you do the feedback — here you split them — and the feedback can indeed change the state of the system, but it does not change the state of the memory; and then you put the memory back in its initial state. So this here is the measurement, this here is the feedback loop, and this here is the erasure. — But on the previous slide it was written as if the system did not change. — Ah yes, exactly: that was the measurement.
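Stated explicitly, the bound for the feedback step alone — writing X′ for the state of the system after the feedback and I′ for the remaining mutual information:

\[
W_{\rm fb} \;\ge\; \Delta F(X) + \underbrace{\Delta F(M)}_{=\,0} + kT\,(I' - I),
\]
and if the feedback uses all the available information, \(I' = 0\), so
\[
W_{\rm fb} \;\ge\; \Delta F(X) - kT\, I ,
\]
which is the information-modified second law quoted at the beginning of the lecture.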
I mean, the feedback doesn't change the memory; the system changes, but this is within the cycle. Yes, yes, indeed — at some point we would need to send S′ back to S with another operation. Well, you can, but here I prefer to think in terms of just staying at this S′. If you think of this as a cycle, a measurement engine for instance, then yes; but here, if you just want to apply the second law to a system on which you have information, what you want is to apply the second law to this step. Because you want to say: I am applying the second law to a system about which I have some information, and I want to see, during this feedback process, what happens given that information. I don't require that the system at the end is the same as initially; I just look at a system about which I have information and see what I can do with it and what quantities enter the second law. This is why I like to write it in these terms: we are applying the standard, typical second law only to the feedback part, you see, and in this version we don't restrict the final state of the system at all. Before, you had X returning to X, because you can also apply this to a cycle, but here we can be more general: what is the second law in the case in which I have information on my system?

Yes — how do you decouple the two, you mean? That will depend on the physical implementation, of course. But maybe you should not think of it in those terms; you should think that we have some mutual information, and this mutual information has a thermodynamic counterpart: it lowers the total entropy of the system plus measuring apparatus. When you have lower entropy, you can extract work. So you know that by removing this mutual information you can extract work. We don't even need to say how you will do it; we just say that you started with a total entropy at a given value and you increase it by removing the mutual information. Because if you remove the mutual information, you are losing the information about the correlation, and losing information — just as doing the expansion is basically losing information about where the particle is — is something from which you can extract work. Here it is very theoretical: we just say we have this knowledge, from which we know that we can theoretically extract some work.

But to make it concrete, think of the Szilard engine again. Take the joint state of system and memory after the correlation: if it is a perfect measurement, it will be (1,1) or (0,0), only these two possibilities — this entry is the state of the system, this one the state of the memory. Now, if I do the expansion, it means my memory tells me that my particle is on this side, and in the feedback I do the expansion accordingly.
After the feedback, my memory still tells me that my particle was initially on this side, but now the particle is in state one with probability one half and in state zero with probability one half — this is how the system has evolved, because I have done this expansion. And the same if I started with the memory telling me the particle is on the other side and I do the corresponding expansion: the memory hasn't changed, but the particle is now on either side with equal probability. So I end up with a joint state that I can actually factorize — it is not very good to write it like that; this part is the state of the memory — and I can say that, no matter the state of the memory, the particle in the end has equal probability to be on each side. You see, this is a typical example of a feedback in which at the end you destroy the whole correlation: there is no mutual information anymore, because your memory doesn't tell you anything about the state of the system after the expansion.

Yes, yes — because this is all we want in the end: you have a system, you have some information about it, and you ask what you can do with it. The whole point of this story was to apply the second law to a system about which I have information. We added the first step, the measurement, to say: now I have information on my system. But given a system and information about it, I want to see what the second law looks like in this process. So this feedback is actually very important, because this is what we want to know: if I have a system and information about it, how much work do I have to do on the system to change its free energy by a given quantity? This is what we are interested in, basically, and this is what is written there: applying the second law to a system about which you have information tells you that the work you put into the system is at least the change of its free energy minus kT times the mutual information. So it is another way to find this modified second law in the case of information. Does that answer your question?

Yes — here we are assuming a perfect measurement, everything ideal. Yes, for sure. If it's not, you will have other possibilities in the joint state, but if your measurement is not the worst in the world, these will still be the states with the largest probabilities; you will have others, but with small probabilities — they contribute a little, but not so much. Is there any other question?

OK, I should not have written it this way, because it is not very clear: when I write it like this, it means x equal to one, m equal to one here, and x equal to zero, m equal to zero there. And here I am saying that my memory state is one — maybe it would have been clearer to write it like that: my memory state is one, and this is the state of the system. This is to say that after the expansion I have probability one half that my particle is here and one half that it is there; it is just the average, the new state of the system — the x prime, if you want. Well, I can say x prime: it is going to be a random variable with probability one half to be in one and one half to be in zero.
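As a worked version of this Szilard example — perfect measurement, barrier in the middle, everything counted at the coarse-grained level of left/right:

before the feedback: \(p(x,m) = \tfrac12\,\delta_{x,m}\), so \(S(X) = S(M) = S(X,M) = \ln 2\) and \(I = \ln 2\);
after the conditional expansion: \(p(x',m) = \tfrac14\) for all four combinations, so \(I' = 0\).
The feedback bound then gives \(W_{\rm fb} \ge \Delta F(X) - kT\ln 2\); since the isothermal expansion returns the particle's distribution to the full box, \(\Delta F(X) = 0\) and up to \(kT\ln 2\) of work can be extracted — exactly the measurement-plus-erasure cost \(kT\, I\).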
Yes — and also to gain work out of it. It seems very natural to want to expand like that, because you can also extract work in this way. Indeed, it will bring us back to the initial state, but the main point was to extract work in this game. Is there any other question? So in the end, that was fast. Yes, more or less. Who is following? OK, if there is no other question, Juan is going to continue with the next lesson.

You see, it was also — maybe in the previous slide, no, in this one, here — the most important thing is the fact that the entropy of two systems can be written as the entropy of each one minus the mutual information, and this joint entropy is always smaller than the sum. Why? Because here you have correlations: X is telling you things about M′, and M′ is telling you things about X. And this is precisely the mutual information. So look: it is just this equation, plus the second law that we wrote, W greater than or equal to ΔF. This is enough to analyze the measurement, the erasure and everything.

There was a question? No — I was about to say... Ah, OK, sorry. What did you say? Microphone. Ah yes, well, I wanted to say that this scheme is in the papers by Sagawa, and it is even implicit in Bennett's papers in the 1970s, when he wrote about the cost — you remember the story? Maxwell wrote about the demon in 1867, then Szilard introduced the Szilard engine in 1929, and for 50 years there was nothing really new. For 50 years people thought that to restore the second law there must be a cost in the measurement, because of course nobody thought of erasure — I mean, who cares about erasure; this is what is so surprising — and this came in the 70s. And now we have this scheme in which the work that compensates the feedback can be paid here, in the measurement, or here, in the erasure, and actually everything is explained. This is what happens in science: things that look very attractive, when you solve them or when you understand them, become — well, not trivial, but still this is interesting.

But look at what is going on. What is going on is that you create a correlation that decreases the entropy and consequently increases the free energy. And then in the feedback, now that you have this extra free energy, you can exploit it — free energy is work that you can extract. So the whole story of the Szilard engine is that there is a creation of correlations and then an exploitation of those correlations in the feedback. And actually, when we wrote the review for Nature Physics with Jordan Horowitz and Takahiro Sagawa — Sagawa is the one who really did this in a very formal way — we were discussing what information is. This was actually my reflection for the last day: what is information, from a physical or conceptual point of view?
And I always insist that it is this — what Leah mentioned at the beginning, these states that have a long lifetime and so on — but Sagawa and Jordan say: no, information is correlations, and the Szilard engine is just creating correlations and then exploiting them, and that is the whole story. I don't think so; I think there are two aspects of information. One is this one: one aspect of information is that it is the correlation between two systems, and in this sense correlation is free energy — this is the main message, and then you can explain everything with that. The other aspect is these long-lived states and so on, which I think is also another important and crucial aspect of information.

But this is on average — yes, everything here is on average. And locally, for one realization? By local, you mean a single realization? OK, for a single realization we have fluctuation theorems, which unfortunately we cannot cover, but you have the references. You can have fluctuations in the work, in the heat, even in the information. And there are theorems telling you that, of course, if you look at single trajectories, you can have violations of the second law; the second law is only restored on average. For instance, with a Brownian particle you don't need Maxwell demons or anything: take a Brownian particle in a gravitational field. The particle can go up, and if you compute the entropy change, the entropy of the universe decreases — but of course this decrease is of order k, the Boltzmann constant. So in single realizations you can always expect decreases of entropy of order k. In the Szilard engine — as somebody said the first day — if you don't measure and you just proceed with the Szilard engine and you are lucky, sometimes you are lucky and you can gain kT ln 2, even two or three times. If you run the Szilard engine, say, ten times — not the original one, because in the original one, if you are wrong, you cannot compress; but in the version with the Brownian particle you can manage not to lose so much when you are wrong — then even over ten runs you can decrease the entropy of the universe, if you are lucky.

No — the measurement creates correlations. The feedback exploits those correlations, and, as we have said, the optimal feedback is the one that destroys all the correlations. And the erasure doesn't have anything to do with correlations, because the erasure only concerns the demon. The erasure is what Leah explained at the beginning: it is just the change of the state of the memory from a state that can take two values — so it has an entropy of one bit — to a state with only one possible value. That is the typical erasure, the Landauer erasure. So the Landauer erasure has nothing to do with correlations; it is just restoring the state of the memory. So thank you, Leah. And we have 15 minutes to continue.
So what we have seen so far is the basic aspect of this story of the Szilard engine, information and thermodynamics — the explanation of the Szilard engine, let's say. In the last ten years or so, there has been an attempt to interpret as information machines certain molecular motors, protein motors that are important in biophysics; in nanophysics people are trying to build such motors. This is different: the Szilard engine is just a physical system, very interesting, operated by an external agent, whereas the idea with these motors is to make motors that are autonomous, like in the cell. In our cells there are a lot of motors that work with ATP. Probably everybody knows what ATP and ADP are? ATP is a molecule; it is like the fuel of the cell. In the cell you have a lot of molecules that do things — transport, pumping ions from inside to outside, or reading the DNA. You have the DNA, and you will have a course in the third week on RNA: to read the information from DNA into RNA you need another motor that runs along and reads. All these machines use a fuel, and this fuel in the cell is ATP, adenosine triphosphate — well, that is the acronym of the molecule. It is a fuel because, like gasoline, you take it and degrade it: gasoline we burn, while ATP is degraded in a chemical reaction, and this can be used to perform tasks — to move against a force, to compress DNA into a virus for instance, things like that.

So over the two days, tomorrow, Wednesday and Thursday, I will explain a framework that has been developed to try to interpret these motors as information machines. When you have an autonomous system — well, we will also need to recall lesson three; sorry that we have to do these jumps. There I explained Markov chains, and this is useful for many of you even if you are not going to study information and things like that: whenever you want to model a physical system with discrete states — and many systems, even if they have a Hamiltonian, in the end, if you want to highlight some property, discrete models are great. If you can explain things with a very simple model, that is always better than a very realistic, complicated one. Of course, if you have a computer you can go to models with 20 parameters and everything, but if you can explain things with four states and four parameters, that is also great.

So, by definition, these are the transition rates and this is the current, and we also explained detailed balance: the ratio of the rates is the exponential of minus beta times the final energy minus the initial energy.
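For reference, in the notation of a discrete-state Markov chain — writing \(\gamma_{ij}\) for the rate of the jump \(i \to j\) and \(p_i\) for the occupation probabilities:

\[
J_{ij} = p_i\,\gamma_{ij} - p_j\,\gamma_{ji} \quad \text{(current)}, \qquad
\frac{\gamma_{ij}}{\gamma_{ji}} = e^{-\beta\,(E_j - E_i)} \quad \text{(detailed balance)}.
\]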
And from this we also proved that the entropy of the universe — which, for a system in contact with a bath at temperature T that generates these transitions, is the Shannon entropy of the system plus the entropy of the environment — can be written as a sum over all transitions from i to j of a kind of irreversibility of the transitions, times the Boltzmann constant: the current times the logarithm of the number of jumps from i to j divided by the number of jumps from j to i. And this is of course bigger than zero, which is a kind of second law. So we proved — not the second law itself; we proved that detailed balance is compatible with the second law, or that detailed balance implies the second law. This is what we did on Friday, I think.

And now we turn to molecular motors. Here you can still have an external agent and so on — we also explained the driving and the total entropy production — but in molecular motors you usually have the system in contact with a bath, and possibly an external agent performing some work. Remember, the work was the sum over the states of p_i times the derivative of the energy with respect to time, because the agent changes the levels; whereas heat is the change in energy due to the evolution of the occupations. So here you have your bath, but you also have the fuel: cells live in an environment with ATP, so you also have particle reservoirs.

Remember the list of sources of non-equilibrium I gave when we studied detailed balance: I said we prove detailed balance by imposing that the steady state is thermal, and that once you have this you can go beyond equilibrium using sources of non-equilibrium. We studied three. One was driving; the second was temperature differences — transitions coupled to different thermal baths, which we also studied in an explicit example; and the third was non-equilibrium chemostats. Non-equilibrium thermostats are, for instance, two baths at different temperature; here instead you have chemostats, and having chemostats means you have ATP, ADP and so on. So one transition, for instance from i to j, can be mediated — we use this word — by a chemical reaction: if the system is a protein in one conformation, the protein binds ATP and changes conformation while degrading the ATP into ADP. You can write this as a chemical reaction, and i and j can be anything — spatial states, for instance, for a protein motor trying to move. And the fuel can be ATP or something else: in some synthetic chemical motors — David Leigh is making motors, and think of Feringa and Stoddart, who won the Nobel Prize in 2016 — it can be light, it can be anything. Let's take this example.

So how is detailed balance modified? It is modified in this way. Here I have the energy — actually, the real detailed-balance condition is always with the free energy after minus the free energy before, because we are considering meso-states; but we usually neglect the entropic part (remember, the free energy is E minus TS), or we consider that this internal entropy does not change, so the only thing that changes is the energy, and we write E_j minus E_i. But here, when I have this exchange of particles, the environment does change, and we have to add mu of ATP minus mu of ADP. So let's call this ΔE — the jump in energy — and this Δμ: the chemical potential of ATP minus the chemical potential of ADP, which is positive in normal conditions.
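In formulas, the results just recalled — the entropy production of the Markov chain, the work done by the agent, and the detailed-balance condition modified by the chemostat — read, schematically:

\[
\dot S_{\rm univ} \;=\; k \sum_{i<j} \big(p_i\gamma_{ij} - p_j\gamma_{ji}\big)\,
      \ln\frac{p_i\gamma_{ij}}{p_j\gamma_{ji}} \;\ge\; 0, \qquad
\dot W \;=\; \sum_i p_i\,\frac{\partial E_i}{\partial t},
\]
\[
\frac{\gamma_{ij}}{\gamma_{ji}} \;=\; e^{-\beta\,(E_j - E_i - \Delta\mu)}, \qquad
\Delta\mu = \mu_{\rm ATP} - \mu_{\rm ADP} > 0,
\]
for a transition \(i \to j\) in which one ATP molecule is consumed.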
This means that even though I need an energy, extracted from the fluctuations, to jump from i to j, the fuel helps me: the minus times the minus makes a plus, the Δμ is positive, and it biases the transition toward the uphill direction. And this is the basis of many motors: the fuel helps them move uphill in the energy landscape.

About the chemical potential, this is also interesting. When I started studying all these motors — or maybe you have studied biology — you are told that ATP has a lot of energy, because everybody tells you that ATP is the fuel of the cell. So my picture was that ATP has strong bonds, you break these bonds and you release a lot of energy — and it's not true. Remember that mu_ATP, the chemical potential, is the free energy per particle, so the energy of a molecule minus T times the entropy per molecule; and this Δμ is around 14 kT in the human body, but that is because of the pH — if the pH changes, this changes — because the energy itself is not so different: the energy of ATP minus the energy of ADP, I don't know exactly how much it is, I think it's 3 kT or 4 kT. So most of it comes from the entropy.

And this bias is super important. Many motors in biology, even though they are microscopic — so they should jump backward and forward, because you always have the two transitions; remember, if the temperature is infinite this ratio is one, so at high temperature, whenever you can go from i to j you can also go from j to i — are strongly biased, because Δμ is super big: 14 kT divided by kT is 14, and an exponential of minus 14 is very small, about 10^{-6}. And this means that ATP can bias things. When you look in the laboratory at one of these motors — there are many experiments with motors — you usually only see the motor going in one direction; it is very hard to push the motor backwards, you have to increase ΔE a lot to see the motor going backwards.

So tomorrow we will take this, we will repeat the calculation for this modified detailed-balance condition, and we will introduce something called the chemical work; we will have to revisit the definition of heat precisely, and we will learn a lot about motors — which, as I said, is useful even beyond the thermodynamics of information. And then we will try to use mutual information to study these motors as information motors. Thank you.