Right, so welcome everybody to the sixth lecture on stochastic thermodynamics. As usual, before starting I leave room for questions, if there are any. If not, we go on. Okay, since there are no questions, today's lecture, as I anticipated yesterday, will be about information demons. In particular, I'll discuss methods of extracting work that do not rely on having two thermal baths, as in heat engines, which is what we discussed yesterday, and that do not rely on applying a force directly to the system, as when you do work by lifting a weight. There will be a different pathway to extracting work, which is by using information about a physical system. Before starting, I give you a good reference to read, which is the review paper in Nature Physics that I show at the top of my slide. It is a summary of many advances in stochastic thermodynamics for systems involving information, so I highly recommend you read it. In particular, today I will present three classic paradigms of information thermodynamics. One is the Maxwell demon, which you probably know very well; it is really a classic in thermodynamics. The second, which I show at the bottom left, is the Szilard engine, a smaller version of the Maxwell demon in which the demon operates not between two thermal baths but in a single thermal bath with a single molecule. It is really a key thought experiment in information thermodynamics, which uses the fewest possible elements to extract work from information. This is really, really important for the field. I'll also relate all this to Landauer's principle, which is another very important result in information thermodynamics, later on. Once I explain these, I will go into a framework of information thermodynamics that tries to unify all these ideas, and I will also discuss fluctuations in these systems. So, the Maxwell demon: it is really a classic in thermodynamics.
This was proposed by James Clerk Maxwell in 1871, so a really long time ago he was already thinking about this. It was published in his book Theory of Heat, which I highly recommend you look at. The idea of the Maxwell demon is a device — a thought experiment, back then — with which one can apparently reverse the thermodynamic arrow of time. Imagine you put in contact two baths of molecules at different temperatures: say the bath on the left is at temperature T_A, which is colder than the one on the right at temperature T_B. I can illustrate this with the velocities of the particles: as you know from statistical mechanics, in a bath at higher temperature the magnitudes of the molecular velocities are typically larger than in the colder bath. So what do you expect when you put a hot bath in contact with a cold bath? You expect the temperatures to equilibrate at some intermediate temperature between the initial ones — this is the zeroth law of thermodynamics — and you expect a heat flow, in this case from right to left, that is, from the hot reservoir to the cold reservoir. This is what you expect. But what if you now think of a tiny demon, a small intelligent being that can control a tiny door between these containers and open and close it selectively in a very clever way? The clever way is as follows: when the demon sees that a molecule in the cold bath on the left has an instantaneous velocity higher than the average, it opens the gate — because that particular molecule is hotter than the average of the cold bath. In this way it lets hot particles move to the hot bath and, vice versa, cold particles move to the cold bath. It sorts fast and slow molecules, as I show in my handwriting here.
In this way, the demon can achieve that the bath that was cold gets colder and the bath that was hot gets hotter. This is in apparent contradiction with the second law, because it requires a heat flow in the direction opposite to the temperature gradient. And it can be done thanks to the fact that the demon uses information: it is acquiring information and changing the dynamics of the system every time it opens and closes the gate. In this field we call this control, or feedback control — the demon is doing feedback on the system. Okay, from the statistical mechanics point of view, this can be seen as follows: in the two thermal baths you have Maxwellian velocity distributions, and the demon opens and closes the gate when it sees rare events in these baths — for example, in the cold bath, molecules whose velocity lies in this shaded area above the average, and vice versa in the other bath. In this way the two temperatures are pushed in opposite directions. So this is really a classic, and it shows that using information you can produce events that cannot occur spontaneously according to the second law. The next paradigm was introduced by Leo Szilard, and it is a one-molecule version of the Maxwell demon. There you have many, many molecules, but now we think of a demon operating on a single molecule. We have a container with only one particle, and every time the particle hits the walls of the container, it equilibrates with the temperature of a thermal bath surrounding the container. Now imagine a demon that measures where the particle is: in the left or the right half of the box. The demon can use this information cleverly in the following way: if it sees the particle on the left, as I show in the top, it puts a piston in the right half and lets the piston be pushed by the particle, so that this is like the expansion of a one-molecule gas from volume V/2 to volume V.
And vice versa, you do the same when the particle is on the right half of the container. So you see that the demon can cleverly attach a rope to the piston and use the expansion of the gas to lift a weight of a given mass — of course a very small mass, because we are talking about one degree of freedom, which has a characteristic energy of k_BT, so you can only lift a tiny, tiny mass. At the end of the operation the demon completes a cycle: when the gas has expanded reversibly, you return to the initial state. In this sense it can perform cycles and extract work from a single thermal bath in a cyclic manner, which goes against the standard formulation of the second law — it looks like a paradox. What do we get if we treat this system as an ideal gas? The work done on the system is W = -∫ P dV; this is just classical thermodynamics. I use the ideal gas law for one molecule, P = k_BT/V, integrate from V/2 to V, and I get that the work done on the system per cycle is W = -k_BT ln 2, which is negative — ln 2 is positive, so -k_BT ln 2 is negative. So I am cyclically extracting work from a single bath, and this is really a big thing if you think about it: apparently forbidden by the second law. What came next was the analysis of what the demon is doing: the demon is acquiring information, it measures one bit, and this bit carries an amount of information corresponding exactly to k_BT ln 2. If one takes this information into account in the entropy or energy balance, one recovers the second law — I am anticipating what comes later. Okay, I see questions in the chat: "What are W_L and W_R?" Okay — please, if you have a question, I really encourage you to ask it by voice, because it is difficult for me to follow the chat while showing the slides. W_L is the work that you do given that the particle was measured to be on the left side.
This is the work done in the top branch, and W_R is the work done in the bottom branch. Of course, half of the time you will take the top branch and half of the time the bottom branch. That's why I have W_L here, but W_L is equal to W_R. The average work over many cycles is one half of W_L, the work done when you measure the particle on the left, plus one half of W_R, the work done when you measure the particle on the right. Okay? Does that answer your question, Alejandro? "Yeah, thanks." Okay. "I have a question." Please, please go ahead. "Yeah, so it is like a two-stroke engine, right?" No, no, this is different: a two-stroke engine has compression and expansion. Here there is a demon, and it looks at the box and sees whether the particle is in the left or the right half; if it is in the left half, it puts a piston in the right half and lets it expand. So there is only expansion — there is no compression. It is just measurement and expansion; it is not a two-stroke engine. What you have to understand is that the top branch and the bottom branch are two different events that can happen: when the demon measures, the molecule can be in the left half or in the right half, so there are two possible pathways in the same cycle. "Okay. And it is an isothermal expansion, the temperature remains constant?" Yes, it is an isothermal expansion, exactly. That's why I am using this formula here — I don't know if you see my mouse, but I am saying that pressure times volume is k_BT. This is the ideal gas law in isothermal conditions for a gas of one molecule. Okay. "Professor, is this like what happens in cells, where we have gates that can be activated or not in very specific ways — could a gate play the role of this demon?" Well, that is a bold thing to say. Of course, gates can be open or closed.
But most of the time — for example, in the cases I explained in my course, like ion channels — this opening and closing is due to a fluctuation. It is not due to a demon that is measuring and doing something very clever depending on the outcome of a measurement on a molecule. The demon is a thought experiment, something that processes information, so I would not expect the cell to be doing this in general; I would expect rather that things happen because of fluctuations. "But at some point we are collecting more ions inside the cell while the potential inside the cell differs from the environment." Yes. So in a sense the cell applies feedback, because, for instance, the opening and closing of a channel gate can depend on the concentration on one side of the membrane. In that sense you can see it as feedback, but not as a Szilard engine, which is the very particular construction I am showing. I agree with you that there is feedback in cells: for example, in hearing, the ion channels in the ear open and close with different probability when the calcium concentration is different. So there is feedback in biology, yes, in the broad sense; it is not that there are Szilard engines in biology operating on a box — it is something more complex in general. "Some of the students find it strange that you can apply the ideal gas law to a gas of just one particle. I also do — I also find it strange." Yeah, it is really an approximation, but it works because you are looking at a quasi-static transformation. If you think of the piston moving very, very slowly, with a mass much larger than that of the particle, then this is exactly what you get. Actually, there is a key assumption: I am saying the expansion is reversible, and reversible means infinitely slow, which is what was just said.
So if you want to use the ideal gas law, you are assuming that at all intermediate times you are in equilibrium, which means you need to go infinitely slowly. This demon is really an idealization, and the result is valid for quasi-static driving — that is a key point. You also assume the particle is a point, which it is not: ideal gases means point particles with no volume. There have been many corrections to this, including excluded-volume effects and many-particle versions, but what I am showing is really the most idealized situation you can find. Okay, I am seeing a lot of questions in the chat. "There is no temperature for one particle." There is no temperature for one particle by itself, but here we are saying there is a bath surrounding this box, and this bath is filled with some 10^23 molecules. The single particle collides with the walls and equilibrates with the temperature of the environment, which is a real temperature. This is very important: it is different from what I explained yesterday, where the temperature of a particle was an effective temperature. It is very good that there is a nice conversation going on in the chat, but please, you can also ask questions live. "I had another question: will the efficiency of the Szilard engine come out to be one?" Okay — well, you cannot define efficiency in the way I explained yesterday, because I introduced efficiency as the ratio between work and heat. Here you could say yes, there is a heat absorption of k_BT ln 2, and a work extraction of k_BT ln 2, so the ratio would be one, but this process is a big idealization. Basically, we do not think about efficiency here, because there is no notion of two thermal baths, as we have in a heat engine.
There it makes sense to talk about efficiency, because there is an intake of heat: part of the heat from the hot bath becomes useful work, and part is dissipated into the cold bath. Here it is different — as you say, isothermal systems are not bounded by the Carnot efficiency; they are bounded by one. "Then why are we saying that this violates the second law?" Because one of the corollaries of the second law is that you cannot extract work in a cyclic process from a single thermal bath; that was proved by Clausius. "But it's not cyclic." Well, yes, it is cyclic: on the left you have the particle in the box occupying the full volume. The demon inserts a piston, lets it expand from volume V/2 to the full volume V, and once the expansion is done the demon takes the piston out — with an ideal piston, inserting and removing it costs no work. So the demon leaves the final state the same as the initial one. "Okay, okay, thanks." Okay. Right, I think I have covered the other questions — yes, actually, I explained this before. Okay, let me go ahead. This is a thought experiment that is key for stochastic thermodynamics, and we could devote a whole lecture just to it, but I would like to advance a little. The first thing I want to discuss is this: so far, the probability for the particle to be on the left or on the right is one half — a perfectly symmetric situation. But you could do this with a bias, like a biased coin: instead of half and half, you could have 70% and 30%. So I wonder whether a bias in the state of the system, or in the measurement, can lead to more work extraction. As I show in the top figure, there is a probability p to be in the left half and probability 1-p to be in the right half, and this can be arranged, for example, by inserting the barrier of the piston not in the middle but at an asymmetric position.
You repeat the same analysis and you find that the work on average is minus k_BT times the Shannon entropy of a binary random variable with probability p: W = -k_BT H(p), with H(p) = -p ln p - (1-p) ln(1-p). This function is concave and has its maximum at p = 1/2. So for any asymmetric Szilard engine you get an entropy H(p), and hence an extracted work, that is less than k_BT ln 2. This means the symmetric Szilard engine is the one that extracts the most work from a single measured bit. With many particles you can also generalize this — actually, with Matteo we wrote a paper on this this year, and there is a lot of research on it; it is much less trivial than what I am showing. Okay, I am just starting with the basics, the single-particle Szilard engine. Another related key topic is Landauer's principle, which is related in the sense that it is the opposite process: the erasure of a bit. Imagine you have a particle in a double well, or in two halves of a physical system, and we think of it as a physical bit: the state of the bit is zero or one, depending on which minimum the particle is close to. If the particle is close to the left minimum, we say the bit is zero; if it is close to the right minimum, the bit is one. This physical implementation of a bit was first worked out in this way, as realized by Charles Bennett. The process I am sketching in this figure is the erasure of a bit. What you can do, for example, is tilt the potential, raise the minimum of the right well, and do it very, very slowly, so that the process is reversible — I will show you experiments later. You can do it in such a way that at the end of the process 100% of the particles — because you have to think of this as a system with fluctuations, imagine a colloidal particle — end up in the left well, which is analogous to erasing a bit.
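As a quick numerical check of the asymmetric Szilard engine just discussed (a sketch I am adding, with illustrative constants; not part of the lecture slides): the extracted work k_BT·H(p) is maximal at p = 1/2, where it equals k_BT ln 2.

```python
import numpy as np

def binary_entropy(p):
    """Shannon entropy H(p) = -p ln p - (1-p) ln(1-p), in nats."""
    p = np.asarray(p, dtype=float)
    # define 0 ln 0 = 0 by clipping away from exact 0 and 1
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

kB, T = 1.380649e-23, 300.0          # J/K, room temperature

p_grid = np.linspace(0.01, 0.99, 981)
work_extracted = kB * T * binary_entropy(p_grid)   # -<W> for each bias p

p_star = p_grid[np.argmax(work_extracted)]
print(f"optimal bias p*    = {p_star:.2f}")                   # -> 0.50
print(f"max extracted work = {work_extracted.max():.3e} J")
print(f"kB*T*ln(2)         = {kB * T * np.log(2):.3e} J")
```

The concavity of H(p) is what makes any bias strictly worse: a measurement on a biased particle simply carries less than one bit of information.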
You have a bit that is zero or one, and you erase it: you restore it to zero. What Landauer and Bennett showed is the following. Even though you do this process infinitely slowly — and normally, for a process done infinitely slowly in contact with one thermal bath, staying in equilibrium at all times, you would expect zero entropy production — it turns out that it is not zero: there is a minimum entropy production associated with the erasure of a bit. This is related to the fact that you are breaking ergodicity. At time zero the particle can reach every part of the potential, all of phase space; but at the end of the protocol the barrier is very high, so the particle cannot jump between the left well and the right well. Associated with this breaking of phase space into disconnected regions there is a minimum entropy production. And this has an energetic consequence: first, as an entropy statement, there is a minimum dissipation, or entropy production, when you break the symmetry; but this also implies that there is a minimum amount of heat needed to erase a bit. A computer has many bits, and every time you press delete you erase a lot of kilobytes, megabytes, gigabytes. Landauer was thinking about a very fundamental question: what is the heat needed to erase a single bit? And he showed that you need to dissipate at least k_BT ln 2 of heat — very similar to what Szilard was saying, and actually you can unify the two in a single framework, but I will not go into that for now; I just want to give you the key results. Okay, so here are classic references and books about the Maxwell demon.
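To get a sense of scale for the Landauer bound (illustrative numbers of my own, not from the slides): the minimum heat per erased bit at room temperature is minuscule, and even a gigabyte of erasure dissipates, at minimum, a tiny amount of heat.

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # room temperature, K

q_bit = kB * T * math.log(2)    # Landauer bound: minimum heat per erased bit
n_bits = 8 * 10**9              # one gigabyte = 8e9 bits

print(f"minimum heat per bit      : {q_bit:.3e} J")   # ~2.9e-21 J
print(f"minimum heat per gigabyte : {q_bit * n_bits:.3e} J")
```

Real computers dissipate many orders of magnitude more than this bound per logical operation; the bound only becomes relevant at the scale of single-molecule or colloidal experiments like the ones mentioned in the lecture.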
There is a very nice book I can recommend, here at the bottom, and the papers of Landauer and Bennett themselves. The key point is that information is physical, as was said by Landauer: if you want to erase information — for example, burn a book — this implies dissipation of heat; you need fuel, you need something to burn it. This is intuitive, but now we have a quantitative understanding of it. Now I will go into something a bit more detailed, which is the theory of thermodynamics of information processing in small systems. In particular, I am going to follow a very nice piece of work that I really like, the thesis of Takahiro Sagawa, called Thermodynamics of Information Processing in Small Systems — chapters four and five, I believe. This work is really amazing, and Sagawa was one of the two scientists who generalized the second law to systems with information; the new second law, as you will see, has information as a source of entropy production. Okay, before starting, some preliminaries on mutual information. We will now think about two variables: X is the state of the physical system and Y is what we measure. For example, X is the position of a colloid and Y is the position that we measure: if our device is imperfect and the particle is here, maybe we measure it to be over there — that is Y. For these quantities we can define the stochastic mutual information, which is what I put at the top: the logarithm of the joint distribution divided by what the joint distribution would be if the two variables were independent, p(x) times p(y) — this is what I put in brackets. Equivalently, it is the logarithm of the conditional probability of y given x divided by p(y): i(x, y) = ln[p(x, y)/(p(x)p(y))] = ln[p(y|x)/p(y)]. This is textbook material in information theory. This is the stochastic mutual information.
It is also called the information density. You have random variables: you measure X, you measure Y, and there is a probability of this joint outcome, so the stochastic mutual information depends on your actual measurement outcomes. What appears more often in books is the average of this quantity, the ensemble-averaged mutual information — this is what appears on the second line. By the way, do you see my mouse if I move it here? "Yes, we can see it." Okay, very nice — sorry, I was a bit lost. So this is the average of the stochastic mutual information, and you can show that it is non-negative, because it is, in the end, a Kullback-Leibler divergence between the joint distribution and the factorized distribution. Another important property is symmetry: the information that X contains about Y is the same as the information that Y contains about X, I(X; Y) = I(Y; X). And there is its relation to entropy: the mutual information is the entropy of the variable Y minus the conditional entropy of Y given X, I(X; Y) = H(Y) - H(Y|X); it tells you how much information you gain about Y by knowing X. These are very natural properties. Okay, so now I will apply information theory to nonequilibrium systems. Which situation am I thinking about? Following closely this chapter by Sagawa, it is the following. Before, we were thinking of a control parameter, a driving, applied to a thermodynamic system — we had that piece of the picture. Now we add what we call a feedback loop. We apply a protocol to the system and we measure X; the outcome of the measurement I call Y; and, given the measured value of Y, we change the control in the next step. This is done by a controller, and this operation is called feedback, or feedback control. Okay, so what I explained in my previous lectures did not have this part.
Before, there was just a driving and the system was responding to the driving. Now we have another element, feedback control: depending on what we measure, we change something. The Szilard engine was doing exactly this — it was measuring X, and in that case Y = X, a perfect demon that makes no mistakes in the measurement — and then it applied a different protocol depending on the measurement. That is exactly what I am generalizing in this picture. Okay, the following variables: X_n is a trajectory of the system variable and Y_n is a trajectory of the measurement outcomes; x_n is the value of y — sorry, of the system variable — at the end, and the same for the measurement. And here I introduce the feedback loop: I measure x_0, this gives me y_0, and y_0 controls the value of the control parameter in the next step; then y_1 is measured, which determines the next value of the control, and so on. We are operating in loops, applying feedback. The theory of Sagawa that I am explaining has two assumptions. The first concerns the measurement error: the probability to measure a given y, given all the history, depends only on the history of the system variable and is independent of the previous values of the measurement. That is, the measurement error depends on the trajectory of X, but not on what my measurement gave in previous steps. This is the first assumption. The second assumption is that there is no back-action, which means the system is classical — if it were quantum, measuring would produce a back-action on the system; here we neglect this. Mathematically, it means that the marginal distribution of X_n is independent of whether we perform measurements or not. Another thing: I am not assuming the measurement is Markovian.
You can make it Markovian if this condition holds: the probability to measure y at time n, given all the previous history of X, depends only on the last value x_n. That is a Markovian measurement, and what I will show later is valid for Markovian measurements, but also for non-Markovian ones. Okay, a reminder. Without feedback, we had always the same protocol, and the response of the system fluctuates: for the same protocol you can have many responses, because the dynamics is stochastic. Now we have feedback control: we measure Y, and Y assigns the control parameter, so in every trajectory we have a different protocol. That is the key difference between the two scenarios — what I explained in my third and fourth lectures versus what I explain now. Okay, if you want to see a nice example with Gaussian noise in the measurement, go to the thesis — actually this is chapter nine, Section 9.1 — and look at the example with Gaussian measurement noise. All right. So now, let me start with the generic features of this theory. The first thing I do is assume general non-Markovian measurements and calculate the probability of a given trajectory, given a sequence of measurements — Y up to n-1 — which implies a given control applied to the system. What is this probability? It is, of course, the probability at time zero to be at x_0, and then I factorize: for instance, the probability to observe x_1 given x_0 and y_0 (through the control), then the probability of y_1 given x_1, and then the probability to see x_2 given the previous x_1 and the previous control, and so on. It is just the joint probability of the feedback-loop process that I explained here — plain probability theory, no fancy assumptions.
Okay, so this was the joint distribution of a trajectory X_n and a measurement sequence Y_n — these are sequences of states and measurements — and this is what the system is doing. Now, let me define — sorry, this pointer does not work very well — in this expression I will split two parts, this one and this one, and I will give a name to the part containing the probabilities of the y's given the x's, the probabilities of the measurements given the states: I call it P_c. Be careful: it is not the conditional probability, as I will show later; it is something I call P_c of the measurement sequence Y_n given the sequence of states X_n. And the other part — you can collect p(x_0) with all of this — is the probability to see a sequence X_n given the protocol determined by the Y's. So please realize that this factor is a probability of X given Y, and P_c is a probability of Y given X, in some sense. Okay. What is important is that you can check the normalization of this. When you build a probability distribution, the first thing you should do is check that it is normalized, just to see that what you are doing makes sense. So I integrate this distribution over all x's and y's, and it is very important how you take this integral: you have to use causality and start from the last variable that appears. You first integrate over y_n, the last measurement — this is a marginalization, and its integral is one. Then you integrate over x_n, which gives one, and you go on backwards in time; all of these marginalizations give one. That is the nice thing. From here you can define marginals: the marginal of a trajectory X_n is obtained by integrating over the y's, and the same for Y_n. And conditionals: the conditional is the joint divided by the marginal, giving P(X_n | Y_n) and P(Y_n | X_n).
Okay, and now comes the key point of this theory: the fact that this conditional distribution P(Y_n | X_n) is not the same as what I called before P_c(Y_n | X_n). When you go to this chapter and read it, this is the key; if you understand this, you probably understand the full chapter. This is very important, so note it: P(Y_n | X_n), built from the joint distribution, is not P_c(Y_n | X_n). We can see this because P(Y_n | X_n) — sorry, I am making a mess — is the joint P(X_n, Y_n), which is this one, the definition I gave before, divided by the marginal P(X_n). So how can we get P equal to P_c? We get P = P_c when the probability of X_n given the feedback — sorry, this pointer is not working very well — is equal to P(X_n); in that case P_c is a genuine conditional probability. This means the probability to see a trajectory given the feedback is the same as the probability to see the trajectory without feedback, which is in general not true — it is true only if there is no feedback in the system. That is why P_c is not the conditional distribution of the joint in the feedback process. Okay. This is an important fact. What you can do later is quantify the mutual information obtained from the measurements. The information obtained from measurements is — okay, I am using the same definition as before for the stochastic information — but instead of putting the conditional built from P, I put P_c, because it is the one that contains the information about the measurement errors: the quantity I_c, built from P_c divided by P(Y_n). I just apply the definition here, and I realize that this can be written as a sum of informations at time k: how much information the measurement at time k contains about the full history up to k, given the previous measurements — a sum over all times. The most important thing is that this I_c is not equal to I, the full mutual information.
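The distinction just made — under feedback, the conditional P(Y|X) built from the joint is not the product of measurement probabilities P_c(Y|X) — can be checked in a tiny two-step toy model (my own construction, not from the thesis): a binary state, a noisy binary measurement, and a transition rule that either depends on the measurement (feedback on) or ignores it (feedback off).

```python
from itertools import product

EPS = 0.2  # measurement error probability

def p_c(y, x):
    """Measurement model p_c(y|x): correct with probability 1-EPS."""
    return 1 - EPS if y == x else EPS

def transition(x1, x0, y0, feedback):
    """p(x1|x0, y0): with feedback, the dynamics depends on the outcome y0."""
    stay = (0.9 if y0 == 0 else 0.5) if feedback else 0.7
    return stay if x1 == x0 else 1 - stay

def joint(feedback):
    """Joint p(x0, y0, x1, y1) of the two-step measure-then-drive process."""
    return {
        (x0, y0, x1, y1): 0.5 * p_c(y0, x0)
        * transition(x1, x0, y0, feedback) * p_c(y1, x1)
        for x0, y0, x1, y1 in product([0, 1], repeat=4)
    }

def max_gap(feedback):
    """Largest |P(Y|X) - P_c(Y|X)| over all trajectories."""
    p = joint(feedback)
    gap = 0.0
    for x0, x1 in product([0, 1], repeat=2):
        p_x = sum(p[(x0, y0, x1, y1)] for y0, y1 in product([0, 1], repeat=2))
        for y0, y1 in product([0, 1], repeat=2):
            cond = p[(x0, y0, x1, y1)] / p_x      # P(Y|X) from the joint
            prod = p_c(y0, x0) * p_c(y1, x1)      # P_c(Y|X)
            gap = max(gap, abs(cond - prod))
    return gap

print(f"gap with feedback   : {max_gap(True):.4f}")   # clearly nonzero
print(f"gap without feedback: {max_gap(False):.2e}")  # zero up to rounding
```

Without feedback, the measurement outcomes never influence the dynamics, so marginalizing the joint reproduces the product of measurement probabilities exactly; with feedback, knowing the state trajectory tells you something extra about which protocol was applied, and the two objects split apart.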
If here, instead of P_c, you put the conditional built from P, you get the full mutual information, and the two differ because of what I explained on the previous slide. This is just a detail. The first thing we can derive is an integral fluctuation theorem: this quantity I_c obeys a fluctuation theorem, because when you average its negative exponential, inside the average you get P(Y_n) divided by P_c(Y_n | X_n), weighted by the joint distribution. You can show this easily, because I showed before how to build P divided by P_c from my previous slide: two terms cancel, and at the very end of the story you are left with the joint pieces. Then you integrate first over X and then over Y and get that this average is equal to one. This is just a consistency check, a mathematical consequence. And, as I will show now, it connects directly to physics through a fluctuation theorem — a detailed fluctuation theorem for a fixed control protocol. So now all the information quantities that I am explaining will be connected to physics, to heat. How can I do this? The way is to first consider a fluctuation theorem given a control protocol. Okay. This is the probability, in a forward experiment with this control parameter, to find measurement Y and state X — the probability to have state X_n given that you apply a given protocol, where the given protocol corresponds to a fixed Y. Imagine you run this feedback-control protocol many, many, many times, and you get many realizations of the control: many protocol trajectories, and for each protocol there is a Y_n — actually, many X_n for each protocol. I am fixing the protocol and asking: what is the probability that, for this fixed protocol, I see a trajectory X_n? Okay, this is what I am writing here.
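The integral fluctuation theorem for the information just stated, ⟨e^(-I_c)⟩ = 1, is easy to verify numerically for a single noisy measurement, where I_c = ln[p_c(y|x)/p(y)] (a minimal sketch with parameters of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)

p_x = np.array([0.7, 0.3])             # biased binary state
eps = 0.15                             # measurement error probability
p_c = np.array([[1 - eps, eps],        # p_c(y|x): rows x, columns y
                [eps, 1 - eps]])
p_y = p_x @ p_c                        # marginal p(y)

# exact average of exp(-I_c) over the joint p(x) p_c(y|x):
# the p_c factors cancel and the double sum factorizes to 1
exact = sum(p_x[x] * p_c[x, y] * (p_y[y] / p_c[x, y])
            for x in range(2) for y in range(2))

# Monte Carlo estimate from sampled (x, y) pairs
xs = rng.choice(2, size=200_000, p=p_x)
ys = np.where(rng.random(xs.size) < eps, 1 - xs, xs)   # noisy readout
I_c = np.log(p_c[xs, ys] / p_y[ys])    # stochastic information per sample
mc = np.exp(-I_c).mean()

print(f"exact <exp(-I_c)>       = {exact:.6f}")   # -> 1.000000
print(f"Monte Carlo estimate    = {mc:.4f}")      # close to 1
print(f"average information <I> = {I_c.mean():.4f} nats")  # positive
```

This is precisely the cancellation described in the lecture: averaging e^(-I_c) against the joint leaves p(x)p(y), which integrates to one, while the plain average of I_c is the (non-negative) mutual information.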
What I put in the denominator is the probability to see the time reversal of that trajectory in the time-reversed protocol, where the protocol is the one chosen from the forward experiment. I am saying: okay, in the forward run the feedback gave me some Y_N and some X_N; now I look at what happens if, in the backward experiment, I use the same control obtained from the forward feedback, reversed in time, and it gives me the time-reversed trajectory. By freezing the control parameter, this is the same situation as in my previous lectures, where I had a fixed control parameter: the ratio of these trajectory probabilities is related to the heat. So I am applying the same theorem as the one I showed before, because I am fixing the control parameter and the protocol, and I get that this ratio is the exponential of the heat along the trajectory for this realization of the feedback. Note that at first everything is conditioned on x_0; if I remove that conditioning by multiplying by the initial distribution of x_0 on each side, I get an extra term, which is the system entropy change, and that is how I go from here to here. All right, this is a bit mathematical, but the nice thing is what I can do next. Now I build joint distributions. This is the joint probability to see (X_N, Y_N) in a forward experiment with feedback given by λ(Y_N). Think about it: it is a joint probability. And now I look at the joint probability, in the backward process, of seeing the backward trajectory, the time reverse of X_N, together with the same measurement record. I say the same record because I run the feedback forward, take the Y_N I obtained there, and use the protocol generated by that Y_N, reversed in time; that protocol is this λ̃. And we can compute the ratio with the probability of the reversed trajectory paired with Y_N in this backward experiment run with the record of the forward.
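The fixed-protocol step used here can be checked exactly on a toy model. Below is a sketch of my own, not the lecture's system: a driven two-state Markov chain whose transition matrices obey detailed balance at each step of an assumed protocol (the energy gaps in `gaps` are arbitrary). Enumerating all trajectories confirms that the forward/backward path-probability ratio equals e^{−βq}, with q the heat absorbed from the bath.

```python
import itertools
import numpy as np

beta = 1.0
# Hypothetical driving protocol: energy gap E1 - E0 at each of the N steps.
gaps = [0.5, 1.5, -0.3, 2.0]

def step_matrix(dE, gamma=0.4):
    """2x2 transition matrix obeying detailed balance at the current gap dE."""
    up = gamma * np.exp(-beta * dE) / (1 + np.exp(-beta * dE))   # 0 -> 1
    down = gamma / (1 + np.exp(-beta * dE))                      # 1 -> 0
    return np.array([[1 - up, up], [down, 1 - down]])

T_fwd = [step_matrix(dE) for dE in gaps]
T_rev = T_fwd[::-1]                       # time-reversed (frozen) protocol

x0 = 0
for path in itertools.product([0, 1], repeat=len(gaps)):
    traj = (x0,) + path
    p_f = np.prod([T_fwd[k][traj[k], traj[k + 1]] for k in range(len(gaps))])
    rev = traj[::-1]
    p_r = np.prod([T_rev[k][rev[k], rev[k + 1]] for k in range(len(gaps))])
    # Heat absorbed by the system: energy change during each jump at fixed λ_k.
    q = sum((traj[k + 1] - traj[k]) * gaps[k] for k in range(len(gaps)))
    assert np.isclose(p_f / p_r, np.exp(-beta * q))
print("local detailed balance verified on all trajectories")
```

Each forward step pairs with the reverse step that uses the same matrix, so the per-step ratio telescopes into the exponential of the total heat, exactly as on the slide.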
Okay, this will probably not feel trivial to you, so it is important that you review my video later, because it is non-trivial, and also read the chapter by Sagawa, where you will have more time to understand it. But the key point is that, once you have these two quantities, you can take the ratio between them and show the following. Here I am using the definition from before: the joint distribution is P_c(Y_N | X_N) times the trajectory probability given Y_N times p(x_0), and I do the same for the backward process. Then I introduce P(Y_N) over P(Y_N), which is like multiplying by one. The important thing here is that the measurement probability in the backward process is equal to P(Y_N), because in the backward process I am imposing the Y_N from the forward; so it factorizes with this one. Overall, what you get from this ratio is the exponential of minus the entropy production, together with an information term: when you do the operation, this factor cancels with this one and you get e^{−ΔS_tot/k_B}, from my slide before; and this one with this one gives e^{−I_c}, which is what I explained at the beginning. So, all in all, this is the detailed fluctuation theorem with mutual information: the ratio between these joint distributions is related not only to the entropy production but also to the information. And this is very important: an information term comes out of this ratio, which you can then use to get an integral fluctuation theorem. Averaging, you get that ⟨e^{−ΔS_tot/k_B − I_c}⟩ = 1. This is the integral fluctuation theorem with feedback: the same as what I explained before, when ⟨e^{−ΔS_tot/k_B}⟩ = 1, but with an information term here. Okay, so if you want a take-home message of all this theory, and I highly recommend you to go through the chapter I mentioned, it is the yellow equation here.
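The integral fluctuation theorem with feedback can be illustrated with a minimal Monte Carlo sketch of an error-prone Szilard engine. This is my own toy parametrization, not Sagawa's exact example: measure the particle's side with an assumed error probability eps, then quasistatically move the partition so that the side the measurement reports ends with an assumed volume fraction v.

```python
import numpy as np

rng = np.random.default_rng(1)

eps = 0.15   # measurement error probability (illustrative)
v = 0.8      # final volume fraction given to the measured side (illustrative)
n = 500_000

correct = rng.random(n) < 1 - eps       # did the measurement report the true side?

# Quasistatic isothermal single-molecule gas: extracted work in units of kT
# is ln(Vf/Vi), i.e. ln(2v) if the measurement was right, ln(2(1-v)) if wrong.
w_ext = np.where(correct, np.log(2 * v), np.log(2 * (1 - v)))
sigma = -w_ext                          # entropy production; ΔF = 0 over a cycle

# Stochastic mutual information i_c = ln[ p(y|x)/p(y) ], with p(y) = 1/2.
i_c = np.where(correct, np.log(2 * (1 - eps)), np.log(2 * eps))

print(np.mean(np.exp(-sigma)))          # > 1 here: the plain IFT fails with feedback
print(np.mean(np.exp(-sigma - i_c)))    # ≈ 1: the IFT with information holds
print(w_ext.mean(), i_c.mean())         # <W_ext>/kT ≤ <I_c>: second law with information
```

For this model the identity holds exactly: (1−eps)·2v/(2(1−eps)) + eps·2(1−v)/(2eps) = v + (1−v) = 1, for any eps and v, while ⟨e^{−σ}⟩ alone does not equal one.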
There is a fluctuation theorem that is modified with respect to the process without feedback. Without feedback, I_c is zero and ⟨e^{−ΔS_tot/k_B}⟩ = 1; with feedback you have to take into account the information from the measurements, the I_c. Now apply Jensen's inequality. Before, I was telling you that ⟨e^{−ΔS_tot/k_B}⟩ = 1 implies ⟨ΔS_tot⟩ ≥ 0; now it is the modified average that equals one, so ⟨ΔS_tot⟩ ≥ −k_B ⟨I_c⟩, and the average information ⟨I_c⟩ is positive. So this means that the entropy production can on average be negative because of information. Okay, this is the way we explain what I was saying about the Szilard engine before. For an isothermal system, the total entropy production is the work minus the free-energy change, divided by the temperature. I plug this in here, and I get that ⟨W⟩ ≥ ΔF − k_B T ⟨I_c⟩. You see that ⟨I_c⟩ is positive, and in the Szilard engine the information is the Shannon entropy of the measurement, ln 2, so the bound is k_B T ln 2. This proves the analysis of the Szilard engine and shows that there is a source of entropy production that is information. So you could rewrite this inequality as ⟨ΔS_tot⟩ + k_B ⟨I_c⟩ ≥ 0 and say there is a full entropy production, with a part coming from the heat and a part from the information. We recover a second law if we interpret this information as a source of entropy, which is the way we understand the second law now: when there is feedback, there is entropy production not only from the heat but also from the information. This is the idea. Okay, I don't know how much time I have, but I wanted to show you some experiments. Ten minutes? Excellent. Okay, so this was the theoretical part.
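Written out, the chain of implications just described is:

```latex
% Integral fluctuation theorem with feedback, then Jensen's inequality:
\left\langle e^{-\Delta S_{\rm tot}/k_B - I_c} \right\rangle = 1
\quad\Longrightarrow\quad
\langle \Delta S_{\rm tot} \rangle \ge -k_B \langle I_c \rangle .

% Isothermal process: \Delta S_{\rm tot} = (W - \Delta F)/T, hence
\langle W \rangle \ge \Delta F - k_B T \, \langle I_c \rangle .

% Szilard engine: \Delta F = 0 over a cycle and \langle I_c \rangle = \ln 2,
% so the extracted work obeys \langle -W \rangle \le k_B T \ln 2 .
```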
I'm sure it wasn't trivial for you, but I am explaining one of the most complete theories; in the review I gave you there are simpler cases, but I am trying to show you the one I like the most and the one I think is the most complete. So please review it, go to the reference and study. Now I'll finish with experiments, because these principles have been tested in the lab. A very beautiful experiment is by John Bechhoefer's group, from 2014, where they did a test of Landauer's principle in a feedback trap. A feedback trap is a device that can create, with an electric field, any type of virtual potential. What I showed you yesterday from our experiments was a parabolic potential plus an external force; this setup is different: there is no external force, just a potential, and it is implemented with feedback. If you see the particle at a position where the force should be zero, you apply zero force; if you see it where the force should be high, you apply that high force. That is called a feedback trap. This is a beautiful experiment, and they could run protocols of slowly lowering and raising barriers, creating double wells, tilting them. These are trajectories of the particle that always end up in the same well. So this is a physical realization of Landauer erasure, and here is the power of stochastic thermodynamics: you can take these trajectories and measure work, measure heat, etc. And they could check that the average work goes to k_B T ln 2 in the slow limit of the protocol. This is really a very impressive test, which was actually done even earlier in this Nature paper from Ciliberto's group in Lyon, where they did it with optical tweezers. So it is a different setup: not a feedback trap but optical tweezers, where you build a double well by switching a laser between two positions very fast; this way you can create a double well.
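To make the feedback-trap idea concrete, here is a toy simulation of the control loop: each "camera frame" reads the particle position and applies the force of a software-defined double-well potential. The potential, time step, and reduced units (k_B T = γ = 1) are illustrative assumptions of mine, not the experimental values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Virtual potential U(x) = a*x^4 - b*x^2: minima at x = ±1, barrier of 1 kT.
a, b = 1.0, 2.0
dt, n_steps = 1e-3, 200_000

def virtual_force(x):
    """Force of the software-defined potential, F = -dU/dx."""
    return -(4 * a * x**3 - 2 * b * x)

x = 1.0                                   # start in the right well
traj = np.empty(n_steps)
for k in range(n_steps):
    # feedback loop: measured position -> computed force -> applied kick,
    # plus the thermal noise of the overdamped Langevin dynamics
    x += virtual_force(x) * dt + np.sqrt(2 * dt) * rng.standard_normal()
    traj[k] = x

# With a 1 kT barrier the particle hops back and forth between x = ±1.
print(np.mean(traj > 0))   # fraction of time spent in the right well
```

An erasure protocol would then be programmed by making a and b (and a tilt term) time dependent, which is exactly what the feedback trap allows with arbitrary shapes.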
By also tilting the potential and moving the particle always to one of the wells, they could measure distributions of heat and find that the average converges to k_B T ln 2 ≈ 0.69 k_B T. This is really a tiny measurement of heat in this system. And okay, there are more setups: we also did an experiment in Barcelona, and there are some beautiful experiments by Jukka Pekola's group with electronic systems, where they built an autonomous demon, the first time ever; I think it was 2018. In the experiments I showed before there is an external controller applying a devised protocol, but in the Pekola lab they could build a demon that is autonomous: you let it go and it works on its own, which was very impressive. I'll give you the reference in the next lecture. Then the breakthrough, in 2010: the group of Sano in Japan. What they did was something similar to this picture: you have a staircase, and a particle goes downhill more often than uphill. But a very clever demon can anticipate that the particle will fall and place a wall in such a way that it only allows the particle to climb. They could do this with two attached particles, a dimer, subject to an external field. And they could process information, measure information in a very precise way, and observe violations of the standard second law: work minus free energy was negative in some cases. Moreover, the fluctuation theorem I told you about, the one with mutual information, which involves work minus free energy minus the information, with the I stochastic in each run, they could test very, very precisely in this device. It is a very impressive experiment, with around 100,000 runs behind each of these points. I was very impressed when I read it in Nature Physics some years ago, and I really highly recommend you to go through it. And this is it; this is what I wanted to explain for now.
Of course, this is a very active area of research, and in one hour I cannot do much better than this. I really encourage you to go to the papers I recommended to you. Also, there are courses on information theory online; that could be a very good combination with this lecture. So, for me, this is it, and I'd like to listen to questions if there are any.
— What is the difference between this I_c and the mutual information?
— Well, I_c is also a mutual-information-like quantity.
— But you said that P_c is not the conditional probability.
— Yeah, so P_c is not exactly it. You see it here: the difference between P_c and P is this conditional distribution. P(X_N | Y_N) is the probability that the trajectory was X_N given that you implemented this feedback, while P(X_N) is the unconditional probability, the marginal: the probability that the system took trajectory X_N whatever the feedback was. So at least mathematically you can understand that there is a difference between these two.
— Yes, but then when you compute I and I_c, you should get two different results.
— Yes, yes, I and I_c are different, except in the case where you have only one measurement, so there are no trajectories; in that case you get the same thing, as you can show. The way it is understood in the book of Sagawa is that I_c reflects the correlations due to the measurements on the system, whereas I also contains the ones due to feedback. This part is not extremely clear to me either; I understand it mathematically, but I think what counts is the way it is defined: the point is that there is feedback in one and not in the other. Maybe the notation is not the best, but we have to go and understand what P_c actually is. So P_c is shown here: it is the probability of the measurement y_k given the state.
So this is the way we introduced P_c, but P is different, because when you take the conditional from the joint, P(Y_N | X_N) is the joint divided by the probability of X_N, and this is not exactly the same. This is how I understand it; I don't have a much more intuitive picture, which maybe you have. Mathematically, P_c is not exactly the conditional built from the joint, just because of this. Any other question?
— Can you explain again? There was an expression where a ratio of probabilities became the exponential of the entropy.
— Here, at the bottom?
— Yes. How do you go from here to here?
— Okay, so in the first line everything is given x_0, but here there is no conditioning on x_0: I have multiplied by p(x_0) in the numerator and by the corresponding initial distribution in the denominator.
— No, no, I mean the heat part.
— Ah. Here I use the fluctuation theorem that I explained in my previous lectures for a fixed protocol. When you have a fixed protocol, the probability under that protocol to get a trajectory, divided by the probability under the time-reversed protocol to get the time-reversed trajectory, is the exponential of the heat. It is again applying local detailed balance.
— Yes, exactly. But we are assuming that this is valid in these systems, even with feedback.
— Yes, yes, because we said in the previous lectures that it holds given a fixed protocol. I am fixing the protocol, so it is the same setup as the nonequilibrium driving I explained in my previous lectures. This is not the whole story of the feedback; it is just the subset of trajectories that were subject to the same protocol, which is the same as running that protocol once again.
— Okay, I understand. Thanks.
— Excuse me, I have a question.
— Sure.
— Is there some bound between I_c and I? I would expect that I_c is bounded from above by I, in the sense that if I do not apply any protocol to gain some higher-quality information, I already have the information without the protocol.
— This is a good point, and I don't know, I must say. You see, I and I_c are related by this factor: there is an extra term that is the log of this ratio. Maybe one can prove that on average one is bounded by the other; I would have to look at the papers of Sagawa, but I never tried this proof. It is a good exercise, actually. In principle it should hold; at least intuitively one would expect this. And probably proving it is not so difficult, because you just have to check whether the average of the log of this ratio is always greater than or equal to zero. But you have to check carefully. It is a very good question. Very good questions; you are promising students. Okay, so please go through the references I gave you; they will be very helpful. And this is it for me today. Thanks a lot for your attention.
— Thank you very much.
— Thank you.