How about now? Can you hear us? Yes. Instead of the chat, I think it's better if you just use the microphone; otherwise we don't pay attention to the chat on the panel. Okay. So before going to the exercises, I will finish the part on information that we started this morning: the main concepts of information theory. You will have another whole course on information theory, so you will see much more, but the important ideas are the Shannon entropy, the uncertainty of a random variable, and how to reduce this uncertainty by asking questions or by making measurements. The reduction of this uncertainty is precisely the information gained. Okay, this is information. Now we go to thermodynamics. For the things that we have seen this morning, we need to know what work and heat are. In Andrea's talk you also heard about thermal baths. Heat and work are things we know from thermodynamics, but actually it's a rather controversial matter: in general, for an arbitrary situation, we don't have a clear definition of them. For one specific case, though, the definition is completely clear: a system in contact with a thermal bath. If you have a system in contact with a thermal bath, then everything is clear: heat is the energy exchanged between the bath and the system, and work is the rest. From a microscopic point of view it is also possible to calculate the work and the heat in a process. The typical setup, as I also mentioned this morning, is this one. Let me share the screen so I can use my pointer. This is the microscopic view of a thermodynamic process: in thermodynamics you have a system and a parameter that you move.
Well, if you want the microscopic dynamics, you have to imagine that the system has a Hamiltonian H(x; λ), where λ, which is maybe the volume of your gas or a field or whatever, is the parameter in the Hamiltonian, and x is a microstate. So what is a process, or a protocol? This morning I also mentioned protocols: a protocol is to change the parameter in a given way, λ(t). In statistical mechanics, the description of the system is a probabilistic state, and you have an evolution equation depending on the kind of system: if it is an isolated system, you have the Liouville equation; if it is a Brownian system, you have a Langevin equation or the corresponding Fokker-Planck equation for the distribution. So this is the general setup: you have a ρ(x, t) that depends on time, and you have a Hamiltonian. Here is the main idea. The average energy is E(t) = ∫ H(x; λ(t)) ρ(x, t) dx. If you differentiate it with respect to time, you have a product, so you get the derivative of H times ρ plus H times the derivative of ρ. The derivative of H with respect to time comes from λ changing in time, so you have to take the derivative with respect to λ: ∂H/∂λ is called the conjugate force, the derivative of the Hamiltonian with respect to the parameter. So one term is the average of the conjugate force times λ̇ weighted by ρ, and the other is a change because ρ changes: H times the time derivative of ρ.
So the energy changes for two reasons. One, because the Hamiltonian changes: when you change the field, the Hamiltonian changes, so the energy changes. And two, because the probabilistic state of the system, this ρ, changes according to some evolution equation. So you have these two terms in the change of the average energy of the system:

dE/dt = ∫ (∂H/∂λ) λ̇ ρ dx + ∫ H (∂ρ/∂t) dx = Ẇ + Q̇.

The first term is work and the second is heat. Why? Well, this is a very standard way to identify heat and work in a system. The idea is that work is the energy that the external agent introduces into the system by moving the parameter. You can imagine, for instance, an instantaneous change of the Hamiltonian at fixed ρ: the energy changes, and this change is entirely due to the external agent, so this is work. Then, when you let the system relax at fixed Hamiltonian, in this relaxation the system exchanges energy with the thermal bath while the agent is not doing anything: this is heat. The heat is actually very easy to calculate whenever you know ρ along the process. Usually you calculate ρ by solving the evolution equation, but there is a case where you always know it: quasi-static processes in contact with a thermal bath, because in this case ρ is just the canonical distribution at each value of the parameter, and it is very easy to calculate the two terms.

Okay. This morning we have seen that the whole issue of the connection between thermodynamics and information has to do with Maxwell's demon, or the Szilard engine: the demon acquires information by measuring and then extracts work.
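As a sanity check of this decomposition, here is a minimal numerical sketch of my own (a toy example, not from the lecture): a driven two-level system where we alternate small protocol steps, in which the Hamiltonian changes at fixed ρ (work), with partial relaxation steps toward the canonical state, in which ρ changes at fixed Hamiltonian (heat). The total energy change splits exactly into the two accumulated terms.

```python
import numpy as np

# Toy two-level system with H(lam) = diag(0, lam).  Protocol steps change
# lambda at fixed rho (work); relaxation steps change rho at fixed H (heat).
# We verify that dE = W + Q along the whole trajectory.
kT = 1.0

def energies(lam):
    return np.array([0.0, lam])          # eigenenergies of H(lambda)

def canonical(lam):
    w = np.exp(-energies(lam) / kT)
    return w / w.sum()

lam = 0.5
rho = canonical(lam)                     # start in equilibrium
E0 = energies(lam) @ rho
W = Q = 0.0
for _ in range(1000):
    lam_new = lam + 1e-3                 # protocol: slowly raise the gap
    W += (energies(lam_new) - energies(lam)) @ rho   # work: dH at fixed rho
    lam = lam_new
    rho_new = rho + 0.05 * (canonical(lam) - rho)    # partial thermalization
    Q += energies(lam) @ (rho_new - rho)             # heat: drho at fixed H
    rho = rho_new

dE = energies(lam) @ rho - E0
print(abs(dE - (W + Q)) < 1e-9)          # first law holds step by step: True
```

The split is exact by construction here; in the continuum limit the two sums become the two integrals above.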
So what is important in terms of information is this work: how much work can we extract from the system if we know something about the system? Maybe you remember from thermodynamics (this is on the screen, I'll use this one) that the minimal work that you have to do in an isothermal process is bounded by the free energy: W ≥ ΔF, the final free energy minus the initial one. This is the second law for isothermal processes; it is a basic statement of thermodynamics. The typical way of deriving it is by calculating the total entropy change of the universe, which is the entropy change of the system plus the entropy change of the bath. The entropy change of the bath is −Q/T; I'm using here a different sign convention from this morning, the usual one. Then the first law reads ΔE = W + Q: the system receives energy W from the external agent, and it receives energy Q from the bath as well. From this, Q = ΔE − W. Putting everything together, ΔS_total = ΔS − (ΔE − W)/T ≥ 0, and remembering that F = E − TS and that T is constant in an isothermal process, this gives W ≥ ΔE − TΔS = ΔF. This is the typical derivation of this formula for an isothermal process; you can also derive it from the microscopic definition of work that we have there. So this is the statement of the second law. If you have a cycle, then ΔF is zero, so W ≥ 0: you cannot extract work in an isothermal cycle.
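The derivation just sketched can be written in one line:

```latex
\Delta S_{\mathrm{tot}} = \Delta S + \Delta S_{\mathrm{bath}} \ge 0,
\qquad
\Delta S_{\mathrm{bath}} = -\frac{Q}{T},
\qquad
\Delta E = W + Q
\;\Longrightarrow\;
W \;\ge\; \Delta E - T\,\Delta S \;=\; \Delta F .
```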
Remember, the extracted work is W_ext = −W: that's the sign convention, W is positive when we put work in, and work is extracted when W is negative. So this is the second law for isothermal processes. Okay, we would like to use this for our problem as well, for information, when we measure. But can we? Well, we could, but there is a problem: this inequality connects equilibrium states, because the free energy is an equilibrium concept. In principle we cannot define it for an arbitrary state, so the statement is not immediately applicable; I will extend it. And the problem with information is that when you measure, the system is in general no longer in equilibrium. For instance, in the case of the Szilard engine, once you know in which half the particle is, the uniform distribution over the whole volume no longer describes your state; it's even clearer if you think of optical traps, where the measurement localizes the particle in one of the two wells. When you measure, you drive the system out of equilibrium, so we need to extend the second law to such situations. And notice: the fact that the system is in or out of equilibrium depends on the information that you have about the system. It's a subjective thing. Okay, so we need to extend this to processes involving non-equilibrium states. How do we do it? Well, this has been done by people not so many years ago; it is a result that has been derived independently by several people. The idea is to define a non-equilibrium free energy. This non-equilibrium free energy depends on the Hamiltonian of the system and on the probabilistic state of the system, and it is defined as usual: F(ρ, H) = E − TS.
E in this case is the average energy, T is the bath temperature, and S is the Shannon entropy of ρ multiplied by k. This is the non-equilibrium free energy. Note that it is not a thermodynamic function in the usual sense: the equilibrium free energy depends only on H (and T), while this one depends on H and on ρ, so it is not a function of the macroscopic state; it depends on the history of the system and so on. But it has a nice property: if I have an isothermal process, meaning in contact with the thermal bath, going from a non-equilibrium state to a non-equilibrium state, the same expression W ≥ ΔF holds for the non-equilibrium free energy. This is proven in several ways in the material, in the paper that I gave you; the review discusses these relationships, and we can explore the proof after the exercise if you like. There are several proofs; I'm very happy with the one in the paper I gave you. The slide says "memory", but that is because it is from another talk; this is for a general system, it could apply to a memory, which is something we will see later.

Okay, now what is interesting is the following: what happens to the non-equilibrium free energy when I measure something? Here we have to make the assumption that the measurement does not affect the system, so the average energy is not affected. I mean, it is not affected in this sense: we are averaging not only over the probabilistic state, but also over the outcomes of the measurement, and this average is unchanged. But the Shannon entropy is affected.
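To make the definition concrete, here is a small numerical sketch of my own (the three-level spectrum is an arbitrary assumption): for the canonical state the non-equilibrium free energy reduces to the equilibrium value −kT ln Z, and for any other distribution it is strictly larger.

```python
import numpy as np

# Non-equilibrium free energy F(rho, H) = E - kT * S(rho), with S the
# Shannon entropy (in units of k).  For the canonical state it reduces to
# the equilibrium free energy -kT ln Z; for any other rho it is larger.
kT = 1.0
E_levels = np.array([0.0, 1.0, 2.0])     # assumed toy spectrum

def neq_free_energy(rho):
    S = -np.sum(rho * np.log(rho))       # Shannon entropy of rho
    return E_levels @ rho - kT * S

Z = np.exp(-E_levels / kT).sum()
rho_eq = np.exp(-E_levels / kT) / Z      # canonical distribution
F_eq = -kT * np.log(Z)                   # equilibrium free energy

print(np.isclose(neq_free_energy(rho_eq), F_eq))   # True
rho_neq = np.array([0.5, 0.25, 0.25])              # some non-equilibrium state
print(neq_free_energy(rho_neq) > F_eq)             # True
```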
For instance, in the Szilard engine, my particle can be in two places with probability one half each, and after the measurement it is here, or it is here. So the Shannon entropy is reduced, and we have the same result as this morning: the uncertainty, the entropy of the system, reduces when I measure, and the average reduction is precisely the mutual information. So if I plug this into the definition, I get that, just because of the measurement, the entropy decreases by kI and therefore, with the minus sign, the free energy increases by kTI. At this stage of the theory I don't care what this increase costs: I'm increasing the free energy, so I'm increasing the capacity of the system to do work. This is not for free, but for the moment I just want to see the consequence of the measurement on the energetics of the process; I don't want yet to solve the big problem, which is how to restore the second law. So this comes from the measurement: if I measure, the free energy increases, and the amount is kT times the mutual information.

Now suppose that I have a process. This is the standard second law, W ≥ ΔF, where F could be the equilibrium or the non-equilibrium free energy; usually I use a calligraphic F for non-equilibrium and a roman F for equilibrium, but this is not important. Now suppose that I measure in between. I can apply the second law here and here: the minimal work to go from the initial state to the pre-measurement state, and the minimal work to go from the post-measurement state to the final state. And I have a jump in between, because the measurement reduces the entropy of my system and increases the free energy by kTI. So if I do that, I get what is called the second law with feedback: W ≥ ΔF − kTI. The work is bounded by the final minus initial free energy, minus an extra term that comes from the measurement.
And it's as simple as this. It's a sophisticated way of saying: I have some entropy, but suddenly I measure, and for free (we will see tomorrow that it is not for free) I reduce the entropy of the system by k times the mutual information, or equivalently I obtain an extra free energy kTI, and then I can extract more work. Remember the sign: when I change the sign, this is extracted work, W_ext ≤ −ΔF + kTI. Now, on a cycle ΔF is zero, so W_ext ≤ kTI: I can extract work, like in the Szilard engine. Okay, so we don't need anything else to analyze the Szilard engine. Now you are going to see how this works in a practical case, which is the Szilard engine with errors. Of course, for the Szilard engine with no errors this is very simple: the mutual information is ln 2, and the extracted work is exactly kT ln 2.

Okay, now you have to try to do the exercise; for the online people, I also ask you to do it. Now I will connect my iPad, so this was just my presentation; you have all the material to do the exercises. I will leave you about 15 minutes to work on the first exercise; you can talk in groups or whatever. The first exercise you can do just with what we said this morning about the Szilard engine; for the second exercise we need mutual information and so on. I can go around and answer questions if you have some problem, and then after 15 minutes we will solve the exercise; maybe you will have already solved it and we can discuss it. Remember, the exercise is just the Szilard engine.
Remember it was like that: you have a Szilard engine, you put the piston in the middle and you measure, but there is a probability of error. So if x is the position and m is the outcome of the measurement, both can be left or right, but if the position is left you obtain m = left with probability 1 − ε and m = right with probability ε, because your apparatus is not precise and can make a mistake. So you measure left or right, and then, once you have measured, you act. In the original Szilard engine you move the piston all the way to one side; here you cannot, because maybe the particle is on the wrong side, so you cannot compress it down to zero volume. You compress it down to αV, where V is the total volume; you can put V = 1, it cancels. So this is the situation: if the measurement is right, you expand the gas and you extract work. But what happens if you are wrong? If you are wrong, the particle is here and you are compressing it, and this of course happens with probability ε. So you have to calculate the extracted work, which will be a function of ε and α. Calculate the extracted work on average, of course; we are thinking of the average. For the people online: maybe I should rotate the camera; well, it's in the exercise sheet, so if you have any questions just ask. I will leave you 15 minutes; let's discuss the exercise at five past four. It's very simple, an exercise of elementary thermodynamics plus playing with probabilities; you have the slide with the setup.

Question: when you refer to the protocol that maximizes the extracted work, how do you design that? The α that maximizes the work?
Yes: ε is fixed, it's given by your measurement apparatus. What we call the protocol is what you do after the measurement, and its parameter is α, so the task is to maximize the extracted work over α. Okay, for those who need it, remember the formula for the work; I left it here. You just have to apply the isothermal expression for the extracted work, W_ext = kT ln(V_final/V_initial) for a single particle. You see that if the final volume is bigger than the initial volume, W_ext is positive: in an expansion I extract work; in a compression it's the other way around, and that is what happens when you are wrong and you have to compress.

[The students work on the exercise.]

Okay, I will give you the solution of the exercise and of the optimal protocol.
Now, if we are right, let's say the measurement is okay, then we perform an expansion. The extracted work, if the measurement is okay, is kT times the log of the final volume over the initial volume: the final volume is αV and the initial volume is V/2, so W_ext(ok) = kT ln(αV / (V/2)) = kT ln(2α). This is a very common expression for the Szilard engine: if α equals 1, we expand all the way and recover kT ln 2. And if the measurement is wrong, that means the particle is here and we believe it is there, so we expand; but we only believe we are expanding, in fact we are compressing. I will write this in red. If the measurement is wrong, the extracted work is kT ln of final over initial volume, where now the final volume is (1 − α)V, which is smaller than V/2, and the initial volume is V/2. So W_ext(wrong) = kT ln(2(1 − α)). And you see, for α > 1/2 the first one is positive and this one is negative, which means that when we are wrong we don't extract any work in the compression; on the contrary, we do work. Here you also see that α cannot be 1: in that case we would have the log of zero, which is minus infinity. This just expresses the fact that we cannot compress the gas, even if it is a single particle, down to zero volume. This is a consequence of the measurement error.

Now, if we repeat the cycle, either of these can happen: this one with probability 1 − ε, and this one with probability ε, since we are wrong with probability ε. In a real system ε could be small. So the extracted work is just the average, weighted with ε. Let's write this.
The measurement is okay with probability 1 − ε and wrong with probability ε, so the average extracted work is

W_ext(α, ε) = kT [(1 − ε) ln(2α) + ε ln(2(1 − α))].

This is the extractable work; this is the first question. Second question: look for the optimal protocol. So what is the optimal protocol? As we said before, ε is given by the apparatus. The only choice that the demon has (let's talk about the demon) is α, which we can choose as we like. So the optimal protocol here means the optimal α. And this is very easy: we just have to maximize the function. We take the derivative of the extracted work with respect to α; we can factor out the kT, and we get (1 − ε)/α − ε/(1 − α). We could plot this for a fixed ε, but let's just set this to zero and solve the equation for α. We get α_optimal = 1 − ε. And we can compute the maximum work by substituting:

W_max = kT [(1 − ε) ln(2(1 − ε)) + ε ln(2ε)] = kT [ln 2 + (1 − ε) ln(1 − ε) + ε ln ε].

You see that there is a ln 2, and then (1 − ε) ln(1 − ε) plus ε ln ε. This is an interesting formula. These two things are the interesting results: the optimal protocol and the maximum work.
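The maximization can also be checked numerically. A sketch (my own, with kT = 1 and ε = 0.1 as an arbitrary example), using a simple grid search instead of solving the derivative condition analytically:

```python
import numpy as np

# Average extracted work of the noisy Szilard engine, in units of kT,
# as a function of the compression parameter alpha (error rate eps fixed).
def avg_work(alpha, eps):
    return (1 - eps) * np.log(2 * alpha) + eps * np.log(2 * (1 - alpha))

eps = 0.1
alphas = np.linspace(1e-4, 1 - 1e-4, 100001)        # grid search over alpha
alpha_opt = alphas[np.argmax(avg_work(alphas, eps))]
print(np.isclose(alpha_opt, 1 - eps, atol=1e-3))    # optimum at 1 - eps: True

# The maximum work equals ln 2 minus the binary Shannon entropy of the error:
h = -eps * np.log(eps) - (1 - eps) * np.log(1 - eps)
print(np.isclose(avg_work(1 - eps, eps), np.log(2) - h))   # True
```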
This can be read as follows. You see that there is the kT ln 2, which is the Szilard term, and then something that looks like the entropy of a binary variable. The correction looks positive, but it is not, because ε and 1 − ε are smaller than one, so their logarithms are negative. Actually it's better to write it as in the exercise: W_max = kT [ln 2 − h(ε)], simplified by using the function h(ε) = −ε ln ε − (1 − ε) ln(1 − ε), the Shannon entropy of a binary variable with probability ε. Its maximum value is ln 2. So: when ε is zero, I can extract kT ln 2, which is the Szilard engine without errors. When ε is one half, I extract zero, and this is what we also call a blind protocol: we measure, but the outcome carries no information, and the optimal α is one half.

Here we also see what we were discussing this morning, your question: what if ε is 1, so the measurement is wrong all the time? Mathematically, the formula gives kT ln 2 again. Why? First because α = 1 is no longer the relevant point, but physically it is because, if the measurement is completely wrong, you can just do the inverse of the naive protocol, and that extracts the maximum. If ε is one, then α_optimal is zero, so you "compress" in the believed direction, which means expanding fully on the side where the particle actually is. Which implicitly means, as you asked this morning, that you know the channel: in this case the outcome is just the deterministic negation of the position, a function of it, so it carries full information.
Something which is also interesting is the fact that I compress down to exactly 1 − ε, which means the space that I leave on the other side is exactly ε, the error probability. This is not a coincidence: it is what makes the process completely reversible. I think this was not in the notes, so let me explain. Suppose you measure something and you want to optimize the use of this information, to extract the maximum amount of energy from it. The way to do that is to use a protocol that is completely reversible, and it is reversible in the following sense. You insert the piston in the middle, and suppose the outcome of your measurement is "left". Then you move the piston according to the protocol. Now consider the backward protocol, as we discussed this morning: you reverse your actions in time. So in the backward process you put the piston at the final position, move it back to the middle, and remove it. But look: when I put the piston at that position in the backward process, what is the probability to find the particle on each side? The point is that, probabilistically speaking, the forward process must be equivalent to the backward process. And it was a whole story to understand what reversibility means in processes with feedback; it was worked out by Jordan Horowitz and a co-author whose last name I never know how to pronounce.
Well, they discovered that you need a new notion to define reversibility in processes where you have a measurement. We will not go into this in detail, because it's a whole topic that you can look at on your own; I can give you some references. But it is not a coincidence: it is exactly what makes this the optimal protocol.

Question: could you repeat what the reverse protocol is? Probabilistically speaking: these are probabilities. When you measure, you update your probabilities. Before the measurement, it is one half, one half. Now we measure, and this is the magic of measurement: in principle you don't touch the system (the ideal classical measurement does not disturb it), and yet the probabilities change. If ε is zero, then you know for sure on which side it is. If ε is not zero, you update your state; this is called a Bayesian update. You update your probability, incorporating the new information, and this is your new state. Okay. Now consider the backward process. The backward process is such that, when you insert the piston at the corresponding position, the two probabilities, to be on one side and on the other, coincide with the updated probabilistic state of the forward process. So your probabilistic state in the backward process is identical to the one in the forward process, and that is the notion of reversibility with feedback. I will not talk more about this here; there is the short review I gave you, and the long version of the review that we published in Nature Physics.
And in the book, which is the long version of everything, there is a chapter on feedback and another chapter on reversibility. You have all the information there; of course there are papers and so on, but this is the notion of the backward process.

Okay, to finish: you see that we are not considering the cost of the measurement, which we talked about this morning. We are just assuming the demon measured something and asking what the effect is on the capacity to extract work; I can optimize the extracted work and so on. Now let's connect this with the second law. Remember that we have the second law with feedback, W_ext ≤ −ΔF + kTI, which we derived before. In the second exercise we are going to check this. We have obtained the extracted work exactly, because this is a specific example where we can apply the ideal gas law and solve it explicitly. But think of a system with optical tweezers or something complicated where you cannot explicitly compute the work; you still have the bound. So what I want you to do in this exercise is to check it. Here x is the state of the system and m is the outcome of the measurement, and the first part is to calculate the mutual information I(X; M). For this it is good to use the binary entropy formula, because whenever you have a binary variable the entropy is immediate. I will leave you the formulas here. So the first part of the exercise is to calculate the mutual information; let's see which is the best formula to use.
I think the best one here is to calculate I = H(M) − H(M|X). You can use the other decompositions, this one and this one; there are many ways of doing this exercise, but I think this is the best way, this formula. And then check the second law: the work here, remember, this is the real extracted work, satisfies W_ext ≤ kTI, since on the cycle ΔF is zero. Something important: for a binary variable, you just have to calculate the probability of one value, and then the entropy is h of that value, so it's much faster. Use this formula, the binary entropy h(p) = −p ln p − (1 − p) ln(1 − p), together with H(M) − H(M|X) and so on. All these variables are binary, so you just have to calculate the relevant probabilities.

Question: I used the other decomposition and it doesn't come out the same. It should come out the same; let's check it together afterwards.
[Discussion with students while they work on the exercise.] There is a general question here: the aim is to change the protocol depending on the measurement outcome so as to extract work, but this may be impossible in practice. For instance, if you get information about the velocity, it is not clear how to use it. If you have a single measurement, we know how to proceed; if you do not, it is not obvious. We will come back to this point.
We measure the position of the particle. This is the binary symmetric channel. In information theory the mutual information of this channel is related to the channel capacity, because it measures how much information you can transmit through the channel. If the noise is so big that M and X are independent, the mutual information is zero and you cannot transmit anything; if the error ε is not one half, then you can transmit some information. Okay, so let's calculate. I told you that the best thing to use here is H(M) − H(M|X). We could use the other formula, which involves H(X), H(M) and the joint entropy H(X,M) — but the pair (X,M) can take four values, so it is not a binary variable. X is binary and M is binary, and when you have a binary variable we can use the formula from the exercise sheet: h(x) = −x log x − (1 − x) log(1 − x). This is the entropy of a single binary variable; when x is one half it equals log 2, which is the maximum entropy of a binary variable. So we are going to use this formula for each single variable.
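As a small numerical sketch of the formula just quoted (the function name is mine, not from the lecture; entropies are in nats, so the maximum is ln 2):

```python
import math

def binary_entropy(x: float) -> float:
    """Binary Shannon entropy in nats: h(x) = -x ln x - (1 - x) ln(1 - x)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0  # the limit x * ln(x) -> 0 handles the endpoints
    return -x * math.log(x) - (1.0 - x) * math.log(1.0 - x)

# Maximum at x = 1/2, where h = ln 2 (a fair coin is maximally uncertain).
print(binary_entropy(0.5))   # ≈ 0.6931 (ln 2)
print(binary_entropy(0.0))   # 0.0: no uncertainty
```

Note the symmetry h(x) = h(1 − x), which is why knowing the probability of either one of the two values is enough.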
So the only thing we need to know, since X and M are binary variables, is with which probability each one takes the value zero or one. For H(M|X) it is very easy, because whatever the value of X, M is a binary variable with probabilities ε and 1 − ε: if X = 0, then M = 0 with probability 1 − ε and M = 1 with probability ε, and if X = 1 it is the same with the values exchanged. The probabilities are always the same, so H(M|X) is just h(ε). To calculate H(M), M is a binary variable, so we need the probability that it takes, say, the value zero. There are two possibilities: X = 0 and I measure correctly, or X = 1 and I measure wrongly. So P(M = 0) = p(1 − ε) + (1 − p)ε = p + ε − 2pε, and it does not admit any further simplification. Let's call this q. Then H(M) = h(q), and the mutual information is I(X;M) = h(q) − h(ε); I don't think it simplifies further. And q is a number between zero and one which depends on p and ε. This is nice: although we are not doing communication theory here, the same quantities appear, which is also interesting.
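Putting the pieces together, a sketch (function names are my own) of I(X;M) = h(q) − h(ε) with q = p + ε − 2pε:

```python
import math

def h(x: float) -> float:
    """Binary entropy in nats."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

def mutual_information(p: float, eps: float) -> float:
    """I(X;M) = H(M) - H(M|X) for the binary symmetric channel.

    p   -- probability that X = 0 (say, particle on the left)
    eps -- probability that the measurement is wrong
    """
    q = p + eps - 2 * p * eps        # P(M = 0) = p(1 - eps) + (1 - p) eps
    return h(q) - h(eps)             # H(M) = h(q),  H(M|X) = h(eps)

print(mutual_information(0.5, 0.0))  # error-free, fair coin: ln 2
print(mutual_information(0.5, 0.5))  # pure noise, M independent of X: 0
```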
It is possible to do information theory here as well; maybe next week's lectures will use these ideas. Okay, but the purpose of this exercise was to check the second law, so let's do it. In the case of the Szilard engine, p is one half. If we substitute p = 1/2, then q = (1/2)(1 − ε) + (1/2)ε = 1/2 — it does not matter what ε is, q is one half. So the mutual information is I = h(1/2) − h(ε), and h(1/2) is log 2, so I = log 2 − h(ε). And now you see what we calculated using physics. I like these exercises because you go from physics to information theory and you reach the same point. We calculated this before using physics — using the equation of ideal gases, because the work is kT times the log of the ratio of final to initial volume, which is a consequence of the ideal gas law — and now we see that we obtain the same thing: the optimal extracted work is just kT times the mutual information. And this is a consequence of the second law. Remember the second law with feedback: the work W, which is minus the extracted work, obeys W ≥ ΔF − kT·I. Here ΔF is zero because it is a cycle — you have to go back to the original state. So this means the extracted work is smaller than or equal to kT times the mutual information. And this is the second law for any cycle with measurement and feedback.
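The check just described can be sketched numerically. The physics-side result, W_ext = kT(ln 2 − h(ε)) for the optimal protocol, is the ideal-gas calculation from earlier in the lecture; here I simply set kT = 1:

```python
import math

def h(x):
    """Binary entropy in nats."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

kT = 1.0  # measure work in units of kT

for eps in (0.0, 0.05, 0.1, 0.25, 0.4, 0.5):
    # Physics side: optimal extracted work from the ideal-gas calculation.
    w_ext_opt = kT * (math.log(2) - h(eps))
    # Information side: for p = 1/2 we have q = 1/2, so I = h(1/2) - h(eps).
    info = h(0.5) - h(eps)
    # Second law with feedback: W_ext <= kT * I (equality at the optimum).
    assert w_ext_opt <= kT * info + 1e-12
    print(f"eps = {eps:4.2f}   W_ext = {w_ext_opt:.4f}   kT*I = {info:.4f}")
```

For the optimal protocol the two columns coincide for every ε, which is exactly the saturation of the bound discussed next.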
Without a measurement, the second law for a cycle says that the extracted work is at most zero; with a measurement it can be positive, up to kT·I, and that is possible precisely because I look at the result of the measurement. You have calculated this bound using information theory, obtaining kT(log 2 − h(ε)), and you have calculated the extracted work using physics — the ideal gas equation and so on — and you have obtained the same result. This tells you that the protocol you found is optimal, not only within the family you considered: you cannot do better at all. And the second law is important not only from a fundamental point of view; it is also a benchmark for what you can do, and a benchmark is always useful because it helps you see whether you are far from the optimum or not. So this type of inequality is very important, and here we see that the Szilard engine reaches the equality — it is an optimal machine. So we have seen a practical case of the second law for feedback processes, and also the importance of the mutual information. This is something that Szilard and Bennett did not consider: they considered only error-free measurements, and for an error-free measurement H(M|X) is zero, so the mutual information is just the entropy of the outcome. [Student: I have a problem with the signs — maybe the direction of the inequality comes from the sign convention?] Yes, this is a matter of convention: usually you will find the second law written the other way. Actually, this morning I was surprised that Andrea was still using the conventional signs; in stochastic thermodynamics we always use this one — it is also the way we write the first law.
So W is the work that you do on the system; to extract work, W must be negative. The second law always reads "the work is bigger than something": W ≥ ΔF − kT·I. The extracted work is −W, and in a cycle ΔF is zero, so you can reverse the signs by multiplying by minus one, which flips the inequality: W_ext ≤ kT·I. [Student: when you revert this expression, with the extracted work and the mutual information, it seems to me it should be the other way around.] No — you just multiply by minus one, and the direction of the inequality changes. [Student: is I positive?] Yes, I is positive: I = h(q) − h(ε), and in this case it is easy to see. h has this form — sorry, it is very small here on the slide — it is like a parabola, and its maximum is log 2, at one half. Here q = 1/2, so I = log 2 − h(ε), which is positive. Log 2 is the maximum value of I; when we are not at the maximum, we have less than that. [Student: and on the side of the expected work, it is kT(log 2 − h(ε)), so shouldn't it be strictly smaller than kT·I?] No — this is what you obtained before: with the optimal protocol the extracted work is kT(log 2 − h(ε)), which equals kT·I, so the bound is saturated; any other protocol extracts less. And since log 2 is always bigger than h(ε), the extracted work is smaller than the positive quantity kT log 2. Actually, this is another way of thinking about it: log 2 is the uncertainty of the position — the entropy of the particle's position before the measurement — and h(ε) is the remaining uncertainty after the measurement.
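The positivity argument — h is concave with its maximum ln 2 at one half, and q = p + ε − 2pε always lies between ε and 1 − ε, so h(q) ≥ h(ε) — can be checked over a grid. This is a sketch of my own, not part of the lecture:

```python
import math

def h(x):
    """Binary entropy in nats."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

n = 50
for i in range(n + 1):
    p = i / n
    for j in range(n + 1):
        eps = j / n
        q = p + eps - 2 * p * eps    # q lies between eps and 1 - eps
        info = h(q) - h(eps)         # I(X;M)
        # h is concave with maximum ln 2 at 1/2, hence 0 <= I <= ln 2.
        assert -1e-12 <= info <= math.log(2) + 1e-12
print("checked: 0 <= I(X;M) <= ln 2 on the whole grid")
```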
And you have a reduction of uncertainty, which can be used to extract that amount of work. That is the way this works. Tomorrow we will do things in terms of entropy, but of course when you extract this work, the energy comes from the thermal bath: since it is a cycle, the expected extracted work equals the heat taken from the bath. So this energy comes from the bath, and the entropy of the universe — if you don't consider the demon's degrees of freedom — decreases: the total entropy change is −Q/T, which at the optimum is −k(log 2 − h(ε)); for an error-free measurement the entropy of the universe decreases by k log 2 in the cycle. This is another way of interpreting the Szilard engine. [Student: maybe where I'm lost is when we replace p by one half in order to be in the case of the Szilard engine — what happens then to the inequality for the expected work?] p is given by the initial position of the particle in the Szilard engine, so it is one half; it has nothing to do with α, which is where the wall is moved. [Student: but we find the same expression for the mutual information and for the expected work, so we get an equality — we don't see the inequality here.] Well, the inequality is there — I went through this very rapidly, but I did prove it earlier. The extracted work cannot be more, because of the physical argument: that is the second law. I presented a proof of this at the beginning of the afternoon, very fast, the one where the measurement gives you, for free, an amount of free energy equal to kT times the mutual information. And that argument is general.
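The entropy bookkeeping just described can be written out explicitly (a reconstruction in my notation, for a full cycle at the optimum, with k the Boltzmann constant):

```latex
% Cycle: \Delta U = 0, so the extracted work equals the heat from the bath,
W_{\mathrm{ext}} = Q = kT\,\bigl(\log 2 - h(\epsilon)\bigr)
  \quad \text{(optimal protocol)} .
% Entropy change of the bath:
\Delta S_{\mathrm{bath}} = -\frac{Q}{T} = -k\,\bigl(\log 2 - h(\epsilon)\bigr) .
% The system returns to its initial state (\Delta S_{\mathrm{sys}} = 0), so,
% ignoring the demon's memory, the entropy of the universe decreases:
\Delta S_{\mathrm{univ}} = -k\,\bigl(\log 2 - h(\epsilon)\bigr)
  \;\xrightarrow{\;\epsilon \to 0\;}\; -k \log 2 .
```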
It is not only for the Szilard engine; it is completely general. What we have done here is to check it for the Szilard engine, and to check that, for a given α, you can even reach the equality. Which raises the question, as somebody asked me: can you always reach the equality? For instance, we know that if you measure properties of the velocity, you normally do not know how to use this information in an optimal way. And if you monitor a particle in continuous time, it is also not clear whether you can use the information — in that case the information is infinite, so it is certainly not clear that you can use all of it. We do know that the bound can be reached if you measure the position at discrete times; we have papers on how to reach it — on optimal protocols for the demon — based on the property we discussed before, and papers on when this inequality is the best bound you can have. Okay, so we have explored today the idea of how to incorporate information into the second law. There are other results — information reservoirs, memories, and so on — but this is the most important one: it can be applied to the Szilard engine and to any feedback process. But we didn't address the most fundamental question, which was Maxwell's original concern: is the second law subjective? Does it depend on the information that we have, or is entropy an objective property? We will try to address this tomorrow by incorporating the physical nature of the demon into the picture. So we will have a system and a demon, both of them physical systems, and we will apply this machinery to them.
We will derive Landauer's idea that the measurement costs something — it costs to measure and also to erase. And since I know the school also touches on biology, tomorrow I will talk about a framework for interpreting nonequilibrium systems, like molecular motors, as information devices. Okay. [Moderator: Before we finish, maybe we can give the Zoom participants the opportunity to ask one or two short questions. Are the exercises ready?] [Student: I have a question. First, I just want to make sure: is this feedback second law valid for an arbitrary protocol, or only for the protocol that extremizes the work and information?] No, no — the inequality is valid for any protocol. Of course, the information depends on your measurement and on the state of the system, but the second law itself is completely general. The only question is how tight the inequality is; the inequality holds in general. [Student: And the second question: in exercise two, the mutual information is supposed to be symmetric between X and M, but the expression we get seems asymmetric.] The point is that you have to choose one of the three equivalent expressions — this one, this one, or this one. I suggest you use H(M) − H(M|X). You could use H(X) − H(X|M), where H(X) is just h(p). It is true that apparently it looks different, but that route is more difficult to calculate, because you have to suppose that you know M and then ask how X is distributed given M — you need Bayes' rule.
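The agreement of the two routes can be verified numerically via Bayes' rule, P(x|m) = P(x,m)/P(m). This is a sketch with my own names, and it also shows why the H(X|M) route is messier:

```python
import math

def h(x):
    """Binary entropy in nats."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

p, eps = 0.3, 0.1                      # P(X = 0) and the error probability
# Joint distribution P(X = x, M = m) of the binary symmetric channel.
px = {0: p, 1: 1 - p}
joint = {(x, m): px[x] * ((1 - eps) if m == x else eps)
         for x in (0, 1) for m in (0, 1)}
pm = {m: joint[(0, m)] + joint[(1, m)] for m in (0, 1)}   # marginal P(M = m)

# Route 1: I = H(M) - H(M|X); the error is x-independent, so H(M|X) = h(eps).
i1 = h(pm[0]) - h(eps)

# Route 2: I = H(X) - H(X|M); needs Bayes, P(x|m) = P(x, m) / P(m).
hx_given_m = sum(pm[m] * h(joint[(0, m)] / pm[m]) for m in (0, 1))
i2 = h(p) - hx_given_m

assert abs(i1 - i2) < 1e-12            # the two routes agree
print(i1, i2)
```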
But the result must be the same, because it is a mathematical identity. The interpretation, though, is not trivial: these formulas say that the reduction of uncertainty in M due to knowing X is equal to the reduction of uncertainty in X due to knowing M, and in many instances of information theory that is not trivial at all. [Student: so you are simply swapping ε and p, the probabilities?] No. For that route you would have to assume you know M and compute, say, P(X = 0 | M = 0), and that is not 1 − ε, because the p enters as well — so it is not so direct. You can do it as an exercise: the conditional probability is just the joint probability divided by the marginal, P(X|M) = P(X,M)/P(M); in the denominator you will have P(M = 0), which is the q we calculated. So it is a real calculation — it works, but it is a mess. [Student: Okay, I understand. Thank you very much.] [Moderator: Maybe some more questions from Zoom? Will the material be available for looking at it again?] Well, I can share the slides — I think the paper is clearer, but I don't mind sending the slides; they will be uploaded. Okay, it seems that we do not have any more questions from Zoom, and I think we are now about to close this very, very successful day. You are all still fresh, even though we started at 9:30 and now it is 6.
That is eight hours of thinking and lecturing, and the fact that you are still fresh describes very well what today was. So thank you very much. Thank you. Thank you very much.