It's questions and one exercise, based on the exercises that you have done in class, so it will be very easy. You can use your notes or printed notes, anything you like in the exam, except phones or things that can connect you with other people. So no phones, no tablets, no computers and so on. If you don't have notes, you can print some or just make a summary. And if you need some formula that is not there, you can ask during the exam as well. I think that's everything we have to say about the exam.

So yesterday we introduced this; let me recall the main point. We have the second law, which tells us the following: if I take the entropy of a system, in this case a bipartite system where each part has its own environment, compute its change per unit of time, and add the change of entropy of the environments, then this is the total entropy production, the total change in entropy. Sometimes I write "tot" or "prod", because it is a production of entropy: it is how the entropy of the universe changes, and it is bigger than zero. This is the second law. You might object: you are using the second law, but sometimes you say that you have to prove it, because the Maxwell demon put the second law into question. What is true is that once you consider systems with environments, here an environment for X and an environment for Y, you are already assuming some degradation of the information, because you don't have access to the microstate of the environment. So you are in a scenario where the second law is safe. In fact the master equation, as I showed yesterday, is compatible with the second law; if you like, you can derive the second law from the master equation. But this is not a proof of the second law from fundamental mechanics, because when you introduce environments, master equations, Langevin equations, all these things, you are already including irreversibility: you are replacing the microscopic dynamics of the environment by noise and so on. Anyway, we are now not so interested in the foundations of the second law, but in applying what we have learned about the Maxwell demon to this scenario, which is relevant for nanomachines, biological machines and so on.

The key idea was to decompose this. We know from information theory that the entropy of the joint system can be decomposed into the entropy of each subsystem minus the mutual information, which is a measure of correlation. This formula is actually what Leah used on Monday. She didn't use it explicitly, but her results follow from it: if you integrate it over the measurement, say this is the system and this is the demon, then the system entropy doesn't change, so that term is zero when you integrate it; what remains is the change of entropy of the demon, the observer; and when you integrate the mutual information term, you get the mutual information at the end minus the mutual information at the beginning, which is zero because the measurement starts with the two states uncorrelated.
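To fix notation for what follows, this is the "main equation" as I reconstruct it from the board (a hedged transcription; the names of the terms are the ones used in the lecture):

$$\dot S_{\rm tot} \;=\; \dot S_{XY} + \dot S_{\rm env} \;=\; \dot S_X + \dot S_Y - \dot I + \dot S_{\rm env} \;\ge\; 0, \qquad I(X;Y) \;=\; \sum_{x,y} p(x,y)\,\ln\frac{p(x,y)}{p(x)\,p(y)}.$$

Integrated over a measurement in which X does not change and the initial correlations vanish, this gives $\Delta S_Y - I_{\rm fin} + \Delta S_{\rm env} \ge 0$.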
So you have a kind of second law for the measurement, which is exactly what Leah derived on Monday using free energies. But remember that free energy is energy minus TS, so you can go from entropy to free energy just by multiplying by T and so on. So this equation contains everything that we had there; let's say this is the main equation. And as I said, you can apply this even to continuous-time processes. The Szilard engine is continuous in time, but the measurement is just an event that lasts some time, and you can integrate this over the different stages of the engine; and you can also apply it to genuinely continuous-time dynamics.

The idea is that when İ is positive, you are measuring. Forget about the entropy terms for the moment; suppose they are constant. If İ is positive, you need to increase the entropy of the environment: you have to dissipate energy, or heat, or something like that. So measuring costs something. And when İ is negative, you have feedback. When İ is negative, this term in the entropy production is positive, so the entropy of the environment can decrease, which is the idea of a motor. Whenever you have a motor, the motor takes energy from a bath; or think of air conditioning, heat pumps: heat pumps also reduce the entropy of the environment. So in the first case the entropy of the environment must grow, and in the second case decreasing it is a possibility, of course not always, since this is only an inequality. Of course, I'm neglecting the entropy terms now; this is the case Ṡ_X = Ṡ_Y = 0. I'm writing everything per unit of time, but you can have the same scheme integrating over some period of time, and then instead of İ it will be ΔI. But this is the idea: whenever you find that the correlations are increasing, you are measuring, and whenever you see that the correlations are decreasing, you are doing feedback. And this could be the whole story of the Szilard engine in just one line.

[Question.] Yes. Here Leah had to use notation for X before and after, and she also used M for the memory. But now I'm going to analyze two systems where it is not even clear a priori who is the demon and who is the system, so I use X and Y. And the dots mean derivative with respect to time. Why does the entropy depend on time? Because X and Y are stochastic processes. I didn't write the time dependence explicitly, but of course if the two variables are independent the mutual information is zero, and İ means its time derivative. So now I have two random processes, and this is super general: they can be continuous, and then you have a Langevin equation, a stochastic differential equation; they can be discrete, with master equations; well, they must be physical systems, for that you need physics, but you can have whatever you like. They depend on time, so all the entropies depend on time. The mutual information here is just the sum over x and y of p(x, y) times the log of p(x, y) over p(x)p(y); I wrote this yesterday. For continuous variables you have ρ, the density. And Leah didn't use this form, because this is continuous time.
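A compact way to state the two regimes, under the simplifying assumption Ṡ_X = Ṡ_Y = 0 made above:

$$\dot I > 0 \ \text{(measurement):}\quad \dot S_{\rm env} \;\ge\; \dot I \;>\; 0; \qquad \dot I < 0 \ \text{(feedback):}\quad \dot S_{\rm env} \;\ge\; \dot I, \ \text{so } \dot S_{\rm env} < 0 \text{ is allowed}.$$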
Well, the Szilard engine is continuous in time, but you can integrate, let's say, between the time before and the time after, and then she used Y′ or M′ or whatever to denote the variables after and before and so on. But this is even more general. Actually, we could have started the whole course like this and then applied it, but the order is historical and logical: all this comes from the question posed historically by Maxwell and so on.

So now I have a bipartite system. Bipartite here means just that the system is composed of two subsystems; in a moment we will have to restrict this definition, and I will give you a proper definition of a bipartite system. What is the problem? If the system is autonomous, so there is no driving, no external agent, and for the moment X and Y can be continuous, discrete, whatever, then the system reaches a steady state. In the steady state, by definition, the distribution does not depend on time, so this derivative is zero: İ is zero, but the entropy derivatives are also zero, Ṡ_X = 0 and Ṡ_Y = 0. The only remaining term is the entropy change of the environment, and that is the second law for a steady state.

You can imagine any transport phenomenon: you have two thermal baths, T_cold and T_hot, something in between, and a flow of heat. This is a non-equilibrium steady state, and it produces entropy all the time. And what is that entropy? You have two baths and a system, with heats Q̇_hot and Q̇_cold per unit of time, so the entropy change of the environment per unit of time is a flow of heat over temperature. Q̇_hot is usually positive with this convention, so you have minus Q̇_hot over T_hot, the change of entropy of the hot bath, and Q̇_cold over T_cold, the change of entropy of the cold one; I write the full expression below. Usually the entropy of the hot reservoir decreases and the entropy of the cold bath increases. In the steady state, Q̇_cold is equal to minus Q̇_hot, so the total is Q̇_hot times (1 over T_cold minus 1 over T_hot); the first term is bigger than the second, so this is always positive. So this is the simplest case of a steady state, where the second law just tells you that heat flows from hot to cold.

[Student: are the heats equal? Q̇_hot equals minus Q̇_cold?] Q is always over a period of time, so yes, it's the integral of Q̇, and in the steady state they are equal and opposite. Maybe in the transient you can have something else; yes, in the transient the energy of the system changes, because of the first law: the first law tells you that ΔE of the system is Q_hot plus Q_cold.
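Written out, the steady-state entropy production of the two-bath example is (with $\dot Q_{\rm h}$ the heat per unit time extracted from the hot bath and delivered to the cold one):

$$\dot S_{\rm env} \;=\; -\frac{\dot Q_{\rm h}}{T_{\rm h}} + \frac{\dot Q_{\rm h}}{T_{\rm c}} \;=\; \dot Q_{\rm h}\left(\frac{1}{T_{\rm c}} - \frac{1}{T_{\rm h}}\right) \;\ge\; 0,$$

so $\dot Q_{\rm h} \ge 0$ whenever $T_{\rm h} > T_{\rm c}$: heat flows from hot to cold.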
So if the system changes its energy in the transient, it's because the heats are not balanced. So yes, you are right that in the transient the heats can differ. In the steady state everything is constant; by definition, the distribution is independent of time. If the dynamics is a master equation, we saw yesterday that the master equation has a stationary solution, and this stationary solution is constant.

For instance, remember the example that we did last week. I think it was like this: four states, with these transitions mediated by a bath at T₁ and these by a bath at T₂, and the energies were 0 here, ε here, 0 here and ε here. You have a master equation for that. This is not presented as a bipartite system, but you could even think of this direction as X and this one as Y, two bits, and write the states as (x, y). We saw that in the steady state there was a current. Do you remember this example? If T₂ is smaller than T₁, we looked at the limits of infinite T₁ and zero T₂: essentially the particle is driven around, and once it drops it can never go up against the cold bath, so it circulates: this type of motion, then down, then this type of motion, then up. So you have a current, and whenever the system is jumping like that, it is exchanging energy with a bath: when it goes from energy 0 to ε on a transition mediated by bath 1, that ε comes from that bath; when it drops by ε on a transition mediated by bath 2, it dissipates ε to the cold bath, and so on. So in each cycle you are taking 2ε from the hot bath and dissipating 2ε to the cold bath. This is another example of this type of steady state. And because here the cold temperature is zero, the entropy production is actually infinite; this is normal, when you have baths at zero temperature you get infinite production of entropy.

[Student question about Q̇_hot.] Q̇_hot, I would say, is 2εJ, where J is the steady-state current. The units are right: ε is an energy and J is a probability per unit time, so εJ is joules per second. The 2 comes because you have to sum over transitions. Remember that the heat current is the sum over pairs i < j of J_ij times the energy difference, the energy of j minus the energy of i, or the opposite, depending on the convention. If you apply this formula to the four links, you get the total heat; when you want Q̇_cold or Q̇_hot separately, you sum only over the transitions mediated by that bath. We didn't study this separation, we studied in general only the total heat, but it is natural: if one bath mediates these transitions and the other bath those, to obtain Q̇_hot you only look at the transitions mediated by the hot bath. So yes, I'm pretty sure these are the heats here. And what is Ṡ_env? It is going to be 2εJ times (1 over T₂ minus 1 over T₁). OK, so these are examples. Actually, we are now going to study the information flows.
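As a hedged summary of the bookkeeping in this four-state example (taking J as the steady-state cycle current, positive in the direction driven by the temperature bias, so the sign of the bracket is fixed by positivity):

$$\dot Q_{\rm h} = 2\varepsilon J, \qquad \dot Q_{\rm c} = -2\varepsilon J, \qquad \dot S_{\rm env} = 2\varepsilon J\left(\frac{1}{T_2} - \frac{1}{T_1}\right) \;\ge\; 0 \quad (T_2 < T_1),$$

which indeed diverges as $T_2 \to 0$, as noted above.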
For this example we didn't set any exercise, because the calculation was complicated, but maybe the information flows are easy to calculate for it; I will check it this afternoon.

[Student question about the assumption.] No, this is just an assumption; it is not necessary for it to be like that. Why did I put it? I wrote it just to clarify the meaning of measuring and feedback. In the case Leah started discussing on Monday, it holds: in the measurement, the system entropy term is zero because x is not affected. In the feedback, this term is not zero, but its integral is zero because it is a cycle. And finally, the integral of the memory entropy is zero if the memory is symmetric, because in a symmetric memory the entropy doesn't change. But it is an assumption, just a special case, because what matters here is the correlation and how the correlations affect the entropy production in the environment. Well, in the steady state, yes; outside the steady state, in the transient, you cannot say anything.

OK, but this was just an introduction to the point that for an autonomous system with no driving, you reach the steady state. Usually you are interested in the steady state, because the transients are complicated and depend on the initial condition; and when you want a motor in the cell, you want the motor to work in the steady state, essentially. And then this equation is useless; well, it just reduces to that statement, which is not very... OK, it's fine: it tells us that heat goes from hot to cold, or that currents of particles flow from high chemical potential to low chemical potential. But this is something that you know from undergraduate courses on thermodynamics, so it is not a big deal. We want something similar to this nice equation, but for steady states, and for that we introduce the information flows.

[Student: where is the Q...? You erased the Qs.] OK. When you have two baths at two temperatures, maybe this is basic thermodynamics, you have a system here with heats Q̇₁ and Q̇₂ per unit of time; or instead of per unit of time you can think of a cycle, like a Carnot cycle or something like that. If the system doesn't change, Q̇₁ is minus Q̇₂, and the entropy production is minus Q̇₁ over T₁ minus Q̇₂ over T₂. So I can substitute, for instance, minus Q̇₁ over T₁ plus Q̇₁ over T₂, and I obtain Q̇₁ times (1 over T₂ minus 1 over T₁). So if T₂ is the cold one, this factor is positive, and the second law tells you that the product must be positive, so Q̇₁ is positive, and so on. [Student asks.] But this is a consequence of the second law: heat flows from hot to cold because of the second law, because it is the only way in which the entropy of the universe increases. This expression is the entropy production, and the heat must have the same sign as this factor. And we want to keep...
I mean, we want to extend this to steady states but keep these terms, or at least the information contained in these terms, about the mutual information, the entropy, and so on. So we introduce the information flows. Well, I did all this yesterday, so I'm partly repeating yesterday's class. The information flows are defined as follows: the flow for X is the derivative with respect to t′ of the mutual information between X at time t′ and Y at time t, evaluated at t′ = t. You have other definitions in the literature, but this one, I think, is the most elegant; not the most practical, but the most elegant. I wrote this yesterday.

This is hard to calculate, because if you want to play with it, you need the joint probability of X at time t′ and Y at time t. The best way of calculating it is from the two-time probability, for instance of (x′, y′) at t′ and (x, y) at t: the joint probability of finding system X at x′ at time t′ and system Y at y at time t, together with the other variables. Then you marginalize, and this gives you the probability distribution of X at t′ and Y at t, and from that you can get the mutual information of X at one time t′ with Y at a different time t. But this is a mess.

Fortunately, these two flows are easy to compute in what are called bipartite discrete systems. So what is a bipartite system? There are different definitions, but I'm going to use the simplest one: the transition rate γ, the probability per unit of time to jump from (x, y) to (x′, y′), is zero if x is different from x′ and, at the same time, y is different from y′. This has a very simple interpretation. If this axis is y and this is x, and these are my states, it means that in a single jump I cannot change x and y simultaneously: there are no diagonal jumps. I can only change y, or only change x, the vertical and the horizontal; transitions where both x and y change are forbidden. So this is the definition of a bipartite system for discrete systems.

For continuous systems it is a bit more complicated: if you have two Langevin equations, bipartite means that the noises in the two equations are uncorrelated. And you can even have mixed systems. In the basic references there is a kind of pedagogical paper on information flows where we apply this to a device, a nanotube which vibrates, so x is continuous and obeys a Langevin equation, while y is the occupation of a quantum dot, so it is discrete, only 0 or 1. So you can have x continuous and y discrete, and then the definition must be modified. The definition above is only for x and y discrete, which is much simpler.

When this holds, the information flows are rather easy to calculate. You can write the rates as follows: the probability per unit time to jump from x to x′ for a given y. The only nonzero transition rates are the vertical and horizontal ones, and the horizontal ones, for instance, are given by that: by definition, the rate to jump from a state (x, y) to (x′, y), so you only change x and y stays constant. We will use a similar notation for jumps in y.
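In symbols (a hedged transcription of the board): the information flow for X and the bipartite condition read

$$\dot I^X \;\equiv\; \left.\frac{\partial}{\partial t'}\, I(X_{t'};Y_t)\right|_{t'=t}, \qquad \gamma_{(x,y)\to(x',y')} \;=\; 0 \ \text{ if } x \neq x' \text{ and } y \neq y',$$

so the nonzero rates can be written $\gamma^{y}_{x\to x'}$ (horizontal jumps, y fixed) and $\gamma^{x}_{y\to y'}$ (vertical jumps, x fixed); $\dot I^Y$ is defined in the same way with the roles of the two systems exchanged.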
And now let us write the Langevin equation... sorry, the master equation. Remember, the master equation was something like this: the derivative of p(x, y) is the incoming flow, since I can come from anywhere to my state (x, y), minus the outgoing flow, since from my state (x, y) I can go to any state. This is general for any system composed of two parts. If the system is bipartite, instead of doing this, I can decompose the jumps: these horizontal and vertical rates are the only nonzero gammas. So I can write my master equation as a sum over x′, because I can reach x by horizontal jumps, from x′ to x, and I can also leave; plus the same for y. One term is the contribution of jumps where y is constant, and the other is the contribution of jumps where x is constant.

And now remember that we use the currents. I will keep this expression, because this is where I want to arrive at the end of the calculation. You can see that the master equation is split in two parts, one due to jumps in x and the other due to jumps in y. This is going to be relevant here because, remember, the information flow İ^X is the change in the mutual information due to the evolution of x. So this part can be expressed in terms of the jumps in x, and this other part, where we keep x constant and move y, will give İ^Y, the information flow of y.

So let me write the definition. The current for a given y from x to x′ is the horizontal current, let's say; there are no diagonal transitions, so you only have horizontal and vertical currents. The horizontal current will be J^y and the vertical current, keeping x constant and moving y, will be J^x, and they are defined like that: the rate from x′ to x times the probability of (x′, y), minus the rate of the reverse jump times the probability of (x, y). I think the notation is a bit cumbersome, but the idea is very simple: it is to split everything in the master equation into two terms, jumps in x and jumps in y.

[Student question.] Yes, this is a condition; not every system is bipartite. In this sense it is a definition of bipartite systems, which I think was introduced in the paper by Jordan Horowitz and Massimiliano Esposito. I don't know if the terminology is completely general, maybe it is from another paper; maybe you, Matteo, know. Bipartite essentially means that the system is composed of two subsystems; this is an extra condition, and they call bipartite the systems that obey it, the condition that there is no diagonal term. I'm not so sure it is a completely general usage, but they use it, so I trust them.

[Student question about measurement and the system.] Well, this condition is a condition on the dynamics of the system, but you can see where it comes from: the Maxwell demon in general is what Leah did on Monday. Usually in the measurement only the demon changes and the system remains constant, and in the feedback it is the other way around. This is why we use this type of decomposition; it is inspired by that. And we define the same for X. So the master equation can be written in this way:
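(A hedged reconstruction of the boardwork, with the sign convention that $J^{y}_{x'\to x} > 0$ means net probability flow from $x'$ to $x$.)

$$\dot p(x,y) \;=\; \underbrace{\sum_{x'}\Big[\gamma^{y}_{x'\to x}\,p(x',y) - \gamma^{y}_{x\to x'}\,p(x,y)\Big]}_{\text{jumps in }x} \;+\; \underbrace{\sum_{y'}\Big[\gamma^{x}_{y'\to y}\,p(x,y') - \gamma^{x}_{y\to y'}\,p(x,y)\Big]}_{\text{jumps in }y},$$

$$J^{y}_{x'\to x} \;\equiv\; \gamma^{y}_{x'\to x}\,p(x',y) - \gamma^{y}_{x\to x'}\,p(x,y), \qquad J^{x}_{y'\to y} \;\equiv\; \gamma^{x}_{y'\to y}\,p(x,y') - \gamma^{x}_{y\to y'}\,p(x,y),$$

$$\dot p(x,y) \;=\; \sum_{x'} J^{y}_{x'\to x} \;+\; \sum_{y'} J^{x}_{y'\to y}.$$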
So ṗ(x, y) is a sum of horizontal currents from x′ to x plus a sum of vertical currents. It is again the incoming flow, but now we can distinguish between the two, and you see it is very clear: this part accounts for the change in the probability due to system X, and this part due to system Y. One can also prove, for instance... I'm not going to use this, and it is not in the paper, but you can prove, and it is not easy, a similar splitting for the two-time probability of X at t′ and Y at t. Remember that we can write the probability distribution of these two random variables; now we take the derivative with respect to t′ and set t′ equal to t. This is essentially how the probability changes considering only that X changes, only those jumps. You can prove this as an exercise, which is not an easy exercise: in a bipartite system you can split the evolution of the whole thing into an evolution in X and an evolution in Y. And that's the whole story.

[Student: the definition is that if time is very small, then we have...] No, no; this is more precise. When you define transition rates in a master equation, the definition is this one: the probability of (x′, y′) at time t + Δt, conditioned on (x, y) at time t, is γ times Δt plus terms of order Δt squared. This is very specific; it is what allows you to derive the master equation. So it is not a question of keeping Δt so small that you neglect a term. These diagonal jumps must be of order Δt squared: you can have motion like that, from here to there, but the probability of that jump in a time Δt must be of order Δt squared.

[Student: isn't that very restrictive?] No, it's not so restrictive. Bipartite systems are usually... these are chemical reactions mediated by some chemostat, and... well, I must admit that most of the applications are models that people have cooked up. They are inspired by some biological motor and so on, but they are models. And yes, you are right that in the end it is an approximation. This type of model can also arise if this is a protein, where this axis is position and this one is the internal states. Usually this comes from a free-energy landscape like an egg box: you have different minima with barriers in between, and usually this path is much easier than the diagonal path. But in that case it is, as you said, a kind of approximation. And yes, the system always comes from either a classical system, which is continuous, and then you discretize something like the native states, I mean the equilibrium states of proteins; or a quantum system: this has been applied, for instance, in the paper by Massimiliano and Jordan to quantum dots. And then, essentially, you neglect the simultaneous jumps. In principle, mathematically, the condition is that one.
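The precise statement behind this exchange, as I reconstruct it:

$$P\big(x', y',\, t + \Delta t \,\big|\, x, y,\, t\big) \;=\; \gamma_{(x,y)\to(x',y')}\,\Delta t \;+\; O(\Delta t^2),$$

and the bipartite condition demands that for $x \neq x'$ and $y \neq y'$ simultaneously this probability is entirely $O(\Delta t^2)$: simultaneous jumps are negligible at first order in $\Delta t$.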
So you need that the rate is zero when both change; the condition is that a simultaneous change is of order Δt squared. But of course, in the end, yes: you also have a Markovian approximation here, so to get the master equation you make a lot of approximations anyway. So maybe you are right that within these approximations you can sometimes neglect the simultaneous jumps.

OK. So the nice thing about a bipartite system is that, if a system is bipartite, the derivative of the mutual information can be written as the sum of an X part and a Y part. Remember that İ is the derivative of the mutual information with respect to time, and you can always remember the mutual information as the Kullback-Leibler divergence between the joint distribution and the factorized one. If you compute the derivative, you get the derivative of p times the log, plus p times the derivative of the log; because of normalization, the only remaining term is the first one. We have done this for the Shannon entropy, and it is the same for the mutual information: the derivative of the log gives you a 1 over p, which cancels with the p, and then the sum of ṗ is zero because of normalization, and the same for the marginals. So you only have to differentiate the first factor; the derivative affects everything, of course, but all the other terms are zero. And then we can use the master equation, the typical one with all these things, to express this as the sum over x and y of ṗ(x, y) times the log of p(x, y) over p(x)p(y). This is how the mutual information evolves, and in the steady state it is zero.

But now we have these jumps, and the idea, the original idea of all these people that I mentioned, is that we can split this into two terms, one with the vertical currents and one with the horizontal currents. I'm doing it in general, for time-dependent probabilities, but at the end of the day I will apply this only to the steady state. So I split these sums into changes in x and changes in y. Here, remember, I have jumps that keep y constant and go from x′ to x; and in the other term I keep x constant and jump from y′ to y. This one is the information flow due to the jumps in y, and this one the information flow due to the jumps in x.

And the nice thing is that in the steady state the total can be zero while each flow is different from zero. This is the idea: in the steady state the whole mutual information is constant, but this flow can be different from zero and that one too. In the steady state İ is zero, so the flow due to x is minus the flow due to y, and they are actually different from zero in most cases. Today I will not have time to do an example, but in most cases this is what happens.
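Writing out the split (hedged; the simplified second forms use the cancellation proved two paragraphs below):

$$\dot I^X = \sum_{x,x',y} J^{y}_{x'\to x}\,\ln\frac{p(x,y)}{p(x)\,p(y)} = \sum_{x,x',y} J^{y}_{x'\to x}\,\ln p(y|x), \qquad \dot I^Y = \sum_{x,y,y'} J^{x}_{y'\to y}\,\ln\frac{p(x,y)}{p(x)\,p(y)} = \sum_{x,y,y'} J^{x}_{y'\to y}\,\ln p(x|y),$$

with $\dot I = \dot I^X + \dot I^Y$, and in the steady state $\dot I = 0$, so $\dot I^X = -\dot I^Y$.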
So then I will have one information flow positive and one information flow negative, and now look at the interpretation. These information flows, I think we discussed this yesterday as well, are the change in the mutual information due to the evolution of x keeping y constant, and due to the evolution of y keeping x constant. So what happens if, for instance, İ^X is positive? What does it mean? It means that the evolution of x, keeping y constant, makes the mutual information grow. So x is measuring y; x is the demon. And in this case İ^Y is negative, because in the steady state, if one is positive, the other is negative. What does it mean that İ^Y is negative? That y, in its evolution keeping x constant, is decreasing the mutual information: it is exploiting the correlations, like in a feedback. Remember that in the feedback the demon doesn't change its own state, but modifies the Szilard setup and so on to decrease and exploit the correlations. So y is the working substance. And this is the way we can apply the ideas of the thermodynamics of information to this system.

This is nice, but you can say: is it useful just to say "x is the demon"? It is very nice, but we would like something more useful, which is a second law. And the nice thing is this. Remember that the second law we had was useless, because in the steady state everything was zero. But we can make a second law for each of these flows: we can prove local second laws, a second law for x and one for y. We can split everything. We have split the change of the mutual information into the change due to x and the change due to y. The entropies are already split, because Ṡ_X is due to jumps in x, it is the entropy of the marginal, and the same for the other marginal. And we can also split the entropy of the environment into an environment of x and an environment of y; I will do this tomorrow. So we can prove two second laws, one due to jumps in x and one due to jumps in y. This is general, and in the steady state, of course, Ṡ_X and Ṡ_Y are zero, so we will have that Ṡ^X_env minus İ^X is positive; tomorrow I will prove this. And this is new. This is not just a fancy interpretation of the machine as an information machine: these are two conditions. If you sum both, the information flows cancel, because in the steady state one is minus the other, and you get the entropy of the environment: you recover the second law. But you have two more constraints on your system, and this is because the system is bipartite.

And this is interesting because it tells you, for instance: suppose that x is the demon, as we said, so İ^X is positive. Then Ṡ^X_env must be positive, because it must be bigger than or equal to İ^X. So the demon is increasing the entropy of its environment; it is essentially dissipating heat or something like that, increasing this entropy in order to measure. But now look at the other one: İ^Y is negative, so Ṡ^Y_env can be negative. So y, the working substance, can decrease the entropy of its environment, and this is essentially what a machine is.
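The two local second laws, as stated (to be derived tomorrow; a hedged transcription):

$$\dot S_X - \dot I^X + \dot S^X_{\rm env} \;\ge\; 0, \qquad \dot S_Y - \dot I^Y + \dot S^Y_{\rm env} \;\ge\; 0,$$

which in the steady state ($\dot S_X = \dot S_Y = 0$) reduce to $\dot S^X_{\rm env} \ge \dot I^X$ and $\dot S^Y_{\rm env} \ge \dot I^Y$. Summing them and using $\dot I^X = -\dot I^Y$ recovers the usual second law, $\dot S_{\rm env} = \dot S^X_{\rm env} + \dot S^Y_{\rm env} \ge 0$.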
A machine is something that increases the entropy of something. For instance, in the ATP/ADP machines in biology, you increase the entropy of the chemostat, of the ATP/ADP reservoir, but you move against a force or something like that, or you take energy, for instance heat, from another reservoir. So you are increasing the entropy of one reservoir and decreasing the entropy of another.

OK, I wanted to establish the laws, and maybe I have time to prove part of this, because it is not so difficult. Let me prove the first one; the second is essentially the same. I will make the proof, and you will see the specific definition of the environment. So, to finish, let me split this log into a sum, and let me do it for this term. I have here the log of the joint p(x, y), minus log p(x), minus log p(y). Now consider only this term: it is the current J^x from y′ to y times the log of p(x), and it is a sum over y′, and also a sum over x and y. I can take the log outside and rearrange the sums in such a way that I have log p(x) times the sum over y and y′ of J^x from y′ to y, and this is zero. This is zero because the current is antisymmetric with respect to y and y′, and I am counting all the possible jumps. Or, if you like: a current times a function tells you how that function changes, so here you are computing how log p(x) changes using jumps that only change y, and that is zero; when only y changes, only the terms that contain y change. So in the equation for İ^X I can neglect the log p(y), and in the equation for İ^Y I can neglect the log p(x). So the information flow is just this sum. And in the steady state... well, let me keep it general, not only the steady state; we will prove it tomorrow for the general case.

What is the environment? You can imagine where the environment enters into the mathematics of the master equation: it enters when you compute the energy change, or when you have a detailed balance condition. Remember that we always have this: you can have energies E_i and E_j, and then the rate γ from i to j; here you can have a temperature for each transition, because we have seen in this example that each transition can be coupled to its own thermal bath; and then you have this term, the free-energy change in the environment. When you take k_B T times the log of the ratio of the rates, you get the change of entropy in the environment: the heat, the energy dissipated to the bath, and the free-energy change in the bath. Now you have changes in y and changes in x, so some of the entropy changes happen in the vertical transitions and some in the horizontal transitions, and each environment is responsible for its own transitions: the environment of y is responsible for the vertical transitions, and the environment of x for the horizontal ones. You can have different thermal baths, different particle reservoirs, and so on.
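The vanishing term in the proof, written out, together with the entropy change per jump that defines the environment of X (a hedged reconstruction, with $k$ the Boltzmann constant):

$$\sum_{x,y,y'} J^{x}_{y'\to y}\,\ln p(x) \;=\; \sum_x \ln p(x) \sum_{y,y'} J^{x}_{y'\to y} \;=\; 0 \quad \text{since } J^{x}_{y\to y'} = -J^{x}_{y'\to y},$$

$$\Delta S^X_{\rm env}\,(x \to x';\, y) \;=\; k\,\ln\frac{\gamma^{y}_{x\to x'}}{\gamma^{y}_{x'\to x}}.$$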
To prove the second law, you only need to classify the transitions. And yes, they can share a common thermal bath, for instance, or even a common particle reservoir. So what I'm going to do at the end, and this is what we will do tomorrow, is, for instance: what is the environment entropy change in x? We keep system y constant and change from x to x′, and this transition has some change in the environment, which is k times the log of γ^y from x to x′ divided by γ^y from x′ to x. So you consider transitions where only x changes, and this term, which was in the master equation, the term of the change of entropy in the bath, gives you the change of entropy in the environment of x. And the same for y: for y it would be k times the log of the ratio of the vertical rates.

So it's true, the baths can be shared. The only important thing is that you identify as environment of x whatever is inducing the horizontal changes, and as environment of y whatever is responsible for the vertical changes. They can even be the same. For instance, in the paper by Massimiliano and Jordan you have two quantum dots coupled by some capacitance. You know, a quantum dot is something that, if you connect it to a battery or to something in a circuit, can be occupied or empty and so on. So they connect the dots to batteries; a battery is essentially two Fermi gases with different chemical potentials, μ_l and μ_r. The reservoirs are these leads. But this dot is x and this one is y: when an electron jumps from the electrode to this quantum dot, that is the y environment, and when you have this other jump, that is the x environment. So it is not important whether the environments are physically separated; the only important thing is how the environments, or the single environment, affect the transitions in x and in y. So yes, that's a good point.

And tomorrow we will make this derivation. It is essentially the same derivation that we did yesterday with the master equation and the second law, but using jumps in x or jumps in y, horizontal jumps and vertical jumps. Then I will apply this to an example, and we will finish with some notes about lessons 9 and 10, which will be just some transparencies.
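As a numerical illustration (not from the lecture): here is a minimal sketch, under the formulas reconstructed above, that builds a small bipartite master equation with made-up rates, finds its steady state, and checks that İ^X + İ^Y = 0 and that the two local second laws hold. All rates are hypothetical, k_B is set to 1, and the variable names are mine.

```python
import numpy as np

# Hypothetical example: a 2x2 bipartite Markov jump process.
# gx[y, x, x'] is the rate x -> x' at fixed y (horizontal jumps);
# gy[x, y, y'] is the rate y -> y' at fixed x (vertical jumps).
# No diagonal jumps: that is the bipartite condition.
nx, ny = 2, 2
rng = np.random.default_rng(1)
gx = rng.uniform(0.5, 2.0, size=(ny, nx, nx))
gy = rng.uniform(0.5, 2.0, size=(nx, ny, ny))
for y in range(ny):
    np.fill_diagonal(gx[y], 0.0)
for x in range(nx):
    np.fill_diagonal(gy[x], 0.0)

idx = lambda x, y: x * ny + y  # flatten (x, y) to a single state index

# Generator: L[s', s] = rate s -> s'; diagonal fixed by conservation.
L = np.zeros((nx * ny, nx * ny))
for x in range(nx):
    for y in range(ny):
        for xp in range(nx):
            if xp != x:
                L[idx(xp, y), idx(x, y)] += gx[y, x, xp]
        for yp in range(ny):
            if yp != y:
                L[idx(x, yp), idx(x, y)] += gy[x, y, yp]
L -= np.diag(L.sum(axis=0))

# Steady state: null eigenvector of L, normalized to a probability.
w, v = np.linalg.eig(L)
p = np.real(v[:, np.argmin(np.abs(w))])
p /= p.sum()
P = p.reshape(nx, ny)                    # P[x, y]
px, py = P.sum(axis=1), P.sum(axis=0)    # marginals p(x), p(y)

# Information flows and environment entropy rates (k_B = 1).
Ix = Iy = Senv_x = Senv_y = 0.0
for y in range(ny):
    for x in range(nx):
        for xp in range(nx):
            if xp == x:
                continue
            J = gx[y, xp, x] * P[xp, y] - gx[y, x, xp] * P[x, y]  # J^y_{x'->x}
            Ix += J * np.log(P[x, y] / px[x])                     # sum J ln p(y|x)
            Senv_x += 0.5 * J * np.log(gx[y, xp, x] / gx[y, x, xp])
for x in range(nx):
    for y in range(ny):
        for yp in range(ny):
            if yp == y:
                continue
            J = gy[x, yp, y] * P[x, yp] - gy[x, y, yp] * P[x, y]  # J^x_{y'->y}
            Iy += J * np.log(P[x, y] / py[y])                     # sum J ln p(x|y)
            Senv_y += 0.5 * J * np.log(gy[x, yp, y] / gy[x, y, yp])

print("I_dot_X + I_dot_Y =", Ix + Iy)               # ~0 in the steady state
print("sigma_X = Senv_X - I_dot_X =", Senv_x - Ix)  # local second law: >= 0
print("sigma_Y = Senv_Y - I_dot_Y =", Senv_y - Iy)  # local second law: >= 0
```

Running this, the first print should be zero to machine precision, and the two local entropy productions should come out non-negative; whichever of İ^X, İ^Y is positive identifies, in the language of the lecture, which subsystem is playing the demon.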