So an ICTP school in July, but not on the same dates, so you can go from one school to the other. And this one is in Madrid, and it's about complex systems, mostly complex systems. And it's fairly cheap, because the registration is 200 euros. It does not include the accommodation, but there is accommodation for 40 euros per day, which is fine; for Madrid, it's very good. And if you want more information: this is organized by the Division of Statistical Physics of the Spanish Physical Society, but of course everything is in English. You can look for GEFENOL, which is the statistical physics and nonlinear physics group, and 24, because there is a school every year. And these are the courses; you see that you have neuroscience, machine learning, of course artificial intelligence, modeling, biophysics, et cetera. But there is one which is kind of original: this one, complex systems and sports analytics. And these are the lecturers, Javier Buldú and Javi Galeano. Javier Buldú works for Real Madrid now, and Javi Galeano works for Carolina Marín, who is the world champion of badminton. And they apply, especially, complex networks, and they have a lot of papers on football. Here you see, if you click on the name, the page is in Spanish, but some paradoxes about soccer and things like that. So it is complex systems applied to soccer, but also to badminton; it's not only soccer. And this was one of the courses, complex systems and sports. So look for GEFENOL School 24 in Google, and you will have all the information about this school. I will leave it here. And also, before continuing with what we did yesterday: we are talking about thermodynamics, we are in the third lesson, thermodynamics. But I want to make some comments, because talking to you in the afternoon and so on, Lea and I realized that some exercises need some clarification. The first comment is to clarify the notion of mutual information.
As I said on Tuesday, I think: when you have something which is an unknown, a random variable X, and you inquire about this random variable by questions or measurements or whatever, and Y is the outcome of the measurement, then the effect of the inquiry, of the measurement, is to reduce the uncertainty. This is the uncertainty after the question, this is the uncertainty before the question, and there is a reduction. And this reduction is precisely the mutual information: if you like, you can write it as H before minus H after. This is H after, and this is H before. We discussed this, but this also explains why H(X) is the number of bits needed to describe a system. Because if this is n bits, and if I can ask yes/no questions, each one providing one bit of information, then I can ask n questions, say Y1, Y2, ..., Yn, in such a way that the information of each one is one bit. Well, if the questions are error-free, then we proved the other day that the mutual information is the entropy of the answer. So if you ask questions where the entropy of the answer is one bit, each question reduces the uncertainty by one bit. So if I have n bits, the first question reduces this by one, then by another one, et cetera. If I can ask n questions, then H(X) will be n minus one minus one minus one..., so it will be zero at the end. This is why I only need n questions to solve the problem, to find out the value of X. And this is true in general: X could be numbers, we said that X is a random variable, but it could be anything. And there is a nice application of this, which is... maybe you have heard about this problem: you have 12 coins, and one is false. And it's false, and it weighs differently from the rest: maybe it's heavier, maybe it's lighter. You have heard about this problem? Never? And you have a balance, and you can put coins in the two plates.
And then you have to find out which is the false coin in three operations. So you are only able to use the balance three times. This is a problem that you can tell to a kid or to anybody; you don't need any mathematics. You can solve it by using logic, but you can also solve it using information theory. So the exercise is: solve this. You only have three operations, three measurements. And you can solve it just by thinking, I mean by logic, by common sense, let's say. But the exercise is: solve the problem using information theory. So you can calculate the uncertainty. There is an uncertainty here, because you don't know the coin, and even more because you don't know if the coin is heavier or lighter. And each operation with the balance is a measurement, and then you have to maximize the entropy of the outcome, to maximize the information provided by the measurement. It's a nice exercise; tomorrow we can solve it. This works because I(X;Y), we saw yesterday, is symmetric, so you can write it in two ways. If Y is the outcome, the first equation says: this is the uncertainty before, and this is the uncertainty after. But you can also write it the other way, as the entropy of Y minus the entropy of Y given X. The interpretation here is not so clear, because X is the unknown and Y is the outcome. But if the measurement is error-free... error-free means that Y is a function of X. Maybe not a one-to-one function, maybe something else, I don't know. If X is a die, 1, 2, 3, 4, 5, 6, Y can be: is it odd? This could be the measurement. And then, if the person that answers the question is sincere, I mean, if they are not a liar, then Y is a function of X: yes if X is odd, so 1, 3, 5, and no if X is even. So error-free means that Y is a function of X. In the case of a measurement, a measurement without error, X is the state of the system.
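The die example can be checked numerically. Here is a minimal sketch (the dictionaries and names are mine, just for illustration): for the error-free question "is X odd?" on a fair die, the mutual information comes out exactly equal to the entropy of the answer, one bit.

```python
from math import log2

def H(dist):
    """Shannon entropy in bits of a probability dictionary."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# X: fair die; Y = "is X odd?" -- an error-free question, so Y is a function of X
p_x = {x: 1/6 for x in range(1, 7)}
p_xy = {(x, x % 2 == 1): p for x, p in p_x.items()}  # joint distribution of (X, Y)
p_y = {}
for (x, y), p in p_xy.items():
    p_y[y] = p_y.get(y, 0.0) + p

# I(X;Y) = H(X) + H(Y) - H(X,Y); since H(Y|X) = 0 here, I(X;Y) = H(Y)
I = H(p_x) + H(p_y) - H(p_xy)
print(I)  # ≈ 1.0 bit: the entropy of the yes/no answer
```

The same computation with a noisy answer (Y no longer a function of X) would give I(X;Y) strictly smaller than H(Y), which is the general case.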
Maybe the microscopic state of the system. And you measure, like in the Szilard engine, where X is the position, and you measure left or right. If the measurement has no error, then Y is simply "left" if X is negative, let's say (this is X, and this is 0). So in this case, for instance, Y = f(X) equal to "left" if X is negative, and "right" if X is positive. So this means that if you know X, the outcome is deterministic, which is the characteristic of an error-free measurement. Is it clear? And in this case, because the outcome is deterministic once you know X, the conditional entropy H(Y|X) is 0, so the mutual information is H(Y). So in an error-free measurement, the mutual information is the uncertainty of the outcome. This is why people confuse information with Shannon entropy. But it's a different thing: the real measure of the information provided by a measurement or a question is the mutual information. But in many cases, because error-free measurements are very common, the mutual information is equal to the entropy of the outcome. So in the case of the balance, in the problem of the coins, you have to maximize the entropy of the outcome; you need to arrange the measurement in such a way that the entropy of the outcome is maximum. Which, in this case... what would be the maximum entropy of a measurement? If it is a yes/no question, it's 1 bit; that is the maximum entropy that you can have. But the balance is not a yes/no experiment. What are the possible outcomes? I measure with the balance, I put some coins in each plate; what are the possible results? Three: balanced, tilted left, or tilted right. So the maximum entropy that you can get is log of three; it's not 1 bit. This is the trick that you can use. OK, this is the first comment. I hope with this you will have a clear idea of the meaning of mutual information. This is very important. In this course we don't have such heavy calculations, but there are a lot of concepts.
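The entropy budget of the coin puzzle can be sketched in a few lines (numbers only; this does not construct the actual weighings): the initial uncertainty is log2 of 24 cases, and each weighing yields at most log2 of 3 bits.

```python
from math import log2

# 12 coins, one false, heavier or lighter: 12 * 2 = 24 equally likely cases
H_X = log2(24)        # initial uncertainty, about 4.58 bits
H_weighing = log2(3)  # a balance has three outcomes: left / right / balanced

# three weighings supply at most 3 * log2(3) = log2(27) bits of information
print(H_X, 3 * H_weighing)    # uncertainty of 24 cases vs. capacity for 27
print(H_X <= 3 * H_weighing)  # True: three weighings can suffice
print(H_X <= 2 * H_weighing)  # False: two weighings (9 outcomes) cannot
```

This is exactly the point made above: to actually reach the bound, each weighing must be arranged so that its three outcomes are as close to equally likely as possible.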
And the idea of the course is that you really understand these concepts. And to understand a concept, the best way is to look at this concept, at this magnitude, from many different perspectives. So OK, that was the first comment. The second comment is that I realized that exercise number one is a bit confusing; people asked me about it yesterday. Exercise number one is the Szilard engine where the measurement can have an error. I mean, you can think that the particle is on the right... no, let's say we measure it and it's on the left, so it's here, but actually the particle is here; I plot this like a ghost. And people were using... because I insisted a lot on the first day on the pressure that you have to exert. And if you remember, if I think that the particle is here, I will try the expansion in this direction, so I will put my weight like that, to oppose the force, the pressure. If you do this and the particle is actually here, it's a mess: you finally get this in this direction. And this is not the idea of the problem. Because in the Szilard engine you can move the piston by applying this pressure, or you can... in fact, in many experiments, what you do is to control this: your control variable is the velocity of the piston. So you move the piston at a given velocity, and you keep this velocity constant and very small, because it's quasi-static. And then the particle starts to bump, bump, bump. But the idea is that if you believe that the particle is here, you move the piston in this direction with velocity V. So if the particle is indeed on the left, then one can prove that when a particle hits a wall in an elastic way, the velocity is the same before and after, and the kinetic energy is the same before and after. But if the wall is moving, the velocities are equal in the reference frame of the moving piston.
But in the laboratory frame, the outgoing velocity is smaller than the incoming velocity. So there is a loss of energy, which is the work that we have been talking about all these days. So don't think of the pressure and so on. Think just that the piston moves, and that the work is just the integral of P dV. And you can indeed use the ideal gas equation, and this is kT log of the final volume over the initial volume. And with the same convention that we are using, it's like that: if I have an expansion, this is positive, and this means that the system is losing energy; it goes to the external agent. And when I compress, the external agent has to do some work; with this convention, this quantity is negative. So for exercise one, you can do just this. By the way, in these experiments where you have a Brownian particle, you can also think of the Szilard engine like that: instead of this thing with the walls, which is a bit complicated, you can think of a Brownian particle here in a box, in a one-dimensional box, and you insert a barrier and move the barrier. And this is equivalent, and this formula is also true for this type of system. Questions here? Yeah? Well, no, because the formula, the original formula, is minus P dV. I just use this to check the signs: that in compression the work done on the system should be positive, and in expansion it should be negative. We don't put the sign because of that; we put the sign because we said that the work is the energy that the external agent puts into the system, and this is always minus P dV: if dV is negative you are compressing, and if dV is positive you are expanding. In a numerical simulation, I think it's easy. And I think, when people have simulated the Szilard engine, it's like that: they move the piston. I insisted on the pressure because it's in the book Maxwell's Demon, but now I realize that it is a bit complicated. They do it like that: they move a piston like that.
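As a sketch of this recipe for exercise one (the error probability eps and the volumes are my own illustrative assumptions, not values from the exercise sheet): the extracted work is kT log of the final over the initial volume, positive for expansion and negative when a wrong measurement makes the same piston motion compress the particle instead.

```python
from math import log

def szilard_work(kT, V_i, V_f):
    """Quasi-static isothermal work extracted by the agent for a
    one-particle ideal gas going from volume V_i to V_f
    (lecture convention: expansion -> positive, compression -> negative)."""
    return kT * log(V_f / V_i)

kT = 1.0
W_right = szilard_work(kT, 0.5, 1.0)  # correct guess: expand L/2 -> L, +kT ln 2
W_wrong = szilard_work(kT, 1.0, 0.5)  # wrong guess: same motion compresses, -kT ln 2

eps = 0.1  # assumed probability of a measurement error
W_avg = (1 - eps) * W_right + eps * W_wrong
print(W_right, W_wrong, W_avg)  # about 0.693, -0.693, 0.555
```

In the actual exercise the compression stops at some fraction alpha rather than going all the way, but the average over "right" and "wrong" cases is built exactly like `W_avg` here.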
And the simulation is very easy. I mean, the position of the piston is V times t. And then you compute, you solve the collision problem, which is very easy. I think v out is v plus 2 times V, something like that, because when you go to the reference frame of the piston, everything is a simple reflection, and then you go back to the laboratory: you have to subtract V here and add V there. I think it's like that, but I'm not so sure. And then you look at this, and you calculate the kinetic energy and so on. "So you calculate the pressure?" No. No, it is a collision, and there are no accelerations. It's true that the first day I talked about the pressure, and the pressure is more complicated, because then you have to put a force, and then you get an acceleration, some boom, and another transfer of momentum and so on. Here, of course, the momentum is more artificial, because momentum is not conserved: the momentum of the piston is constant, and the momentum of the particle changes. So it is not conserved, and not even the energy. And it's not conserved precisely because the particle, in these collisions, is losing energy. Where does the energy go? It goes to the external agent that is fixing the velocity of the piston. So in the simulation, it's very easy. You just move the piston like that, you compute the collisions, and then you say that the increase or decrease in kinetic energy is the work. And if you are right, the particle is here, and then you expand; if you are wrong, you are compressing, and it's the opposite. When the velocity goes like that, then this is a minus... sorry, this is a minus, no? This is a minus. When the velocity goes like that, it's a plus: you are putting energy into the particle. But to solve the exercise, you just use this formula. And then the only thing that you have to do is: what is the probability that you are right? You calculate the work.
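For the half-remembered collision rule, one consistent way to write it (a sketch; the mass and velocities are my own illustrative numbers): go to the piston frame, flip the velocity, come back. In the lab frame this gives v_out = 2V minus v_in, so the speed loses 2V per bounce in an expansion and gains 2V in a compression.

```python
def collide(v, V):
    """Elastic bounce off a massive piston moving at constant velocity V:
    in the piston frame the relative velocity v - V just flips sign, so in
    the lab frame v_out = -(v - V) + V = 2*V - v."""
    return 2 * V - v

m, v_in = 1.0, 5.0           # particle moving right, toward the piston
V_exp = 0.1                  # piston receding: expansion
v_out = collide(v_in, V_exp)
dE = 0.5 * m * (v_out**2 - v_in**2)
print(v_out, dE)  # -4.8: speed reduced by 2V, kinetic energy lost to the agent

V_comp = -0.1                # piston advancing: compression
dE_c = 0.5 * m * (collide(v_in, V_comp)**2 - v_in**2)
print(dE_c)       # positive: the agent puts energy into the particle
```

Accumulating these kinetic-energy changes over many bounces of a quasi-static piston is exactly the work that the integral of P dV gives in the previous formula.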
If you are wrong, you calculate the work, which is different, because in this case you are compressing. This is also why you cannot compress all the way: you have to compress only up to some alpha, because if you are wrong, you cannot compress to zero volume. So this is the idea: you just compress with a constant velocity. So forget about the pressure and the weights and all these things that we discussed the first day, which are in the book Maxwell's Demon. They are important, but to solve the exercises this idea is much simpler. OK? Questions? And for the second exercise, the third comment. So exercises number two and three, I think, because there are two exercises; I don't have it here. Anyway, in exercise two, it's just to repeat the arguments that I gave the first day, on Monday, on the Landauer principle. But I realized that the argument that I gave here was not in detail. So let me do the Landauer principle in detail. You have a memory. The memory is in a box, in contact with a thermal bath. This is the phase space... well, actually, this is not a very good drawing. I have the memory, my memory, in contact with the bath. So the bath, and there is possibly a heat and a work, and there is an external agent, like always. And in this case, what happens is the following; this is the phase space of the memory. Initially, the system can be in any microstate; these are microstates. And the phase space is split into two regions: one corresponds to 0, and the other corresponds to 1. Exactly like, for instance, in the double-well potential. And by manipulating the Hamiltonian, or in this case this potential, you go from this to 0. So here, the system can occupy all these microstates, and here it can occupy only these microstates. So the volume of the system, the volume in phase space that the system can occupy, is divided by 2. This is what I explained on Monday.
And I said, well, this reduction, because of the Liouville theorem, must be compensated elsewhere, and the only way to compensate it is in the bath. But I didn't explain this in detail. It's a compensation. And why? The compensation works like this: the total volume in phase space, initially... volumes in phase space multiply. If you have two systems, volumes in phase space multiply. Why? It's like the configurations of Ising models. If you have two Ising models, and this one has, let's say, 100 configurations, and this one has 200 configurations, what is the total number of configurations? If you consider the global system, I can make any pair: I can combine one configuration of this guy with one configuration of this guy. So it's the product; the total number of configurations is the product of the configurations here and the configurations here. This is why Boltzmann took the logarithm: because volumes multiply, and if you take the logarithm, the product becomes a sum, and then you have an additive function, which is the entropy. So the total volume is the volume of the system times the volume of the bath. The bath is super complicated; its volume will be super big and so on. But this is a product. So if I divide this by 2 in my process... this is initially, and the final is this one divided by 2. Then the compensation that I mentioned, just using this, was precise, but not in detail: this must be equal to this. So to compensate this, you have to multiply the volume of the bath by 2. This is what you have to do in exercise number 2. If you reduce the volume in phase space of the system by a factor, you have to multiply the bath by this factor. Now you apply the Boltzmann entropy: the Boltzmann entropy is k log volume. And if you apply this to the bath, then here you will have the entropy of the bath. And you have multiplied the volume of the bath by 2.
This means that the entropy of the bath, the Boltzmann entropy, final minus initial, has increased by k log 2: you take this minus this, k log of the final volume minus k log of the initial volume, and the log of this minus the log of this is log of 2. And now you apply the Clausius equation, which tells you, since the bath is in equilibrium, that the increase of entropy is equal to minus the heat divided by the temperature. And then you compare this with this, and you get that Q must be minus kT log 2, which is Landauer's principle. So yeah, I said "just a compensation", but I realized that to do the exercise you need all this information, especially this: that the volumes multiply. So if you reduce the volume of one of them by a factor... and this operation, the Landauer erasure, the overwriting, reduces the volume... you have to compensate this. And what is the compensation? To double the volume of the bath in phase space. No, this is equal to this: the total at the beginning is equal to the total at the end. Why? Because the total system is isolated; well, it's isolated except for the action of the external agent, and it's Hamiltonian dynamics. Ah, sorry. Yeah, yeah. And this is the consequence of the Liouville theorem; you don't even need to use the second law. Then you have to use this other thing, the Clausius equation. It's a funny equation, because Clausius introduced it as a definition of entropy. Then Boltzmann realized that entropy can be more fundamental, and in statistical mechanics we use the Clausius equation as a definition of temperature. And in the last 20 years, when heat was not clear, we have actually been using this equation as a definition of heat. So it's an equation with three symbols, and it has been used as a definition of each of these three symbols. But here you can use it; the concept is true. For systems in equilibrium there is no problem, and the bath is in equilibrium.
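The whole bookkeeping fits in a few lines. A numerical sketch (units with k = T = 1, and an arbitrary bath volume, both my own choices):

```python
from math import log, isclose

k, T = 1.0, 1.0            # work in units where k = T = 1
V_sys, V_bath = 2.0, 1e6   # illustrative phase-space volumes

# Liouville: the total volume (a product, not a sum!) is conserved,
# so halving the system's volume forces the bath's volume to double
V_total_i = V_sys * V_bath
V_total_f = (V_sys / 2) * (2 * V_bath)
assert isclose(V_total_i, V_total_f)

# Boltzmann: S = k log V, so the bath entropy grows by k log 2
dS_bath = k * log(2 * V_bath) - k * log(V_bath)

# Clausius (bath in equilibrium): dS_bath = -Q / T  =>  Q = -kT log 2
Q = -T * dS_bath
print(Q)  # about -0.693 = -kT log 2: Landauer's principle
```

Note that the result is independent of `V_bath`: only the factor by which the bath volume is multiplied enters, which is the point of the argument.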
So this equation is OK for systems in equilibrium; for systems out of equilibrium is when you have problems. So I think this clarifies the three comments that I wanted to make, which will help you to do the exercises. And we can continue. Any question here? Is it clear, everything? No? OK. The volume of the system is divided by 2 because the memory is symmetric. You can have other situations: you can have that 0 is like that, and if 0 is like that, then it's different. And actually, the problem goes in this direction. So this is just because we have studied a specific example of a memory, which is a symmetric memory. Why do we multiply by 2? Well, because of this: we need the total volume to be constant. So if this is divided by 2, we have to multiply this one by 2. Physically? OK, that's a good question. Physically it means the following. You have the phase space of the system... well, the system, which we call the memory. And this is the phase space of the bath. The bath is a system with many, many degrees of freedom, so it is very complicated; let's put it like that. And it has something called energy layers: these are the points where the Hamiltonian of the bath is equal to some energy. So in principle the whole thing is a mess. But we can assume that every microstate of the system has the same energy, which is a bit... I mean, this means that there is no kinetic energy or so; but suppose that we don't have kinetic energy. So the phase space available will be this times this: any pair composed of a point here and a point here will be an available microstate for the whole thing. Of course, these volumes are huge. Now, somehow, you drive all these points from 1 to 0, because you do like that.
And this means a compression in the phase space. So this, the bath part, must increase. How? Well, this layer has a volume; usually we call this volume omega of E. What happens is that this guy introduces some energy here: the bath goes to a new layer, which is precisely E plus the absolute value of Q, because you have dissipated kT log 2, a small amount of energy, to the thermal bath. So these points in the bath increase their energy, and now you fill another energy layer, which has more volume. This is the energy at the beginning, this is the energy at the end, and the difference of these volumes is precisely this factor of 2. And this can be proved from the definition of temperature, but it's kind of complicated if you want to imagine this in a big phase space. But yeah, now we have developed this intuition. Is it clear? For instance, if the memory is a double-well potential, you can think that the memory is a double-well potential with a single degree of freedom; actually, the memory does not need to be a macroscopic system. The Hamiltonian could be p squared divided by 2m plus V of x, lambda 1, lambda 2, where V is this potential that you can change: lambda 1, lambda 2 you can modify. So it could be, for instance, p squared divided by 2m plus the typical quartic potential: x to the fourth, minus lambda 1 x squared, plus lambda 2 x, this typical phi-4 potential or whatever. And then your external agent modifies the lambdas, and you have some interaction with a bath. In principle, you can write down a model and do simulations if you have a thermal bath. And we try to do this in general, you know: when we draw these conclusions, that if you divide the volume by 2, you multiply the bath by 2 and you dissipate, the Landauer principle is independent of the nature of the system, as far as it's a physical system, because it's independent of the details of the system. OK?
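A tiny sketch of such a memory potential (the coefficients are made up): with lambda 2 equal to zero the quartic has two symmetric wells, the 0 and the 1; turning on the tilt lambda 2 leaves a single well, which is how the external agent drives the system to the reset state.

```python
def V(x, lam1, lam2):
    """Assumed quartic memory potential: x**4 - lam1*x**2 + lam2*x."""
    return x**4 - lam1 * x**2 + lam2 * x

xs = [i / 1000 for i in range(-2000, 2001)]  # grid on [-2, 2]

# symmetric double well (lam2 = 0): minima at x = +/-1 encode the two bits
x_sym = min(xs, key=lambda x: V(x, lam1=2.0, lam2=0.0))
# tilted potential: a single well on one side -- the memory is reset there
x_tilt = min(xs, key=lambda x: V(x, lam1=2.0, lam2=3.0))
print(abs(x_sym), x_tilt)  # wells at |x| = 1; the tilt selects the left well
```

A full simulation would add the kinetic term and a thermostat (for example Langevin dynamics), but the reset protocol is entirely in how the agent varies lam1 and lam2 in time.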
Yeah, if it is not symmetric, then this is not divided by 2. This is what we are going to generalize tomorrow. But I want you to have a clear idea of what is going on in the phase space and so on. And then, to derive all this... and now I start the lesson properly... to derive this we will use this equation. Yesterday I finished with this equation, where F is the non-equilibrium free energy. And instead of using the Liouville theorem and all these things, what we are going to use is the second law for non-equilibrium states. So now you don't have any excuse not to solve the exercises. And I've said this many times: this one is especially important, because we are going to use this exercise in all the lessons next week and so on. OK, so as I said yesterday, we will skip the fluctuation theorems; you have them in the notes, but we will skip them. And then I will finish the lesson. We are in lesson three, thermodynamics. We saw heat and work and the non-equilibrium free energy. Q is always energy from the bath to the system; minus Q is from the system to the bath. No, here, on the blackboard that was there: in which step have I used quasi-staticity? In which step? No, when I wrote that... what's your name? Fiera says that I've used quasi-staticity here, when I said that the entropy change of the bath is minus Q divided by T. No, because the bath is in equilibrium. This has nothing to do with the process; this is just an identity for systems in equilibrium. Q is the dissipated heat, the heat that goes from the system to the bath. But it's true that I have used it somewhere, because I've got the Landauer principle, which is kT log 2 for quasi-static processes; if you do the erasure, the Landauer process, at a finite speed, you dissipate more. So in which step have I used the quasi-staticity of the process? Come on, there are not so many steps; there are three steps in the derivation.
The first step is that the system's volume is divided by 2; this is OK. So there is only one step remaining. The total... when I said that the total is invariant, of course, this is the Liouville theorem; it's a theorem. So the total, initially, must be equal to the total, finally. This is a theorem. But we said that the second law, actually, if you use the Boltzmann entropy, is not compatible with the Liouville theorem. Why? Because the volume here maybe is a nice thing like that, and here the volume is something very complicated. Only if the process is quasi-static does the volume remain something regular. So this is a theorem, but, let's say, the effective volume, when you have these types of things... the effective volume, let's put it like that, is bigger than that. This is essentially the second law: the second law says that the motion in phase space is so intricate that whenever you make a coarse-graining, this increases the volume. It's easier to say it that way. Yeah. And this formula is only valid if the bath is in equilibrium, which is somehow equivalent to this. In one of the two reviews, the one called Thermodynamics of Information, the second one, which is on the arXiv, you have a more detailed description of this. OK. So yesterday we defined heat and work, and then we introduced the non-equilibrium free energy. And all the thermodynamics of information... I'm spending so much time on this lesson because the Landauer principle is going to be a particular case of this, the Szilard engine is going to be a particular case of this. And it's very easy: in one line, one can prove Landauer's principle, the Szilard engine, and everything, using this. So this is very important, and we will do this tomorrow. And Lea will use this on Monday to explain the Maxwell demon, the Szilard engine, and so on. To finish: there were three parts. 3.1 was heat and work; 3.2 was the non-equilibrium free energy.
We should study now 3.3, which is stochastic thermodynamics; and 3.4 is fluctuation theorems. But as I said, I prefer to focus, especially next week, on information flows, which is something that I think is more useful. Fluctuation theorems help to understand some things, but next week we are going to use something called information flows to analyze, especially, molecular motors and molecular machines, which are motors that work in biological cells. And it's a very important topic in biophysics, and in nanophysics as well, because you can also think of artificial motors. So this is something that we will do next week. But the mathematics behind these motors is the master equation for physical systems, so this is what we are going to study now. Maybe this is known to some of you, but I think there are parts which are probably new, because it took me a while to understand some of the things that I'm going to explain. So, how do you model a physical system which has discrete states, like 1, 2, 3, 4, et cetera, and is in contact with a thermal bath? So you have this, here, in contact with a thermal bath at temperature T. And for instance, 1, 2, 3, 4 can be conformational states of a protein: the protein can be like that, or like that, and it can jump from one state to the other. And these jumps are random, because they are induced by the thermal bath. So the way we describe this, first, is with a probability distribution that depends on time: this is the probability of being in state i at time t. And we want an evolution equation for this object. And the evolution equation for this object is the master equation. The master equation is sometimes also called a kinetic equation, because this problem is essentially equal to what we call kinetic models, where these are species, and you can have reactions that convert species 1 into species 2 and so on. This is also a Markov chain; in mathematics, people call it a Markov chain.
In physics we call it the master equation; in chemistry, kinetic equations. And in population dynamics it's also the same thing: these are different species, you can go from one species to the other, and you can think of this as a population dynamics problem. So the master equation is for the derivative with respect to time of this p i; we will use the dot to express time derivatives. And it is a sum over j: I'm trying to see how the probability of being in state i changes. It changes because probability is coming from other states, and it is leaving i to other states. So I have a kind of incoming flow and outgoing flow. And the probability of these transitions is ruled by something called the transition rates. I will use this notation: this is the transition rate. When you multiply this by dt, it gives you the probability that a particle that is at site i jumps to site j. Let's say... OK, it's better to tell you in words. It's the probability, conditioned on the fact that initially, in an interval of time of duration delta t... so here the system is in state i, and the probability that the system is in state j after delta t is this thing. So if I multiply gamma from i to j, or in this case from j to i, by p j: this is the probability that j is occupied, and this is the fraction of particles that, per unit of time, jump from j to i. So this product is the flow of probability from j to i per unit of time. And now, if I subtract... this is the incoming flow of probability. Sometimes I say particles and sometimes probability. You know why? Because you can imagine this thing as a single system that jumps, and then p i is the probability. But if you imagine a lot of them, one million of these systems, independent of each other, evolving, then the number of systems in each state is N times p i, with N one million.
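The gain-loss structure of the master equation is easy to integrate numerically. A minimal sketch for a two-state system with made-up rates (forward Euler; the gamma values are illustrative):

```python
# master equation  dp_i/dt = sum_j [ g(j->i) p_j - g(i->j) p_i ]
# for two states, with assumed rates g(0->1) = 1 and g(1->0) = 2
g01, g10 = 1.0, 2.0
p = [1.0, 0.0]            # all the probability starts in state 0
dt, steps = 1e-3, 20000   # integrate up to t = 20

for _ in range(steps):
    J01 = g01 * p[0] - g10 * p[1]   # net probability current 0 -> 1
    p[0] -= dt * J01                # what leaves 0 ...
    p[1] += dt * J01                # ... arrives at 1: probability is conserved

print(p)  # about [2/3, 1/3], the distribution where the current J01 vanishes
```

The stationary ratio p0/p1 = g10/g01 falls out of setting the current to zero, which is exactly the balance condition discussed below.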
So you can imagine p i as the probability, or as the fraction of particles, if I have a big number of particles moving on this board, on this network. So this is the incoming flow of particles, and this is the outgoing flow. And what is the outgoing flow? p i of t times gamma from i to j: this is the number of particles per unit of time that jump from i to j. And this comes with a minus, because I'm looking at how the number of particles in state i evolves, and this contributes with a minus; this is the outgoing flow. And one nice way of writing this is using the current, the net flow. So I can define, for j different from i, the net flow from j to i; this is called the current. You can imagine a large number N of particles here, and they are moving. Then you sit here and you look at how many particles go from 2 to 3, these ones, and how many particles go from 3 to 2, these ones. And if you subtract one number from the other, you get the net number of particles that flow from j to i, which is minus the net number from i to j. So the current is an antisymmetric matrix, if you like. "Is J a current? I don't know what it is." OK, yeah, you can call it a density of current, because it is the current per particle, if you like, when you have many particles. Or sometimes it's called a probability current, which is the same as a density of current. No, no, it's a sum over j; i is fixed here. i is fixed; i is my state, and I'm asking myself how the probability at site i evolves. And it evolves because particles are coming, and particles come from everywhere. So this is a sum over j only, or, if we can put it like that, over j with j different from i. This notation is sometimes a bit ambiguous, because sometimes we use this to sum over j and i, but here we only sum over j, with j different from i. Actually, we could just say "sum over j", because the transition rate from i to i is zero; let's say it's not defined.
You don't need to define that. But it is a sum over j because it collects all the outgoing flow: the particles that are in i can jump from i to any j, anywhere in the network, as far as the gammas allow. So this is the master equation. One can solve it in general: it is a linear equation, so one can solve it by diagonalizing this matrix of rates, which is related to a class of matrices called stochastic matrices. You should not confuse them with random matrices; random matrices are what you are learning with Isaac. Stochastic matrices are not random; they have some special properties, and this is the theory of Markov chains, which we are not going to study here. In most cases, we are interested in the stationary state. When we say a state, we mean a probabilistic state. Sometimes we have to distinguish: the states are these possible configurations, but usually we also call a state the probability distribution defined over them. No, these rates are usually the rates defined by the dynamics of the system. For instance, when the states are conformational states, I mean configurations of a protein, you can go to the lab and measure the probability that, if my molecule is in state 1, it jumps to state 2 in a certain time. Good question: actually gamma from i to i is not really 0; it is not defined, it is not well-defined. One way to understand this master equation, if you like, is to start with discrete time. For discrete time it is like a board game: at each turn, you move. The equation, with t now an integer, looks like this: you have the sum over j of the probability to jump from j to i times p_j, and you don't have the other term, because this is not an increment; it is just the new probability. And then you can go to the continuous limit by assuming the following.
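The discrete-time picture just mentioned can be sketched like this (the chain and its one-step transition probabilities are invented for illustration): iterating the update p(t+1) = W^T p(t) drives the distribution to the stationary state of the stochastic matrix W.

```python
import numpy as np

# Hypothetical one-step transition probabilities: W[i, j] = Prob(i -> j).
# Each row sums to 1; that is what makes W a stochastic matrix.
W = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

P = np.array([1.0, 0.0, 0.0])          # start with certainty in state 0

# p_i(t+1) = sum_j W[j, i] p_j(t): the new probability, not an increment.
for _ in range(200):
    P = W.T @ P

print(np.isclose(P.sum(), 1.0))        # probability stays normalized
print(np.allclose(W.T @ P, P))         # P has converged to the stationary state
```

Because all entries of W are positive, the iteration converges to a unique stationary distribution regardless of the initial condition.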
You have to assume that the probability of a jump in one time step, when the step has duration delta t, scales like delta t as delta t goes to 0. Why? Because you have a lot of steps: if at each step you had a finite probability to jump, you would get a mess in the continuous limit. So the jump probability must scale as delta t, and the staying probability as 1 minus something of order delta t. If you want a smooth limit, you need a very large probability to stay. And when you go to this limit, the coefficient in front of delta t is the rate, gamma from j to i, which has units of 1 over time. By the way, it doesn't make sense to define a rate of staying: staying is not an event; a jump is an event. But that's a good question. More? Yeah? Well, this is what we are going to study, so let me finish this. What is the stationary state? The stationary state is one that doesn't change in time. So the time derivative is 0, which means that the sum of currents into each state i is 0, for all i. And this is obvious. Suppose I am state i: I receive 100 particles from here, or from you, and I give you 150; so I am losing particles, or money, if you prefer to think of money. In the stationary state, the money that I receive must equal the money that I give, and this holds for every state. For instance, look at state 2: the money that 2 receives from 1 must equal the money that it gives to 3. So this current must equal this current in the stationary regime; this J must equal this J plus this J. Let's call them J from 1 to 2, and J from 2 to 3. It is like Kirchhoff's law, if you like.
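The stationarity condition, zero net current into every state, can be checked numerically. This sketch (with made-up rates) finds the stationary distribution as the null vector of the rate matrix and verifies the Kirchhoff-like balance:

```python
import numpy as np

# Made-up transition rates: gamma[i, j] = rate from state i to state j.
gamma = np.array([[0.0, 2.0, 1.0],
                  [1.0, 0.0, 3.0],
                  [2.0, 1.0, 0.0]])

# Generator M with Pdot = M @ P: gains on the off-diagonal (rates into i),
# total outgoing rates with a minus sign on the diagonal.
M = gamma.T - np.diag(gamma.sum(axis=1))

# The stationary state spans the null space of M (eigenvalue 0).
vals, vecs = np.linalg.eig(M)
p_st = np.real(vecs[:, np.argmin(np.abs(vals))])
p_st = p_st / p_st.sum()               # normalize to a probability

print(np.allclose(M @ p_st, 0.0))      # dP/dt = 0: no net flow into any state
```

The columns of M sum to zero by construction, which guarantees the eigenvalue 0 exists; that zero eigenvector is the stationary state.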
It is the usual conservation of current. But you can have loops: you can have, for instance, currents circulating like that, and still be stationary. So sometimes this is difficult to solve, because you can have currents that cancel in this way. There is a special case, very important in mathematics and even more important in physics: the case when the currents are 0 for every pair. This is a bit what you were saying, no? So it is not only that, in total, the money I receive cancels the money I give; every pair of us is in balance: the money that I give you is exactly the money that you give me. There are no currents anywhere in the whole system. This is called, in mathematics, detailed balance. Why detailed? Remember the definition of the current; in the stationary state I use a sub-index st. The current being 0 means that the number of particles going from j to i equals the number going from i to j, and they balance pair by pair. That is the "detailed": in every pair, we have this balance. The expression "detailed balance", which maybe you have heard, sometimes refers to slightly different things, but they all come back to this condition. In mathematics, when a Markovian system like this has a stationary solution of this kind, we say that it obeys detailed balance. It is not necessary: not every such system obeys detailed balance. But if detailed balance is fulfilled, we are lucky, because then it is much easier to find the stationary state. And in physics, why is this so important?
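A quick pairwise test of this condition, with two toy rate matrices (both invented): one built so that detailed balance holds, and one with a circulating loop current that is stationary but violates it.

```python
import numpy as np

def detailed_balance_holds(gamma, p, tol=1e-10):
    """Check gamma_{i->j} p_i == gamma_{j->i} p_j for every pair (i, j)."""
    flow = gamma * p[:, None]          # flow[i, j] = gamma_{i->j} * p_i
    return np.allclose(flow, flow.T, atol=tol)

# Rates built from made-up energies so that detailed balance holds by
# construction, with the Boltzmann distribution as stationary state.
E = np.array([0.0, 1.0, 2.5])
gamma_db = np.exp(-(E[None, :] - E[:, None]) / 2.0)
np.fill_diagonal(gamma_db, 0.0)
p_boltz = np.exp(-E) / np.exp(-E).sum()

# A loop of rates: the uniform distribution is stationary here (total
# in-rate equals total out-rate at every state), but the pairwise flows
# do not cancel: probability circulates 0 -> 1 -> 2 -> 0.
gamma_loop = np.array([[0.0, 5.0, 0.1],
                       [0.1, 0.0, 5.0],
                       [5.0, 0.1, 0.0]])
p_uniform = np.full(3, 1 / 3)

print(detailed_balance_holds(gamma_db, p_boltz))      # True
print(detailed_balance_holds(gamma_loop, p_uniform))  # False
```

The second example is exactly the "loop" situation from the lecture: stationary, yet with a nonzero circulating current.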
Well, in physics, in equilibrium: I have my system and here my thermal bath, and we have this stochastic dynamics in the system. It turns out that, for a system in contact with a thermal bath, the equilibrium solution obeys detailed balance; the stationary state obeys detailed balance. This imposes a condition on the rates. Moreover, if we have a system in contact with a thermal bath, what is the stationary state? It is the exponential, the Boltzmann state: E_i is the energy of state i, and Z is the partition function. So if we combine detailed balance with this distribution, look what happens: the number of particles jumping from j to i must equal the number jumping from i to j; the partition function cancels; and what remains is a condition on the transition rates, this one. This is also what in physics we call detailed balance, which is a little more than detailed balance: detailed balance plus the thermal state. No: "stationary" is a concept that comes from mathematics; it is the stationary solution of a differential equation, of the master equation, and the stationary solution could be anything. Equilibrium is a special case of a stationary solution, one that obeys detailed balance and obeys thermalization: this is thermal, this is equilibrium. And this is very interesting, because you use equilibrium to derive the detailed balance condition: your gammas must obey this condition. But now you can imagine different things. You can imagine, for instance, a system as simple as this, where these transitions are mediated by one temperature, and this other transition by another, because these could be spatial states, a particle jumping like that. You can imagine three wells, a Brownian particle in three wells.
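The cancellation just described can be written out. Inserting the Boltzmann stationary state $p_i^{\mathrm{st}} = e^{-\beta E_i}/Z$ into the detailed balance condition gives a constraint on the rates alone:

```latex
\gamma_{j\to i}\,\frac{e^{-\beta E_j}}{Z}
  \;=\;
\gamma_{i\to j}\,\frac{e^{-\beta E_i}}{Z}
\qquad\Longrightarrow\qquad
\frac{\gamma_{i\to j}}{\gamma_{j\to i}} \;=\; e^{-\beta\,(E_j - E_i)}
```

The partition function $Z$ drops out, so only the ratio of forward and backward rates is fixed; the overall timescale of each jump remains free.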
Actually, this experiment has been done with optical tweezers; you can have three optical tweezers. When I say optical tweezers and Brownian particles, does everybody know what tweezers are? Optical tweezers are just a way of trapping particles in harmonic potentials, made using lasers. There are, I think, three Nobel prizes associated with optical tweezers: one for the inventor, a second for the application to biophysics, and I believe a third one. But anyway, you can have different wells in space and induced jumps, so this could look like that. And now you can assume that the jumps between three and two are induced by another temperature, because you can have a temperature gradient. If you have this, the system will not obey detailed balance. But the gammas should obey detailed balance locally: you will have a detailed balance condition between one and two with T1, between one and three with T1, and between two and three with T2. So you use the detailed balance condition to fix, or to limit, or to define the gammas. But then your full system, because it is in contact with different temperatures, may not obey detailed balance globally. This is the nice thing about this formalism: you can use this condition to create models. You can now put in anything you like, different temperatures and other ingredients that we will define. And then you have motors: you can have a chemical motor, you can have here a current. Even though locally there is detailed balance, globally there is not, so I can have a current, and then I can have a motor, and I can define efficiencies and so on. The simplest formalism to model Brownian motors is exactly this: the master equation that I wrote, with these conditions. And then you go to the computer.
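A sketch of the three-well, two-temperature construction just described. The energies, the temperatures, and the symmetric form of the rates are all made-up modelling choices: each link's rates satisfy local detailed balance at its own temperature, yet the global stationary state carries a circulating current.

```python
import numpy as np

E = np.array([0.0, 0.5, 1.0])                      # made-up well energies
T_link = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): 3.0}   # one hotter link: T2 = 3

gamma = np.zeros((3, 3))
for (i, j), T in T_link.items():
    # Local detailed balance at this link's temperature T:
    # gamma[i,j] / gamma[j,i] = exp(-(E_j - E_i) / T), symmetric prefactor.
    gamma[i, j] = np.exp(-(E[j] - E[i]) / (2 * T))
    gamma[j, i] = np.exp(-(E[i] - E[j]) / (2 * T))

# Stationary state: null vector of the generator M (Pdot = M @ P).
M = gamma.T - np.diag(gamma.sum(axis=1))
vals, vecs = np.linalg.eig(M)
p_st = np.real(vecs[:, np.argmin(np.abs(vals))])
p_st /= p_st.sum()

# Net stationary current on the link 0 -> 1: nonzero, because the product
# of rate ratios around the loop 0 -> 1 -> 2 -> 0 is not 1.
J01 = gamma[0, 1] * p_st[0] - gamma[1, 0] * p_st[1]
print(abs(J01) > 1e-8)                 # a circulating current survives
```

With a single temperature on all links the same construction would give J01 = 0; it is the temperature difference that drives the steady current, which is the essence of these toy motors.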
You solve the master equation, which is very simple because it is linear, and then you can compute a lot of properties. Or you can also change beta in time: you can let the system run at a cold temperature for a while and at a hot temperature for a while. There is a famous example called the flashing ratchet, which works more or less like this. I know I am slow, but I prefer to be slow so that you capture everything; if you understand everything, it is much better. OK, so the detailed balance condition in physics also has a very intuitive interpretation. Suppose this axis is energy: this is E_j, this is E_i, and this is E_j minus E_i. What the detailed balance condition tells me is that the rate from i to j obeys this condition, and you can see why. The system is in contact with a thermal bath; this is why energy is not conserved, and it is interesting that the system can gain energy. If I go from i to j, and j is more energetic, where does this energy come from? From the bath: the bath injects energy via fluctuations. And what is the typical value of the thermal energy? kT. So if this is delta E, you can write the exponent as delta E divided by kT: the ratio between the energy that you gain in the transition and the typical thermal energy. If this ratio is very big, if my jump is super high, the rate is essentially 0 and the transition is impossible; then the system is always going down, down, down. This happens when the temperature is very small: at very small temperature you just fall to the ground state of the system by these jumps, because it is impossible to gain energy. This is actually what people use in machine learning and related things: to find the minimum of the error function, you go down.
And to avoid local minima, you usually heat up the system a little; when you heat up, the system can go up. Then you cool the system again, and you can build optimization protocols like that. So, these are not probabilities; they are transition rates, but they are related to probabilities. If delta E is much bigger than kT, you never go up. And if delta E is much smaller than kT, if the temperature is very high, this factor is almost 1, and the dynamics is completely symmetric: the system is in contact with such a hot bath that it cannot see the energy landscape, and it explores the whole thing uniformly. So this is the basic recipe for modelling the dynamics of a system with the master equation. Now let's add some fun. I always draw this diagram: a system and a thermal bath. But usually, in all these motors and in the thermodynamics of information, we also have an external agent. What will the external agent do? What do you think? By definition, the external agent is somebody who can change the parameters in the Hamiltonian. And where does the Hamiltonian appear here? Well, here we have a Hamiltonian, but first we discretize the system, because we have wells, or because it is quantum, or whatever; and then we codify all the interaction between the system and the bath in these gammas, and these gammas depend on the Hamiltonian. So what is the simplest way of implementing here a modification of a parameter by an external agent? What? Changing the rates? Well, it could be, but the rates are really tied to the coupling between the system and the bath. Imagine that this is a qubit, or a spin, spin up and spin down, and the energy depends on the magnetic field. What is the simplest thing the external agent can do? Change the magnetic field.
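The heat-up-then-cool idea sketched above is, essentially, simulated annealing. Here is a minimal illustration; the double-well energy function, the proposal width, and the cooling schedule are all invented for the example, and uphill moves are accepted with the same exp(-delta E / kT) factor discussed in the lecture (Metropolis rule, with k = 1):

```python
import math
import random

random.seed(0)

def energy(x):
    # Made-up double well: local minimum near x = -1, global minimum near x = +1.
    return x**4 - 2 * x**2 - 0.3 * x

x, T = -1.0, 2.0                       # start in the wrong (local) well, hot bath
for step in range(20000):
    T = max(0.01, T * 0.9995)          # slowly cool the system
    x_new = x + random.gauss(0.0, 0.3) # propose a small random move
    dE = energy(x_new) - energy(x)
    # Downhill moves always accepted; uphill with probability exp(-dE / T).
    if dE <= 0 or random.random() < math.exp(-dE / T):
        x = x_new

print(energy(x) < -0.5)                # settled near the bottom of a well
```

At high T the walker crosses the barrier freely; as T shrinks, uphill moves become exponentially rare and the walker freezes into a well, most likely the deeper one.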
And what is the consequence of that? Changing the magnetic field does not act on the rates directly; it acts on the energies. So now we have an external agent that modifies the energies. The energies can depend on a parameter, or on time; let's put it like that. So the effect of an external agent is to change the energy levels. What is nice is that we can keep the detailed balance condition, which tells us how the rates must be. This is the nice thing about these theories: we derived the detailed balance condition in a specific case, with no external agent and everything at the same temperature, and we can use the same condition to model more complicated situations. That is what we are going to do. Everything I said before remains valid, except that now the energies can depend on time. And, by the way, yes, the rates will then depend on time, because if I impose this condition, the rates must follow the energies. So I was wrong when I said otherwise: the rates have a part that depends on the coupling between the system and the bath, but also a part that depends on the energies. Is the question there? No? No, no, that's a good point. This is not a circular argument, although it looks like a circular, or even contradictory, one. You use one situation, equilibrium, to find this condition for the gammas, and then you assume, or extrapolate, that the condition is valid anywhere: even with different temperatures, even with energies depending on time, and so on. What is the justification? Well, the justification is that the gammas, in any theory, if you start from a microscopic theory, can be calculated in different ways.
For instance, if you have double wells, there is something called the Kramers problem of barrier crossing, and from it you get the gammas. You can get the gammas in quantum mechanics with something called the Born approximation, which maybe some of you have studied. You can get the gammas in different ways, and the gammas depend on the energies and so on, but all of them obey this condition, because they must. The gammas are the dynamics of the system. So the hypothesis here is that this dynamics, how the system changes its shape, for instance in a molecule, or how it changes its state, is always the same, even if you have a driving like here, or anything else; but it is true that you obtained the condition from equilibrium. No, the local detailed balance we assume is never broken. We will see tomorrow that it can be broken if you have chemical reactions and so on. But this is the basic condition that you use to build the master equation of a physical system. You can vary things: you can say that the beta is local, you can say, as I mentioned, that the energies depend on time; you can say a lot of things. But this is always the guideline, the basic guideline, for writing the master equation of a system. And as I said, this is a kind of top-down approach, where I obtain this equation by imposing the thermal state. There are also bottom-up approaches, like the ones I mentioned, Kramers, the Born approximation, and so on, and you reach the same result. But this way is more economical, let's say; it is much easier, although there is the conceptual problem that you derive something in equilibrium and extend it to non-equilibrium situations. OK, so I think I have to finish. Tomorrow we will finish this and start the information part, the second law with information.
And I think you can already do exercises one, two, and three, or so. You can also start now. Oh, yeah.