Thank you, professor, you can start. All right, so yeah, for the last lecture, I decided to just talk about my most recent work. So I will do slides today. What I really wanted to cover was fluctuation theorems and the thermodynamic uncertainty relation. Since this is done, I guess I can talk about my most recent work. And you should be able to understand most of it given the lectures, OK? OK, so I'll talk about two different topics. One is the cost of precise biochemical oscillations. So that's something I have been working on, biochemical oscillations, maybe for five years now. And the other thing I'm going to talk about is active heat engines, which is something I just started working on. OK, so again, there will be two parts in this talk. First I'll talk about the free energy cost of biochemical oscillations. And there I'm going to find a relation similar to the thermodynamic uncertainty relation, in the sense of telling what's the minimal cost for a certain precision. But the precision here, for biochemical oscillations, is going to be a different quantity as compared to the thermodynamic uncertainty relation. It's not the epsilon squared I had in the thermodynamic uncertainty relation; it's going to be something else. It's going to be the number of coherent oscillations, which I'm going to explain later. And the second part of the talk is a second law for active heat engines. Now, an active heat engine is simply an engine where the medium, instead of being a standard equilibrium reservoir, is a reservoir made of, if you want to use this name, active matter. So active is kind of a dangerous word in the sense that, for example, the word non-equilibrium can be defined with an equation, but active is not really a word that can be defined with an equation. Sometimes it's a synonym of non-equilibrium, sometimes it's not. But here, at least when I call a heat engine active, I will be able to define it with an equation, at least in this context.
OK, so let me start with biochemical oscillations. So what's a biochemical oscillation? I mean, the most famous one would maybe be circadian cycles. Those are 24-hour cycles that lots of animals have. So when it's night, your body kind of wants to enter a mode of let's sleep. And when it's morning, your body wants to enter a mode of let's wake up. And we have some sort of circadian cycles within ourselves that are like clocks that tell us the time. So even if I take you and put you in a dark room, you should be able to remember, at least for a few days; at some point, of course, your circadian clock is going to be destroyed. But you're probably going to keep oscillating with these night and day cycles, even if you're in a dark room without any exposure to light. So there are many different kinds of models for oscillations. And if you put a person in a dark room, at some point, the oscillations will lose coherence. But now imagine a stochastic person. By a stochastic person, I mean, I don't know, a person made of 1,000 molecules. So it doesn't have to be a person, just a system of chemical reactions. And there are experiments you can do where you just have a bunch of chemicals and they will oscillate with a 24-hour period. The most famous thing that does that is called KaiC, which is a molecule that you find in cyanobacteria. And people do experiments with KaiC, where they just put a bunch of KaiC together with other molecules, which are called KaiA and KaiB. And if you put all these things in a solution, the phosphorylation level of KaiC will oscillate with a 24-hour period. And again, these oscillations happen without any external input. There is no external signal oscillating with a 24-hour period. And if the system is stochastic, if it's made of a finite number of molecules, maybe 100, 1,000, then if you look at the time series, it's going to look like that.
So there will be some uncertainty in the period of oscillation. There is also uncertainty in the amplitude, but that doesn't matter too much; what matters is the uncertainty of the period of oscillation, which means that if I was to run two different time series, after a certain number of periods, they are going to dephase. So maybe if I go long enough, one series is going to be at period 1,000, the other at period 1,200, and they are completely dephased. From the perspective of one stochastic thing, a year has already passed; from the perspective of the other one, it's already a year and a half or something like that. And the way to quantify the precision of these oscillations is you look at the correlation function. And by that I mean, you just do many different trajectories and you average over all of them. You are going to get the correlation function. And when you look at the correlation function, what you see is, you see the thing oscillating, and you see an exponential decay of the amplitude of the oscillation. And this exponential decay is because of fluctuations in the period of oscillation. And the number of coherent oscillations is defined as the decay time divided by the period. The larger this number, the longer the system maintains the coherence of its oscillations. If this number is 10, and I am at oscillation 100, then probably two different time series are going to be completely dephased from each other. One is going to be at 100; the other is going to be maybe at 120 or something like that. So that's a particular model; it's called the activator-inhibitor model. But that's a biochemical oscillation. Imagine a biochemical oscillation that does not have any external influence. There is no external signal driving it. And the question I'm asking here concerns the precision of these oscillations as quantified by the number of coherent oscillations, which is just the decay time of the oscillation divided by the period. So those are the biochemical oscillations. And that's the precision of the oscillation.
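As a concrete illustration of this definition (with made-up numbers, not the lecture's model), one can extract the number of coherent oscillations from a synthetic correlation function of the damped-cosine form exp(-t/tau) cos(2 pi t/T):

```python
import numpy as np

# Synthetic correlation function of a noisy oscillator: a damped cosine
# with decay time tau and period T (made-up values, for illustration only).
tau, T, dt = 50.0, 5.0, 0.01
t = np.arange(0.0, 200.0, dt)
C = np.exp(-t / tau) * np.cos(2 * np.pi * t / T)

# The oscillation peaks sit at t = 0, T, 2T, ...; fit the log of the
# envelope sampled at the peaks to recover the decay time.
peak_times = np.arange(0.0, 200.0, T)
idx = np.rint(peak_times / dt).astype(int)
slope, _ = np.polyfit(peak_times, np.log(C[idx]), 1)
tau_est = -1.0 / slope

N = tau_est / T   # number of coherent oscillations: decay time / period
print(round(N, 2))
```

With tau = 50 and T = 5, this recovers N = 10, the example value used in the talk.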
Again, we saw the thermodynamic uncertainty relation. This quantity here, the number of coherent oscillations, is mathematically completely different from epsilon squared. So the thermodynamic uncertainty relation will not really help me here. OK. So I mean, this is the definition of the activator-inhibitor model. Now, we saw the single enzyme model in the lecture. This is not a single enzyme; that was a cycle of just one enzyme. The idea here is that there are many of these enzymes in the solution, and there are all these chemical reactions. And the quantity, for example, that oscillates here is going to be x. So when I'm looking at the oscillations there, I'm looking at the oscillations of the concentration of x. Not the average concentration, the concentration of x. The average one is on the right. The single trajectory one is on the left. But this is a fairly more complicated model than the single enzyme model we had before. There are many, many, many states. And there are many, many cycles. It's even hard to try to visualize the network of states. It's a much more complicated thing. And so the question I'm going to ask is the following. Let's say you give me a biochemical oscillator. And the oscillator can oscillate with a certain number of coherent oscillations. Let's say the number of coherent oscillations is 10 or 100. The question is, how much free energy do I have to pay to have an oscillator that oscillates with a certain number of coherent oscillations? That's the kind of question I'd like to ask. It's similar to the kind of question you ask in the thermodynamic uncertainty relation. But again, it's very different, because it's a very different mathematical problem. If the system is in equilibrium, there will be no oscillations. It's impossible to have oscillations if you're in equilibrium. One can prove that. So in order to have oscillations, the system must be out of equilibrium.
But again, I want to ask a more complicated question, which is, what is the price of having a certain precision, or a certain number of coherent oscillations, when I look at biochemical oscillations? And we started working on this problem five years ago, and we were motivated by the thermodynamic uncertainty relation, in a sense. We kind of wanted to find an application where this precision was relevant. What we discovered was that the precision from the thermodynamic uncertainty relation was not really the relevant quantity to look at, but rather this number of coherent oscillations. OK. So again, that's the activator-inhibitor model. And if you give me this model, I can write the master equation for it. You're not supposed to understand this equation. All I'm saying is, if I want to determine the state of the system, I have to determine all these numbers here. One, two, three, four, five, six numbers. They determine the state of the system. And if I want to write down the master equation for this model, it's going to be this thing here. OK. It's a complicated master equation. You cannot really solve it exactly, but it doesn't really matter. The point I want to make here is the following. If there is a biochemical oscillation, there must be a system of biochemical reactions. And if you give me a system of biochemical reactions, I can just go ahead and write down a master equation for this thing, OK? Which is the master equation you saw before, OK? But again, that's an example of a fairly complicated master equation, OK? So if that's the case, then, if I want to prove some bound for the number of coherent oscillations, I want to try to express this idea in a very general mathematical way, or the most general mathematical way I can, OK? And the idea is, you give me a system of biochemical reactions like this one, and I can simply go ahead and write down the master equation, OK?
OK, so I can write down the master equation, which you saw in this lecture; there I called the matrix W, here it's called L. Those are the transition rates, and we saw this matrix. There will be a formal solution to this equation, which is given by that, OK? And then I can calculate the eigenvalues of this matrix L. Now, the first eigenvalue is going to be the eigenvalue 0 that gives me the stationary state, OK? That we all know. Now, the second eigenvalue, which is the first non-trivial one, so it's the first non-zero eigenvalue, is going to have a real part and an imaginary part, OK? And of course, if the other eigenvalues have real parts that are even bigger in modulus, then the decay times for those eigenvalues are much shorter. So all I have to worry about, if I want to know what's the decay time of the oscillations, is the real part of this eigenvalue and the imaginary part of this eigenvalue, OK? So the period of the oscillations is going to be given by the inverse of the imaginary part of the eigenvalue, multiplied by 2 pi. And the decay time is going to be the inverse of the real part of the eigenvalue, OK? So basically, the number of coherent oscillations is simply going to be this ratio here, lambda_I divided by 2 pi lambda_R. It took us some time to really realize that, OK? But basically, if you look at the problem of biochemical oscillations, the mathematical way to quantify the number of coherent oscillations is, I have to look at the ratio of imaginary and real parts of the first excited eigenvalue of the stochastic matrix, OK? And trying to find the minimal cost of coherent oscillations will be like trying to find a bound on this ratio of the imaginary part and the real part of the first excited eigenvalue, OK? OK, so that's the mathematical problem. Again, the idea is, you give me a system of chemical reactions, I can write down a master equation.
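As a sketch of this recipe (using an illustrative unicyclic rate matrix, not the activator-inhibitor model itself), one can build L, diagonalize it, and read off N from the first non-trivial eigenvalue:

```python
import numpy as np

# Generator L for a unicyclic network with Omega states and uniform rates
# (illustrative parameters). L[j, i] is the rate of the jump i -> j;
# every column sums to zero, so probability is conserved.
Omega, kp, km = 8, 2.0, 0.5   # states, forward rate, backward rate
L = np.zeros((Omega, Omega))
for i in range(Omega):
    L[(i + 1) % Omega, i] += kp   # i -> i+1
    L[(i - 1) % Omega, i] += km   # i -> i-1
    L[i, i] -= kp + km

lam = np.linalg.eigvals(L)
# First non-trivial eigenvalue: the nonzero one whose real part has the
# smallest modulus (the slowest-decaying mode).
nonzero = lam[np.abs(lam) > 1e-9]
lam1 = nonzero[np.argmin(np.abs(nonzero.real))]

decay_time = 1.0 / abs(lam1.real)
period = 2 * np.pi / abs(lam1.imag)
N = decay_time / period           # = |lambda_I| / (2 pi |lambda_R|)
print(round(N, 4))
```

For uniform rates the matrix is circulant, so this numerical result can be checked against the closed form lambda_1 = kp (e^{i theta} - 1) + km (e^{-i theta} - 1) with theta = 2 pi / Omega.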
So the only assumption I'm making here is that I think about chemical reactions that can be expressed as a Markov process, which is pretty much most of them. And the second assumption is that I can characterize the number of coherent oscillations by the first excited eigenvalue, and that should be true for pretty much almost all models around. Unless something very strange happens; then you would have to go maybe to the next eigenvalue, but the bound would probably be true also for the next eigenvalue. By that I mean lambda 3. But those are pretty good assumptions that should be true for a very broad range of systems, OK? OK, so, I told you about the precision, the number of coherent oscillations. Now, how do you characterize the cost? Well, the cost is characterized by the entropy production, OK, that we saw in this lecture. p_i being the stationary state probability distribution, k_ij is the transition rate from i to j. So that's the formula from the lecture, just with ks instead of Ws. But whenever I do chemical reactions, I like to use k for transition rates, because people in chemistry often use the letter k for transition rates. And sigma is the entropy production rate. So it's a quantity with units of time to the power of minus 1. In order to make this quantity dimensionless, I multiply sigma by 2 pi lambda_I to the power of minus 1, which is one period of oscillation, OK? So this delta S here is how much entropy is produced per period of oscillation, OK? It's a quantity that does not have dimension of time to the power of minus 1 anymore, OK? And it quantifies the cost. And the bound we have found is that delta S must be larger than 4 pi squared N, OK? So that's the bound we found. Again, N is a dimensionless quantity, OK? It's just the number of coherent oscillations.
Delta S has the dimension of entropy, but in units of k_B it's also a dimensionless quantity; in particular, it does not have dimension of time, OK? Because I've multiplied by the period of oscillation. Delta S is how much entropy the system produces in one period of oscillation, OK? Of this biochemical oscillation. And that's the bound we found, OK? I'm going to try to explain how we found it. It's still a conjecture. We can't really prove this. It's a very hard mathematical problem. But basically what this relation tells me is that, for example, if I want to have 10 coherent oscillations, so if I have N equal to 10, then I must have 4 pi squared multiplied by 10 for the entropy produced per period of oscillation, OK? So I must consume at least 395 k_BT if I want to have 10 coherent oscillations, OK? And if it was 100, it would be 3,950 k_BT, OK? So this bound here tells me what's the cost of precision. So it's similar to the thermodynamic uncertainty relation, but for a different quantity. It's not the precision of a current anymore. This is the precision of a biochemical oscillation, OK? And the precision of a current won't be able to really tell much about the precision of a biochemical oscillation, OK? That's what we kind of learned when we studied these models. OK, now there is a caveat here. The caveat is this. If the number of coherent oscillations is smaller than 2 pi to the power of minus 1, not larger, OK, the bound can be violated. That is a really, really low number of coherent oscillations, OK? So if I was looking at a figure like this one and the number of coherent oscillations is 0.1, I won't be able to see any oscillations, OK? Neither in the correlation function nor in the concentration. But if I am able to calculate the eigenvalue, then I can still obtain this number, OK? So if the number of coherent oscillations is below this threshold, the bound can be violated, but only in this very peculiar regime.
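One can check the conjectured bound numerically on a randomly chosen driven unicyclic network (an illustrative sketch; the rates below are arbitrary choices of mine, not numbers from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
Omega = 6
kp = rng.uniform(5.0, 10.0, Omega)   # rates i -> i+1 (strong forward bias)
km = rng.uniform(0.5, 1.0, Omega)    # rates i -> i-1

# Generator: L[j, i] is the rate i -> j, columns sum to zero.
L = np.zeros((Omega, Omega))
for i in range(Omega):
    L[(i + 1) % Omega, i] += kp[i]
    L[(i - 1) % Omega, i] += km[i]
    L[i, i] -= kp[i] + km[i]

# Stationary distribution: solve L p = 0 together with sum(p) = 1.
A = np.vstack([L, np.ones(Omega)])
b = np.zeros(Omega + 1); b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)

# Entropy production rate: sigma = sum_ij p_i k_ij ln(p_i k_ij / (p_j k_ji)).
sigma = 0.0
for i in range(Omega):
    for j in ((i + 1) % Omega, (i - 1) % Omega):
        sigma += p[i] * L[j, i] * np.log(p[i] * L[j, i] / (p[j] * L[i, j]))

lam = np.linalg.eigvals(L)
nonzero = lam[np.abs(lam) > 1e-9]
lam1 = nonzero[np.argmin(np.abs(nonzero.real))]

N = abs(lam1.imag) / (2 * np.pi * abs(lam1.real))
DeltaS = sigma * 2 * np.pi / abs(lam1.imag)   # entropy produced per period
print(N, DeltaS, DeltaS >= 4 * np.pi**2 * N)
```

For this biased cycle the first excited eigenvalue is complex, and the computed Delta S respects Delta S >= 4 pi^2 N.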
Again, that's only something I'll be able to see if I can calculate this eigenvalue. If I cannot, if I'm just doing simulations and I look at the correlation function, I mean, if the number of coherent oscillations is 0.1, basically I don't see any oscillations, OK? So the fact that the bound can be violated there is an important mathematical property. But physically speaking, it doesn't really matter, in the sense that, for any visible sort of thing, anything that you could really see and consider an oscillation, this bound will be true, OK? It can only be violated in this peculiar regime, OK? Now what I'm going to do is I'll try to explain to you how we got this bound, OK? But I guess you understand what the result is. The result is, if you want coherence in your biochemical oscillations, you must be willing to pay a certain price per period of oscillation, OK? Which is, again, if it's 10 coherent oscillations, you must pay 395 k_BT per oscillation, OK? OK, so what? Can I ask you a question? Sure. Can you repeat once more what is a non-trivial eigenvalue? Is it like the first one, is it a complex one? How do you order them, by the maximum? So the first non-trivial eigenvalue is the one for which the modulus of the real part is the smallest. So first I start with 0, then I just look at the real parts. I don't look at the imaginary part. And the smallest modulus of the real part gives the first excited eigenvalue. That gives me my longest decay time, right? The smaller the lambda_R, the larger the decay time. So that's what I mean by the largest decay time. All right. And there is a lambda 2, so this one. And the lambda 1, what is it? Lambda 1 is 0. That's the one that will put me in the stationary state, right? The maximum eigenvalue of the stochastic matrix is 0. But on the next slide, there is a lambda, something, and it's not lambda 2. In the expression of the entropy production, what is this lambda?
Oh, that's the imaginary part of the eigenvalue. That's this lambda here. Oh, sorry. That's the lambda_I. That's my bad. Yeah. I should have said lambda_I. Thank you. Yeah, lambda is the imaginary part of the eigenvalue. Sorry. OK, so now I just want to try to explain how we get this bound. So what we did is the following. Suppose you have a unicyclic network. That means I just have a single cycle, OK, and the network has Omega states. Now, we have seen what an affinity is, right? The affinity is the product of the transition rates in the forward direction divided by the product of the transition rates in the backward direction, and I take the ln of that. And when I say a unicycle, I mean a unicycle with periodic boundary conditions. So when I arrive at Omega, if i is equal to Omega, i plus 1 is going to be equal to 1. And let's say I choose uniform rates. So I just say that the rate from i to i plus 1 is going to be k times e to the power of A over Omega, and the rate from i to i minus 1 is equal to k. If I do that, I can calculate everything. I can calculate the delta S that we had before. I can also calculate the number of coherent oscillations, OK? And for this case of uniform rates in a unicyclic network, I can find an expression for the delta S of the network as a function of N and Omega. OK, if I use these two expressions here, I can calculate delta S as a function of N. Now, in general, of course, finding delta S as a function of N is what I want, right? I want to know what's the minimum delta S I must have for a certain N. That is a very hard mathematical problem, OK? Because both delta S and N are quantities that are hard to calculate. They are functions of the transition rates, but they are very hard to calculate, OK? To calculate N, for example, I have to calculate the first excited eigenvalue, which is something that even in a unicyclic network you cannot calculate exactly in general, OK? So it's quite hard to calculate.
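For uniform rates the generator is circulant, so everything is explicit. The following sketch (my own parametrization, with the backward rate set to 1 and the cycle affinity A held fixed) evaluates the resulting delta S and N and checks that their ratio approaches 4 pi^2 for large Omega:

```python
import numpy as np

A = 20.0   # cycle affinity, held fixed (illustrative value)
for Omega in (10, 100, 1000, 10000):
    theta = 2 * np.pi / Omega
    r = np.exp(A / Omega)              # kp / km per link, with km = 1
    # First non-trivial eigenvalue of the circulant generator:
    lam_re = (r + 1) * (np.cos(theta) - 1)
    lam_im = (r - 1) * np.sin(theta)
    N = abs(lam_im) / (2 * np.pi * abs(lam_re))
    sigma = (r - 1) * np.log(r)        # entropy rate, stationary p_i = 1/Omega
    DeltaS = sigma * 2 * np.pi / abs(lam_im)
    print(Omega, round(DeltaS / N, 3))   # approaches 4 pi^2 ~ 39.478
```

The ratio DeltaS / N approaches the limit 4 pi^2 from above as Omega grows, which is the uniform-rates expression discussed next.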
So you have to resort to some sort of numerical technique to try to conjecture the bound, OK? So what we do is the following. We do a numerical minimization process where we fix N. So let's say we tell our computer to fix the ratio of lambda_I over lambda_R, and then make a numerical search with this thing fixed, OK, to find what's the minimum value of delta S, OK? And this sort of uniform-rates unicyclic model is kind of our guess for where the optimal thing should be, OK? Our guess is that the optimum is going to be around this value, OK? And there is this expression here: in the limit of Omega going to infinity, I get this thing on the right-hand side. So again, this is monotonic in Omega, and if I take Omega to infinity, I get 4 pi squared N. So basically, if I have uniform rates, I can write down this expression that depends on N and on Omega, the number of states. And if I take the limit of infinitely many states, I get 4 pi squared N, OK? And this is pretty much the best guess you can make for where the optimum is going to be. And we are going to test whether the optimum is indeed there. And what we do numerically, again, is you fix N and you minimize delta S. Actually, it's easier to fix delta S and maximize N. So typically, it would be easier to fix delta S, the entropy production, and then try to maximize your N with that given entropy production. But again, those are not things that are easy to calculate. This is something you can only do numerically. I would have no idea how to do this analytically. And so we can do it for different system sizes, Omega equal to 4, 5, 6, 7. And we kind of see a tendency. We see that, if N is large enough, the points agree with the dotted line, which is the delta S expression we had before. OK. And the dots, the points, are the optimum we obtain with this unicyclic network.
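The actual optimization is more careful than this, but the spirit can be sketched with a crude random search over unicyclic rate sets, checking that no sample with N above (2 pi)^{-1} beats delta S = 4 pi^2 N (all parameters here are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
Omega = 8
violations, best_ratio = 0, np.inf

for _ in range(2000):
    kp = np.exp(rng.uniform(0.0, 5.0, Omega))    # forward rates i -> i+1
    km = np.exp(rng.uniform(-2.0, 0.0, Omega))   # backward rates i -> i-1

    L = np.zeros((Omega, Omega))
    for i in range(Omega):
        L[(i + 1) % Omega, i] += kp[i]
        L[(i - 1) % Omega, i] += km[i]
        L[i, i] -= kp[i] + km[i]

    # stationary state and entropy production rate
    A = np.vstack([L, np.ones(Omega)])
    b = np.zeros(Omega + 1); b[-1] = 1.0
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    sigma = sum(p[i] * L[j, i] * np.log(p[i] * L[j, i] / (p[j] * L[i, j]))
                for i in range(Omega)
                for j in ((i + 1) % Omega, (i - 1) % Omega))

    lam = np.linalg.eigvals(L)
    nz = lam[np.abs(lam) > 1e-9]
    lam1 = nz[np.argmin(np.abs(nz.real))]
    if abs(lam1.imag) < 1e-9:
        continue                                  # no oscillatory mode
    N = abs(lam1.imag) / (2 * np.pi * abs(lam1.real))
    DeltaS = sigma * 2 * np.pi / abs(lam1.imag)
    if N > 1 / (2 * np.pi):
        ratio = DeltaS / (4 * np.pi**2 * N)
        best_ratio = min(best_ratio, ratio)
        if ratio < 1.0:
            violations += 1

print(violations, best_ratio)
```

A random search like this only probes the conjecture; the constrained optimization described in the talk is what actually locates the optimum.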
OK, so if we optimize numerically, the dots, they agree with the line; only if N is small do they not agree. And actually, the blue plus signs that you are seeing here are also an optimization process. But now it's not a unicyclic network; it could be a totally general network. OK, it's a fully connected network. And see, if I make a transition rate in a fully connected network equal to 0, I get a different network. So when I say fully connected network, that includes all possible networks with size 4, 5, 6, and 7. OK, so we learned two things from these figures. One is, if I do the optimization process with a fully connected network or with a unicyclic network, I get the exact same result, which is very helpful, because now I know that I don't have to talk about fully connected networks. I can simply reduce the problem that I'm trying to solve to unicyclic networks. OK, the second thing we learned is that the bound seems to saturate in the regime of large N. But if N is small, then I can go below what I get for uniform rates. OK, and now, from the fact that the optimum of a unicyclic network coincides with the optimum of a fully connected network, which again includes all possible networks up to that system size, we were able to go to a little bit larger system sizes. The problem is, when we do the optimization for a fully connected network of size 9 or 10, things start to become very hard. But based on these results, the conjecture now is that we can reduce our search to unicyclic networks. OK, we don't really have to care about fully connected networks, because they will not be able to do better than the unicyclic one. So the unicyclic one is already the best thing you can do. And if we restrict to unicyclic networks, then we can do very large system sizes. OK, so the colors you see correspond to system sizes that go up to 55.
And the different lines are increasing system size. If you look at this red line here, it's like an interpolating curve that you get as the limit of all the different curves for the different system sizes. And you see, at the point where N is (2 pi)^{-1}: if N is smaller than (2 pi)^{-1}, then the bound is violated. So I'm able to get a certain number of coherent oscillations with less than 4 pi squared N. But if N is larger than this number, then clearly the bound is going to be exactly at 4 pi squared N, OK? So this uniform-rates case in a unicyclic network is going to be the optimal case if I am above a certain number. So that's pretty much how we have conjectured this bound. And a proof of this bound remains an open challenge. And it's a quite hard challenge, because it's like a problem where you use Lagrange multipliers: the optimization of a function under a certain constraint. The problem here is, the function is something that you cannot really calculate exactly, and the constraint to impose is also something that you cannot calculate exactly. Both things are not things that in general you can calculate analytically, OK? So that's why it's kind of a hard problem. Calculating N and delta S, both of them are non-analytical objects, let's say, and there is no way apart from doing numerics. But these numerics are pretty convincing. We are pretty sure this thing should work. And that's what the bound is. So you can test this against a real system. This is a model for KaiC. It's a very complicated model. I'm not going to explain it here. We did this model in a paper some time ago. What we did is we took the data from the model that we had done in 2017, and then we just plotted the data: delta S as a function of N for this model. Omega here is the system size of the model. So system size here simply means the number of KaiC molecules I would have in the solution. And the dotted line is the bound, OK?
So as the system size becomes bigger, it seems that the model goes a little bit further from the bound. For 10 KaiC molecules, it's kind of close. I don't know if this is general, though. There could be a model that stays closer to the bound as system size grows, or one that stays far away. I really don't know. All we can see here is that even if you do a more complicated model, clearly the bound is fulfilled, and this more complicated model is a model for KaiC, really inspired by the experimental KaiC system. So it's, let's say, a realistic model. It's not just a unicycle or a simple Markov network. And the bound is true even if you do a more complicated model. That's what we can conclude. We cannot conclude anything about system size, because, again, that might be model-dependent. We don't know. OK. So with this slide, I finished the part on the free energy cost of biochemical oscillations. I don't know if anybody wants to ask me questions about that. Yeah, I have a question. Sure. So the non-zero imaginary part of the first non-trivial eigenvalue is the key for having non-trivial entropy production, which enters this inequality? I mean, the imaginary part has no influence on the entropy production. The imaginary part is what makes you have oscillations. If there is no imaginary part, there are no oscillations. If there is an imaginary part, then there are oscillations. Yeah, but for a larger imaginary part of the eigenvalue, you have a larger lower bound, because of N. Well, the smaller the, yeah. I mean, there is an interplay between the imaginary part and the real part, right? It depends. It's not only about the imaginary part. You have to compare both of them if you want to calculate N. So N is a comparison between both, right? You want your period to be smaller than your decay time. It's not only about one of them; it's about both of them. So N, again, is dimensionless, OK? It doesn't have dimension of time.
I see, OK. But anyway, could you offer some intuition behind this inequality, why this one should hold? I mean, not really. Well, physically, it's just the minimal price. I mean, if you are in equilibrium, you cannot have oscillations. So you must be out of equilibrium to have oscillations. And so basically, the natural question is, is there a minimal cost to get a certain number of oscillations? And it turns out there is one, at least if N is large enough. You see, if N is very low, then you could get it at a very, very low cost. But at higher N, there is a minimal cost to it. So that's kind of the physical idea. But why it is like that? Very hard to have any intuition. Again, it's a very complicated mathematical problem. We cannot really do anything analytically. You can say, for example, that it's intuitive that the bound is saturated at the unicyclic network with uniform rates. And if you only think about precision, that is sort of intuitive. Because if you just have a single cycle in a network, the best thing you can do is to just do that cycle. I mean, it's better to just have a single cycle than to have a network with many, many different cycles. In a more complicated network of states, you have many, many different cycles, and what you want to do is to just do your best cycle all the time. So it's kind of intuitive that you should just have a single cycle. But again, that's intuitive for the case where I'm only looking at precision, OK? The most precise thing should be a unicyclic thing; that's intuitive. But now when I also add the cost to it, when I think not only about the precision but about the interplay between precision and cost, then things are not so intuitive anymore. Because a multi-cyclic network can have cycles that actually decrease the entropy production.
And it's not obvious that, when I'm looking at the interplay between delta S and N, a unicycle should be the bound. And it is not: if N is small enough, it is not anymore. So there is this regime which, from a physical perspective, is not so important, because the number of oscillations is too low. But if you want a mathematical understanding of the bound, then it's kind of important, because the bound is not saturated by a unicycle, or by a unicycle with uniform rates, for low enough N, OK? So I don't have much further intuition about the bound. It's a kind of complicated mathematical problem. OK, thank you. And I have one short question. So another way of computing the quality of an oscillator is the quality factor. You do a power spectral density, and you get the peak frequency and the width. So is it possible to write this type of bound in terms of the quality factor? Yeah, people also ask this question sometimes. I guess so. Because I guess the peak of the Fourier transform is going to correspond exactly to the frequency of the oscillation, which is 2 pi over the period, right? And I don't know how exactly you define the quality factor from the Fourier transform, but they might be exactly the same quantity, I would guess. They might be exactly the same quantity, this quality factor that you get from the Fourier transform. Because basically, you have the height of the peak of the Fourier transform, which I guess is not really going to be related to the decay time. That would be different. But the place where your peak is, is exactly given by my period of oscillation. And I don't know how you would deduce the decay time from the Fourier transform. But I would guess they are more or less the same quantity. OK. And you had a question? Yes, it's more or less related to what was asked, because I'm more familiar with the spectrum.
So I was wondering about the basics of the model. The eigenvalues are eigenvalues of the L operator in the equation, right? OK. Because you were showing at the beginning the difference between the time series and the correlation of the time series. So these eigenvalues are not about a decay time in the time series, right? It's in the correlation function. Yeah, I mean, there is no decay time in the time series. A single time series oscillates forever. So there is not a decay time there. This decay time is related to the fact that, if I look at two different time series, the oscillations start to dephase after some periods. That's what the decay time relates to. So to see the decay time, I must do an average; I must compute the correlation function. OK, so I think the thing which I'm not very familiar with is why the eigenvalues of the L operator are related to the correlation function of the time series. OK, so, that's my master equation, right? Any correlation function is going to be a linear combination of this p of t here. So let's say I look at p_i of t, OK? Pretty much any correlation function you can imagine is going to be a linear combination of p_i of t, OK? That's something you can prove. It's not so hard to do it. If you think about a correlation function, that's a function of time that can be written as a linear combination of the probabilities to be in a state at time t. So basically, a correlation function is, I have some initial condition, and I look at the probability to be in a state at time t given that initial condition. That's what a time correlation function is. And this probability at time t is going to be the exponential of L t multiplied by p of 0. And of course, the dominating eigenvalue, by that I mean the first non-trivial one, the one with the largest decay time, is the one that's going to dominate, because all other eigenvalues are going to have shorter decay times.
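A minimal numerical check of this statement, using an illustrative unicyclic generator with uniform rates (my own parameters): propagate a deviation from the stationary state with exp(L t) and verify that, at late times, it shrinks over one period by exactly exp(lambda_R T):

```python
import numpy as np
from scipy.linalg import expm

# Unicyclic generator with uniform rates (illustrative parameters).
Omega, kp, km = 8, 2.0, 0.5
L = np.zeros((Omega, Omega))
for i in range(Omega):
    L[(i + 1) % Omega, i] += kp
    L[(i - 1) % Omega, i] += km
    L[i, i] -= kp + km

lam = np.linalg.eigvals(L)
nz = lam[np.abs(lam) > 1e-9]
lam1 = nz[np.argmin(np.abs(nz.real))]
T = 2 * np.pi / abs(lam1.imag)          # period of the oscillation

p_ss = np.full(Omega, 1.0 / Omega)      # stationary state (uniform rates)
p0 = np.zeros(Omega); p0[0] = 1.0       # start localized in state 0

t = 10.0                                # late time: slowest mode dominates
d1 = expm(L * t) @ p0 - p_ss            # deviation at time t
d2 = expm(L * (t + T)) @ p0 - p_ss      # deviation one period later
ratio = np.linalg.norm(d2) / np.linalg.norm(d1)
print(ratio, np.exp(lam1.real * T))
```

After one full period the oscillatory pair of modes returns with the same phase, so the deviation is scaled by exp(lambda_R T); the faster modes have decayed away by t = 10, which is why the two printed numbers coincide.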
And all the imaginary parts of the other eigenvalues are going to disappear, because their decay times are shorter. They might have an influence if you do a Fourier transform: you might see smaller peaks corresponding to the imaginary parts of those other eigenvalues. But their decay time is shorter, so their effect disappears faster. Is that clear? Yes. Thank you. That's it. Are there any more questions? OK. So. Yeah, you can continue with the talk. OK, so that was one problem that we have been working on. The other problem is about heat engines. That's a more recent one. Although, I mean, both papers are from this year. So I can close this. OK. So let me start by telling you about the history of heat engines. And again, when I say a heat engine here, I mean a periodically driven system. OK, so in this course I gave, I only talked about steady states. Here it's not a steady state anymore. It is the case where my transition rates depend on time, and they are periodic in time. Why are they periodic in time? Because I have things like temperature and energy varying periodically in time. If they vary periodically in time, then the transition rates of a discrete model must also vary periodically in time. OK, so just so that you connect this with the lectures: now I'm going to talk about periodically driven engines. You know, those were the engines that were the backbone of thermodynamics. Thermodynamics basically started with people thinking about this kind of periodic heat engine. And I'm going to talk about that; the difference from what you saw in the course is that the master equation will have transition rates that are periodic in time. OK, which is mathematically a little bit more difficult than finding the stationary state, but it's something that can be worked out also. OK, so let me tell you the history of heat engines in stochastic thermodynamics.
So, you know, this might be the prototypical model for it. That's a colloidal particle in a harmonic potential. So what you do here is, first, you change the stiffness of the potential. That would be something similar to increasing the volume in a Stirling engine, right? Because, you know, the particle in this potential is more confined, and the particle in this potential here is less confined, right? So if I decrease the stiffness of the potential, the particle is free to move more. It is still bounded by the potential, but because the stiffness is smaller, it's like I have a bigger volume, OK? So the engine is a single particle. It's a single colloidal particle. I first change the volume. Then I change the temperature from hot to cold. Then I change the stiffness of the parabola back. And then I change again from cold back to hot, OK? That's similar to a Stirling cycle, but, you know, there are big differences here. The first one is that the stochasticity here is very important. While a typical heat engine would be a system made of many, many molecules, this is a heat engine made of a single colloidal particle, OK? So that's one difference. The other difference is that I can do this in finite time, OK? This is not a quasi-static engine. I can make the steps either slow or fast, depending on the engine I'm dealing with. So for example, these changes in temperature, they can be pretty fast. And, you know, this engine operates in finite time. So this is a heat engine that operates in finite time and that can have large fluctuations, OK? And so, you know, this kind of model was proposed in 2008 by Tim Schmiedl and Udo Seifert and was realized experimentally for the first time in 2012. Now, lots of people work on that, with experiments, and there is theory. So, you know, the idea of stochastic thermodynamics here is: let's do heat engines that are small and that operate in finite time.
And, you know, since the paper by Udo Seifert and Tim Schmiedl in 2008, there has been lots of work on that, OK? Again, this probably started in 2008, I would say. And, you know, it's a very interesting perspective. It's a heat engine again, but one that is small: the system is a single colloidal particle, and it operates in finite time, OK? OK, and, you know, the efficiency of these engines, the extracted work divided by the heat you take from the hot reservoir, must be bounded by the Carnot efficiency. This bound is true, OK? OK. So that's all good, but in 2016, these people here proposed a different heat engine. Again, it's the same as the heat engine before: it's a single colloidal particle. It goes from hot to cold, then back from cold to hot. So it's the same kind of cycle. But the difference is that, you know, while before my solution was just a colloidal particle in water, for example, now it's also a colloidal particle in water, but on top of that, they just put a bunch of bacteria in the water, OK? And these bacteria are something that you cannot see, but of course they're going to collide with the particle and make a huge difference in the engine. And, you know, if before you could imagine the water as a reservoir, as an actual equilibrium reservoir, now you can imagine the water plus the bacteria as either an active reservoir or a non-equilibrium reservoir, whatever you want to call it. But, you know, in this case here, I would call this one an active heat engine. It's active because I have all these bacteria in the solution. So I have these hidden dissipative degrees of freedom that will make a huge difference in my engine. And why do they make a huge difference? So when they did this experiment, one thing they observed is: if I do the same protocol, OK, I can do the exact same protocol, by that I mean the exact same changes in temperature and energy, OK?
But if I put the bacteria in, as compared to not having the bacteria, the work I extract can be larger when I have the bacteria, OK? That's one observation. So basically, if you have a heat engine in an active medium that follows the exact same protocol as a heat engine in a passive medium, the heat engine in the active medium can extract more work. That's possible, OK? That's one interesting observation. The other observation they made is that if you look at some sort of efficiency, which I would call a pseudo-efficiency in this case, which is the extracted work divided by the heat, they could get a pseudo-efficiency that was larger than the Carnot efficiency. Now, this heat here is only the heat associated with the colloidal particle, OK? It does not take into account all the energy dissipated by the bacteria. OK, so you can look at something like this. It's a very interesting perspective; it's a very interesting experiment. The relevance is that lots of reservoirs are non-equilibrium reservoirs. So the concept of an active medium is quite broad: let's say you want something to do something inside our bodies, OK? Let's say I want to put a molecule inside our bodies to do something periodic and complete some task, which could be work extraction, but could be something else. So of course, if I put a molecule inside our bodies, it's going to encounter all these molecular motors walking around, and whatever else is there if it's inside a cell. And it's probably going to be effectively in an active medium. So being in an active medium can be a quite common scenario. So beyond this experiment, a quite interesting question to think about is a heat engine in an active medium.
But given these observations, that you can extract more work and that the efficiency, or the pseudo-efficiency, can be larger than the Carnot efficiency, a very good question to ask, again, is: what is the appropriate statement of the second law for active heat engines? So that's what we discovered. I mean, several theory groups worked on this problem, on the problem of active heat engines. But I think nobody before us was really able to answer this question: what's the appropriate statement of the second law for an active heat engine? Now, you could say: OK, yeah, I can have an efficiency larger than the Carnot efficiency, but that's because this heat is not taking the heat of the bacteria into account. And that's true. If you take all the heat dissipated by the bacteria into account, then there is nothing like an efficiency larger than the Carnot efficiency; that wouldn't make sense. But that's not a very good answer, for two reasons. One is that the bacteria are the dissipative degrees of freedom in the reservoir, so you don't really have access to them. So if you have a second law that also includes the dissipation of the bacteria, you would need to have information about the dissipation of the bacteria, which you don't have. So answering this question by simply including the dissipation of these hidden dissipative degrees of freedom is not a very good answer, because you want a second law that can be evaluated by only looking at the particle. Is there a second law that applies to the system, that tells me how much work I can extract or what kind of task this heat engine can accomplish, that only contains quantities I can calculate by only looking at the particle? And another reason you do not want to include the dissipation of the bacteria is that this dissipation is very large.
So the work you extract in the engine, which is related to changes in this parabola, changing the stiffness of the parabola, is definitely much, much smaller than all the energy the bacteria dissipate. And if that's true, if I have a second law that includes two quantities, and one quantity is much larger than the other, all the second law is gonna tell me is that the larger quantity is positive: it's gonna totally dominate the second law. So if I was to include the energy dissipated by the bacteria in a second law inequality, all the second law would tell me is that the energy dissipated by the bacteria is positive, because whatever the work is doesn't really matter; the work is just much smaller than the energy dissipated by the bacteria. So basically, there is an answer to this question of what's the second law that most specialists would give: OK, just include the energy dissipated by the bacteria and you should get the normal second law; that's true. But that's a very unsatisfactory answer to this question, because this is information you are not gonna have in general. So if you put this in an active medium, you don't really know about the dissipation of the active medium, and you don't really wanna express your second law, or answer the question about the performance of the engine, including quantities that cannot be calculated by simply looking at the position of the particle, for this particular model, okay? So that's kind of what we did: we found a statement of the second law that fulfills these requirements. Okay, so I can do a much simpler model with the same physics that was there. The model we did was like a two-state model, so I can increase the energy of the state, then I change the temperature, then I decrease the energy and then I change the temperature again.
It's similar to the parabola model, but now it's a two-state model, and to make the engine active, I put a delta mu inside, okay? This is a little bit different, because here I can see all my degrees of freedom, and before I could not see the degrees of freedom; here I can. But it doesn't really matter, because the second law remains the same; that's what I'm gonna argue. So in this model, the delta mu is the delta mu we saw before: there are two chemical reaction pathways between the two states, and I put the delta mu there. So if the delta mu is not zero, the engine becomes active; if the delta mu is zero, the engine is passive. And if you do this two-state heat engine, which again is something you can solve exactly, it's a very simple model, you can pretty much see all the observations from the experiment. Which is, you know, the work an active heat engine extracts can be larger: when you start at the zero of the x-axis, the delta mu is zero, that's a passive heat engine; and as you can see, this red region is far from delta mu equal to zero. So, you know, if I turn on the delta mu, if I make the heat engine active, the extracted work can be larger. And this blue region here, in the delta mu, beta h plane, is a region where the pseudo-efficiency, which is the work divided by the heat, is larger than the Carnot efficiency, okay. And I call this a pseudo-efficiency because, again, when I talk about heat here, I am not including the cost of the delta mu; I'm just talking about heat that is associated with jumps between the states zero and E, okay. So, you know, heat would be: if I jump from zero to E, I must take an amount of heat E from the reservoir, and if I jump back, it's minus E, and so on and so forth. So the heat I'm talking about is the energy difference that comes when I make a jump between the two states, okay.
Again, this two-state model has almost all the ingredients of the experiment. The only ingredient it does not have is the sort of coarse graining, the fact that there are these bacteria I cannot see. Here, you know, I don't have bacteria, I just have this delta mu, and I can always see my states. So there is a little bit of a difference between both models, but most of the physical observations can already be made with this very, very simple model, which I would argue is the simplest model of an active heat engine you can do, okay. So, you know, with this very simple model you can work out everything. And if I calculate the quantities in this model, I can calculate the work, I can calculate the hot heat, and I can also calculate the delta S active. Now, the delta S active is the entropy change associated with burning ATP, with that delta mu cycle I had in the figure before. So whenever I do this cycle here, okay, or this cycle here, or this cycle here, I burn an ATP, okay. So delta S active is the entropy production associated with burning ATP in these cycles, okay. And again, this is the entropy change per period, okay. Now, I have a period of oscillation, all right, but it's not an oscillation like I had before; it's an oscillation that is imposed, right. So I have the period of the heat engine, and that's the entropy change per period. So if I look at these quantities, what I see is, you know, I told you that the energy dissipation of the bacteria was much higher in that model, but this can be expressed in a way that is more formal and a little bit more clear, which is the fact that the heat and the work saturate as I change my period tau, the period of the heat engine, okay.
So, if I keep increasing the period, the heat and the work saturate, while the delta S associated with the delta mu just grows linearly with the period, okay. And that's easy to understand: if I keep increasing the period of the heat engine, the work per period is gonna saturate, of course, because the work is related to changes in the potential, but the entropy change because of the delta mu is gonna keep increasing because, you know, the larger the tau, the more cycles I do between the two states. And so this delta S active just increases linearly with tau. So, you know, again, that's just to argue that this naive second law is not a very good one, because the delta S active, after a certain tau, is just gonna become much larger. And so it's not something I really want in my second law. And the right second law is this one in blue. So the right second law includes this QH, includes the W, and includes an information-theoretic term that I'm gonna try to explain in the next slide. So that's what we found: we found this correct second law for an active heat engine, okay. Which I think was a very important problem. Now, the point is that this right second law comes from something in stochastic thermodynamics called excess entropy. Sometimes people call this non-adiabatic entropy, okay. I prefer the name excess entropy, but, you know, for a periodically driven system both of them are the exact same quantity. In general, they might differ by boundary terms, but for a periodically driven system, the average excess entropy and the average non-adiabatic entropy are the exact same quantity. But in any case, excess entropy is something that has existed in stochastic thermodynamics for a good amount of time, maybe 20 years or so. And it was introduced a long time ago for steady states. Again, in the lectures, we mostly saw steady states. For a steady state, the average excess entropy is just zero, okay.
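The scaling argument above (work per period saturates while delta S active grows linearly in tau) can be checked with a minimal sketch. This is my own toy version with made-up parameters, not the exact model from the paper: a two-state system with a periodically driven energy E(t), a thermal transition channel, and a second chemical channel biased by Δμ.

```python
import math

BETA, E0, DMU = 1.0, 1.0, 2.0   # inverse temperature, energy scale, chemical drive (made up)

def run(tau, n_steps=100_000, warm_periods=1):
    """Integrate the two-state master equation over one period after a warm-up,
    returning (work done on the system, active entropy production) per period."""
    dt = tau / n_steps
    p1 = 0.5                                  # occupation of the excited state
    work, ds_active = 0.0, 0.0
    for n in range((warm_periods + 1) * n_steps):
        t = n * dt
        E = E0 * (1.0 + 0.5 * math.sin(2 * math.pi * t / tau))
        dEdt = E0 * 0.5 * (2 * math.pi / tau) * math.cos(2 * math.pi * t / tau)
        a_t, b_t = math.exp(-BETA * E / 2), math.exp(BETA * E / 2)   # thermal channel
        a_c = math.exp(-BETA * (E - DMU) / 2)                        # chemical channel, up
        b_c = math.exp(BETA * (E - DMU) / 2)                         # chemical channel, down
        if n >= warm_periods * n_steps:       # measure only over the last period
            work += p1 * dEdt * dt
            ds_active += BETA * DMU * (a_c * (1 - p1) - b_c * p1) * dt
        p1 += ((a_t + a_c) * (1 - p1) - (b_t + b_c) * p1) * dt
    return work, ds_active

w10, ds10 = run(10.0)
w100, ds100 = run(100.0)
# ds100/ds10 comes out close to 10 (linear growth with tau),
# while the work per period stays bounded.
```

The chemical current keeps flowing at roughly constant rate no matter how slow the protocol is, so its entropy cost per period scales with tau; the work per period is set by the protocol amplitude and does not.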
So that's not a quantity we talk about too much for steady states, because there the average excess entropy is zero. Now, a big interest in excess entropy came from the fact that this excess entropy alone fulfills a fluctuation theorem, okay. So the point I wanna make here is: excess entropy is a mathematical quantity that has existed in stochastic thermodynamics for a long time. It had some physical interpretation, but the fact that it is really the right quantity to look at when you have an active heat engine is something that we discovered, okay. And if you do a smart decomposition of the excess entropy, you find that it can be expressed in this form. It has the hot heat, it has the work, it has the I term. Now, the advantage of this I term is that it doesn't have the scaling problem. You see, it doesn't grow linearly with time; it also saturates. And the I term here, if I think about the original experiment, is something that I can calculate by only looking at the position of the particle. There is no need to look at the bacteria, okay. Okay, so let me try to explain how we do that. So let's say you have a master equation like we had in this course, right. The k's here play the same role as the w's we had. But now my transition rates, k ij and k ji, depend on time, and they are periodic, okay, with a period tau, which is the period of the engine. If that's true, and I look at the long-time solution of this master equation, it's gonna be also periodic. So this is not really a steady state. You could call this a periodic steady state, okay. It's just the long-time solution of the equation, which will also be periodic, okay. Now, I can think about two probability distributions. One is this p i of t, which is really the probability of the system to be in state i at time t; it is periodic in time and it's the long-time solution of this equation, okay. You can compute it using Floquet theory, if you know what Floquet theory is.
But, you know, you can calculate this thing. And the other distribution I would like to think about is the stationary distribution the system would have if I was to freeze my protocol. What does it mean to freeze the protocol? For the engine I had before, it would simply mean that I fix the energy, I fix the stiffness of the parabola, I fix the temperature, and then I let the system relax to a steady state, okay. That's the steady state I would have if I was to stop changing my temperature and energy, okay. And this PS is that distribution. Now, if the heat engine is passive, okay, if I don't have the bacteria there, I just have a parabola and that's it, I will go to an equilibrium distribution, which would be just a Gaussian given by that parabola, basically. If I have the bacteria there, then it's more complicated: I'm gonna get a stationary distribution which is not the equilibrium one, okay. This thing is sometimes called the accompanying density or accompanying distribution. So again, that's the probability distribution I would have if I was to freeze my protocol at that particular time t, okay. So there are these two different distributions. And again, for an active heat engine, this PS here is gonna be a non-equilibrium distribution, and for a passive heat engine, this PS is gonna be an equilibrium distribution, okay. Now, the p i of t, which is the real probability distribution of the system, is different from the PS, okay. This difference is a very important point for that I term I had before. And again, the reason I'm explaining these two distributions is that they are very important for that I term I was talking about, okay. So there are two distributions: one is the probability distribution of the system; the other is the probability distribution the system would have in the stationary state if I was to freeze my protocol at that particular time t, right. All right, so the way you do that is, okay.
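These two distributions can be illustrated numerically (a sketch with made-up parameters, for a passive two-state system rather than the full engine): integrating the master equation with a periodically modulated energy E(t) gives a periodic long-time solution p(t), which lags behind the frozen-protocol (accompanying) distribution p^s(t) when the driving is fast, and tracks it when the driving is slow.

```python
import math

BETA, E0 = 1.0, 1.0    # made-up parameters for a passive two-state system

def lag(tau, n_steps=50_000, warm_periods=5):
    """Integrate dp1/dt = w_up (1-p1) - w_down p1 with E(t) periodic; return
    (max deviation from the accompanying distribution, periodicity error)."""
    dt = tau / n_steps
    p1, p1_period_start, dev = 0.3, 0.0, 0.0
    for n in range((warm_periods + 1) * n_steps):
        t = n * dt
        E = E0 * (1.0 + 0.5 * math.sin(2 * math.pi * t / tau))
        w_up, w_down = math.exp(-BETA * E / 2), math.exp(BETA * E / 2)
        if n == warm_periods * n_steps:
            p1_period_start = p1              # start of the measured period
        if n >= warm_periods * n_steps:
            ps = 1.0 / (1.0 + math.exp(BETA * E))   # frozen-protocol stationary state
            dev = max(dev, abs(p1 - ps))
        p1 += (w_up * (1 - p1) - w_down * p1) * dt
    return dev, abs(p1 - p1_period_start)

dev_fast, per_fast = lag(1.0)     # fast driving: p(t) lags behind p^s(t)
dev_slow, per_slow = lag(100.0)   # slow driving: p(t) tracks p^s(t)
```

The periodicity error checks that the long-time solution really repeats with the period of the protocol, which is the "periodic steady state" mentioned above.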
We talked about generalized detailed balance. So in case you never saw it, this is the general form of generalized detailed balance. "General generalized" is a little bit too much, but that's what this equation is. So I have some beta, I have some delta E, then I have some A alpha, which is the affinity, right. The affinity is what I call delta mu. This d ij is a generalized distance. So for example, if the alpha is the delta mu, the d ij is how much substrate I burn when I make a transition from i to j, okay. For example, how much ATP do I burn? If I don't burn ATP, this d ij is zero. If I burn one ATP, the d ij is one and the d ji would be minus one, and so on and so forth, okay. So, you know, if I write my transition rates like that, which is pretty general, because the transition rates can always be written like that, and then I calculate my entropy production, which is the entropy production from stochastic thermodynamics that we saw in this course, I will get this equation here, okay. So delta S active is gonna be the active contribution, the contribution associated with the affinities A alpha. The eta C is the Carnot efficiency. JQ is the heat; it's the same as QH, but it's a little bit more general. It covers the case where the temperature is a periodic function that's not only switching between hot and cold, but a more general periodic function. But that's the heat you take from the hot reservoir, and that's the work you extract, okay. Okay, so that's the normal entropy production; it will have this term. If I take the excess entropy, which is defined by this equation here, I can show that it can be written like this, okay. That's something you have to demonstrate; the other one is just the definition. It's not so hard to do. It takes a lot of thinking to figure things out, but once you figure it out, mathematically it's pretty straightforward. But that's the general form of the excess entropy.
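Schematically, the structure described above can be written as follows. This is my hedged reconstruction from the verbal description, with β(t) the inverse temperature, A_α the affinities, d_ij^α the generalized distances, and η_C the Carnot efficiency; the precise signs and prefactors should be checked against the paper:

```latex
% Generalized detailed balance for time-periodic rates
\ln \frac{k_{ij}(t)}{k_{ji}(t)}
  = \beta(t)\,\bigl[E_i(t)-E_j(t)\bigr] + \sum_\alpha A_\alpha\, d^{\alpha}_{ij}

% Decomposition of the entropy production per period (schematic)
\Delta S_{\mathrm{tot}} = \Delta S_{\mathrm{active}}
  + \beta_c\,\bigl(\eta_C\, Q_H - W\bigr) \;\ge\; 0

% Excess entropy: the extensive active term is replaced by the
% information-theoretic term I, which saturates with the period
\Delta S_{\mathrm{exc}} = \beta_c\,\bigl(\eta_C\, Q_H - W\bigr) + I \;\ge\; 0
```

The point of the last line is exactly what the talk emphasizes: the bound on the extracted work involves only QH, W, and I, with no reference to the dissipation of the hidden degrees of freedom.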
Now, everything I'm talking about here, these second laws we have, is extremely general, okay. The only assumption I have to make is that at some level of description, the system must be Markovian. So, if you think about the experiment I had, which was a colloidal particle with a bunch of bacteria: if I only look at the colloidal particle, the dynamics of the colloidal particle is actually gonna be non-Markovian, okay. But if I look at the full Markov process, which is the colloidal particle plus the bacteria, then at that level of description things are gonna be Markovian, okay. So the only requirement we need for this theory to work is that at some level of description, when you include also the hidden degrees of freedom of the reservoir, the system has to be Markovian, okay. That's pretty much it. And that should be true for most active heat engines, okay, or for pretty much all of them, if you think about biological systems or something like that. Okay, I came back. Okay, so I got this term, and let me try to explain what the I term is, okay. So the I term can be defined in this way; that's the definition of I. Now, in this equation I have the log of PS. Remember, PS is the stationary distribution I would have if I was to freeze my protocol at time t, and P equilibrium is the stationary state I would have if it's a passive heat engine, okay. If it's active, then this PS is gonna be non-equilibrium. So if I write down this quantity, it can be written like this: it can be written as the Kullback-Leibler distance between the probability P, which is the real probability distribution of the system, and the equilibrium distribution, minus the Kullback-Leibler distance between P and the non-equilibrium stationary distribution an active engine has. If the heat engine is passive, okay, this I is equal to zero.
In that case my stationary state is also the equilibrium one, P equilibrium, and I just get zero here, okay. If I take a time derivative of this quantity, which is this Kullback-Leibler distance, then I get this result. And, you know, what I can see here is that this is the I I wrote there before, and this thing here must be zero because the system is periodic, okay. This is the derivative of a function that's periodic: when I integrate the derivative of a periodic function over a period, I get zero. And because this thing is zero, I get this equality, okay. That's kind of hard to follow, but the point is that the I thing is part of the time derivative of this quantity here, okay, the Kullback-Leibler distance between P and P equilibrium, and between P and PS, okay. That's kind of an information-theoretic interpretation of the term I. Okay, so that's the I term. And so, I mean, what's the physical interpretation of this thing? Okay, the physical interpretation is the following. If I look at an active heat engine, it has this extra term I here, okay. And that is the term that quantifies — I mean, the first question you should ask yourself is: why can an active heat engine extract more work than a passive heat engine? Well, the reason that can happen is that, because of the active degrees of freedom, your active heat engine has a different probability distribution as compared to the passive one. So if you put in all these active degrees of freedom, you shift your probability distribution. It doesn't really matter how much energy your active degrees of freedom dissipate. What really matters is how you shift your probability distribution. And this term I here quantifies the shift in the probability distribution that the active degrees of freedom generate, okay. So, I mean, whether the active degrees of freedom dissipate a lot of energy or not does not really matter.
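The information-theoretic form of I can be made concrete in a few lines (an illustration with made-up distributions, not the engine's actual ones): I compares how far the instantaneous distribution p is from the passive equilibrium state versus from the active stationary (accompanying) state.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler distance D(p||q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

p     = np.array([0.7, 0.3])   # instantaneous distribution of the system (made up)
p_eq  = np.array([0.5, 0.5])   # equilibrium state of the frozen passive engine
p_act = np.array([0.6, 0.4])   # non-equilibrium accompanying state of the active engine

I_active  = kl(p, p_eq) - kl(p, p_act)   # the I term for the active engine
I_passive = kl(p, p_eq) - kl(p, p_eq)    # passive case: accompanying state = equilibrium
# I_passive is exactly zero. Here I_active is positive because the active shift
# moves the accompanying state toward the actual distribution; in general the
# sign depends on how the active shift aligns with the protocol.
```

This mirrors the statement in the talk: for a passive engine the two distributions coincide and I vanishes, while for an active engine I measures the useful shift of the distribution.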
What is really, really important is that they shift the distribution the right way. And so, if I include this I term in my second law, then I can explain everything. I can explain why the extracted work can be larger, why the pseudo-efficiency can be larger than the Carnot efficiency, and so on and so forth, okay. So that's kind of the physical interpretation of this thing. Again, understanding these things in a detailed manner takes some time, okay; it's a little bit technical. There is the issue of coarse graining, which I'm not gonna talk a lot about; I am defining things without coarse graining. But if you think about the original experiment, the variable X, which is the position of the particle, is a coarse-grained variable. The important point is: if I do the coarse graining, if I only look at the dynamics of X, which is gonna be non-Markovian, the statement survives. So, in this notation here, i is equal to the pair (X, A), okay; that's the full state of the system. X would be the position of the particle and A would determine the state of all the bacteria in the solution, okay. Which, again, is just a theoretical construct; I don't really have to know those to apply the result. But if I do the coarse graining, it turns out that the second law I had before retains the exact same structure, which means that I can do this even for that experiment, and I can measure all these things, W, JQ and this I, by only looking at the position of the particle. There is no need to look at the state of the bacteria, okay, to calculate the quantities that show up in our second law. Okay, so, you know, we also did several models. So this graph here illustrates that, while the pseudo-efficiency can be larger than the Carnot efficiency, the correct efficiency, which is this one that also incorporates the I term, is not. So basically, an active heat engine can use two different resources to extract work.
One is the heat and the other is this I term, okay, which is about how I shift my probability distribution. But basically, if you compute the correct efficiency, it's always bounded by the Carnot efficiency, represented by this black line. So the blue line is the real efficiency and the red one is the pseudo-efficiency. And again, this is for a numerical 2D model, which is inspired by the experiment, okay. And this has been published recently, so that's the paper. And here, with this slide, I finish my talk. So I told you two different stories. One is that, you know, if you wanna have N coherent oscillations, you must dissipate 4 pi squared times N. And the other one is that, you know, there are active heat engines, the first experiment was in 2016, and I'm talking about periodically driven heat engines, and we now know what's the appropriate statement of the second law. And the point there is: it doesn't really matter how much energy you dissipate. What really matters is the way you shift the probability distribution, as quantified by this information-theoretic term I called I in this talk. Okay, thank you for your time. And I'm happy to get questions. Thank you, professor. Are there questions from the in-person audience? Yes, please approach. Yes, yes, approach the microphone. Thank you. Hello, Andre. Can you hear me fine? Yes. I mean, we were discussing a lot in this school, maybe in the first lecture that you gave: one of the postulates of this whole framework was that if you have a transition rate W from i to j, you have to have the reverse transition, no? Yeah. And we were wondering if there are extensions of this framework without that particular postulate, like processes that are really out of equilibrium, I would say. Well, I mean, microscopically irreversible.
I mean, yeah, there are Markov processes with irreversible transition rates that allow for a physical interpretation, but you have to go a little bit beyond what was taught in the lectures. There are Markov processes where the reverse transition rates are zero that allow for a certain kind of interpretation falling within stochastic thermodynamics. But, I mean, if you are just thinking about standard stochastic thermodynamics, let's say, if for example the transition from i to j is a chemical reaction, okay, there must be a reverse chemical reaction. Its rate is never zero; the chance could be very small, but if going from i to j is a chemical reaction, or if it's a colloidal particle jumping, there must be a reverse transition. Like, if the colloidal particle, because of thermal fluctuations, can go from left to right, then it must be able to also go from right to left. And the same thing is true for a chemical reaction. So, I mean, again, there are lots of Markov processes and there are lots of things in stochastic thermodynamics. So there are cases where you can get — well, let's imagine one thing. For example, I have worked with stochastic protocols, okay? Imagine an external protocol that is stochastic and jumping. A stochastic protocol is something that could have irreversible transition rates: it's something you control from the outside, and you could control it from the outside in such a way that there are only transitions in a certain direction, okay? So, for example, I have worked with that, and there you can do stochastic thermodynamics with irreversible transition rates, rates for which there is no coming back.
But for standard stochastic thermodynamics, let's say, if you are just thinking about states, the state of a colloidal particle or the state of a chemical reaction, there must be a reverse transition rate. I mean, I don't know any example of a chemical reaction that has no reverse, or of a colloidal particle that only jumps to the right; thermal fluctuations will also push you to the left, even if the drive is very strong. So yeah, it is possible to have Markov processes with irreversible transition rates, and these are Markov processes that allow for some sort of stochastic-thermodynamic interpretation, but in the standard framework there must be a reverse transition rate. It's not really that we impose it because it's mathematically pleasant and makes things easier to deal with; it's more a physical thing. I mean, if there is a chemical reaction, then there must be a reverse one. If the particle jumps to the right, then it must be able to jump to the left, okay?

Okay, but for example, you can think of thermal engines. Like, for example, a bird: it is flying, and for sure there is energy consumption there, and it's kind of a thermodynamic machine, but it's not in a thermal bath. I mean, could you apply this framework, or is there something similar to this framework that you can apply to those systems?

A bird? What's the example of the bird? The bird is flying? Yes, like, I mean, there is a lot of work now on collective behavior, collective behavior in active matter. Flocks of birds, I mean. Yeah.
I mean, the problem is with the word active here. I think this word active is used for a lot of stuff. The word active does not have a clear meaning in the sense that there is an equation behind it; at least in the context of my talk it has one, but in general, when people say active, it's kind of hard to define it with an equation. It's not like non-equilibrium: if the entropy production is zero, it's equilibrium; if the entropy production is not zero, it's non-equilibrium. Active does not have a similar mathematical definition, and that's a bit of the danger of the word. But if you think about a flock of birds, okay, it would be hard to think about that as a thermodynamic system. It's even hard to think about an entropy production there, or to think about an engine. I mean, what do the birds do? They fly together. Of course, for a single bird alone, you could probably do thermodynamics; it would be very complicated. But for a flock of birds, it would be hard to even think about the concepts of thermodynamics, I guess. I mean, what is the temperature there? What would they do, move together and deliver heat? That would be strange. But for a bird alone, maybe. And if you want to think about this physically, imagine putting something inside the human body, say a molecular machine, and trying to complete a task, okay? There will be lots of stuff going on in the human body, and so on and so forth. But yeah, that's kind of my answer, which I don't know if it's totally satisfactory, but that's what I have at the time. If you want to rephrase the question, in case my answer was not satisfactory, I'm happy to try to improve it.
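Coming back to the earlier point about reverse transition rates: the pathology of a strictly vanishing reverse rate can be seen in a minimal numerical sketch. The model here is a hypothetical uniform three-state driven cycle, not one from the lectures; its steady-state entropy production rate diverges as the reverse rate is sent to zero.

```python
import math

def cycle_entropy_production(k_plus, k_minus, n_states=3):
    """Steady-state entropy production rate (in k_B per unit time) for a
    uniform driven cycle over n_states states, with forward rate k_plus
    and reverse rate k_minus on every link. By symmetry the steady state
    is uniform: p_i = 1/n_states."""
    p = 1.0 / n_states
    current = p * (k_plus - k_minus)       # net probability current per link
    affinity = math.log(k_plus / k_minus)  # log-ratio of forward to reverse rate
    return n_states * current * affinity   # sum of current * affinity over links

# The dissipation grows without bound as the reverse rate goes to zero,
# which is why a reverse rate of exactly zero breaks the standard framework:
for k_minus in (1e-1, 1e-3, 1e-6, 1e-9):
    sigma = cycle_entropy_production(1.0, k_minus)
    print(f"k_minus = {k_minus:.0e}  sigma = {sigma:.3f}")
```

At equal forward and reverse rates the current vanishes and the entropy production is zero, recovering equilibrium; as the reverse rate shrinks, the log affinity diverges.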
No, okay, I can do it properly later, thank you.

Okay, I see that there are some questions in the chat box. "What do you mean by the relation between information...?" No, that one was answered already, I guess, right? Yeah. "Is that an effective temperature of the active bath?"

Yeah, so, concerning effective temperature, I should say that effective temperature is not something that I love. The whole point of doing this theory was really to avoid effective temperatures. The problem with effective temperature is that it's something you can define in different ways, and it always applies only to a particular model or a particular class of models. So the whole point of doing what we did is that the temperatures we have when we write down the second law are bona fide temperatures, okay? They are real temperatures: just the temperatures of the reservoirs. And within our description, we can completely avoid effective temperatures and never talk about them. They are not necessary, because we have a framework that uses the real temperatures of the reservoirs, the hot one and the cold one. So our framework is much more general; it applies to pretty much everything. You could have a single colloidal particle, you could have a system with interactions, and it's not necessary to talk about this. Depending on the model, like the model we did, one could define an effective temperature by looking at the mean square displacement of the particles or something like that. That's something we could do, but the main point of our framework is that we don't need to talk about effective temperatures, okay?

Thank you, Professor. Any more, Natalie? Oh, sorry, there is one more, from the first questioner. Hi, I was just wondering, what if you start to manipulate the activity of the active medium? Will that affect this law? Yes, because, yeah.
I mean, do you want to say something else? No, I just thought maybe you would change it in a way that didn't follow the protocol of the rest. Yeah, so, yes. What our second law tells you is that what really, really matters is the way you shift the probability distribution. So if you change this activity, you change the shift in the distribution, okay? How exactly depends on the model, but we do have the term that quantifies the shift in distribution. So yes, if you change the activity, you are going to change the shift in distribution as quantified by the term I, and then you change the second law, and then you might be able to do more work or something else. Okay, thank you.

Okay, there is a question in the chat. Yes: "Now the bath is out of equilibrium, so the temperature is not well defined." So yeah, what you're saying is true and not true. I mean, one reason I don't like this name of non-equilibrium reservoir, or active reservoir, is that it's a nice name, but you have to be a bit careful there, because this is just a matter of coarse-graining. Let's think about the experiment. If I only look at the particle, then it's non-Markovian and it's a non-equilibrium bath. But if I were to take the particle plus the bacteria as my system, even if I cannot really do that in practice, I cannot see the positions of the bacteria, at least theoretically I can imagine doing it. Then the bath is an equilibrium bath and there is a well-defined temperature, okay? So this thing of an active bath, at least when I think about a biological system, is more a matter of coarse-graining. By that I mean: if I think of the particle plus the bacteria as my system, then it's an equilibrium bath. So when I say the temperature, the temperature is what I get when I also include the dissipative degrees of freedom of the reservoir as part of my system, and then I have an equilibrium bath.
And then, starting from this description, I can simply do a coarse-graining and look only at the position of the particle. And in that case, I still have a well-defined temperature. I hope that answered your question.

Kumar? I have a related question. Yeah. Please be quick, because we have... Yeah. Can the medium, the bacteria, also be a small system, like the engine itself? I mean, the bacteria, I could have just one bacterium, that would be fine. But the medium, the solution where I am: when I think about the full description where I also include it, the number of dissipative degrees of freedom that make my bath active can be small. That's fine. But the bath per se, in the description where I also include the dissipative degrees of freedom, must be something big and an equilibrium thing. Okay, thank you. That's nice, thank you very much.

So thanks a lot, Andre. It's really a pleasure to have you, even online. He has been teaching at 5 a.m. Houston time for us. Appreciate it. Appreciate it. Thank you very much. A pleasure. Thank you.