Okay. So welcome everybody. It is a pleasure to introduce Artemi Kulcinski. Artemi got his PhD at Indiana University in complex systems and is currently a postdoctoral fellow at the Santa Fe Institute. His work spans stochastic thermodynamics, machine learning, and information theory, and today he is going to talk about thermodynamics and information. Just a few rules before starting: if you have any questions, you can just unmute yourself and ask during the talk. With that, I leave the floor to Artemi.

Thank you, Jacopo. Can you hear me? Yes? So thank you all for coming today. Like Jacopo said, I'm going to be talking about the thermodynamics of information and, more generally, about entropy production under what we call protocol constraints, which I'll describe below. I want to say that this is joint work with David Wolpert, who is faculty at the Santa Fe Institute and who is also connected now. This work is in progress, so the preprint should be available soon, but if you are interested in seeing it, please feel free to email me.

Okay. So my talk will consist of three sections. In the first section I'll provide a fairly general background on thermodynamics, particularly the concept of free energy, and also recent work on the so-called thermodynamic value of information. In the second part of the talk, I'm going to motivate the basic problem behind our approach and try to convince you that it's an interesting problem. And in the final section, I'll give some of the results we've derived so far, particularly ones that concern what we call symmetry and modularity constraints. So let me begin with the background introduction: what is thermodynamics, and in particular, what is the role of free energy in thermodynamics?
I take thermodynamics, at a very high level, to be the science of understanding which transformations are or are not allowed by the fundamental laws of physics. Let me explain this with the following setup, which I'll use to illustrate the basic laws of thermodynamics. Imagine that we have a physical system whose thermodynamic state we describe with two functions: P, the probability distribution over the microstates of the system, and E, the energy function the system is subject to. This might be something like a box of gas with a piston on one side, or a little biological organism; it could be many things. We're going to assume that this physical system is coupled to two reservoirs, or rather one thermodynamic reservoir and a work reservoir. The thermodynamic reservoir is what's called a heat bath, and it has a certain temperature T. The work reservoir could be something like a weight connected to a pulley. Now, over some time interval from time zero to time tau, the physical system undergoes a driving protocol. I will define later precisely what I mean by a driving protocol, but for now you can just think of it as follows: during the driving protocol, the distribution of the system changes over time, and the energy function of the system can also change over time. So at time tau the system ends up in some different probability distribution P' and some different energy function E'. In doing so, it can also exchange energy with the heat bath and the work reservoir. Okay, so I can now tell you what the laws of thermodynamics state about this setup, and we'll see that they provide constraints on which transformations are allowed by the fundamental laws of physics. The first law of thermodynamics states that energy is conserved.
In this case, we can say that the total change of energy of these three coupled systems has to be zero. We typically call the energy given to the heat bath "heat" and the energy given to the work reservoir "work," and we write them as Q and W.

(I'm sorry, I think my connection dropped. Okay, I'm back, and sharing again. Apologies.)

So the first law of thermodynamics says that unless this equation is obeyed, the transformation is not allowed. That was the first law. The second law of thermodynamics says that the overall entropy of this whole setup has to increase. Now, the work reservoir is defined in such a way that it cannot change its entropy, so we can write the second law as saying that the entropy of the system and the entropy of the heat bath together have to increase. The overall increase of the entropy of everything we're considering at once I'm going to write as sigma, and in thermodynamics this is usually called entropy production, or sometimes irreversible entropy production. We can rewrite the right-hand side as the increase of the Shannon entropy of the system, the entropy of P' minus the entropy of P, plus the entropy increase of the bath, which is given by Q, the amount of heat, divided by kT, where k is Boltzmann's constant. This just comes from the fundamental definition of what a heat bath is and what temperature is. I also want to emphasize that while this is the general setup I'll be considering during this talk, the framework extends.
More broadly, it can also encompass things like multiple heat baths at different temperatures and other kinds of thermodynamic reservoirs, like chemical baths. We can still define the first law and the second law in those settings, but for simplicity I will consider the simple case of a single heat bath and a single work reservoir coupled to our system. If we now take the first law and the second law, combine them, and rearrange, we can derive a fundamental bound on how much work we can extract by carrying out any such transformation. The maximal amount of work that we can extract from the system is given by the drop of the so-called nonequilibrium free energy of the system, where the nonequilibrium free energy is the expected energy minus kT times the Shannon entropy. I want to point out that this is well defined even for distributions that are out of equilibrium, so this is a genuinely nonequilibrium bound. In theory, this bound can be achieved: certain types of protocols can saturate it. But doing so can require various somewhat unrealistic resources, and as we'll see, in practice it may be quite difficult to achieve; this is something we will return to throughout this talk. I also want to point out that the entropy production is just given by the excess of this inequality, so the work inequality is equivalent to saying that entropy production is non-negative, which is the second law. Okay, I'm now going to give a simple example of this, using a system that we'll return to throughout this talk: a Szilard box with a vertical partition. By this I mean the following: a box connected to a heat bath, with a particle inside that can bounce around.
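Written out explicitly (this is my reconstruction of the equations described verbally, with the sign convention that Q is heat delivered to the bath and W is work delivered to the work reservoir, i.e. extracted work):

```latex
\begin{align}
  \Delta E_{\mathrm{sys}} + Q + W &= 0
    && \text{(first law)} \\
  \sigma \;=\; \bigl[\,S(P') - S(P)\,\bigr] + \frac{Q}{kT} &\;\ge\; 0
    && \text{(second law)} \\
  W \;\le\; F(P,E) - F(P',E'),
  \qquad F(P,E) &= \langle E \rangle_P - kT\, S(P)
    && \text{(work bound)}
\end{align}
```

Eliminating Q between the first two lines gives the third, so the work bound is indeed equivalent to the statement that sigma is non-negative.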
There's also a vertical partition, or barrier, that we can move left and right, and by moving it in different ways we can manipulate the location of the particle. So imagine that the initial distribution P is one in which the particle is in the right half of the box with probability one. Can we extract some work from this, how much, and how do we do it? Let's say that we go from this initial distribution to a final distribution P' in which the particle is uniformly distributed throughout the box. For simplicity, I'm going to assume that the initial and final energy functions are uniform across the whole box, which means that we can disregard the energetic contributions and focus on the entropic ones. If we evaluate the fundamental bound on extractable work for this system, we see that the maximum amount of work we can extract is kT times the increase of the entropy of the system. In this case, the entropy goes up by one bit, or, in units of nats, which is the more common unit in statistical physics, by log 2, so the bound is kT log 2. In fact, we can approach this bound arbitrarily closely by very quickly moving the partition to the middle of the box at the beginning of the protocol and then very slowly expanding it leftward, all the while extracting work from the particle as the volume it occupies expands. I write this by saying that the actual work can be made very close to the bound of kT log 2. So I've talked about the first and second laws of thermodynamics, the work bound that comes from them, and the nonequilibrium free energy. The next thing I want to talk about is the thermodynamics of information, which is a recent hot area of study in nonequilibrium statistical physics.
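As a minimal numerical sketch of this bookkeeping (my own illustration, not from the talk's slides): if we discretize the box into cells and take the energy function to be uniform, the work bound reduces to kT times the Shannon-entropy increase, which for this example is kT ln 2.

```python
import numpy as np

kT = 1.0  # measure work in units of kT

def shannon_entropy(p):
    """Shannon entropy in nats, skipping zero-probability cells."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return float(-np.sum(p[nz] * np.log(p[nz])))

# Discretize the box into four cells: indices 0,1 = left half, 2,3 = right half.
p_initial = np.array([0.0, 0.0, 0.5, 0.5])  # particle surely in the right half
p_final   = np.full(4, 0.25)                # uniformly spread over the box

# With a uniform energy function the free-energy drop is purely entropic,
# so the maximum extractable work is kT * (S(p') - S(p)).
w_max = kT * (shannon_entropy(p_final) - shannon_entropy(p_initial))
print(w_max)  # kT ln 2, about 0.693
```

The slow quasistatic expansion described above is what lets an actual protocol approach this w_max arbitrarily closely.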
So what do I mean by the thermodynamic value of information? I'm going to explain it using the following setup, which is very similar to before: we have a physical system coupled to a heat bath and a work reservoir. But now, before the system undergoes driving, we can make a measurement of the system using some observation apparatus. I'm going to write this as a measurement M of the system X. Once we've made a measurement and know something about the system's state, the system again undergoes a driving protocol, but the driving protocol can now depend on the particular outcome of the measurement. If this seems abstract, I'll explain with an example on the next slide. We again have the extracted work, but now we can average the extracted work, which I indicate on the left-hand side with this bracket notation, across the different outcomes of the measurement, and ask how much work we extract on average. It turns out that by acquiring information about the system and then using that information to drive the system in different ways, we can increase the amount of work we can extract, and we can increase it by kT times the mutual information acquired in the measurement. Again, as with the previous work bound, in principle it is possible to design protocols that come arbitrarily close to this bound, but in practice this will often be difficult or not achievable, and I'll return to cases where it isn't. Let me give you an example of this setup.

I'm sorry for interrupting, but can I ask again: the mutual information between what and what? What is M again?

Yes, of course, good question. So X is the state of the system.
X is basically the state space of the system, what the distribution P is over, and M is a random variable which is the outcome of some arbitrary measurement. Let me give a very concrete example on the next slide, and please interrupt me again if it's still not clear.

Okay, thank you.

So, let me contrast the two bounds we showed before. First, the Szilard box without measurement: for simplicity, imagine that the box begins with the particle uniformly distributed throughout it. Over the course of some protocol, it ends in a distribution, called P', which is actually the same: the particle is again uniformly distributed throughout the box. What is the maximum work we can extract, again assuming the same uniform energy function at the beginning and end? Obviously the system starts and ends in the same thermodynamic state, so it's basically a cyclic process, and we can't extract any work. Now, let's imagine that we perform a measurement of the box. Again, the box starts with the particle uniformly distributed throughout it, but now we use some kind of apparatus to ask: is the particle on the left side of the box or the right side? There are two possible outcomes, so we split our trajectory in two; there's a fork in the road, so to speak. Half the time it will be on the right side of the box, and half the time it will be on the left, so M here is just "right" or "left," one bit of measurement, if that makes sense. Now, depending on whether it was on the right or the left, we can apply different protocols to the particle. If it was on the right, we can move the barrier to the middle of the box and then slowly expand it leftward; if it was on the left, we can move the barrier to the middle and then slowly expand it rightward.
In both of these cases, we can extract essentially kT log 2 of work, so the expected work we can extract given this measurement is kT log 2. And of course, this is also exactly the amount of information acquired in the measurement: kT log 2 equals kT times the mutual information between X and M. So this shows how the bound can actually be achieved. Now, if you're familiar with classic paradoxes like Maxwell's demon, you can recognize that this is basically one part of the operation of a Maxwellian demon, and this whole framework of the thermodynamics of information was originally used to demystify and rigorously analyze the thermodynamics of Maxwell's demon. Okay, so this is all great. But before proceeding to the next section, I just want to throw out the following problematic example. Imagine again this Szilard box that we're stuck with, but now there's a twist: instead of the particle starting in one lateral half of the box, we say the particle is in the top half. We carry out some driving protocol, maybe just a free relaxation, at the end of which the particle is uniformly distributed throughout the box, and we ask how much work we can extract during this transformation. We can see that the entropy increases by one bit, so the free-energy drop is kT log 2, and the second law states that the maximum amount of work we can extract is kT log 2. But in this case, this seems very optimistic and unrealistic. Intuitively, at least, it seems like there's no way to extract work by moving this vertical partition, no matter how we move it.
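To make the feedback accounting explicit, here is a small sketch (my own illustration, under the idealized assumption of an error-free measurement) checking that the left/right measurement acquires I(X; M) = ln 2 nats and that the two-branch protocol's expected extracted work saturates kT I(X; M):

```python
import numpy as np

def mutual_information(pxy):
    """Mutual information (in nats) of a joint distribution pxy[i, j]."""
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

kT = 1.0

# X = side of the box the particle is on, M = measurement outcome.
# An error-free measurement: M always equals X.
p_joint = np.array([[0.5, 0.0],   # X = left
                    [0.0, 0.5]])  # X = right

info = mutual_information(p_joint)  # = ln 2

# Each branch of the feedback protocol (insert the partition at the middle,
# then expand slowly toward the empty half) extracts kT ln 2 of work.
w_expected = 0.5 * kT * np.log(2) + 0.5 * kT * np.log(2)

# For this cyclic process the information bound is <W> <= kT * I(X; M),
# and this two-branch protocol saturates it.
```

With a noisy measurement, the joint distribution would have off-diagonal mass, the mutual information would drop below ln 2, and the extractable work would drop with it.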
We won't be able to extract work from this distribution where the particle is in the top half of the box; there's a kind of misalignment between how we can manipulate the system and what we know about its statistical state. So the intuition is that we won't be able to extract work, and this drop in free energy would actually have to be dissipated as entropy production. The question is: is it possible to prove that no work can be extracted in this case? This is the basic motivation, the problem formulation. Can we derive bounds that are tighter than the bounds provided by the second law, bounds which hold when there are constraints on how we can manipulate the system under study? By constraints I mean, for example, that we only have a vertical partition we can move around; we don't have a horizontal partition, or a partition of some other shape, that we can also bring in and move around. We have a very limited set of tools that we can apply to the system. And of course, this is more general than just the Szilard box; we would like to study this in as general a case as possible: what are the bounds on how much work can be extracted from a system when we're limited in how we can manipulate it? I actually think that this question is quite fundamental in statistical physics, although it hasn't attracted a great deal of attention up to now. There's a quote from Maxwell which is kind of interesting and relevant to this question. The quote, which is from 1878, goes as follows. Maxwell was really thinking about this notion of free energy, which he called "available energy," and entropy production or dissipation, which he called "dissipated energy."
He said the following: "Available energy is energy which we can direct into any desired channel. Dissipated energy is energy which we cannot lay hold of and direct at pleasure, such as the energy of the confused agitation of molecules which we call heat. Now, confusion, like the correlative term order, is not a property of material things in themselves, but only in relation to the mind which perceives them." I think this is very interesting, because Maxwell is very aware that the fundamental concepts of statistical physics, things like free energy, order, confusion and so on, are not absolute properties but are relative to the agent interacting with the system. In my reading, there are at least two different things he draws attention to here. The first is "in relation to the mind which perceives them," which can be thought of as the information that we possess about the state of the system. This perspective on free energy, which was also argued for very famously by Jaynes, the inventor of the maximum-entropy principle, has by now become very standard in nonequilibrium statistical physics. For example, take the definition of nonequilibrium free energy I talked about. You see that it depends on P, the statistical state of the system, and this P really captures our knowledge or uncertainty about the state of the system. We saw, for example, that when we make a measurement of the system, say whether the particle is on the left or the right side of the box, we effectively update this distribution P: we change the statistical state of the system. And the second law, the work bound, also depends on this P, so it depends on what we know.
But there's also a second part of this quote, the ability to direct the energy into any desired channel, to "lay hold of and direct at pleasure," which I think is a somewhat different aspect: our ability to manipulate the system. Although our ability to manipulate the system is often intertwined with what we know about it, the two can be different. For example, in the previous example, we might know that the particle is in the top half of the box but not be able to manipulate the system in such a way as to extract work from it. One of the things we'd like to do in this project is derive some kind of bound on the extractable work, written in terms of some kind of effective free energy. This effective free energy, and it's not entirely clear just yet how we will define it, should reflect our ability to manipulate the system given some constraints on the protocols available to us.

So, I wonder if that effectiveness also has to do with the dynamics of the system itself, because information also has relevance in terms of time. When I know that the particle is in one half of the box, it won't stay like that forever: if I don't quickly put the piston in the right place, the particle will move to the other side, and then we lose the relevance of this information. Do you also take that into account when you say effectiveness?

So, that's a really good point, and I will come back to it. There are many things that could be considered protocol constraints. We do think of protocol constraints as constraints on the general dynamics that the system can undergo. In this project, we do not consider things like how fast you can move the piston.
This kind of constraint, how fast you can change the energy function or manipulate the system, has sometimes been considered under what's called finite-time thermodynamics, and I will come back to that later. So I would say we consider sort of half of that, but not all of it. That's a very good point, thank you.

I have a related question about this. What you call protocol constraints is not a constraint on the protocol itself but, according to your example, seems to be a constraint on the state of the system at the beginning of the protocol. Am I right, or did I misunderstand?

I will define what I mean by protocol constraints formally in a couple of slides, but I will point out that here we're assuming that we start with some distribution P and some energy function E, and we end with some distribution P' and some energy function E'. Given that we go between those two endpoints, and assuming it is possible to do so, we ask: what is the maximum amount of extractable work? The initial and final distributions are not themselves the constraint; those are just given parameters. So let's hold off until I define what I mean by constraints.

If I may, another quick question. Shouldn't this effective free energy depend on something else, like what you are measuring or what you can control, or does it only depend on P and E?

It will depend on what I mean by the constraints available to us. There are many ways to define it, but we will define it in a particular way: it will depend on P and E, but also, generally, on the set of protocols available to us.

Okay, thanks.
It might be worth pointing out here that P, as in Jaynes, is the probability distribution in the mind of the user: what they know about the system. So even if their measurement was a year ago, what goes into P here is their current uncertainty about the state of the system, given their measurement a year ago. It really is all self-contained; all these kinds of issues are just special cases.

Yep. Thanks, David. I also want to say why we care, to motivate this approach: why do we care about deriving this bound, which is going to be stronger than the second law and is going to reflect some constraints? I actually think having such bounds is really important for understanding the thermodynamics of all kinds of artificial and especially biological energy-harvesting systems, engines, and processes generally. Just to give you an example: if I want to understand the thermodynamic value of a food source for a given organism, in general that's not going to depend only on the total nonequilibrium free energy of that food source; it's also going to depend on the driving protocols the organism can use to extract energy from it. One example I sometimes use: I can drink a glass of gasoline, which has a very high nonequilibrium free energy, but I can't extract any calories from it, because I don't have the available protocols. So if we want to extend these bounds to somewhat more realistic scenarios, we have to consider the fact that these transformations are done by quite limited and restricted agents, restricted both in what they can know about the microstate of the system and in how they can manipulate it. Okay.
I also want to point out that having such a bound, and in general this perspective of thinking about how much work we can extract under constraints, also has a lot of implications for the thermodynamics of information under constraints, by which I mean the thermodynamic value of different measurements. I'm going to illustrate this with the example that I'm beating to death. Imagine we have the Szilard box and initially the particle is uniformly distributed throughout it. We now measure, instead of whether the particle is on the left or the right, whether it is in the top or the bottom half of the box, and we then bring the system to a uniform equilibrium, let's say. Again, the second law with information says that the maximum amount of work we can extract given this measurement is, as we saw previously, kT log 2. But intuitively it seems like we should not be able to extract any work on average from the system. In other words, this measurement, thermodynamically speaking, intuitively seems to be completely worthless or irrelevant. So again we have the question: can we prove that we can't actually use this measurement to increase the amount of work we can extract? And more generally, can we take some measurement of the state of the system, as represented by X and M, and decompose it into one contribution which is actually relevant to increasing the amount of work we can then extract from the system and one contribution which is irrelevant? So I've basically outlined the background and motivation of the project; let me summarize it briefly. The first thing we'd like to do is derive something like an effective free energy that determines how much work can be extracted from a system under constraints, and we'll also see that this is closely related to how much entropy production is incurred under constraints.
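One illustrative way to formalize the intuition that the top/bottom measurement is worthless under a vertical-partition constraint (this is my own toy calculation, not a result from the talk): split the microstate into the horizontal coordinate, which a vertical partition can act on, and the vertical coordinate, which it cannot. The top/bottom measurement carries ln 2 nats about the full state but zero information about the horizontal coordinate:

```python
import numpy as np

def mutual_information(pxy):
    """Mutual information (in nats) of a joint distribution pxy[i, j]."""
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

# 2x2 grid of cells: rows = top/bottom, columns = left/right.
# Uniform initial distribution over the four cells.
p = np.full((2, 2), 0.25)

# M = the row index (top or bottom), measured without error.

# Joint distribution of M and the full microstate (flattened cell index):
p_M_state = np.zeros((2, 4))
for row in range(2):
    for col in range(2):
        p_M_state[row, 2 * row + col] = p[row, col]
info_total = mutual_information(p_M_state)  # ln 2: one bit acquired in total

# Joint distribution of M and the horizontal coordinate alone, the only
# coordinate a vertical partition can manipulate. Since M = row, this
# joint is just p itself, and it factorizes, so the information is zero.
info_usable = mutual_information(p.copy())
```

In this toy decomposition, all ln 2 nats of the measurement fall in the "irrelevant" component, matching the intuition that no work can be extracted on average.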
This will have the form of a bound like the following. As I argued, I think this is quite relevant for making thermodynamics more practically applicable to the various systems we might be interested in studying. In the second part, we also want to derive bounds on the thermodynamic value of different kinds of measurements under constraints, including some kind of decomposition of the information into work-relevant and work-irrelevant components. One of the reasons we're interested in this second part is that David Wolpert and I have a long-running project in which we've been trying to use statistical physics to, so to speak, put meaning back into information theory; there's a paper where we discuss this from a few years ago, which you might be interested in. Basically, the second law with information just depends on the total number of bits acquired about the state of the system in the measurement, that is, on the mutual information between X and M. But we believe that under constraints, the thermodynamic value of information will actually depend on what kind of measurement is performed, and it will be different for different kinds of measurements. Qualitatively speaking, the measurements that are useful from a work perspective will be the ones that reflect some kind of alignment between the distinctions the measurement makes and the kinds of ways we can drive the system. Okay, this has been a pretty long introduction and motivation, so let me get to some of the results we've derived. I've already talked quite a bit about the physical setup, so this is basically what we're thinking of.
Similar results also apply to a more general case with multiple heat baths at different temperatures and multiple other kinds of thermodynamic reservoirs, but for simplicity I'm going to restrict attention to this case. Let me define what we mean by a driving protocol more formally. A driving protocol here is a time-inhomogeneous trajectory of dynamical generators over the time interval zero to tau, and I'm going to write the dynamical generator at time t as L(t). At any point in time, the distribution of the system evolves according to this, which we can think of as a generalized master equation. L(t) could be something like a rate matrix for a discrete-state system, or a Fokker-Planck operator for a continuous-state system, and in principle it could also be some other, more general kind of continuous master equation. We assume that the initial condition of this differential equation is given by P, the initial distribution, and that at the end of the protocol, time tau, we end up at our final state P'. Given this setup, we can use existing results from stochastic thermodynamics to compute the work and entropy production incurred by the protocol for any initial condition; I'll leave out the details, but you can see the citations if you're interested. Okay, so now we're going to assume that the set of available driving protocols is constrained. What do we mean by this? We say the driving protocol is constrained, meaning that at all times t the dynamical generator L(t) has to fall into some restricted set. To give you an example: if we're considering a Szilard box with a vertical barrier, we might represent its dynamics with a Fokker-Planck operator.
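As a sketch of what "evolving under a dynamical generator" means concretely for a discrete-state system (an illustrative rate matrix of my own, not one from the talk): the master equation dp/dt = L(t) p conserves probability because the columns of a rate matrix sum to zero.

```python
import numpy as np

# A rate matrix for a 3-state system: off-diagonal entries L[i, j] >= 0
# are jump rates j -> i; diagonal entries make each column sum to zero,
# so dp/dt = L @ p conserves total probability.
L = np.array([[-2.0,  1.0,  0.5],
              [ 1.5, -1.0,  0.5],
              [ 0.5,  0.0, -1.0]])

def evolve(p, L, t, n_steps=20000):
    """Crude Euler integration of dp/dt = L p (a real implementation
    would use a matrix exponential or an adaptive ODE solver)."""
    dt = t / n_steps
    p = np.array(p, dtype=float)
    for _ in range(n_steps):
        p = p + dt * (L @ p)
    return p

p0 = np.array([1.0, 0.0, 0.0])  # start surely in state 0
p_tau = evolve(p0, L, t=2.0)
# p_tau stays normalized and non-negative, and the mass has spread out.
```

A time-inhomogeneous protocol would simply swap in a different L from the allowed set at each step of the integration.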
This would be what's sometimes called a Brownian Szilard box, and here the different elements of Lambda, the set of available dynamical generators, would basically be different Fokker-Planck operators with the partition located in different parts of the box. If we say this is the set of constraints, we're basically saying that all we can do is move the partition around; we cannot apply arbitrary other potentials to the system. Regarding the previous question, note that in some ways we are constraining the dynamics, because these dynamical generators have time scales and so on. On the other hand, we're not going to consider constraints on things like whether the driving protocol is a continuous function of time, or how fast L(t) can change in time; those are questions usually considered in finite-time thermodynamics, and they're also very interesting, but that's not what we're going to be thinking about. Especially when we're thinking of Fokker-Planck operators, this can really be thought of as restricting the kinds of potentials that we can apply to the system. Okay, so I will now present a theoretical result that we've derived, which is going to be a little bit abstract, but I'll then show how it can be used to derive bounds on entropy production and work in some concrete cases. The result goes as follows. Suppose we have this set of allowed generators Lambda. We're going to propose that there is an operator over the set of distributions, which I'll call Phi, and which I'm going to call a projection operator, for reasons I'll explain next. This operator maps state distributions to state distributions, and we're going to propose that it obeys two conditions. We'll show that if it obeys these two conditions, then we have a very nice and interpretable bound on entropy production.
It's stated in terms of this operator. The first condition I'll explain with a picture. We can imagine that this operator Phi maps every distribution P to Phi P, and we can think of it as mapping P to a point in the image of that operator, which I'm drawing as this manifold here in blue. The first condition we're going to require is that if there's some other point in the image of the operator, which I write as u, then this relation holds: D(P || u) = D(P || Phi P) + D(Phi P || u). It basically says that the distance from P to that other point in the image is given by the distance from P to Phi P, plus the distance from Phi P to that point in the image. And here D is always the Kullback-Leibler divergence. In information geometry this relationship is often called the Pythagorean relation, because you can kind of think of the Kullback-Leibler divergence as a squared Euclidean distance, and then this is basically saying that the mapping from P to Phi P is a kind of projection onto the image of the operator, and that the projection meets the image at a right angle. I indicate this with the right-angle symbol, and this is also why I call Phi a projection operator. So that's one condition. The second condition basically says that this operator commutes with time evolution under all the dynamical generators available to us. It's a commutativity relationship that we can also draw in this diagram: we can start from P, and we can either evolve that distribution under any of the available generators L for some time t and then project it down under Phi, or we can first project down under Phi and then evolve the projection under L, and we end up at exactly the same spot. Again, I will demonstrate this with more concrete examples next, so at this point it's a bit abstract, but it will become more specific as we apply it to different things.
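As a concrete numerical illustration of the first condition (an editorial sketch, not from the talk), here is a check of the Pythagorean relation for the symmetrizing choice of Phi that appears later in the talk. The four-state distribution is made up; u can be any flip-invariant distribution, i.e. any point in the image of Phi.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q), in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

def phi(p):
    """Symmetrizing projection: average p with its reflection i <-> N-1-i."""
    p = np.asarray(p, float)
    return 0.5 * (p + p[::-1])

p = np.array([0.5, 0.2, 0.2, 0.1])   # arbitrary (asymmetric) distribution
u = np.array([0.4, 0.1, 0.1, 0.4])   # flip-invariant, i.e. in the image of phi

lhs = kl(p, u)                        # D(p || u)
rhs = kl(p, phi(p)) + kl(phi(p), u)   # D(p || phi p) + D(phi p || u)
# Pythagorean relation: lhs equals rhs (up to floating point)
```

The identity holds because log(phi(p)/u) is itself a flip-invariant function, so its expectation is the same under p and under phi(p).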
We then have the following result: for any protocol that transforms some initial distribution P to some final distribution P prime, the entropy production is lower-bounded by the drop in the distance between P and its projection, from the beginning of the protocol to the end. And we also have the result that this drop is non-negative. We can draw this using the picture on the left-hand side, where P evolves to P prime under the driving protocol, which I draw as the gray line. We can imagine the projections of both of those distributions; the green arrows are the distances to the projections. What we prove is that, first of all, this distance has to decrease, and that this decrease has to be dissipated as entropy production; it cannot be turned into work. And Jacopo, could you tell me when I have like five minutes left? Yes, I think you have 10 minutes. I have 10 minutes now? Okay, is that with questions or without questions? I mean, you had questions during the talk, so I think I can give you... Okay. So I might have to go a little bit faster. We then also have a bound on the work, which can be derived simply from that, and which basically says that the work that can be extracted in transforming P to P prime, while the energy function changes from E to E prime, is bounded such that work is only extractable from the projection of P under Phi, not from P itself.
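The monotonicity claim can be illustrated numerically (an editorial sketch, not from the talk, using an invented four-state rate matrix that commutes with the flip symmetry): evolving under a generator that commutes with Phi, the distance D(p_t || Phi(p_t)) can only shrink over time.

```python
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def phi(p):
    """Symmetrizing projection for the flip i <-> 3-i on four states."""
    return 0.5 * (p + p[::-1])

# A made-up rate matrix that commutes with the flip: k(i->j) = k(3-i -> 3-j).
L = np.array([[-2.0,  1.0,  0.0,  0.0],
              [ 2.0, -1.5,  0.5,  0.0],
              [ 0.0,  0.5, -1.5,  2.0],
              [ 0.0,  0.0,  1.0, -2.0]])

p = np.array([0.7, 0.1, 0.1, 0.1])        # asymmetric initial distribution
dt, n_steps = 1e-3, 2000
asymmetry = [kl(p, phi(p))]               # D(p_t || Phi(p_t)) over time
for _ in range(n_steps):
    p = p + dt * L @ p                    # one Euler step of dp/dt = L p
    asymmetry.append(kl(p, phi(p)))
# asymmetry is non-increasing: the distance to the projection only shrinks
```

Each Euler step is a stochastic map that commutes with Phi, so the KL data-processing inequality forces the recorded asymmetry to be non-increasing step by step.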
And in fact, we can take the full non-equilibrium free energy, which is the left-hand side of this equation you see, and decompose it into a sum of two non-negative terms. The first term is the effective free energy, which is the non-equilibrium free energy of the projected distribution; the second is what we might call the inaccessible free energy, which, given the two inequalities you see above, has to be dissipated. So the first part can potentially be turned into work, and the second part can never be turned into work. Great. Okay, so there are two applications of this theoretical framework that I'll talk about. The first one is what we call symmetry constraints, and the second one is what we call modularity constraints. Each of these will involve a different definition of Phi, which we can then show satisfies the conditions that we require of it, and hence gives us these nice bounds on entropy production and work. We have some other applications also, but I probably won't have time to talk about them. Okay, so the first thing I want to talk about is what we call symmetry constraints. By this we mean the following: all of the dynamical generators available to us obey a certain symmetry group, so there's a group that acts on the state space and all the dynamical generators commute with its action. Then we can define Phi in this way: for any distribution P, Phi P is basically that distribution symmetrized, averaged over the elements of the group acting on the state space, mixing all those together. In this equation, mu is the Haar measure; if you're not familiar with that, it's basically a uniform distribution over the elements of the group. And we can then show that this Phi obeys the requirements we need, and we have these nice bounds on entropy production and work.
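For a finite group, the Haar measure is just the uniform measure, so Phi reduces to averaging over the group elements. A small sketch (not from the talk; the group and grid are chosen for illustration) for the four rotations of a distribution on a square grid of microstates:

```python
import numpy as np

def haar_average(P):
    """Phi(P): average of P over the four rotations of the square.
    For a finite group the Haar measure is just the uniform measure."""
    return sum(np.rot90(P, k) for k in range(4)) / 4.0

rng = np.random.default_rng(0)
P = rng.random((4, 4))
P /= P.sum()              # arbitrary distribution on a 4x4 grid of microstates

PhiP = haar_average(P)
# PhiP is invariant under every group element, and the map is idempotent
```

Invariance and idempotence are exactly what make Phi a projection onto the manifold of symmetric distributions.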
Let me demonstrate this with our favorite example, the Szilard box. So we have this Szilard box, and just from visual inspection it's easy to see that the potentials we apply to it by moving the vertical partition left and right obey a reflection symmetry: the Fokker-Planck operators obey a vertical reflection symmetry, where we can flip everything upside down and they're left invariant. And this holds no matter how the partition is moved around, no matter what its horizontal position is. So in this case the symmetry group is the two-element symmetric group generated by the vertical flip, and Phi P basically mixes each distribution with its vertically flipped version. We can now take this projection operator Phi and use it to derive bounds on how much work we can extract from the Szilard box. Imagine that we want to go from an initial distribution where the particle is on the right side of the box to the uniform distribution. We can project both of those under Phi. The distribution where the particle is in the right half of the box is already invariant under vertical reflection, so it stays the same; the same holds for the uniform distribution, which is also invariant. So if we now apply this work bound, we end up with exactly the same bound the second law provides, which says that we can in principle extract up to kT log two of work from this setup. Now, we can imagine a scenario where the particle is in the top half of the box, and we map it under Phi. In this case, it's easy to show that under Phi, the distribution where the particle is in the top half of the box is mapped to the uniform distribution. And if we again evaluate the bound on the work, we see that the drop in the effective free energy is zero. So no work can be extracted here.
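Both Szilard-box cases can be checked in a crude discretization of the box (an editorial sketch; the 4x4 grid and flat energy function are assumptions of the example, not from the talk). In units of kT, the effective free-energy drop to the uniform distribution is D(Phi(P) || U):

```python
import numpy as np

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def phi(P):
    """Average over the vertical reflection (flip top and bottom halves)."""
    return 0.5 * (P + P[::-1, :])

n = 4
U = np.full((n, n), 1.0 / n**2)              # uniform distribution over the box

P_right = np.zeros((n, n)); P_right[:, n//2:] = 2.0 / n**2   # particle in right half
P_top   = np.zeros((n, n)); P_top[:n//2, :]  = 2.0 / n**2    # particle in top half

# Extractable-work bound, in units of kT: D(Phi(P) || U).
bound_right = kl(phi(P_right), U)   # ln 2: the left-right case matches the second law
bound_top   = kl(phi(P_top),   U)   # 0: no work extractable from the top-half case
```

The right-half distribution is fixed by the vertical flip, so its bound equals the second-law value kT ln 2; the top-half distribution symmetrizes to the uniform one, so its bound vanishes.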
So this basically shows rigorously the intuition we talked about at the beginning of the talk: there's no way, using a vertical partition, to extract work from a setup in which the particle is in the top half of the box. We can also use this to think about the thermodynamic value of information. We can ask which one-bit measurement is more valuable: measuring whether the particle is on the left or the right, or whether it's on the top or the bottom. In this case, we can again evaluate the effective work value: the effective work value of the left-right measurement is bounded by kT log two, while the effective work value of the top-bottom measurement is zero. So it's a completely useless measurement, and we can show rigorously that there's no way to ever extract work from it. Okay, so I just want to put up some takeaway messages. First of all, this inaccessible free energy, which is the Kullback-Leibler divergence between P and its projection, is actually a measure of the asymmetry of the distribution P relative to the symmetry group G, and it vanishes precisely when P is invariant under the action of G. So what these results say is, first, that the asymmetry can only decrease in going from P to P prime under any protocol that obeys the constraint, and second, that this decrease has to be dissipated as entropy production; it cannot be turned into work. So the one-line summary might be that symmetric driving protocols cannot turn asymmetry into work. Sorry, so now imagine that you take the Szilard box with n particles and you insert the barrier vertically. Then it is known that in that case you cannot extract any... imagine that you measure the number of particles on the right side of the box. Then in this case, what is the symmetry that you would consider? Is it also the permutation between the particles?
No, so here, maybe I should have put this up, but if we think of the Fokker-Planck operator for a Brownian Szilard box with the partition located somewhere, there's going to be an isotropic diffusion term, which comes from the effect of the heat bath, and a potential term, which comes from the walls of the box and the partition. In this case it actually reduces to a pretty simple analysis: all that matters is the symmetries of that potential. So no matter how many particles are in the box, as long as that potential is vertically symmetric, we have this result. Your example... I mean, I agree. But I'm not referring to your example of whether you measure left or right or top or bottom; I mean the case where you measure the number of particles on the right side of the box. In that case, you cannot extract an amount of work equal to the mutual information between your measurement and the particle positions. Oh, I see. So in a basic example, if you measure just the number of particles... Yes, I would have to think about it more carefully, but right, that might also be a constrained situation where you can't. You can't always extract the mutual information, right, so that might be a case where you can't. I mean, just with the expansion of the box, you cannot. Yes. Yeah. And I think maybe in this case you would need to include in the symmetry group the permutations between particles. So one thing I have to be careful about is whether you can actually take the partition up and slide it down precisely on that border, where you say that you know the number of particles to the right or the left.
One question is where all you can do is move the piston wall in and out; another one is where you can actually take it out, instantaneously move it over, and drop it down in some particular fashion. I think this is the ideal limit where you consider point-like particles. I mean, if the particles have a finite size, then you cannot do that. Of course. Yeah, if the particles have a finite size, that might involve an infinite amount of work, but we don't assume any constraints on how fast you can move the partitions, or any continuity, so you can move it arbitrarily into the middle of the box. Right, for multiple particles the symmetry group is going to be more complicated, but I believe there will still be a symmetry group, and it might also involve permuting the particles. That's an interesting case. It might involve both flipping the vertical positions of all the particles and permuting them. And again, there will be this kind of symmetrizing operator that gives us a tighter bound on the amount of work extractable. But I should also say that our tighter bound is not always achievable; in some cases it is and in some cases not. It is going to be tighter in general, but that's why I'm careful to say the effective free energy is like the work that we may be able to extract; we're not guaranteed that we can. So I have another application, which I'm actually going to just breeze over in the interest of time, because I might already be a bit over, and maybe there's time for one or two more questions, which I think might be more interesting. But I'll just point out that we also analyze a situation where we have a system with many degrees of freedom, so this could be a spin system, or it could be a Szilard box with many particles.
And we imagine that the different degrees of freedom essentially evolve independently of each other, that the dynamical generators can be decomposed into a sum of terms that affect different subsets of the degrees of freedom separately. And we can derive a kind of projection operator Phi, which basically destroys correlations between these subsystems. Again, in the interest of time, I won't go into this. We can apply it to the Szilard box with a single particle: we can show that, for example, in this case the x and y positions of the particle evolve separately from each other; they're different degrees of freedom that are decoupled in the potentials that would commonly be considered. And we get these nice bounds saying that basically any correlation between the x and y positions cannot be turned into work; the mutual information between them has to be dissipated. So if we start from an initial distribution where the particle is either in the top-left or the bottom-right part of the box, we cannot turn that into work. And right, so maybe if you're interested, I can go back and talk about that more, but I think I'll finish now. I don't know whether there's time for a question or not. Yes, no, absolutely. Thanks a lot. So, yes, we have time for a couple of questions, and then anyone who wants can ask questions more informally. So, I don't know whether I understood properly, but can you turn the question the other way around? Imagine that you have a system and you want to know what is the best way, what is the best protocol, to extract work. You know what you're measuring on the system, and then you want to... Right. Yeah, what is the best protocol? I will say the following: in some cases...
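(A brief editorial aside before the questions continue: the modularity projection described above can be sketched numerically, under the assumption, stated in the comments, that Phi maps a joint distribution to the product of its marginals. The quadrant distribution is the top-left/bottom-right example from the talk, discretized on a made-up 4x4 grid.)

```python
import numpy as np

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def phi_modular(P):
    """Assumed modularity projection: it destroys correlations by mapping
    the joint distribution to the outer product of its marginals."""
    px = P.sum(axis=1, keepdims=True)    # marginal of the first coordinate
    py = P.sum(axis=0, keepdims=True)    # marginal of the second coordinate
    return px @ py                       # product of the marginals

n = 4
P = np.zeros((n, n))
P[:n//2, :n//2] = 2.0 / n**2             # particle in the top-left quadrant, or
P[n//2:, n//2:] = 2.0 / n**2             # in the bottom-right quadrant

# Inaccessible free energy, in units of kT: the mutual information D(P || Px Py).
mutual_info = kl(P, phi_modular(P))      # = ln 2 here; it must be dissipated
```

Here the marginals of both coordinates are uniform, so the entire ln 2 of correlation between them is inaccessible as work under modular protocols.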
So for some Lambda, in particular for some sets of constraints, we know how to achieve this tighter work bound in special cases, or equivalently the entropy production bound stated in terms of non-equilibrium free energy. Basically, we can think of it as evolving the system very, very slowly: first letting it relax to Phi P, the projection of the initial distribution, and then very slowly moving it to Phi P prime. We can show that that's an optimal protocol. That can't always be done, but in that case, we can do it. So we can sort of say when the bound can be achieved. There is a kind of area of research in stochastic thermodynamics and non-equilibrium physics concerned with optimal protocols, and that's quite a complicated optimization problem; that's not exactly what we're thinking about, but rather how to place a bound on how good an optimal protocol could be. Yeah, no, sorry if I ask again: in this respect, there are these results on the reversibility of protocols, the fact that if your protocol is time-reversible, then you can saturate the bound. How does this... I mean, is there any relation with your work here? You can think of it like this: the protocol in general won't be totally thermodynamically reversible, because we have this bound on entropy production, so the entropy production is not zero, and by definition that means we're not thermodynamically reversible. But if we can first relax to this manifold, so relax to Phi P, and then thermodynamically reversibly go from Phi P to Phi P prime, then that's the best we can do. This is clear. Yeah. Okay. Any other questions? Yes, may I ask just a couple of questions? Thank you, that was really interesting.
I would like to dig a little bit more into the Szilard box example to understand a couple of things. Let's consider the situation that you just described, but now your measurement is whether the particle is in the top quarter of the box or in the lower three quarters, or something like that. Yes. So if you apply your top-bottom symmetry argument here, you would not get zero on the right-hand side of the inequality. Is that right? That's right, yeah. So this argument gives zero only if you select a specific kind of observation among the many ones that you could make. Right, so that's a very good point. So yes, this is kind of the cleanest case. I will say that even for the observations you talked about, we will get a tighter bound than the second law: basically, we will show that the maximum free energy you can extract as work is lower than what would be given by the second law. But it won't be zero, and like I said, this shows that we derive a tighter bound, but we do not guarantee that it can always be achieved. And here I will say two things. One is, you know, we're looking at a very specific kind of bound on entropy production and work, which can basically be written as, I didn't talk about this, a drop of a state function, right? It can be written as a drop of an effective free energy, and in general this is a very restricted set of bounds. It's a very nice set of bounds, because I think it's very interpretable, and it gives this effective free energy, but in general it can't always be saturated. And the other thing I would say is, for example with the Szilard box, one reason why it's maybe not as tight as we might hope in some cases is that notice the argument is really general: all we assume is that there's vertical reflection symmetry.
So this holds even if the partition is not vertical but has some crazy vertically symmetric shape, so it has the shape of a hunting bow, for example, or if there are many partitions but they're all vertically symmetric, so there are many hunting bows or something moving around. Right. The same bound still holds; it only assumes that the potential has a vertical symmetry. And another short related question, again a little variation of what you were talking about: suppose now that the barrier, the partition, is slightly slanted. Yes. Can you apply symmetry arguments to that situation as well? Because you know that if it's just slightly slanted, you could still be able to extract a lot of work, and you could sort of tune continuously from one situation to another. Is there any way of putting this theoretical framework to use in this less symmetric situation? So, there are some other cases that we analyze; as I said, I didn't talk in depth about modularity, and we also have some results about coarse-graining, as we call it, so there are other cases which are not about symmetries. But if the partition is slightly slanted, very good question, the symmetry no longer holds and our results no longer apply. And it's very interesting; this will probably be future work, to see if there's some kind of epsilon version of this result. In general, it's not clear to me that there is always a bound; it might depend on... yeah, it's not clear to me what the bound should be, but right, as soon as you break the symmetry, even by a little bit, our results no longer hold. And just to clarify: the precise results with the Phi for the symmetry group don't hold, but for the more general framework, about whether you can find a manifold defined by an operator that commutes with L and obeys the Pythagorean relation, I don't think we know whether that can be done.
So David makes a very good point: it might be that one could find a different operator Phi, but we don't know how to do it in that case. Okay. Thank you. Great question. Thank you. Yes, so I think we are a little bit over time. It's a short question. So I just wanted to mention that there was some work, including mine, some years ago, let's say 10 years ago, doing a similar analysis. There was a feedback protocol for a system that either has a broken symmetry at time zero, like in your example where the particles are initially in only one part of the box, or a system which initially fills the entire box and ends in a broken-symmetry state. The paper, with theory and experiment, is about universal features in the energetics of symmetry breaking. And what we found is a bit different from what you do, because we get a bound for the work that cannot be written as a difference of two effective free energies: there's an extra term that depends on the probability, for example, of getting into this restricted region of phase space in the backward protocol. I'm telling you this because it could be that your theory is a generalization of what we did. Okay, that's very interesting. I've come across your article, but I will admit I have not read it as carefully as I should. That's very interesting. No, because there is a sequence of papers on this; the first one, from 2008, is called the differential fluctuation theorem, but I think it could be related to your work. I see, thank you for the pointer, that's really interesting. I will also say, maybe as an interesting literature pointer: I presented a kind of earlier version of this, and it turns out that our work on symmetry in particular is related to some things that have been done in quantum physics with so-called quantum resource theories.
In quantum physics it's quite popular to take a kind of similar operational approach to defining free energy, where you define it via how much work you can extract under different operations. It turns out people have looked at something similar to what we did for quantum systems, and this symmetrizing operator is actually called the twirling operator in quantum physics, I guess. If any of you are interested, there's a lot of literature on it, but there's also this paper by Vaccaro in 2008 on this topic. Thank you. Thank you. I will definitely look up your papers. Great. So