of the Organization Committee, it's Professor Édgar Roldán from ICTP, and he will give us an introduction to martingales in non-equilibrium thermodynamics and their applications. So thank you very much for being with us, and the floor is yours. So thank you very much, Jan, and all the organizing committee for suggesting that my talk be included in the program. Today I will briefly, or try to briefly, summarize recent work, a review on how to apply martingales to stochastic thermodynamics, which is a recent trend, a line of research that we developed over the last five or six years. So this is the main reference, "Martingales for Physicists". It is a tutorial review, not a book. It's very long, 300 pages. I expect it has around 3,000 errata, so whoever reads the review and finds a typo, we welcome feedback. Here are the collaborators on this review: Raphael Chetrite, Izaak Neri, Shamik Gupta, Simone Pigolotti, Frank Jülicher, and Ken Sekimoto. Most of the results that we review here belong, let's say, to this recent trend of applying martingales to thermodynamics, in which we have these two very first contributions, I believe, on the topic. So I will try to give you an idea of the basics of this topic, and also of how we can apply it to get new fluctuation theorems and new ideas in stochastic thermodynamics. The talk will be structured in this way. First there is a very general introduction to martingale theory. Then there is a big block on how to apply this in stochastic thermodynamics in different situations. We also have classical applications, such as in finance and population genetics, and applications in other domains of physics, and it's mainly dedicated to patient and dedicated students. So we also include mathematical proofs and all the examples, such that you can take it as a course on the topic and learn slowly. I have divided the content, how to navigate the review, in two ways. First, these are the old tricks for new dogs.
So what should a new student learn about this, or what can a new student in the topic learn from the review? We have, for example, a list of symbols, which is three pages of notation, which can help you write your thesis, because notation in stochastic thermodynamics is tricky. There are basics of martingale theory with examples applied in physics, so not just mathematics. We also have basics of stochastic processes, basics of Markov and Langevin dynamics. So it's very, very self-contained, I would say. Also basics of stochastic thermodynamics, such as the first and second laws, which were reviewed by Massimiliano and Hugo the other day, and theorems of martingale theory. So this is course material for mathematicians too, and there are also the classical applications in finance and population dynamics. This is for a newcomer to the field, but there are also new tricks for old dogs, in the sense that if you have been working for a while in stochastic thermodynamics, you can make use of martingales to derive new universal results, which you can get, for instance, in non-equilibrium steady states or in driven processes. So there is really a collection of techniques that you can find by applying martingales, and this is what I will try to explain today, the most basic ones. We also have, for the record, two chapters that are very technical, but for the most mathematically oriented, you can go there and check the details, and also applications in other fields of physics, such as quantum systems or spin glasses. So I will begin to introduce you to martingales by considering an example that probably you all know: a one-dimensional overdamped diffusion, in which we have a potential that may change in time following a protocol, plus an external force. Together these make up the total force F_t, and this is in the presence of a thermal bath that induces Gaussian white-noise fluctuations.
You all know that at time t you can solve for the probability density of the particle through the Fokker-Planck equation, and that since 2005, in work by Udo Seifert, the notion of the non-equilibrium system entropy, associated with the particle being at x_t at time t, was introduced. The average of this quantity, I remind you, is the Shannon entropy. An important notion in stochastic thermodynamics is how this system entropy changes in time. In a small time interval, the position of the particle evolves following the Langevin equation. So you know that at time t + dt, the value will be x_t plus the increment, which is given by just integrating the Langevin equation. However, it was not so clear until around 2005 how the system entropy evolves in this interval, given that the evolution of the particle is Langevin. So you may ask, how does this object change from t to t + dt? You can say, okay, this object depends on x and on t, and apply the chain rule: the partial derivative with respect to time times dt, plus the partial derivative with respect to x times dx. This is the chain rule of standard calculus, and in stochastic calculus this is called Stratonovich calculus. So you take this change from t to t + dt and you get this equation, which has two terms. One is force times velocity, so this is the heat dissipated to the environment, the environmental entropy, and the rest is understood as the entropy production. So whatever is not coming from exchange with the environment, we say it is a production term of entropy. This was already introduced in the classic paper by Udo Seifert, but something you should notice is that all these calculations were done in the Stratonovich convention. So what if you take this equation and you write it in Itô form?
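The Stratonovich-versus-Itô distinction being made here can be checked numerically on the simplest possible case (my illustration, not from the talk; parameters are arbitrary): for f(x) = x² with pure diffusion dx = √(2D) dB, the mid-point (Stratonovich) integral obeys the ordinary chain rule exactly, while the pre-point (Itô) integral misses the noise-squared contribution 2Dt that Itô's lemma restores.

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, n_steps, n_traj = 0.5, 1e-3, 1000, 2000

x = np.zeros(n_traj)
ito_sum = np.zeros(n_traj)    # accumulates 2*x dx with the pre-point rule
strat_sum = np.zeros(n_traj)  # accumulates 2*x dx with the mid-point rule
for _ in range(n_steps):
    dx = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_traj)
    ito_sum += 2.0 * x * dx
    strat_sum += 2.0 * (x + 0.5 * dx) * dx
    x += dx
T = n_steps * dt

# Stratonovich obeys the ordinary chain rule: the integral of d(x^2) is x^2
print(np.max(np.abs(strat_sum - x**2)))   # ~ 0 (round-off only)
# Ito needs the extra drift 2*D*t from Ito's lemma
print(np.mean(x**2 - ito_sum))            # ~ 2*D*T = 1.0
```

The mid-point sum telescopes exactly to x², which is why the Stratonovich convention lets one manipulate the system entropy with ordinary calculus, as in the 2005 derivation.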
It should be just, let's say, a mathematical exercise, and 12 years later we showed that this equation becomes this SDE in Itô form, which seems to be as complicated as the one in Stratonovich form, but it has a very particular structure, because in particular you see that there are two terms that look very similar. This term is the square of the noise amplitude here. So if you are in a non-equilibrium stationary state, the time derivative ∂_t is zero. The first term vanishes, and the change of the total entropy in a small interval has this beautiful form: it has a drift term and an Itô noise term whose amplitude matches, the square root of the same quantity, where this is the current and this is the density at time t. So you can take these two equations, one for the dynamics and the other for the thermodynamics of the process, and integrate them, and this gives the fluctuations of entropy production. Please notice that you need to specify how x evolves in time, so this equation, and that the stochastic entropy production depends on the evolution of x_t. Moreover, the dB_t, the white noise that appears in the entropy production, is the same as the one driving the particle. So this is a very compact form, and moreover this quantity here, which we called back then the entropic drift, is stochastic, so it depends on where the particle is visiting, but it is always positive. So there is a stochastic, positive drift term in the evolution of the stochastic entropy production. This type of decomposition appears quite a lot in martingale theory; it is called the Doob-Meyer decomposition. It means, first of all, that from this equation we can show that the stochastic entropy production is a submartingale: whatever history of the process we observe up to a time s, the conditional expectation at later times is always greater than or equal to the value at time s.
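For the simplest steady state, a particle drifting at velocity v on a ring with diffusivity D, the current-over-density ratio is constant and the Itô equation reduces to dS = (v²/D) dt + (v/D)√(2D) dB_t, with the same noise that drives the particle. A minimal simulation (my sketch, v = D = 1 assumed for illustration) exhibits the Doob-Meyer structure: a positive, increasing drift part plus a zero-mean martingale part.

```python
import numpy as np

rng = np.random.default_rng(1)
v, D, dt, n_steps, n_traj = 1.0, 1.0, 1e-3, 1000, 5000

S = np.zeros(n_traj)   # stochastic entropy production (units of k_B)
M = np.zeros(n_traj)   # its zero-mean martingale (Ito-integral) part
for _ in range(n_steps):
    dB = np.sqrt(dt) * rng.standard_normal(n_traj)   # same noise as the particle
    dM = (v / D) * np.sqrt(2.0 * D) * dB
    S += (v**2 / D) * dt + dM   # Doob-Meyer: positive entropic drift + martingale
    M += dM
T = n_steps * dt

print(S.mean())   # ≈ (v**2/D)*T = 1.0: the increasing part gives the second law on average
print(M.mean())   # ≈ 0: the martingale part is pure noise
```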
This type of inequality you can show directly from this formula, and whenever you have a submartingale, you can decompose it into two terms. One of them is a stochastic, increasing function, this integral of v over time, the first term. The other term is an Itô integral, so it has zero mean: whatever the evolution of the process, it has zero mean. This is what is called a martingale; it is just pure noise, in other words. Another interesting manipulation: take this equation and do a change of variable, going from S to the exponential of minus S, so a change of variable S → e^{-S}. You can apply Itô's lemma and go from this equation, the SDE for S, to this equation, which is an SDE for e^{-S}. Importantly, e^{-S} does not have a drift term proportional to dt; it has only a noise term, something that depends on x_t times noise, and this is in Itô form, so it has zero mean. So e^{-S} is what is called a martingale: it has no drift. This means that if you look at the stochastic process up to time s, you know this information about the process, but you don't know the future; there are many different possible future histories. Then what you know is that the expectation of e^{-S} in the future is going to be equal to the last observed value of the process. You don't expect it to grow and you don't expect it to decrease. This is called the martingale property of e^{-S}. It's not the typical martingale property you see in mathematics books, where the process itself is a martingale with respect to itself. Here you have the dynamics and you have the thermodynamics: you condition on the dynamics and you take expectations of the thermodynamic process. This is a central result in our theory, this martingale condition for e^{-S}. And you can always show, this is explained in chapter seven of the review, that any convex function of a martingale is a submartingale.
So minus the logarithm is a convex function, and minus the log of e^{-S} is S, so S is a submartingale: whatever value you see now, the expectation in the future is going to be greater than the current one. Interestingly, this generalizes the integral fluctuation theorem, because you can set s to zero, so you have the condition on x_0, then average over x_0, and you get the integral fluctuation theorem. So this is a consequence of the martingale property. Moreover, you get the second law from the submartingale property: set s equal to zero and you get the second law. So this is very insightful, very fundamental, I would say, but you can get more results from here. Martingales are not just for understanding the process at a fixed time t; they can be applied also to first-passage quantities, as I will explain later. Moreover, for the decomposition that I explained, you may ask: how general is this? Well, I showed you one-dimensional systems, but you can also have multi-dimensional systems, like a colloidal particle interacting with many active swimmers, or a single particle with many degrees of freedom. This can be described by a d-dimensional Langevin equation with possibly space-dependent noise. In that case you always get, in the steady state, the same type of equation (here a t is missing on the slide), where you have a drift term that is always positive and depends on the currents and the diffusivity matrix. So again you get the same structure. So it seems this result for the martingality of e^{-S} is more generic; it is not just for one-dimensional systems. Moreover, there is an interesting technique applied in martingale theory and also in finance, which is called the random time change. Up to now I've shown you how entropy production changes in time, but in normal time: you have a clock, your clock is deterministic, and at every time t you advance, say, one second.
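The integral fluctuation theorem that follows from the martingality of e^{-S} can be checked directly. For the drifted ring (my toy example again, v = D = 1): the steady-state entropy production is Gaussian at each time, and its negative exponential averages to one at every t, while its plain average is positive (the second law).

```python
import numpy as np

rng = np.random.default_rng(2)
v, D, dt, n_steps, n_traj = 1.0, 1.0, 1e-3, 1000, 200_000

S = np.zeros(n_traj)
for _ in range(n_steps):
    dB = np.sqrt(dt) * rng.standard_normal(n_traj)
    S += (v**2 / D) * dt + (v / D) * np.sqrt(2.0 * D) * dB

print(np.mean(np.exp(-S)))  # ≈ 1: martingality of exp(-S) / integral fluctuation theorem
print(S.mean())             # ≈ (v**2/D)*t = 1: the submartingale (second-law) side
```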
So this is the evolution of entropy production in the steady state with normal time. Now let me do the following trick: I will start measuring time in a different way, weighting it with this quantity, the entropic drift. So I look at my system, and when the system is dissipating a lot of heat, producing a lot of entropy, my clock runs faster. I'm just evaluating this quantity along the trajectory and multiplying my dt by it. This is what I call entropic time. This is, of course, an illustration. But it's quite interesting that you can now do a time change in this equation, and it becomes just this very simple equation in entropic time: a drift-diffusion process with drift one and diffusivity one. It means that for processes with very complex dynamics, like the stationary non-equilibrium processes in the examples shown here, if you measure time in these units, the fluctuations of entropy production all become those of a particle on a ring with v = 1 and D = 1. This was quite remarkable for us when we were doing research on this topic, and you can illustrate it with these examples. If you look at the fluctuations of entropy production up to a time t = 1, the distribution is typically non-Gaussian. For example, if the particle is drifting in this periodic potential, it stays a long time in the minima, and the entropy production at a fixed time has this structure with peaks. This has been shown also in experiments: you do experiments with a colloidal particle in a periodic potential, or with a single-electron box, electrons jumping in and out of a box; you compute the entropy production and you see that the distribution is typically non-Gaussian. So it has become a bit of common lore that entropy production fluctuations are in general non-Gaussian.
However, if you rescale time, if you use this entropic time and you wait until the entropic time reaches a given amount, then for all these models the distribution of entropy production at entropic time equal to one is Gaussian, with mean one and variance two. You can get it by solving this simple model, the particle on a ring. It's quite striking, and we are not the only ones to find this; as I'll show you later, for the housekeeping entropy production you get the same property. This result is very insightful because it tells you that all statistical properties that are invariant under time contractions or delays should be universal. So, for example, the global minimum of entropy production does not depend on when it happens; it is a quantity that does not depend on your scale of time. So if you compute the distribution of the infimum for the three different models I showed in the sketches, they all have the same distribution, and this distribution is that of the particle on the ring. But there are infinitely many properties like this: the supremum before the infimum, or the number of times the entropy production crosses an interval. These do not depend on when things happen; they are universal properties. You can find a review of this in chapters five and seven of our review. Moreover, this nice decomposition into drift and diffusion terms also allows you to compute non-universal quantities, such as, for example, the Fano factor. The Fano factor is not a universal quantity, and you can show, from the equation I showed you before, an uncertainty inequality for the entropy production: the Fano factor is equal to two plus the variance of the entropic time divided by its mean. This looks like an uncertainty relation, because the second term is always greater than or equal to zero, so the Fano factor is always greater than or equal to two. But you can also get the Fano factor of entropy production directly by computing the fluctuations of the entropic time, as I show here.
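The time-change statement can be illustrated with a toy entropy-production process dS = v_t dt + √(2 v_t) dB_t whose entropic drift v_t fluctuates. Here, as an assumption of the sketch (not the review's physical models), I modulate v_t with an independent Ornstein-Uhlenbeck process; stopping each trajectory when the entropic time τ_t = ∫₀ᵗ v_s ds first reaches 1 should give Gaussian statistics with mean 1 and variance 2, by the Dambis-Dubins-Schwarz time change.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n_traj = 1e-3, 20_000

# Toy fluctuating entropic drift v_t = 1 + y_t**2, with y an independent
# Ornstein-Uhlenbeck process (an assumption of this sketch).
y = np.zeros(n_traj)
S = np.zeros(n_traj)                 # entropy production
tau = np.zeros(n_traj)               # entropic time: tau_t = int_0^t v_s ds
S_at_tau1 = np.full(n_traj, np.nan)  # S recorded when tau first reaches 1
active = np.ones(n_traj, dtype=bool)

while active.any():
    v = 1.0 + y**2
    dS = v * dt + np.sqrt(2.0 * v * dt) * rng.standard_normal(n_traj)
    S[active] += dS[active]
    tau[active] += v[active] * dt
    done = active & (tau >= 1.0)
    S_at_tau1[done] = S[done]
    active &= ~done
    y += -y * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n_traj)

print(S_at_tau1.mean(), S_at_tau1.var())   # ≈ 1 and ≈ 2
```

The same simulation gives the Fano-factor statement at fixed time: Var(S_t)/⟨S_t⟩ = 2 + Var(τ_t)/⟨τ_t⟩ ≥ 2 whenever the entropic-time fluctuations are independent of the martingale noise, as in this toy.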
Moreover, you can extend the result to non-stationary processes and get an extra term that basically depends on a correlation between the system entropy and the entropic drift. This is a bit more complex to understand, but it has some similarities with a very recent work called the variance sum rule for entropy production, where they also computed exact expressions for the variance in a non-equilibrium process and split it into terms; some of them depend on this calligraphic quantity, and also on correlations between force and displacement. It is a bit of a particular case of these ideas, but there should be a relation between that recent progress in the field and this result that we had in the past. As I said, this can also be called a variance sum rule: we are writing the variance as a sum of different terms. All I've shown until now is for stationary states, but there has been progress for non-stationary states. For example, in this work by Chun and Noh in 2019, they showed that the housekeeping entropy production for any non-equilibrium process, so this can also be driven, has the same structure: its negative exponential is also a martingale for any non-equilibrium process. Also, we showed that the entropy production in general is not an exponential martingale when you have a driven process, but you can find a compensator: a process that is an exponential martingale, which is just the entropy production plus something else. I talked about this in a previous work, so I'm not going to insist on it, but this is useful to derive Jarzynski equalities at stopping times, so when you first cross a threshold, you are not looking at fixed times but at stopping times, and also to design gambling demons. And there has been more advanced progress on how to find martingales in general non-equilibrium processes to derive, for example, Green-Kubo formulas for non-Markovian processes. I guess Hong-Kian will talk about this during the conference.
Moreover, we are also working on a case where you are out of equilibrium but you also have unidirectional transitions, which is a bit of a pathological case. In that case we find that the non-adiabatic entropy production is a key concept. But this is a bit technical; I will not talk about it today. I guess we will have a draft on the arXiv sooner or later. So I will continue my talk by trying to rationalize why there exist many martingales and many versions of the second law involving martingales, by presenting what we call the tree, or football club, of second laws. It is a bit of an advanced topic, but it is essential for understanding our theory in full. This is a hierarchy of second laws, meaning that each one generalizes the next, which generalizes the next. They have very strange names, but these strange names denote different degrees of conditioning: conditional strong second law, strong second law; I will explain this in a bit. They also have somewhat esoteric labels involving these lambdas and sigmas, because these are specific functionals of trajectories with a given form, and depending on the form you can recover as particular cases the work, the entropy production, the housekeeping heat, et cetera. Most of these results are generalizations of fluctuation theorems that already exist, whereas others give rise to new results that I will try to introduce at the end of the talk. So in this football club, or tree, of second laws, most of the results are well known, but there are a couple which are new, presented for the first time in this review. One of the key features of these laws is that they are generic, in the sense that they apply to both Langevin and Markov jump processes, and they rely on only one key assumption: that the ratio between path probabilities is well defined, in the sense that you cannot have cases in which the log of this ratio diverges.
In mathematical terms this is called absolute continuity: if the probability of a trajectory in the process is zero, then the probability of that trajectory in the reference process, which we use for calculations, should also be zero. Typically this Q is the probability in the time-reversed process, so this is, in other words, micro-reversibility: you see the forward and the reverse trajectory with non-zero probability. Anyway, a key result in deriving this hierarchy of laws is the fact that ratios of path probabilities are martingales. So you take the path probability of a reference process Q divided by the path probability of your process of interest P: I have a physical process with path probability P, and I consider whatever other process with another path probability Q. If you compute this ratio and take the conditional expectation given the history up to time m, you can show that, when you average with respect to P, this is equal to R at time m. So this is a martingale when you average with respect to the probability P. This is a two-line calculation, but one has to be very careful, for a reason I will explain in a bit. When you find such a martingale, it is quite nice to realize that if you apply a convex function to R_m, any convex function of a martingale is a submartingale. This is the same thing I explained for e^{-S}: if you take minus the log of e^{-S}, you have a submartingale. So you can first prove the martingale property for e^{-S}, and from this it immediately follows that S is a submartingale. Moreover, as you can show here, if you insert this expression, you get this type of inequality, which looks like a Kullback-Leibler divergence but is more detailed than Kullback-Leibler, because you are not averaging over all trajectories; you are taking a conditional expectation given the history, so you have this type of conditional averaging.
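The two-line calculation mentioned here can be made completely explicit for a finite Markov chain: with R_m = Q(x_0…x_m)/P(x_0…x_m), the conditional expectation of R_{m+1} equals R_m because Σ_y [Q(y|x)/P(y|x)] P(y|x) = Σ_y Q(y|x) = 1. A sketch verifying this by exhaustive enumeration (the two transition matrices are hypothetical numbers of my choosing):

```python
import itertools
import numpy as np

# Two arbitrary two-state Markov chains (rows sum to one): P is the process
# of interest, Q a reference process.  The numbers are hypothetical.
P = np.array([[0.7, 0.3], [0.4, 0.6]])
Q = np.array([[0.5, 0.5], [0.2, 0.8]])
p0 = np.array([0.5, 0.5])   # shared initial distribution
q0 = p0.copy()

def path_prob(T, init, path):
    """Probability of a discrete trajectory under transition matrix T."""
    prob = init[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= T[a, b]
    return prob

# Exact check of E_P[R_{m+1} | x_0..x_m] = R_m for every possible history.
for hist in itertools.product([0, 1], repeat=3):
    R_m = path_prob(Q, q0, hist) / path_prob(P, p0, hist)
    E_next = sum(path_prob(Q, q0, hist + (y,)) / path_prob(P, p0, hist + (y,))
                 * P[hist[-1], y] for y in (0, 1))
    assert np.isclose(E_next, R_m)

print("R = Q/P verified to be a P-martingale by exhaustive enumeration")
```

Note that nothing here is thermodynamic: any two absolutely continuous path measures work, which is exactly the point made later in the Q&A.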
This is what we usually call a conditional strong second law: we know the process up to time m, and what we expect in the future is what we call the conditional strong second law. If you set m = 0, you recover the Kullback-Leibler form, what we know very well from the foundations of the field. So this is what I'm showing here: this, with the CSS, is the conditional strong second law, and this we simply call the second law. Moreover, here is an important message for all of you: this martingality property is not so generic when you look at processes that are not time-homogeneous. For steady states this works; if you start in equilibrium, this works. But if you have time-inhomogeneous dynamics, such as a relaxation or a driven process, this is not going to work. This is a central message of our review: you cannot just say that Q over P is a martingale; you have to be very careful and check case by case whenever the dynamics is not time-homogeneous. Moreover, and this is a bit more detailed, but just to introduce you to the notation, there are two classes of functionals that we consider. One is called Sigma, which has P over Q evaluated on the reversed trajectory. This is the typical form that the entropy production has: a path probability divided by the probability of the time-reversed trajectory, possibly under a time-reversed dynamics. But there exists a second class of functionals that are simpler: just the probability of seeing the trajectory in the process divided by the probability of seeing the same trajectory in another process, without time reversal. One particular case of this Lambda functional is the housekeeping heat, or adiabatic entropy production, introduced by Massimiliano Esposito and Chris Van den Broeck. So there exist different families, and the distinction is important, because the Lambdas are always martingales, whereas for the Sigmas you always need a compensator.
Moreover, there is an even worse creature, a monster, which is what we call the Sigma-d functionals, a generalization of the Sigmas. Here what we do is take a process P, which evolves from time 0 to t, and look at its marginal probability on a subinterval [r, s]. We compare this to a Q process in which we look at the time-reversed trajectory, time-reversed with respect to t, on the subinterval [t − s, t − r]. This is a bit of a complicated creature, but it is very fruitful, because from it you can recover most of the results of the last ten years in stochastic thermodynamics concerning second laws. For example, if for the Q process you take the time reversal and you let [r, s] be the full interval, you get the stochastic entropy production, but you can get other interesting quantities too. What was very surprising to us is this: you could say this object is just pure mathematics, but we showed that the corresponding process is a submartingale with respect to forward time and a supermartingale with respect to backward time. So it conditionally increases with increasing time, and conditionally decreases when looking backward in time. Again, this is very recent, so we are still not fully sure about the interpretation, but we have one result, which I will present in a minute, and we are looking for feedback to try to understand what this means, okay? This is very, very recent. (You have five minutes. Very good, five minutes and done. Yes. Great.) So a particular case of this P and Q could be that you have a process P starting at an initial density rho zero, and then you look at the same bulk dynamics P but starting from a different initial density. If you do that, this Sigma-d object becomes a very easy-to-understand quantity, which depends only on the current-time densities.
So this is the log of the density of the process P at time t divided by the density of the second process at the same time: there are two processes, and you evaluate both densities at the same time. This implies, among other things, a very well-known result, the historical second law that appears, for instance, in Cover and Thomas's book, and other results. For example, we can use this to prove that the non-equilibrium free energy in a relaxation process is a backward submartingale. This means that if, in a relaxation, you see trajectories of the particles or the system in the future, so you see how they evolve, you can predict the expectation of the past. This is different from the classical second law, which says that in the future entropy will increase. Now we say: we look at trajectories, and now we know where we come from, so we can say that in the past the particles of the system on average had a higher non-equilibrium free energy. This is a totally new result, and your feedback is extremely welcome. Okay, I should start closing. I presented just one part, which is fixed times, but martingales are really good for extending ideas to stochastic times. This is an idea that was already exploited in the Nobel-recognized work of Black and Scholes on fair markets. This I explain elsewhere, so I will not enter into it. This has many applications in biophysics and also solar cells, for example to extract extreme values and survival statistics of the work, and it can be extended to non-equilibrium driven processes, but I have no time for this. The main message is that there are these second laws, the hierarchy at fixed times, but also a hierarchy at stopping times. You can watch the particle until it crosses a threshold and measure the dissipation from time zero until the crossing. So this hierarchy also exists for stopping times.
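The historical second law instance mentioned here, that the relative entropy between a relaxing density and the stationary one never increases, can be verified exactly for a two-state Markov chain (my sketch; the transition matrix is a hypothetical example). The trajectory-level backward-submartingale statement needs more machinery, but the averaged, monotone-decay version is two lines of linear algebra:

```python
import numpy as np

T = np.array([[0.9, 0.1], [0.2, 0.8]])   # hypothetical two-state chain
rho = np.array([0.99, 0.01])             # started far from stationarity
pi = np.array([2 / 3, 1 / 3])            # stationary distribution of T
assert np.allclose(pi @ T, pi)           # sanity check: pi is stationary

def kl(p, q):
    """Relative entropy D(p || q) in nats."""
    return float(np.sum(p * np.log(p / q)))

kls = []
for _ in range(30):
    kls.append(kl(rho, pi))
    rho = rho @ T                        # evolve the density one step

# Monotone decay of D(rho_t || pi): the H-theorem / historical second law
assert all(a >= b - 1e-12 for a, b in zip(kls, kls[1:]))
print(kls[0], kls[-1])                   # decreasing toward 0
```

This is the data-processing inequality at work: evolving both arguments under the same Markov kernel can only shrink their relative entropy, and the stationary density is a fixed point.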
So there are two football clubs in the end, one at fixed times and the other at stopping times; there are two hierarchies, even though maybe that's not yet clear from the reading. I finish by announcing an arXiv submission this week about the stochastic thermodynamics of particles in fluctuating fields, showing that fields can locally extract heat from a bath when you drive a particle out of equilibrium. You can check it this week. We are also working on other topics, bullfrog hair-cell bundles, for example; this is in the making, but it will be part of future talks or discussions. I end by acknowledging my collaborators in the quest for martingales, my group at ICTP, and funding. Thank you very much for your attention. And this is for students: I have three courses that you can find on YouTube, which could be useful for getting introduced to the field. Thank you very much for your attention, and I hope I'm not out of time. Thank you. Thank you very much. That was a very nice talk; you were just on time. Now we have time for questions. The general rule for questions is that junior attendees will be given priority. So if there are junior people, PhD students, master's students, young postdocs, please raise your hand virtually with the reactions button by clicking "raise hand", and you will be able to ask questions. In the meantime, there is one question by Tom Aldrich. So yeah, you can unmute yourself, please ask. Hi, thanks for the great talk, Edgar. Do I understand properly that if you're in a stationary distribution, a non-equilibrium steady state, the martingale approach is always going to work, but if I'm in a system with time dependence, my Hamiltonian is time dependent, or the Hamiltonian is constant but the distribution is evolving, it's not guaranteed to work, though it might? Yes, I totally agree.
So if you start in a stationary state, the exponential of the negative entropy production is a martingale at all times, and otherwise it is not, but you can still find a process that is a martingale. That process is not e^{-S}; it is e^{-S} minus an extra term, related to the fact that the density at stopping times is not the same density you would get if you don't stop, if you don't have a first-passage criterion. So let's say entropy production is not a martingale, or not a submartingale, in a driven or time-dependent process, but you can still find a martingale. Okay, but whether or not that extra term is easy to interpret is the key question. Actually, this extra term is not difficult to interpret, because it relates to the asymmetry of the process at stopping times. When you average it, it has this form: the logarithm of the density at time t compared to the density of the reversed process at the conjugate time. This is related to the work by Kawai, Parrondo, and Van den Broeck in 2007, I don't know if you remember: dissipation, the phase-space perspective. What they showed is that the dissipation of a time-dependent process is related to how different the phase-space density at time t is from the phase-space density of the time-reversed process at the conjugate time tau minus t. I believe it is a PRL from 2007. In our case we have a similar result: you have to include this asymmetry of the forward process at the stopping time with respect to the backward process at the conjugate time of the stopping time. In the steady state they are equal, so there is no difference. For details, I think the paper with Manzano, PRL 2021, is where we first found how to extend this to driven processes; it's chapter eight in the review. Thank you, very helpful. Thank you. Are there any other questions? I see, Tarek? Yes, can you hear me clearly? Yes. Okay.
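The steady-state half of this answer can be made concrete with Doob's optional stopping theorem: because e^{-S} is a martingale in a steady state, ⟨e^{-S_T}⟩ = 1 also at a bounded stopping time T, for instance the first time S exits an interval. A sketch with the drifted-ring entropy production S_t = t + √2 B_t (my toy example, v = D = 1 assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
dt, t_max, n_traj = 1e-3, 5.0, 100_000
a, b = 1.0, 2.0                        # stop when S first exits (-a, b)

S = np.zeros(n_traj)
S_stopped = np.full(n_traj, np.nan)
active = np.ones(n_traj, dtype=bool)
for _ in range(int(t_max / dt)):
    if not active.any():
        break
    dB = np.sqrt(dt) * rng.standard_normal(n_traj)
    S[active] += (dt + np.sqrt(2.0) * dB)[active]
    done = active & ((S <= -a) | (S >= b))
    S_stopped[done] = S[done]
    active &= ~done
S_stopped[active] = S[active]           # cap at t_max so T stays bounded

print(np.mean(np.exp(-S_stopped)))      # ≈ 1: optional stopping theorem
print(np.mean(S_stopped))               # > 0: second law at stopping times
```

In a driven process the same average would deviate from one, which is exactly why the compensator discussed in the answer is needed.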
Just, so when you mentioned that at steady state any ratio of, if I recall correctly, functions of trajectories will be a martingale, when you define, oh okay, it's not the log ratio, it's just Q over P, a ratio. A ratio of any function of trajectories at steady state. Well, it should be the two path probabilities. Oh, path probabilities, both of which are stationary. Okay. So I'm just wondering if this generalizes the sequential probability ratio test, Wald's results, because what happens there is that they consider the log ratio of two path probabilities conditioned on two different hypotheses, and then you get decision times. This is a very good question. Actually, Wald's sequential probability ratio test is for IID processes, so in the end it's like a stationary process: you are drawing numbers from a lottery, but it's stationary. And indeed, the first time we came up with the results that inspired us to pursue this research on martingales, at least Izaak Neri and I, was when we started to work on Wald decision tests. We have an article from 2015 where we had no idea about martingale theory, but we were looking at very similar questions: in particular, how long you need to wait until you can decide, with some confidence, whether a movie is running forward or backward in time. It's a decision-making process, and to be optimal in decision making you have to measure probability ratios. So if you want to decide optimally on the arrow of time, you should take the log of the path probabilities with respect to the two directions of time, and what you get is that you should measure the entropy production if you want to be optimal. So yes, the two are related. But what I'm saying here is more general: entropy production is the log ratio of forward versus backward paths, but this is more general. Yeah, that's why any two things you want to distinguish will give a martingale at steady state.
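The Wald connection discussed in this exchange is easy to demonstrate: accumulate the log-likelihood ratio of two hypotheses sample by sample and decide when it first crosses ±L; Wald's bounds then keep the error probability near e^{-L}. In the arrow-of-time setting this log-likelihood ratio is the entropy production. A toy sketch with hypothetical Gaussian hypotheses (my choice of numbers, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(5)
mu, L, n_trials = 0.5, 4.0, 5000   # H1: x ~ N(+mu, 1) vs H0: x ~ N(-mu, 1)

errors = 0
for _ in range(n_trials):
    llr = 0.0
    while abs(llr) < L:            # Wald's sequential probability ratio test
        x = rng.normal(mu, 1.0)    # the data truly come from H1
        llr += 2.0 * mu * x        # log[p(x|H1)/p(x|H0)] for unit variance
    if llr <= -L:                  # the walk exited at the wrong threshold
        errors += 1

# Wald's bound: the error probability is at most about exp(-L) ≈ 0.018
print(errors / n_trials)
```

Under the other hypothesis the likelihood ratio itself (not its log) is the martingale, which is what makes the threshold-crossing error bounds work.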
Yeah, that's why there are all these hierarchies, because some of the objects here are not entropy production, they are just probability ratios. Okay. So actually, yes, this one is always a martingale with respect to P, and you don't need it to be related to thermodynamics at all. So you're right. The beauty for the community, in the end, is to choose P and Q related to a physical process that you can realize in the lab, or that makes sense for studying non-equilibrium systems. But you can do finance if you want, with P over Q; that's why there was all this research by Black and Scholes and the field of quantitative finance. So yeah. Okay, so I think we are just on time. Thank you, Edgar, again. And the next