Okay, welcome everyone to the last lecture of my course and also of this Spring College 2021. Today's lecture will be a bit more advanced than the previous ones; it is more on recent work of my collaborators and myself, on martingales. Before starting, let me just mention one thing that I didn't explain, one of the hottest topics right now in stochastic thermodynamics, just due to lack of time. These are the so-called thermodynamic uncertainty relations, and nowadays you will find many, many nice references about this topic. These works aim to find universal bounds for the accuracy of a current — for example the current between two states, like I show here in the Markov chain between states one and two, or an integrated current such as the work, the heat, etc. These works have found universal bounds for the relative uncertainty of such currents: the variance divided by the mean squared is bounded from below by two times the Boltzmann constant divided by the total entropy production of the system plus its environment. On the right you see a test of this type of relation: they measure the variance over the mean squared and see that it stays above the bound set by the entropy production. So I encourage you to take a look at this. If you want a reference, or you want to present a paper on this topic in the exam, here are some good examples I can give you, but of course there are many more; this is really a big part of stochastic thermodynamics right now. However, what I'm explaining today is not the same thing. It's something else: it's related to what are called martingales, and a formulation of stochastic thermodynamics using martingales. So first of all, before telling you what a martingale is, I will recap the fluctuation theorems that I explained in my course. One is the detailed fluctuation theorem, which is the one I show in the bottom figure.
So the probability in a steady state to produce an amount of entropy S in a given time t is e to the S over k_B times larger than the probability to produce the amount minus S. Or, equivalently, e to the minus S is one on average. If you look at these equations, they contain the time t, and t is a fixed time. So they apply to intervals of fixed duration in a steady state, and they concern fixed-time properties: you look at a steady state from time zero to time t, and you see what has happened at time t. However, this is not the entire picture of stochastic processes and stochastic thermodynamics, because there are many things that happen not at a fixed time, but at a stochastic time. One example is cell division: cells do not divide like a clock; they don't all divide at the same time. Or, on the right side of the figure, you see the Feynman ratchet, which is a device that is expected to lift a weight using thermal fluctuations. Every turn of the ratchet wheel on the left can happen at a stochastic time because of the fluctuations. So it is important in physics and biology to understand whether there are laws of thermodynamics that apply to processes that are completed not always at the same time, but at stochastic times. This is the motivation for what I'm going to explain. And this leads me to the point that there are many types of fluctuations that were not understood yet. For example, what is the time to reach a certain threshold — how long does it take for a process to dissipate k_B T of heat, say? Another one is splitting probabilities: if you have two absorbing boundaries, what is the probability that you are absorbed at one or at the other? And another one, on which I will focus a lot, is extreme values: you look at a process and you would like to know its extreme value — the most negative entropy production, for example — and this happens at a random time as well.
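As a quick numerical aside (not from the lecture), the fixed-time integral fluctuation theorem ⟨e^{−S}⟩ = 1 recapped above can be illustrated in a few lines. This is a minimal sketch with k_B = 1 and an illustrative mean μ: for Gaussian entropy production the theorem forces Var(S) = 2⟨S⟩, so sampling S from N(μ, 2μ) should give an average of e^{−S} close to one even though ⟨S⟩ itself is positive.

```python
import numpy as np

# Sketch of the integral fluctuation theorem <e^{-S}> = 1 (k_B = 1).
# For Gaussian entropy production the theorem forces Var(S) = 2<S>,
# so S ~ N(mu, 2*mu) should average e^{-S} to one.
rng = np.random.default_rng(1)
mu = 1.5
S = rng.normal(mu, np.sqrt(2 * mu), 1_000_000)

ift_average = np.exp(-S).mean()   # integral fluctuation theorem: ~1
second_law = S.mean()             # second law: positive (~mu)
print(ift_average, second_law)
```

Note how the rare negative-S samples carry exponentially large weight in the average; this is why ⟨e^{−S}⟩ can equal one while ⟨S⟩ stays positive.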
The minimum or the maximum occurs at a random time in the trajectory. So this talk will be about new universal probabilities. Well, when I started obtaining these results they were new; now they are not so new, but we have done a lot of research on this and we continue. It is a very exciting and useful topic. So this is the recap of the integral fluctuation relation you have on the left. It's a nonequilibrium steady state: the system is driven by an external force, and you look at trajectories of a fixed duration. If you measure the entropy production of these trajectories up to a fixed time, its average is positive; this is the second law. And the average of e to the minus S is one; this is the Jarzynski equality, or the integral fluctuation theorem. Now I'm thinking about a different paradigm. We have the system and we put a condition, for example for the particles to escape a ring. Each particle escapes the ring at a different time, as I illustrate with the clocks at the bottom. And I wonder: is there a second law at random times? What are the extrema? And can we do gambling — can we use the information of when the particle is crossing, or crossing conditions in general, to gamble and to cheat the second law? I'll try to give answers to these questions. For this, I apply what is called martingale theory. And let me try to be precise about what a martingale is. A martingale is a type of stochastic process with the following properties. It is real-valued. It is finite, so it doesn't take infinite values; it doesn't diverge. And the most important condition is the third one: the expected value in the future, conditioned on the past, is equal to the last observed value of the process. So if you observe a martingale process up to time s, what you expect in the future is that it doesn't grow and it doesn't decrease: it stays flat on average. This is what you expect.
Examples of this are Brownian motion — Brownian motion without drift — and also fair games and fair financial markets. These processes have been applied a lot in mathematical finance, and for this I recommend you to discuss with Professor Marcini. All right, so here is a sketch. Martingales were introduced by Lévy in 1934 and also by Jean Ville in his PhD thesis. Here you see an observer at the bottom looking at the process up to time s. If I know the martingale up to time s, what I expect in the future is that its average equals the value at time s. That's why there is this Gaussian distribution; it's not necessarily Gaussian, but it illustrates that the distribution in the future has as its mean the last observation. There is also a popular meaning of "martingale" — don't get confused with this — which is a doubling-up strategy in gambling. It means you go to the casino, you bet on red, you lose all your money, then you bet double your initial stake on black, and so on and so on. You expect with this to eventually win, but this strategy, as you can realize, is valid only if you have infinite money. This is not the martingale I'm talking about; I'm talking about the martingale on the left of my slide. Okay, recap: the martingale condition is what I explained. There are also the so-called submartingales and supermartingales. A submartingale has positive drift, so I expect the future to be above my last observation; an example is a biased random walk. All right, this is just math, but now I will connect this mathematical world to physics, to stochastic thermodynamics. It was shown in this paper by Chetrite and Gupta, and also in our paper in a different way, that there is a martingale in stochastic thermodynamics, and it is the process e to the minus the entropy production.
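The defining property E[B_t | B_s] = B_s of the simplest example above, driftless Brownian motion, can be checked numerically. Here is a minimal sketch (not from the lecture; parameters and the conditioning bin are illustrative): we simulate many paths, keep those whose value at time s falls in a narrow bin, and compare their average at a later time t with the conditioned value.

```python
import numpy as np

# Martingale property of driftless Brownian motion: E[B_t | B_s] = B_s.
# Paths are grouped by their value at time s; the group average at a
# later time t should match the conditioned value (no drift ahead).
rng = np.random.default_rng(2)
n_paths, s, t = 100_000, 1.0, 2.0

B_s = rng.normal(0.0, np.sqrt(s), n_paths)            # value at time s
B_t = B_s + rng.normal(0.0, np.sqrt(t - s), n_paths)  # independent increment

in_bin = np.abs(B_s - 0.5) < 0.2      # condition on B_s being near 0.5
conditional_mean = B_t[in_bin].mean()
print(conditional_mean, B_s[in_bin].mean())  # the two agree
```

A biased walk (a submartingale) would fail this check: the conditional mean at time t would sit above the conditioned value.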
So imagine you want to calculate the average of this process at time t, given that you know the trajectory up to time s. We can write it this way: we average e to the minus S at time t with the conditional probability. Now we use Bayes' theorem, writing the conditional as the probability of the entire path from 0 to t divided by the probability of the path from 0 to s; remember, s is smaller than t. I also use the definition of entropy production: entropy production is k_B times the log of P over P-reverse, so e to the minus S is P-reverse over P. In the next step, the path probability in the numerator cancels the same path probability coming from e to the minus S, and I am left with a sum — a marginalization — over what happens after s, because I already know all the information up to time s; what I don't know is what happens next. So I marginalize the time-reversed path probability over the interval from s to t, and what comes out is exactly e to the minus S at time s. So this is a simple proof that e to the minus S is a martingale process. Simple. And I needed almost no assumptions to do this. The only assumption is a Markovian nonequilibrium steady state: we have a process in a nonequilibrium steady state and we measure the entropy production, which is stochastic. Okay, so now that we have discovered there is a martingale in thermodynamics, what I do next is take theorems from the books of mathematicians, apply them to this specific martingale, and see what comes out. The first thing we can get is thermodynamic laws at stopping times. This comes back to the idea of gambling with martingales. The question is the following: can a gambler make a fortune in a fair game by quitting at an intelligently chosen moment? This is an important question in finance, and you might say, okay, yes, I just wait until the stock market is above some threshold and I stop there.
I go to the casino, I wait until I win a certain amount of money, and then I go home. This way you can make a profit, but you can also make unbounded losses: the strategy on the left works only if you have infinite money in your pocket, which is usually not the case. So if you have a finite amount of money in your pocket, you will say: I will wait until I gain 100 euros, or, if I lose five euros, I go home. This is stopping your process with a stopping time that is finite — you go home either for one reason or for the other. Okay, I see that there is a question, but I don't see my mouse; sorry, give me a second while I open the chat. So there's a question in the chat: will you please show the slide of the books again? Yes, yes. The first book is Doob, Stochastic Processes. This is a very important book in martingale theory — the one on the left, which we used in our theory. The other one I picked a bit at random, to show you that there are applications of martingales in finance. Okay, let me continue. So it turns out that when you take the average of a martingale at the stopping time, it is the same as the initial value: you don't win, you don't lose. The gambler makes on average no profit using a martingale. This is a theorem due to Doob — it's in this book by Doob. You cannot make a fortune in a fair game if the gambler cannot foresee the future, cannot cheat, and has access only to a finite budget. "Cannot foresee the future" means that T is a stopping time: a time such that, to determine whether you stop, you only need information up to the present. And a finite budget means M is a bounded process. This is the so-called Doob optional stopping theorem: the average of the martingale at the stopping time, given its initial value, is the initial value.
You won't win and you won't lose with a martingale and a stopping time: the average at the stopping time is the initial value. This is mathematics; Doob showed it in a very elegant way, and it is true for all martingale processes. So the stopping time is a random time, and the key point is that to answer whether the stopping time is smaller than t, one only needs information on the interval from zero to t. This is the requirement. Not all random times are stopping times; if you have further questions, I can give you examples later of random times that are not stopping times. Okay, let me go further, because the next thing we did was to apply this relation to our martingale e to the minus S. So e to the minus S at the stopping time equals, on average, e to the minus S at time zero — and at time zero there is no entropy production, the entropy change at time zero is zero, so this is one. Therefore there is an integral fluctuation theorem at stopping times, which also implies a second law at stopping times, by Jensen's inequality. This means that no gambling strategy, no stopping strategy, can achieve negative average entropy production in a nonequilibrium steady state. It is nice, but the structure of the result is the same as the second law: when considering these stopping conditions, you get the same type of laws as at fixed times. But be aware that this generalizes the standard fluctuation theorems, because a clock reaching a given value — which gives the fixed time t — is itself an example of a stopping time. This is one of the key points. Okay, now this is an illustration of stopping-time efficiency. You look at a car, and you look at the car up until it stops. The question is: what is the efficiency of the car? The car takes heat from the fuel and does work. So I would like to do an analysis, like Carnot's analysis of heat engines, but for engines that are stopped at random times. This is one example.
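The fluctuation theorem at stopping times above, ⟨e^{−S(T)}⟩ = 1, can be illustrated numerically. This is a minimal sketch (not from the lecture), with k_B = 1 and illustrative parameters: in the continuum limit, the entropy production of a driven particle behaves like a random walk with drift μ and variance 2μ per unit time; with Gaussian increments e^{−S} is an exact martingale, so Doob's optional stopping theorem applies to the bounded stopping time "first exit from a band, capped at a final step".

```python
import numpy as np

# Integral fluctuation theorem at stopping times, <e^{-S(T)}> = 1 (k_B = 1).
# Entropy production is modelled as a random walk with drift mu and
# variance 2*mu per unit time; Gaussian increments make e^{-S} an exact
# martingale, so the stopped average equals the initial value, one.
rng = np.random.default_rng(3)
mu, dt, n_steps, n_traj = 0.5, 0.02, 500, 20_000
lower, upper = -1.0, 2.0          # stop when S first exits (lower, upper)

dS = rng.normal(mu * dt, np.sqrt(2 * mu * dt), (n_traj, n_steps))
S = np.cumsum(dS, axis=1)

exited = (S <= lower) | (S >= upper)
# first exit step, or the final step if the walk never exits the band
T_idx = np.where(exited.any(axis=1), exited.argmax(axis=1), n_steps - 1)
S_T = S[np.arange(n_traj), T_idx]

stopped_average = np.exp(-S_T).mean()
print(stopped_average)  # close to 1
```

By Jensen's inequality the same data satisfy ⟨S(T)⟩ ≥ 0: no choice of thresholds makes the average stopped entropy production negative.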
I'm thinking about autonomous heat engines, like a Feynman ratchet where there are two heat baths, and also experimental systems — one can do this with two resistors at different temperatures. What you can do is take the theory I just explained and apply it to non-isothermal systems. In non-isothermal systems, the total entropy production is the system entropy change plus the heat flux to each of the baths divided by its temperature. This is the first formula on the slide. So you take the equation I showed you before, apply it to this quantity, and you can define the so-called efficiency at stopping times. What is this efficiency? — Okay, sorry, there is someone with the microphone on; Saeed, please, can you switch it off? — You can ask what the efficiency of the engine is from time zero up to the stopping time. You put a condition, for example that the Feynman wheel passes one of its teeth, and then you measure the efficiency over trajectories from time zero to that stopping time: the average work from zero to the stopping time divided by the average heat from zero to the stopping time. Applying the second law at stopping times, you get, interestingly, that the efficiency at stopping times is not bounded by Carnot: it is bounded by Carnot minus a quantity that depends on the average heat from the hot bath from time zero to the stopping time. So the question is whether we can actually get super-Carnot stopping-time efficiencies. This was an open question and not so clear. When something is not clear, one nice thing you can do is simulations and check. And we did simulations of a model called the Brownian gyrator — you can find a good reference on the right. This is a 2D Langevin system. — Can I ask a question? — Yes, please. — This stopping time would mean that your cycle of the engine may not be complete. — Exactly, very good point. It may not be complete.
You could define a stopping time in which you say, okay, I wait until I dissipate k_B T to the hot bath, for example, and maybe you dissipate k_B T to the hot bath before completing one cycle. It depends on the condition you put. — So that would mean, experimentally, how easy would it be to realize such an engine? — Well, the engine will run anyway. What you can do is take the trajectory from the engine and apply this stopping time in post-processing of the trajectories. If you want to do it in real time in the experiment, you will need feedback control: a camera that measures the position of the particle, a computer that evaluates the relevant quantity and applies the stopping rule, and then feedback to the system to stop it. Of course, technically it's not trivial. This is theory we did two years ago, and experiments take time; it's not trivial at all to do this. So here is an example where you can test this result. This is an oscillator in 2D: you can imagine a particle in an elliptic potential, like the yellow shape I show on the left. There is a torque pushing the particle, making it orbit, and the temperature is different along the X axis than along the Y axis — this is also very important. So in the end this works like an engine: there is exchange of heat between the X and Y degrees of freedom, and there is work because the particle is being dragged in a given direction. You can take this model, run simulations, and for different parameters measure the efficiency at a fixed time. This is the blue sequence I show, and the values are bounded by Carnot, as expected: I take trajectories of fixed duration, and the average work divided by the average heat stays below Carnot. But now I consider a stopping time defined by a crossing event: I follow the trajectory until it goes from the second to the first quadrant and crosses the line drawn in black. And I do the same analysis for the same parameters.
I stop the system there. First of all, the bound from our theory tells me that I could, in principle, be above Carnot. And this is the simulation result: you can indeed be above Carnot by using a very precise, very lucky stopping time. And remember, this is on average: the average work divided by the average heat. So this is nice, and it turns out — this is what Colin was saying — that to do this you need stopping times at which you don't complete cycles, at which the system entropy is reduced and the energy is increased somehow; you need a condition on the free energy. You see it in the figure on the left: the stationary distribution of the particles is very broad, and you go to a final distribution, at the stopping time, which is very narrow, so there is a big decrease in the system entropy. The black points are where the particles end up at the stopping time. So this is a proof of concept that the Carnot efficiency can be exceeded at stopping times. The next nice thing you can look at is extreme values. As I said, entropy production can be negative, but an important question is: how negative can it be? I'd like to know the extrema — in particular the infimum is the more interesting one, because it is the record of negative entropy production. How negative can it be? What is the distribution of this minimum? You can tackle this using another theorem by Doob, the so-called maximal inequality: the probability that the supremum of a nonnegative martingale over a time window exceeds a given threshold lambda is bounded by the average of the martingale at the end of the interval, divided by lambda. So now we apply this theorem to the martingale e to the minus S. And luckily, what appears on the right-hand side is the average of e to the minus S at time t, which is one.
So this becomes a time-independent bound on the probability of the supremum. You can then change variables and transform this into the probability for the infimum of the entropy production to be below minus k_B log lambda. After some simple mathematical manipulations, we get the two important results. First, the probability that the infimum is below minus s, where s is a positive number, is bounded by the tail of an exponential distribution with mean k_B. So the distribution of the infimum is dominated by an exponential random variable, and this exponential random variable has mean minus the Boltzmann constant. It means the average infimum is always greater than or equal to minus the Boltzmann constant. This is a very fundamental result, and we claim from this theory that it holds for all nonequilibrium steady states that are Markovian. It's a strong result because on the right-hand side there is nothing but k_B: no entropy production rate, no current — the same fundamental floor for the infimum in all Markovian nonequilibrium steady states, for any process. We were a bit shocked by this, so we did simulations, and it turned out to work; but you can also check it in an experiment, which we did two years ago. This is an experiment done in Finland by Shilpi Singh in the lab of Jukka Pekola. They could realize a nonequilibrium steady state with four states — it has the topology you see in the middle — and measure the stochastic entropy production for different biases; you can apply a bias between two states in this experiment. From the trajectories of entropy production, you can measure the infima, the minimum value of the entropy production in a time window. The nice thing about this setup is that it's very fast, so you can get a lot of statistics — millions of jumps in the time series. If you like data, this is a really nice experiment to analyze.
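The infimum law above can also be checked in a small simulation. This is a sketch (not from the lecture, and not the experiment's model), with k_B = 1 and illustrative parameters: entropy production is modelled in the continuum limit as a random walk with drift μ and variance 2μ per unit time, the case for which the bound is approached at long times.

```python
import numpy as np

# Universal infimum law (k_B = 1): P(S_inf <= -s) <= e^{-s}, <S_inf> >= -1.
# Entropy production modelled as a random walk with drift mu and
# variance 2*mu per unit time (continuum, particle-in-a-potential limit).
rng = np.random.default_rng(4)
mu, dt, n_steps, n_traj = 0.5, 0.02, 500, 20_000

dS = rng.normal(mu * dt, np.sqrt(2 * mu * dt), (n_traj, n_steps))
S_inf = np.minimum(np.cumsum(dS, axis=1).min(axis=1), 0.0)  # running minimum, at most 0

mean_infimum = S_inf.mean()       # bounded below by -1 (minus k_B)
tail = (S_inf <= -1.0).mean()     # bounded above by e^{-1} ~ 0.368
print(mean_infimum, tail)
```

For this model the all-time infimum is exactly exponential with mean −1, so the finite-time statistics sit just inside the bound, mirroring the near-saturation seen in the experiment close to equilibrium.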
It's really big data. What we can do is measure distributions of the infimum — here I plot minus the infimum — close to and far from equilibrium. These distributions never exceed the exponential bound. And on the right I show that the average infimum, no matter how long the trajectory is, always stays above minus the Boltzmann constant. You get closer and closer to the bound when you are close to equilibrium — sorry, when the drift is small and you are in the continuum approximation. So you saturate the bound when your system behaves like a particle on a ring. This is something we also found very insightful. Okay, so that was an excellent experimental test. I will end my talk with our latest result, which we are very excited about: the so-called gambling demons. What is a gambling demon? First of all, this was recently published in a paper whose first author is Gonzalo Manzano, who was a postdoc at ICTP, and the work was a collaboration between ICTP and Aalto University in Finland. It was a very nice collaboration, and I want to highlight his excellent work — it was very complicated to finish this work during the pandemic — so I want to thank him mainly, and also our collaborators. So what is the gambling demon? It is a sort of Maxwell demon that doesn't have as many powers as the Maxwell demon; it is less powerful. Remember that the Maxwell demon has two elements. One: it opens or closes the door at random times — random in the sense that a particle has to arrive and satisfy a condition, its energy should have a given value, and then the door opens. The second element is feedback: when it opens or closes the door, it changes the distribution of the particles, so it changes the physics of the problem. Feedback, and stochastic times.
Now we ask what happens if we cannot do feedback: the only thing we can do is watch and, at some point, stop. So we think about the following story — a gambling demon. The demon can only measure the state of the system and at some point stop the dynamics. And "gambling" means we are making an analogy with games, as in the figure, in the following way. One example: the demon goes to the slot machine and can play three times. Playing means putting a coin in the machine, and in this analogy that is doing work on the system. So it puts a coin in the machine and doesn't get the number it wants, which is 777. Then it plays again, doesn't get it, plays again, spends more work, and gets only one seven — a little bit of free energy. Then the demon says, okay, I decided to play a finite time, only three rounds, so it stops there; that's why I put this stop symbol at the top. Another outcome could be that the demon plays once, plays again, and suddenly gets 777 — a lot of free energy — and then thinks: should I continue playing? No. So this is the key point: it will stop, and there will be infinitely many possible stories for the demon. In some of them it stops at the end; in some of them it stops at an earlier time. The goal of the demon is to extract heat from a thermal bath by conditionally stopping the system with a clever strategy. The only things it does are measurements, work, and stopping at some time. This is the setup we are thinking about, okay? I hope it's clear. The nice thing is that our collaborators realized this in an experiment. The setup is again a single-electron device, but what you have to understand as a theorist is the thing in the middle: it's a two-level system, and an electron is either on the left or on the right. What we do is lower one of the levels as a function of time.
The electron can jump between the states while we are driving the system. We call lambda of t the energetic separation between the two levels — recall from my lectures that I was using lambda as the control parameter. Here I can show you an example of a trajectory: the electron jumping between the two states is the red curve, and the blue curve is the gap between the energy levels. You see that when we make the gap bigger and bigger, the electron spends more time in the lower level because of the energy difference — the electron prefers the lower energy. That's why at the end of the trajectory the red curve stays mostly in one state, okay? So now we apply a gambling strategy. Actually — and this goes back to Colin's question — we didn't have an experiment where we could do real-time feedback. So we run the experiment and, a posteriori, we apply the stopping condition by post-processing the trajectories. The gambling strategy is the following: we measure the work done on the system. You only do work when the electron occupies the level that is being moved. Actually, the figure is a simplification: in reality we are also slightly moving the other level, so we do work in different directions depending on which level the electron occupies. What is important is how we gamble. We gamble as follows: either we stop at a fixed final time tau, or we stop earlier if the work exceeds a given threshold, this W_th in the figure. So we fix the threshold from the beginning. If the trajectory crosses the threshold, we stop there; if not, we stop at the end of the fixed interval. So we have a fixed-duration protocol, and we stop either at its end or somewhere in the middle. This is the gambling strategy we use, and we can analyze it using martingale theory.
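The post-processing stopping rule just described can be sketched in a few lines. This is an illustration only, not the authors' analysis code; the trajectory values are hypothetical.

```python
import numpy as np

def gamble_stop(work_traj, threshold):
    """First-crossing gambling rule applied in post-processing:
    stop at the first time the accumulated work exceeds `threshold`,
    otherwise at the end of the fixed-duration protocol.
    Returns the stopping index and the work at the stopping time."""
    crossed = work_traj >= threshold
    idx = int(crossed.argmax()) if crossed.any() else len(work_traj) - 1
    return idx, work_traj[idx]

# toy synthetic work trajectory, for illustration only
w = np.array([0.0, 0.3, -0.1, 0.8, 1.2, 0.9])
print(gamble_stop(w, threshold=1.0))  # stops at the first point with w >= 1.0
print(gamble_stop(w, threshold=5.0))  # never crosses: stops at the last point
```

Because the decision at each step uses only the work recorded up to that step, this rule is a legitimate stopping time in the sense defined earlier, so the martingale results apply to it.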
I'm not going to give the full details; you can find them either in our paper or in a closely related paper by Izaak Neri with a similar result, cited at the bottom — there are two alternative proofs of almost the same result. We define the work from time zero to the stopping time, and the free-energy change from time zero to the stopping time, which you can also define. The main result is a Jarzynski relation at stopping times. It looks the same as what I explained in my previous lectures, but there is an extra term, this delta. What is this delta? First, the relation implies a second law at stopping times: you apply Jensen's inequality, and you get that the average of W minus delta-F is greater than or equal, not to zero, but to minus k_B T times the average of delta. Now, delta is related to the asymmetry of the process: it is the distinguishability between the forward and backward protocols at the stopping times — a kind of Kullback-Leibler term. We compare the probability to be at x if you stop the process at time t. More precisely, the process has an evolution described by a Fokker-Planck equation, and the key point is which time you select in this Fokker-Planck solution: you select the time as it comes in your stochastic trajectory. So if you stop the trajectory at two seconds, you compare the probability of x at two seconds in the forward process with the probability of x at the equivalent time, tau minus two seconds, in the time-reversed process. It looks like what I explained yesterday from Kawai, Parrondo and Van den Broeck, but it is not the same, because of the stopping time — it is analogous, but not identical. The nice thing is that if you apply this theory to a stopping time that is deterministic — whose distribution is a delta function — you recover the results we know from my course. So it generalizes what I explained in my course.
But it has new insights, because this extra term, minus k_B T delta — and delta is positive — can allow negative dissipation on average. This is a very important result for thermodynamics, and it is what we tried to check in an experiment. First of all, the theorem works. On the left and on the right, you see the average of e to the minus beta times (work minus delta-F). If you apply the Jarzynski relation naively at stopping times, to these gambled trajectories, it doesn't work: you see on both panels that it's not equal to one. However, our theorem works much better: it gets much closer to one when you measure it on trajectories with different driving rates. So this is a proof of concept that the result holds. But what I think is more instructive for you is whether we can really get negative dissipation. And indeed, in the experiment you actually see that using stopping times you can get the average of work minus free energy below zero. You see this most clearly when you drive the system fast: in the whole green region, work minus free energy is negative on average. The experimental points are the blue circles — this is the work minus the free energy. It can be negative, but it cannot be more negative than minus k_B T delta, which is given by the red circles. So the bound seems to work. What I vary in the figure is the threshold: on the left of the figure the threshold is very close to zero, and at the right end the threshold is very far away, so the trajectories never come close to it and all of them finish at the fixed final time. That's why I recover the second law there: if you put the threshold very far away, it's like not gambling.
So you recover the second law, and work minus free energy is positive. Very important here: there are two ingredients to get negative dissipation. The first is gambling — if you don't gamble, if you don't use stopping times, you don't get this. The second is nonequilibrium driving. You see that on the left, where we drive the process very fast, we can get very negative dissipation, whereas on the right, where we drive very slowly, the dissipation is not so negative. This is because the bound says the dissipation can be as negative as minus k_B T delta, and delta is a measure of distinguishability — how distinguishable the forward process is from the backward process. The more irreversible the dynamics and the driving, the larger this quantity will be. So these are the two ingredients, actually. And just for fun, I can show you what the referees said about this work the first time we submitted it. The first referee said: "The result presents a severely dangerous threat to the second law of thermodynamics and the whole of thermodynamics as a field." Just as a message for you as students: when you write a paper, you are not always very welcome. It happens; this can happen. The second referee was positive and said: "The authors put in question the very foundations of thermodynamics, from both theory and experiment." So let me come back to the first comment, because when you get comments you also have to discuss them, think about them, and take them as a way to improve your work. A severely dangerous threat to the second law and to thermodynamics as a field? Well, no, it is not, and I'll try to explain and justify why. This is related to work from the lab of Felix Ritort, in which they did an experiment similar in spirit to what we are considering.
And they discuss what happens if you have a demon that measures all the time whether the particle is on the left or on the right, imagine a Szilard engine, and then applies feedback as soon as the particle crosses from the left to the right half. What they discussed in this paper is what happens if you include the information processing: how much information do you have to erase to start a new cycle? This can be quantified, and if you use this type of continuous Maxwell demon, the information that you need to erase in every cycle, when you look at continuous time, diverges to infinity. This is very important. Notice that what I was explaining before, the thermodynamics and so on, was always in discrete time; in a continuous-time sense the information is infinite. If we include that in our energy balance, we recover the second law: if we add, as you see on top, minus k_B T times the information, then when we run the process, stop it, and reset all the information we used, the cost is much higher than this bound. Even if you simplify the information and say it is just a sequence of bits, stop, not stop, stop, not stop, this would be N k_B T log 2, which is much, much larger. It would give a line much, much below this figure, and we would recover the second law. We are not violating the second law; this is very important. What we are getting is a tight bound on the work minus the free energy. This is what you see in the figure: the experiment and the theory are very close. So this minus k_B T delta gives you a tight bound on the work from time zero to the stopping time. This is an important point. So, more or less, this is it. We are excited about this, and so is the press: in the last two weeks a lot of media have talked about this.
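Going back to the erasure-cost argument for the continuous Maxwell demon: a quick numerical sketch (my own toy calculation, with assumed numbers rather than the experimental values) of the simplified bit-sequence estimate N k_B T ln 2, showing that it grows without bound as the measurement interval shrinks.

```python
import math

def erasure_cost_per_cycle(cycle_time, dt, kT=1.0):
    """Landauer cost of erasing one 'stop / not stop' bit per measurement.

    With one binary measurement every dt over a cycle of length cycle_time,
    the number of bits is N = cycle_time / dt, and the minimal erasure cost
    is N * kT * ln 2 (the simplified estimate from the lecture; units kT=1).
    """
    n_bits = cycle_time / dt
    return n_bits * kT * math.log(2)

# The cost diverges as the demon measures continuously (dt -> 0):
for dt in (1e-1, 1e-2, 1e-3):
    print(dt, erasure_cost_per_cycle(1.0, dt))
```

This is why including the information term in the energy balance always restores the second law for the continuous demon.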
You can read about it in Physics and on Phys.org, where they made some nice figures introducing this new information demon, and there is also an interview with us on the Phys.org website, so you can learn more about this there. I believe it is an exciting topic, there is much more to do, and it gives you an opportunity to explore along these lines in the future and look for great ideas. As a summary: we started by finding this martingale, but within this theory that was just the tip of the iceberg. There were many results we got out of it, also from the fact that you can extend this to the steady state. We believe these are some results but not all, and this can open a lot of research along these lines, applying martingales from gambling, finance and probability theory, which is a refreshing point of view for the field, in my opinion. Martingales are in the air. Here are some references; if you are interested in this topic, you can look at some of these. We are also writing a review, to be finished in 2050 because we are very slow. And that is it. I want to thank you especially for your attention and your motivation to be here, and I leave the floor to questions by showing you two pictures of Trieste. One is the famous Barcolana, a sailing race that happens every October in Trieste, and the bottom one is a picture I took very recently. The weather was very strange, and you can see the Miramare Castle at the end, which ICTP sits very close to, and in the background you see the Alps. On days with clear air in Trieste you can see the Alps, even though they are very far away. That is it; thanks for your attention. I look forward to questions, if there are any. Thank you very much, I have got so many questions. Can you hear me? Yes, please go ahead.
I wanted to ask if you could repeat the protocol in the simulation of the Brownian gyrator, because I have not understood whether you actually change, say, the elastic constants of the potential or not. The Brownian gyrator is a steady state, so there is no time-dependent protocol. There is a potential which is elliptic; I sketched it as a yellow ellipse, and then there is a torque, a non-conservative force. The potential does not change in time; it is fixed. But it is not in equilibrium, because there is a torque moving the particle and there are two thermal baths coupled to the x and y degrees of freedom. Okay, I know the model pretty well. You said before that you move to another region of probability, if I got it correctly, using the stopping condition. Yes, I impose this stopping condition: I look at the trajectories until they cross this line from left to right. The trajectories are picked from the steady state, so the initial value is drawn from the steady-state distribution, and then I let them run until they cross this line. Okay, so the time each trajectory takes to cross is different; there is a distribution of times, right? Yes, for sure. But that is my only condition: I just wait until they cross this line, and that is basically what I do. Okay, so the system is stationary; you do not change the shape of the potential in any way. No, not in any way. Okay. And just to know: do you think it could be useful to combine, I don't know, changing the shape of the potential together with the torque in the stationary state? Yes, definitely. Somehow, in the last work that I explained to you, we went through this, because in that theory we are changing a Hamiltonian: we are changing the potential, not the external force. So we are actually doing this.
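A minimal simulation sketch of the protocol just described: a steady-state Brownian gyrator with a harmonic potential plus a torque-like non-conservative force, two bath temperatures, and trajectories run until they first cross the line x = 0 from left to right. All parameter values, the Euler integration scheme, and the fixed starting point are my assumptions for illustration, not the values used in the actual work.

```python
import math, random

def gyrator_stopping_time(k=1.0, eps=0.5, Tx=1.0, Ty=0.25,
                          dt=1e-3, t_max=100.0, rng=random):
    """Run one overdamped Brownian-gyrator trajectory and return the first
    time it crosses the line x = 0 from left to right (None if t_max is
    reached first).  Illustrative parameters, not the experimental ones."""
    # Start on the left half; in the real protocol the initial point is
    # drawn from the steady-state distribution instead.
    x, y = -1.0, 0.0
    t = 0.0
    while t < t_max:
        x_prev = x
        # Harmonic force -k*r plus non-conservative torque eps*(-y, x);
        # each degree of freedom is coupled to its own bath (Tx, Ty).
        x += (-k * x - eps * y) * dt + math.sqrt(2 * Tx * dt) * rng.gauss(0, 1)
        y += (-k * y + eps * x) * dt + math.sqrt(2 * Ty * dt) * rng.gauss(0, 1)
        t += dt
        if x_prev < 0.0 <= x:          # left-to-right crossing of x = 0
            return t
    return None

random.seed(0)
times = [gyrator_stopping_time() for _ in range(200)]
times = [t for t in times if t is not None]
# The crossing times are random: there is a whole distribution of them.
print(len(times), sum(times) / len(times))
```

This matches the point made in the answer above: the stopping condition selects a random time per trajectory, while the dynamics itself stays stationary.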
There is a time-dependent drive and there is a stopping time here, but not applied to the gyrator. Applied to the gyrator, it would open up new questions. I think it is an interesting question that we had not thought about, so it is a good idea. Yeah, I would really be interested in it. Okay, good to hear. Yes, if you have good ideas, we can discuss them; it is an active topic in my research right now, so I am happy to hear about this. Thank you very much. You are welcome. So, any other question? You said that, if you had time, you would tell us about different examples of what are stopping times and which ones are known? Yes, yes, you want to learn more; that is good. One clear stopping time is a first-passage time, for example. But be careful with which first-passage time. Actually, I can draw it here. If you have a first-passage time with two absorbing boundaries, one absorbing boundary here and one here, this is a good stopping time, because you just look at how long it takes to reach either of them. And please realize this is a stopping time because you only need information up to that time: if I want to know whether my stopping time is below t, I just need to look at the trajectory up to here; either it crossed before, or it did not. This is a good example of a stopping time, and it is also bounded. An example of something that is not a stopping time would be the time spent above a threshold: if you want to know whether a trajectory spends more than time t above a threshold, you may need to see the trajectory beyond time t, because maybe up to time t it has not passed the threshold and it passes later. So the time above a threshold is not a stopping time. Another example of a non-stopping time is the last crossing time; for this you also need to know the entire trajectory, so it is not a stopping time. But first-passage times are stopping times, all of them, and this is a very paradigmatic case.
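Since first-passage times with two absorbing boundaries came up as the clean example, here is a small sketch of why such stopping times play well with martingales. This is my own toy example, a biased random walk rather than anything from the lecture: for M_n = (q/p)**X_n, which is a martingale, the optional stopping theorem gives E[M_tau] = E[M_0] = 1 at the first-passage time to either boundary.

```python
import random

def stopped_martingale(p=0.6, a=-5, b=5, rng=random):
    """Biased +-1 random walk X_n from 0, absorbed at a or b.

    Returns M_tau = (q/p)**X_tau; M_n = (q/p)**X_n is a martingale since
    E[(q/p)**X_{n+1} | X_n] = (q/p)**X_n * (p*(q/p) + q*(p/q))
                            = (q/p)**X_n.
    """
    q = 1 - p
    x = 0
    while a < x < b:                    # stop at the first-passage time
        x += 1 if rng.random() < p else -1
    return (q / p) ** x

random.seed(1)
n = 20000
mean = sum(stopped_martingale() for _ in range(n)) / n
print(mean)   # close to 1, by the optional stopping theorem
```

The same mechanism, applied to the exponentiated negative entropy production instead of this toy walk, is what makes the stopping-time fluctuation relations of the lecture work.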
Or the first time you make three jumps in a Markov process. This will also happen eventually; you just have to wait, and it is also a stopping time. If you want to know whether the time until the third jump is greater than or equal to t, you just need to look at the trajectory up to t. These are examples; if you are more interested, I will give you references. Okay, thank you. Thank you for the question. So, any other questions? Okay, if not, I think we have come to the end of this experience, at least for all those who do not have to take exams. So I really thank you, and I also thank all the lecturers for this nice spring college. We hope next year we can do it in person, but also keep, say, a virtual component, so that people from other parts of the world can also join the lectures. Okay, thank you very much. Have a nice weekend, and I hope to see you all soon. Thank you, everyone. You should stop recording.