So first, let me recall a very important result of stochastic thermodynamics, which is really a cornerstone: the fluctuation theorem for currents. You can find it in this paper by David Andrieux and Pierre Gaspard. It says that the log ratio of the probability of measuring the currents of your system up to time t, over the probability of measuring the opposite of these currents up to time t, goes asymptotically to the forces dotted with the currents themselves. By asymptotically, I mean that this is valid for very long times. And this is true when you have a fundamental set of currents: if you take your system and do the usual Schnakenberg decomposition, so that you obtain all of the forces and all of the currents, then you have this fluctuation theorem.

But in many cases you cannot observe all of the currents of your system; maybe you can only observe one or two of them. And the theorem is not true in general for a single current. If I consider only the first entry of this vector (this c here is a bold symbol, a vector of the fundamental currents of the system), the relation will not hold in general. Our goal here is to derive a marginal fluctuation relation for a single current.

What we have in mind is a system like this, a continuous-time Markov chain defined on a graph, where every node is a state and you have transitions like these black arrows. If you do this analysis of forces and currents over the cycles, you have the fluctuation relation for all of the currents. But if we assume we can only observe this transition, for instance, we do not have a fluctuation relation for its current. All of these states are hidden; the only thing you observe is these two transitions, and by counting them you can measure the current between these two states.
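Written out, the result just recalled takes this standard form (my notation: F is the vector of cycle affinities, i.e. the forces from the Schnakenberg decomposition, and c the vector of fundamental currents):

```latex
% Fluctuation theorem for the full set of fundamental currents
% (Andrieux & Gaspard): valid asymptotically, for very long times.
\lim_{t \to \infty} \frac{1}{t}
  \ln \frac{P_t(\boldsymbol{c})}{P_t(-\boldsymbol{c})}
  = \boldsymbol{F} \cdot \boldsymbol{c}
```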
To do so, we have to introduce two ingredients that are really important for obtaining this marginal fluctuation relation for a single current. The first one is stopping at N. Usually you have your system and you start measuring a trajectory at time 0, and when the trajectory reaches time T you stop the measurement. This is what we call the stopping-T criterion, and the trajectory can contain any number of transitions. What we want instead is a stopping-N criterion, where we stop the measurement of the trajectory after a specific number of occurrences, in this case the number of visible transitions, which is also known as the visible dynamical activity. So in this case I count 1 transition, 2, 3, 4, and then I stop here. Of course, the time at which I stop my trajectory is then a random time. We can also define a Markov chain with discrete time, where at each step that I iterate the chain one transition occurs. So this is basically the explanation of stopping at N. It is a fundamental ingredient: in this case, the system itself is keeping track of time. I don't have an external clock measuring the times; the system is telling me when to stop my measurements. In a sense, my system is the conductor of my trajectories, so I imagine my system as a maestro. But it's not a very good maestro, because, take a look at this transition, it's not going to keep time evenly like 1, 2, 3, 4; it's more like 1... 2, 3... 4. So it's a horrible maestro for musical purposes, but it is a thermodynamically aware maestro. And we're going to see how, using this idea of an internal notion of time, we recover the fluctuation relation. The second ingredient that is very important is how to define time reversal when the only thing we observe are the transitions.
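As a minimal sketch of the stopping-N idea, here is a Gillespie simulation of a toy continuous-time Markov chain (the rates and the choice of visible edge are my own, purely for illustration) that stops after N visible transitions and returns the resulting random stopping time:

```python
import random

# Toy continuous-time Markov chain on 4 states with hypothetical rates;
# only jumps between states 0 and 1 are "visible" transitions.
RATES = {
    (0, 1): 2.0, (1, 0): 1.0,
    (1, 2): 1.5, (2, 1): 1.0,
    (2, 3): 1.0, (3, 2): 1.5,
    (3, 0): 1.0, (0, 3): 0.5,
}
VISIBLE = {(0, 1), (1, 0)}

def run_until_n_visible(n_visible, state=0, rng=random):
    """Gillespie simulation stopped by the stopping-N criterion:
    halt after n_visible visible transitions have occurred.
    Returns the (random) stopping time and the net visible current
    (number of 0->1 jumps minus number of 1->0 jumps)."""
    t, seen, current = 0.0, 0, 0
    while seen < n_visible:
        moves = [(tgt, r) for (src, tgt), r in RATES.items() if src == state]
        total = sum(r for _, r in moves)
        t += rng.expovariate(total)    # exponential waiting time in state
        x = rng.uniform(0.0, total)    # pick the next jump by its rate
        for tgt, r in moves:
            x -= r
            if x <= 0.0:
                break
        jump = (state, tgt)
        state = tgt
        if jump in VISIBLE:
            seen += 1
            current += 1 if jump == (0, 1) else -1
    return t, current
```

Note that the trajectory length in physical time is not fixed here: the stopping time is random, which is exactly the point of the criterion.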
So imagine I have a trajectory in a state space comprised of the states 1, 2, and 3. This black line is my trajectory, and I observe the transitions between 1 and 2. Here I observe 1 to 2, so I would call it "up". Then this part is all hidden, and then I observe "up" again and "down". If I run it in time reversal, then I am time-reversing the process in state space, and this causes the transitions to be observed in the opposite order: this one will be observed first, this one second, this one third, so here I have the mirrored version of the forward trajectory. But in addition, each transition will be performed in the opposite direction. So I have to swap the order, and I also have to reverse the direction of the transitions; this one will read "up" here. That is the recipe for time reversal in transition space.

OK, now I ask you to buckle up, because I'm going to show a few equations, but it's going to be very, very simple; it's just a hand-wavy proof of the fluctuation relation. The probability of observing a trajectory of visible transitions up to the stopping-N criterion is Markovian, so I have the probability of the first transition times the product of conditional probabilities of all of the further transitions. The ratio between the forward probability and the backward probability is basically this boundary term, and then all of the combinations of up-up, up-down, down-up, and down-down, with the time reversals of these in the denominator, because these are the only four possible pairs of transitions. This bar here denotes the time-reversal operator. And if I do this time reversal, I observe that these terms cancel out: up-down over the reversal of down-up, and down-up over the reversal of up-down, this one being the inverse of that one. So I can rearrange terms like this.
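The recipe for time reversal in transition space can be sketched in a few lines (labels "up"/"down" are my shorthand for the two directions of the visible transition):

```python
def time_reverse(transitions):
    """Time reversal in transition space: read the visible transitions
    in the opposite order AND flip the direction of each one."""
    flip = {"up": "down", "down": "up"}
    return [flip[t] for t in reversed(transitions)]

# Example: the forward record up, up, down becomes up, down, down.
```

Applying it twice gives back the original record, as a time reversal should.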
So this ratio, the probability of trajectories over the probability of the time reversal of these trajectories, is expressed like this, where n-up-up and n-down-down are the numbers of up-up pairs and down-down pairs in my trajectory. Now I use the identity that the current c equals the number of up-up pairs minus the number of down-down pairs, plus a boundary term. If that is raising eyebrows, maybe we can save it for the questions. I also use the definition of the effective affinity here; I will refer to some papers for that soon. And then we get the fluctuation relation for a single current, where c is the single current. It carries these boundary terms, which involve the first and the last transition, and it holds at stopping N, not at stopping T, because, as I said, there is no fluctuation relation at stopping T for a single current, only by marginalizing the well-known fluctuation theorem for currents. This relation has a boundary term, but the boundary term is not extensive, so asymptotically it also goes to the simpler expression, effective affinity times current. There are also other ways you can make this term vanish, so you have the fluctuation relation in all its glory without the boundary terms. Of course, it's still a fluctuation relation either way, but this form is neater. It will be exact when the number of transitions is really large and the effective affinity is non-zero. Or you can post-select your trajectories with the condition that the first transition is the opposite of the last, throwing away all trajectories that do not satisfy it. Or you prepare your system in a preferred initial distribution, and then it holds at all times. And here I show you what happens if you try to plot the marginal fluctuation relation for a continuous-time Markov chain at stopping T.
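For the record, the chain of steps I just described can be sketched as follows (this is my own rendering, with a_eff the effective affinity and b, B boundary terms; the talk's exact notation may differ):

```latex
% Identity for the visible current in terms of consecutive pairs:
c = n_{\uparrow\uparrow} - n_{\downarrow\downarrow} + b ,
% which, with the definition of the effective affinity a_{\mathrm{eff}},
% gives the marginal fluctuation relation at stopping N:
\ln \frac{P_N(c)}{P_N(-c)} = a_{\mathrm{eff}}\, c + B ,
% where B is non-extensive, so asymptotically in N
\ln \frac{P_N(c)}{P_N(-c)} \simeq a_{\mathrm{eff}}\, c .
```

As a toy sanity check of a relation of this form (my own illustration, not the hidden-network setting of the talk), consider N i.i.d. visible transitions that go up with probability p; there the affinity is ln(p/q) and the relation is exact, with no boundary term:

```python
from math import comb, log

def log_ratio(n, c, p):
    """ln P_n(c) / P_n(-c) for n i.i.d. visible transitions that go
    'up' with probability p and 'down' with probability q = 1 - p.
    The net current c = n_up - n_down has the same parity as n."""
    q = 1.0 - p
    n_up = (n + c) // 2          # trajectories realizing current +c
    p_fwd = comb(n, n_up) * p**n_up * q**(n - n_up)
    n_up_rev = (n - c) // 2      # trajectories realizing current -c
    p_rev = comb(n, n_up_rev) * p**n_up_rev * q**(n - n_up_rev)
    return log(p_fwd / p_rev)

# For this toy model: log_ratio(n, c, p) == c * log(p / q) exactly.
```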
So starting at stalling, which would be the best candidate for satisfying the fluctuation relation, the fluctuation relation is not satisfied; while using the stopping-N criterion, these bullet points here, it is satisfied. Sorry, I have to rush a bit now. For the discussion: the idea of thermodynamic consistency is really tied to fluctuation relations, because a thermodynamically consistent model must satisfy them. So this result is saying something about the thermodynamic consistency of a single current, and about disentangling how the fluctuation theorem arises from the thermodynamic consistency and fluctuation relations of each and every current comprising the big set of fundamental currents. You can also relate this right-hand side to a lower bound for the dissipation. And maybe there are connections to martingales, because in any case we are talking about random stopping times: the stopping-N criterion leads to random times at which you stop your trajectories. In this paper here, which was published really recently, you can find these results. There is another paper in preparation where we extend the notion to more currents, not only one. Take a look at Anuisha's talk tomorrow, where she will discuss more about this framework, and also these two papers where the framework of dealing with transitions is explained better. And I am available for answering questions now.

Thank you very much, Pedro. So, questions, priority to students; there's some time yet. If there's a question in the chat, please raise your hand. So I have a question. When you mentioned martingales, what do you have in mind besides stopping time? Because in the end, stopping time is the concept. Is it first passage, or something more general? Do you have something specific in mind that could apply from the martingale approach? No, actually, I don't have anything in mind.
It's just that it's like observing your process until it reaches a threshold, and you have this random stopping-time criterion. That's why I have the question mark there; it's really a provocation, because I don't know what to say about martingales in this case.

OK, any other question from the audience? Maybe I have a short one. You assume that in the experiment you can see the transition exactly at the right time. Did you consider the case where you have some measurement error and you miss the transition, or miss it with some probability? Yes, so when you build these observables, you are building empirical probabilities. If you have some random errors and they do not correlate in any way with what is being measured, then it's not going to be a problem. But I did not consider these cases in the theory itself. OK, thanks.

I'm thinking if there's any question. There's time for one short question from the audience. If not, let's thank Pedro again for the talk. And we continue with the next speaker, who is Jordi Pinheiro from UPF in Spain. The stage is yours. OK, thank you. So here we go.