Let's proceed to the last talk of this session. It is on ubiquitous ratios and their large deviations in reset processes, by Francesco Coghi of Queen Mary University of London, UK. Please correct me if I'm not saying your name correctly. Thank you very much.

It's about right. Hello everyone. It's Francesco Coghi, but that's close enough.

Great, thank you. Okay, now I'm going to share my screen. Can you see the presentation?

Yes, we can.

Right, let's see if I can put it in full-screen mode. How is that?

Yes, that works.

All right. Okay, so hello everyone. I'd like to start by thanking the organizers, and in particular Edgar, for giving me the opportunity to tell you something about my research. What I'm going to talk about is one of my first PhD projects, carried out in collaboration with Rosemary Harris, and it's about large deviations of a particular observable, a ratio, in reset processes. You can find all the details in this reference here.

Just a few motivations and a little introduction. Stochastic reset processes appear to be a very good mathematical tool for modelling many things we see in the real world, for instance population dynamics after catastrophic events, or protein transport in cells. And not only from a classical point of view: from a quantum point of view, one can think of the quantum Zeno effect. Now, I'm going to stick to a particular observable. I'll be looking at these stochastic reset processes, which I'll introduce properly later, and at an observable which is a ratio between two additive functionals of time.

Just a few words about ratios: why do I look at them? They appear to be quite ubiquitous in the quantitative sciences. In finance, one calculates the Sharpe ratio, which measures the risk-adjusted expected return of an investment. You find ratios everywhere in probability when calculating maximum likelihood estimators. And when one looks at more physical topics, ratios appear in thermodynamic uncertainty relations and kinetic uncertainty relations. Here I'll be focusing on a particular example from stochastic thermodynamics: in that setting, the ratio is an efficiency, and people have been studying the efficiency of small-scale engines working in highly fluctuating environments.

Here I show a plot coming from this paper by Verley and collaborators. On the y-axis, we have a particular function that I'm going to introduce later; for now, just think of it as a function that measures how unlikely a particular efficiency, plotted on the x-axis, is. For instance, the minimum of this function corresponds to the most typical value of the efficiency, and the higher you go in this function, the less probable it is to see that particular value of the efficiency. Here you should take a mental snapshot of this picture: the tails are bounded from above, and a maximum appears here. This maximum represents the most unlikely efficiency you could see. This is the way it was explained in 2014.
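To make the reading of that plot concrete, here is the large deviation form it encodes, written as a sketch; the symbol J(η) for the efficiency rate function is my own notation, not necessarily the one used in the paper.

```latex
% Large deviation (exponential scaling) form of the efficiency distribution:
% for long times t, the probability of seeing efficiency eta decays as
\[
  P(\eta_t = \eta) \asymp e^{-t\, J(\eta)}, \qquad t \to \infty .
\]
% Reading the plotted curve through J:
%  - J(eta) = 0 at the minimum: the typical efficiency;
%  - larger J(eta): exponentially less likely values;
%  - tails bounded from above (J saturates as |eta| grows):
%    the efficiency distribution is heavy tailed;
%  - the maximum of J: the most unlikely efficiency.
```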
Okay, my aim is to put these ratios on a firm footing, and in particular to try to understand more theoretically this kind of efficiency that we see in stochastic thermodynamics. I'll be doing that in the so-called large deviation limit.

A few words on the model. I'll be looking at discrete-time reset processes. A reset process is a process that returns at random times to a fixed internal state. Here you can picture this process as a random walk, and even better, for the sake of my talk, as a two-layer process. On the bottom layer we have what we can call an on-off process: a sequence of random variables x_i, each a Bernoulli random variable taking the value one with probability r and zero otherwise. When a one appears, a reset happens; if x_i is equal to zero, there's no reset. On the top layer, we have the actual random walk, summarized by Y_n: again a sequence of random variables y_i, where at time step i the random walker takes a jump of length y_i according to a certain probability distribution p. This probability distribution can be pretty general; it may depend on the time since the last reset, for instance. Now, if you take the sum over all these y_i's, you get what we can call a current: you sum up all the jumps and obtain a sort of cumulative position of this random walker. And the nice thing, pictured here, is that the reset... yes?

So y_i here would be the increment, how much the random walker is jumping, or is it the value of the random walker?

No, it's exactly the first thing you said: it's the length of the jump.

Okay. All right. So indeed, if you take this sum over all the jumps, you have this current. The natural correlation in the process is that when a reset happens in the bottom layer, there's no jump, and this is inherited by the current: essentially, the current freezes in time, there's no change. Then, when there's no reset, new jump values come in and the current changes in time.

Okay. Now, the observable I'll be looking at is exactly the ratio between this current and the number of reset steps. Indeed, the number of reset steps is just the sum over the bottom-layer vector: since a reset is a one, the sum counts the number of reset steps.

All right. Now, I'll be looking at this observable in a particular limit, the so-called exponential scaling limit, for its probability distribution. If we put ourselves in this limit, the probability distribution can be rewritten in this way, and you see that all the information about it is encoded in this function here, which is called the large deviation rate function. The fact that omega_n, our ratio, is a ratio between two additive functionals of time, so two extensive quantities, makes it not extensive in time. This is a bit of a problem when trying to do calculations, because using moment generating functions doesn't work that well here, so one has to rely on a different method to calculate this rate function exactly. Just a few words about it.
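As a concrete illustration of the two-layer model just described, here is a minimal simulation sketch; the parameter names (reset_prob, jump_scale) and the choice of a Gaussian jump distribution are my own illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio_observable(n_steps, reset_prob=0.2, jump_scale=1.0):
    """Simulate the two-layer reset process and return Omega_n = J_n / N_n.

    Bottom layer: x_i ~ Bernoulli(reset_prob); x_i = 1 means a reset.
    Top layer:    when there is no reset, the walker jumps by y_i ~ p;
                  when a reset occurs, there is no jump (the current freezes).
    """
    resets = rng.random(n_steps) < reset_prob      # x_i = 1 with probability r
    jumps = rng.normal(0.0, jump_scale, n_steps)   # illustrative jump law p
    jumps[resets] = 0.0                            # current freezes at reset steps
    current = jumps.sum()                          # J_n: sum over all jumps
    n_resets = resets.sum()                        # N_n: number of reset steps
    return current / n_resets if n_resets > 0 else np.nan

# Sample the ratio over many trajectories; its distribution concentrates on
# the typical value, with exponentially rare (heavy-tailed) fluctuations.
samples = [ratio_observable(10_000) for _ in range(1_000)]
print(np.nanmean(samples), np.nanstd(samples))
```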
The method I use is the contraction principle, which is a pretty nice and somewhat standard technique in large deviation theory. Instead of looking at the observable in its own space, you extend to a bigger state space, where you consider both the current and the number of reset steps. In this space, a large deviation principle is valid for the joint observable, and you can write the joint probability distribution in this way. Then, via a saddle-point step, you get the precise form of the rate function I(omega), which is, again, what we want to calculate.

Okay, that's it; now let's go through the results. Don't get distracted by the many curves here: for each of these plots we will focus on just one curve, because one is representative. The plot at the top of the screen relates to a reset process with no added correlations. The only correlation is the natural one of the reset process: if we have a reset in the bottom layer, there's no jump in the top layer, but there's no other correlation; the probability distribution of the jump doesn't depend on anything, it's completely independent of time. We see that the rate function here is not convex, which is something pretty non-standard, I would say. And again, there's a single minimum, which is the typical value of this ratio observable, and if you look at the tails, they are bounded from above again.

If we add some correlations to our model, for instance if the probability of the jump now depends on the time since the last reset, we get a kind of short-range correlation: in the bottom layer, as I said, we have a sequence of Bernoulli random variables, so the reset process is geometric and correlations decay exponentially in time. If I take a jump probability that depends on the time since the last reset, the correlations in the bottom layer are inherited by the top layer. Even in this case, if we take one of these curves, we see again a single minimum, so a single typical value, and the rate function is bounded from above in the tails.

And this happens even if we look at long-range correlations. Instead of a geometric reset process, we now take a fat-tailed distribution for the reset times, together with a jump distribution that depends on the time since the last reset, so we have long-range correlations in our random walk. If we pick one of these curves, again the tails are bounded from above. The only difference is that the typical value, the minimum of this function, is no longer a single point: there are many minima. But this is not a big deal.

Now I'll go quickly to the conclusions.

Yes, okay, you have two minutes at most for your conclusions.

Okay, sure. What I want to say is just this: I want to discuss this picture in comparison with this one. We see that in our study the rate function is robust. I didn't show this, but in all our cases you can actually prove that the rate function is differentiable, and this is a necessary condition for the absence of a phase transition in the fluctuations of our ratio observable. Then again, the tails are bounded from above.
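In symbols, the contraction step might be written like this, as a sketch; I'm using J_n for the current, N_n for the number of resets, and I_{J,N} for the joint rate function, which may differ from the notation on the speaker's slides.

```latex
% Joint large deviation principle for current and reset count:
\[
  P\!\left(\tfrac{J_n}{n} = j,\ \tfrac{N_n}{n} = \nu\right)
  \asymp e^{-n\, I_{J,N}(j,\,\nu)} .
\]
% The ratio observable Omega_n = J_n / N_n equals omega whenever j = omega*nu,
% so the contraction principle gives its rate function as a constrained infimum,
\[
  I(\omega) = \inf_{\nu > 0} I_{J,N}(\omega\nu,\ \nu),
\]
% which in practice is evaluated via a saddle-point (Laplace) approximation.
```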
Well, this is somehow a universal feature characterizing ratio observables, and it's a signature of the fact that the distribution of our ratio is heavy tailed. For the efficiency I showed you at the beginning, the function plotted on the y-axis is indeed a rate function, and again its tails are bounded from above, which means that that particular efficiency is heavy-tail distributed. There's just one difference: the efficiency rate function one usually sees for these small-scale engines has a maximum, and in all our cases this maximum didn't appear. This is the last thing I'm going to say. That efficiency is the output work of the small-scale engine over the input heat, and the fact that the input heat can have both negative and positive fluctuations, so that the denominator can be both positive and negative, is what makes this maximum appear, as a phase transition between two different regimes in the fluctuations. I can tell you more about this if you're interested. We don't have this maximum because the denominator of our ratio, the number of resets, cannot have negative fluctuations; it's a positive quantity. And that's it. Thank you for your attention.

Thank you very much. We have one question from the audience, from Ashwin, I guess.

Yes. I had a question about the contraction principle. I didn't quite get the idea: you have a random variable which is not extensive, but you add another set of variables, write the joint distribution, and it suddenly becomes extensive?

No, no, it doesn't become extensive. The only thing is that you have to find a way to calculate this rate function. Since you know that the observable you're looking at is the ratio of the current over the number of resets, you can move to a bigger state space where, instead of looking at the ratio itself, you look at the current and the number of reset steps. Both of these quantities are extensive in time, and it's easy to prove a large deviation principle for their joint probability distribution. Then it's just a matter of taking a saddle-point approximation to find the rate function for the ratio.

All right, thank you.

Okay, any further questions? I guess this is it. Well, a very interesting set of talks today. This is now, if I'm not mistaken, the end of the session.