a director of research from LPTMS in Paris, who's going to talk about optimal resetting in Brownian bridges. Okay, so thank you very much. And let me start by thanking the organizers for the invitation. It's a great pleasure to be back here at ICTP. And I must apologize that, probably in contrast to the other speakers, my talk will be entirely Markovian. But as you heard this morning from Leticia, in the words of van Kampen, non-Markov processes are the rule and Markov processes are the exception. But you need an exception to prove the rule. So I'll talk about optimal resetting Brownian bridges. And this is a recent work which was done in collaboration with Grégory Schehr, my friend and longtime collaborator, and also with our graduate student, Benjamin De Bruyne. This is on the arXiv and it will appear in PRL soon. Okay, so there are three parts to my talk. In the first part I'll just make a brief recap of resetting Brownian motion. This morning you have already heard twice about stochastic resetting, but today I'm going to talk about a slightly different aspect of it, the first passage aspect. And I'll just remind you what it is in a very simple context. Then I'll come to the main body of the talk, which is about the resetting Brownian bridge. And if time permits, there's a very interesting algorithmic question: how do you generate such resetting Brownian bridges in an efficient, rejection-free way? Can you write down an effective Langevin equation for that? But I suspect that I won't have time to get into the third part. So anyway, let me start with the first part, just to remind you a little bit what our original motivation for stochastic resetting was. This was basically motivated by search processes. As you know, search problems are ubiquitous in nature.
For example, it could be an animal searching for food, or visual search, when you are looking for the face of a friend in the midst of a large number of people. Or, for example, a protein searching for a target site on DNA to bind to. Or even combinatorial optimization problems, where you have a high-dimensional random landscape and you want to find the global minimum by simulated annealing or some other dynamics. And in all these problems, intuition tells us that if you search for a while in vain and you don't find the target, maybe you should stop your search and start from scratch. That means from the same initial condition. And the rationale behind it is that you might find a new pathway which will expedite the search for the target. I mean, we see that all the time. For example, if you're looking for a bug in your computer program, you often get lost in subroutines and blah, blah, blah. And then basically the best strategy is to actually restart from the beginning. And then often you can find the bug. So this was the original motivation that we started out with. And we wanted to estimate quantitatively: does resetting, that means from time to time going back to the initial condition, really expedite the search process, the speed of the search process? And so about 10 years back, with Martin Evans, we studied this question in the context of a very simple model search process, which is just a Brownian searcher. And so here's the model. So let me first tell you what happens without resetting. All of you know this. So imagine that you just have a Brownian walker, in one dimension for simplicity, but you can do it in higher dimensions also. And so you have a walker which starts at the initial position X naught. In my talk, the vertical axis will always be space and the horizontal axis is time. So it starts at X naught and it just does diffusion.
And then you have a target at a position L up there. And you want to know when is the first time the process hits the target L. So that's the first passage time. And that will be taken as the search time, basically, the time to find this target. And this problem is very easy to compute. I mean, it's a classical problem. So to calculate the first passage probability, it's better to calculate the cumulative distribution. That means the cumulative first passage probability, or what is called the survival probability. That means: what is the probability that the particle starting at X naught does not meet the target up to time t. The subscript zero here means without resetting. It's very easy to write down a differential equation for this. This is a backward differential equation where you vary the initial position X naught. And you have to put the absorbing boundary condition at X naught equal to L. Because if you start exactly at L, then immediately you find the target, so the survival probability is zero. And so you just have to solve this differential equation for X naught less than or equal to L with this absorbing boundary condition. And the solution is classical. I mean, it's just the error function of L minus X naught over square root of four D t, where the error function is just the integral of a Gaussian. Okay, so therefore, to calculate the first passage probability, which is minus the time derivative of the survival probability, because the survival probability is the integral of the first passage probability from t to infinity, so when we take a derivative, there's a negative sign. And so this is very simple. It's a well-known expression. And what you see is that for fixed L minus X naught, as time goes to infinity, it has a fat tail. It decreases as t to the power minus three by two.
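This survival probability is easy to check numerically. Here is a minimal Python sketch (not from the talk; the parameter values D = 1/2, L = 1, X naught = 0 and the function names are illustrative assumptions) comparing a Monte Carlo estimate of Q0(X naught, t) with the error-function formula:

```python
import numpy as np
from math import erf, sqrt

def survival_exact(x0, L, D, t):
    """Q0(x0, t) = erf((L - x0) / sqrt(4 D t)): probability that free
    Brownian motion started at x0 has not hit the target at L by time t."""
    return erf((L - x0) / sqrt(4 * D * t))

def survival_mc(x0, L, D, t, n_paths=20000, dt=1e-3, seed=0):
    """Monte Carlo estimate: evolve many independent Brownian paths and
    count the fraction that never reached L up to time t."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, float(x0))
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(int(round(t / dt))):
        x[alive] += sqrt(2 * D * dt) * rng.standard_normal(int(alive.sum()))
        alive &= x < L          # absorb paths that have crossed the target
    return float(alive.mean())

# illustrative parameters: x0 = 0, target at L = 1, D = 1/2, t = 1
mc = survival_mc(0.0, 1.0, 0.5, 1.0)
exact = survival_exact(0.0, 1.0, 0.5, 1.0)
```

With a small time step the two agree to within a couple of percent; the residual gap comes from monitoring the barrier crossing only at discrete times.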
Which means that if I calculate the expected value of this time, the mean first passage time, it is strictly infinite because of this tail t to the power minus three by two. And this is because, you see, among the trajectories that contribute to the mean, there are trajectories where the particle actually starts from here and wanders away to minus infinity, and therefore never finds the target. And it's because of those trajectories that the mean first passage time is actually divergent. So even though the first passage probability density exists, its first moment diverges. And what resetting does, essentially, is cut off those wandering trajectories. So let me just show you how it does that. So resetting means, basically, the model is very, very simple. You just say that your stochastic process X of t now evolves by the following rule. With probability r d t, where r is the resetting rate, you go back to the initial position X zero. And with the complementary probability one minus r d t, you just do diffusion; that means you just increment your process by a Gaussian white noise. And that's it. So r is the only parameter in the system. And so the question is: what happens to the mean first passage time now? Now, of course, what resetting also does is drive the system into a non-equilibrium stationary state, because it dynamically creates a sort of potential around the initial position and the particle gets trapped there, which was shown this morning. So I'll not talk about that aspect here. Here I'm just focusing on the first passage aspect. So how do we calculate the first passage probability? Very simple. Again, it's the same sort of renewal equation that you have seen before in some way. So q r of X zero, t — the subscript r now means the survival probability in the presence of resetting. So there are two possibilities.
Either there is no resetting in this time interval zero to t, which happens with probability e to the power minus r t, because this is just a Poisson process. And then you have to ensure that within this time interval it did not cross the target, so it's q zero of X zero, t. Or there is at least one resetting. In that case, you just look at the time tau at which the first resetting occurs. So the probability that the first resetting occurs between time tau and tau plus d tau is just r d tau times e to the power minus r tau. And up to that time tau, it's evolving without resetting, so it's q zero of X zero, tau. And after that, it just renews the whole process, so you again have q r of X zero, t minus tau. So it's a very simple renewal equation, a Schwinger-Dyson equation if you like. And you can actually solve it exactly by taking a Laplace transform, using the convolution property. I'll not go into the details, but the mean first passage time happens to be finite in this case, in the presence of resetting, and it's given by a very, very simple formula here. So of course when r equals zero it diverges, but for any finite r, this is actually a finite mean capture time. So that's the first thing it does: it makes the mean first passage time finite, much smaller compared to without resetting. Moreover, if you plot this mean first passage time, this formula here, as a function of r for fixed L and X naught, what you see is that it diverges as r goes to zero. And it also diverges when r goes to infinity. And the reason is that when r goes to zero, as I said, there are wandering trajectories taking it away from the target, so the mean first passage time diverges. On the other hand, when r goes to infinity, you are resetting all the time to the initial value X naught, and therefore you never manage to find the target. You are just completely localized around the initial position X naught. So it diverges in the two limits.
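The "very simple formula" for the mean first passage time with resetting is, in its standard form, ⟨T⟩ = (e^{√(r/D)(L − x0)} − 1)/r; this is quoted here from the well-known result for this model, not read off the slide. A Python sketch (illustrative parameters) comparing it against a direct simulation of the reset-or-diffuse rule described above:

```python
import numpy as np
from math import exp, sqrt

def mfpt_formula(r, D, L, x0=0.0):
    """Standard closed form for the mean first-passage time with resetting:
    <T> = (exp(sqrt(r/D) * (L - x0)) - 1) / r."""
    return (exp(sqrt(r / D) * (L - x0)) - 1.0) / r

def mfpt_mc(r, D, L, x0=0.0, n_paths=4000, dt=1e-3, t_max=100.0, seed=1):
    """Monte Carlo: in each step dt, reset to x0 with probability r*dt,
    otherwise diffuse; record the first time each path reaches x >= L."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    t_hit = np.full(n_paths, np.nan)
    t = 0.0
    while np.isnan(t_hit).any() and t < t_max:
        t += dt
        alive = np.isnan(t_hit)
        reset = alive & (rng.random(n_paths) < r * dt)
        x[reset] = x0
        diffuse = alive & ~reset
        x[diffuse] += sqrt(2 * D * dt) * rng.standard_normal(int(diffuse.sum()))
        t_hit[alive & (x >= L)] = t
    return float(np.nanmean(t_hit))

# illustrative parameters: r = 1, D = 1/2, target at L = 1, start at 0
mc = mfpt_mc(1.0, 0.5, 1.0)
exact = mfpt_formula(1.0, 0.5, 1.0)
```

The simulation overestimates slightly because crossings are only checked at discrete times, but the agreement with the closed form is within a few percent.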
And if you plot this function, you see that there's a minimum at some optimal value r star, which you can compute trivially. And in dimensionless units, this r star is given by gamma squared times D over L minus X naught squared, where gamma is a dimensionless number given by the solution of this transcendental equation. So it was nice that there's an optimal resetting rate that makes the mean first passage time minimal. And then it turns out that this optimality is a very robust phenomenon. It exists in all dimensions. It occurs in various theoretical models. And also very recently it has been verified in experiments in optical traps, both by the group of Yael Roichman in Tel Aviv, and also by Sergio Ciliberto at ENS Lyon. But I don't have time to talk about this. And over the last few years, it has been really studied extensively in many, many different contexts and many, many different models. I mean, there's a long list, including active particles, quantum dynamics, and many other things. And so there's a recent review article that I wrote with Martin and Grégory, where some of these things are mentioned, but not everything, because the subject is evolving very fast. And as was mentioned this morning, this year J. Phys. A came out with a special issue edited by Anupam Kundu and Shlomi Reuveni, which is about the ten years of diffusion with stochastic resetting. Okay, so this is the background. And let me now come to the main part of today's talk, which is about the resetting Brownian bridge. What do I mean by that? First I'll tell you the motivation for it. So in the previous problem, resetting Brownian motion, your process goes on forever, okay? It's an unlimited search, and then you are trying to find the time to find the target. But in most practical situations, the search time is never infinite. It's always limited, always of fixed duration.
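The transcendental equation for gamma can be pinned down numerically. Minimizing ⟨T⟩ = (e^{√(r/D)(L − x0)} − 1)/r over r gives the condition γ/2 = 1 − e^{−γ} (stated here as the standard form for this model, not taken from the slide), with root γ ≈ 1.5936, so r* = γ²D/(L − x0)². A bisection sketch in Python (the D and L values are illustrative):

```python
from math import exp, sqrt

def gamma_star(tol=1e-10):
    """Root of gamma/2 = 1 - exp(-gamma) for gamma > 0, by bisection.
    This fixes the optimal resetting rate r* = gamma^2 D / (L - x0)^2."""
    f = lambda g: g / 2.0 - (1.0 - exp(-g))
    lo, hi = 1.0, 2.0          # f(1) < 0 and f(2) > 0 bracket the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mfpt(r, D=0.5, L=1.0, x0=0.0):
    """Mean first-passage time with resetting (closed form)."""
    return (exp(sqrt(r / D) * (L - x0)) - 1.0) / r

g = gamma_star()                        # ~ 1.5936
r_star = g**2 * 0.5 / (1.0 - 0.0)**2   # with D = 1/2 and L - x0 = 1
```

One can check directly that perturbing r away from r_star in either direction increases the mean first passage time.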
So for example, when animals are searching for food, they typically come back to the nest at the end of the day, so the search time is limited. Or, for example, you are searching for survivors after a shipwreck. You typically send a helicopter, but of course the helicopter doesn't move around forever. Typically it searches for two or three hours, and then it gives up and comes back to the initial position, okay? So in many situations, basically, the search time is finite, limited, and at the end the searcher comes back to its initial position, or some final position, basically. So this is the idea behind the resetting Brownian bridge, okay? And so, to be more precise, the model would be something like this. So what is a resetting Brownian bridge? It's again resetting Brownian motion as before, except that it starts at, let's say, zero for simplicity, and it comes back to a fixed position XF at some final fixed time t equal to tf, okay? So this is the constraint. It's conditioned to come back to some final position at some final fixed time tf. So you look at all possible resetting Brownian motion configurations, and those which reach XF at time tf are the relevant allowed configurations, okay? So this is the model. And now you ask the question: what if I have a target at L, okay? So I have a fixed time of search, and I'm supposed to come back to some final position XF, which we'll typically take to be the initial position zero, okay? And so there are three questions that I want to ask. The first question: is resetting still a good strategy in the presence of a bridge constraint? Because you don't have the problem that we had before. Now, because your final time is limited, you don't have wandering trajectories to minus infinity, okay? So that problem is not there, okay?
So the mean first passage time, even without resetting, will be finite. So it's not clear that resetting is still a good strategy. I mean, does it improve the first passage time, okay? So that was the first question. And if it does, is there still an optimal resetting rate R star that minimizes the capture time of the target, okay? So this is the second question. And the third question is: imagine that you are doing numerical simulations. How do you actually generate paths that satisfy this resetting Brownian bridge constraint, basically? I mean, naively, you might generate all possible resetting Brownian motions and keep only those configurations which reach XF at tf, but that would be very stupid because it will lead to a lot of rejections. So can you write down a rejection-free algorithm to generate such paths, okay? This is not completely trivial. I mean, this is a highly non-trivial question, okay? So these are the three questions that we want to ask. And let me start with the first question. Is resetting a good strategy for a Brownian bridge? Is there an optimal paradigm here, okay? So let me first start with what happens without resetting again. Without resetting, it's just a Brownian bridge. So again, you start a Brownian bridge from initial position zero and you are supposed to reach XF at final time TF. And so here, let's first calculate the fluctuations, okay? So if I take any fixed time T between zero and TF and I look at the mean square fluctuations, sigma squared of T, at that time T. This is easily computable, so let's compute that first, just to get an idea of the fluctuations, okay? So how do we compute this? Well, since I'm a Markov man, it's very easy, because to calculate this guy you want to know: what is the probability density that the particle reaches X at time T? So all you have to do is to split your interval into two, zero to T and then T to TF.
So in zero to T, you just have free Brownian propagation, G of X, T, which is just the normal Brownian propagator. And then in the final interval T to TF, you have another propagator to the final position XF at TF, starting from X at T. You just take the product, because it's a Markov process, and you normalize it by all possible paths going from zero to XF in time TF, okay? So it's trivial to compute this, and you find that it is, as you expect, a Gaussian distribution, with this mean here, and the variance is actually just two D T times TF minus T over TF. So of course the variance vanishes at the two ends, because these two end points are fixed, and in between it fluctuates, and the fluctuation is maximal at the center; it has a semicircular form essentially, T times TF minus T. Okay, very good, this is without resetting. So now let's introduce resetting, and what happens, okay? So the question I'm after is: once I introduce resetting, does the fluctuation increase or decrease? It's not completely obvious, right? Naively, I would think that it'll decrease, because the fluctuations are already reduced since you have to reach a fixed final point here. So if you introduce resetting, it'll further localize the trajectory towards the origin. So you would think that the fluctuations should decrease, right? Yeah, you're assuming that you always reset to the initial position. Yes. You could reset somewhere else. No, you could, but this is the first — Andrea, you always have to first address the simplest question, right? You can write many papers afterwards on resetting to different places, okay? So, all right, let's answer this simple question. So again, it's a Markov process, so I split the interval into zero to T and T to TF, and all I need therefore is the propagator for the resetting Brownian motion in each of these intervals, and then I take the product and normalize it.
Okay, so how do you calculate the propagator for resetting Brownian motion? It's again very simple, again a renewal property. So imagine that you have a process starting at X naught, and you look at it up to time T, and now, instead of the first resetting, it's convenient to look at the last resetting. So suppose tau is the interval since the last resetting. Then there are two possibilities again: there is the probability that there is no resetting, and then it's just the free propagator, okay? And then there is the case of at least one resetting, and in that case you look backwards from here, and you see that the probability that there's no resetting in this interval, followed by a resetting event, is just this probability, and from here to here it's just free evolution, but starting at zero, therefore it's e to the minus X squared over four D tau, okay? So this is the very simple resetting propagator, and once you have that, you just plug it in here, take the product and compute this. It's a quite simple calculation, and what you find is that, again, if you write it in terms of the dimensionless time and dimensionless resetting rate, it is just a simple function of A and R, okay? And this is dimensionless because the two D TF comes over here. Now this function you can calculate explicitly, but don't look at the details; mainly what you find is the following, okay? First let's look at R equal to zero, when there's no resetting: it was the semicircular law, in these units it's just A times one minus A, so it was this. And now I just switch on a little bit of resetting, let's say. What will happen? Does the fluctuation increase or decrease? I expected that it would decrease, because introducing resetting should localize the path more towards the origin, but what you see from this exact computation is that it actually increases, okay?
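The r = 0 baseline in this comparison — the semicircular law σ²(t) = 2Dt(t_f − t)/t_f — is easy to verify by direct sampling, using the standard bridge construction B_br(t) = B(t) − (t/t_f)(B(t_f) − x_f). A Python sketch (the parameters D = 1/2, t_f = 1, x_f = 0 are illustrative, and this checks only the no-resetting case):

```python
import numpy as np

def sample_bridges(n_paths, n_steps, D, tf, xf=0.0, seed=2):
    """Sample free Brownian bridges from 0 to xf over [0, tf] via the
    transform B_br(t) = B(t) - (t/tf) * (B(tf) - xf)."""
    rng = np.random.default_rng(seed)
    dt = tf / n_steps
    steps = np.sqrt(2 * D * dt) * rng.standard_normal((n_paths, n_steps))
    b = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)
    t = np.linspace(0.0, tf, n_steps + 1)
    return b - (t / tf) * (b[:, -1:] - xf), t

def bridge_variance_exact(t, D, tf):
    """sigma^2(t) = 2 D t (tf - t) / tf: the semicircular law at r = 0."""
    return 2 * D * t * (tf - t) / tf

# illustrative parameters: D = 1/2, tf = 1, xf = 0
paths, t = sample_bridges(20000, 200, 0.5, 1.0)
var_mid = float(paths[:, 100].var())                 # empirical variance at t = 1/2
var_exact = bridge_variance_exact(0.5, 0.5, 1.0)     # = 0.25
```

Every sampled path starts at 0 and ends exactly at x_f, and the empirical variance at mid-time reproduces the semicircular law; probing the resetting case numerically would need the rejection (or rejection-free) sampling discussed later in the talk.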
So it first increases, and then if you increase the resetting rate R further, then of course it decreases, okay? So it starts from the semicircular law, and then it sort of increases, and then it decreases, okay? So this was a bit surprising. I mean, why do the fluctuations first increase and then decrease, okay? Actually, we never did the simulation before doing the computation, okay? So now I'll tell you why — it's all due to the Uber taxi, I'll tell you why. So, okay, this is the sort of main result. In fact, if you look at, let's say, the maximal value of the mean square fluctuation and plot it as a function of R, what you find is that it increases, achieves a maximum at some R star equal to 0.895, and then starts decreasing, okay? So what's the mechanism? There is an optimal value of the resetting rate at which the fluctuations become maximal, which means that if there's a target up there, it can actually find the target more easily. But the mechanism for that is very different from resetting Brownian motion. So here, what's the mechanism of this enhanced fluctuation? The mechanism is the following. When R equals zero, no resetting, imagine that you are just taking a walk before dinner, and you are supposed to come back home for dinner exactly at eight o'clock — that's your bridge condition. So you don't walk too far, because you have to come back by walking. But if you can call an Uber taxi, if you have that option, you can just come back at the last minute, so you would like to venture out further. So that's the mechanism, okay?
And so we call it the fluctuation-enhancing mechanism. So you see that for the bridge there is indeed an optimal value of the resetting rate, but the physical reason is very different from resetting Brownian motion, where the mechanism was just cutting off the trajectories that wander off to minus infinity. So this is the main result, and after that we can do many things. Do you have an idea what the magic number R star is? No magic number; you just have to take this function and set its derivative equal to zero, that's all, there's nothing magic. In fact, the actual value of this R star is not very important; it depends on the observable. Here I'm looking at the x squared, but I'll just show you — in fact if you compute, for example, the hitting probability, the probability that the target will be found, then you find a different value of R star. But the fact that there is an R star no matter which observable you look at — that's because of this enhancing mechanism. Okay, so here is the hitting probability. Again, you can compute the hitting probability exactly. The hitting probability means: what's the probability that the target will be found within this fixed time tf? Because there is a finite probability that you may not find the target. So you can ask: what is the probability that I'll find the target? And again you can compute this explicitly — I'm not showing you the formula — but if you again plot this as a function of R, you again see that there's an optimal value R star at which it becomes maximal, okay? And then another thing is the expected maximum; you can calculate that, and it also shows the same phenomenon, and it occurs in all dimensions.
So for example, I can go to higher dimensions: I start from the origin, and suppose I go to the point (1, 1, 1) at tf, and you see that for d equal to one, two, three it is always the same phenomenon. The blue curve is R equal to zero; you increase resetting, and the fluctuations first increase and then decrease. It happens in all dimensions, okay? So I think my time is up. There was a third part which I don't have time to talk about, but mainly the question there, as I said, is: how do we actually numerically generate such resetting Brownian bridges, okay? I mean, beyond the costly naive way. And for that there's actually a way to compute an effective Langevin equation, which is exact, where you have a drift term and a resetting rate, but now the drift as well as the resetting term are both space and time dependent. You can compute them explicitly, and that way you can generate these trajectories with the correct statistical weight, without any rejection, basically. But okay, that's for maybe another talk, so let me summarize. So again: the resetting Brownian bridge is an efficient search strategy for diffusive searches with a finite duration Tf, and the optimality paradigm holds via this new mechanism, the fluctuation-enhancing mechanism, which is completely different from that of the usual resetting Brownian motion. And we can actually compute an exact effective Langevin equation, with a space-time-dependent drift and resetting rate, which generates these paths in a rejection-free way. And so, to conclude, stochastic resetting keeps on popping up with very rich, interesting and startling dynamical phenomena. And the nice thing is that this is a very simple system, so you can actually do many calculations analytically, so that's quite nice. So let me acknowledge my collaborators over many years.
So these are all the graduate students and master students, also including people from here, Gennaro and Andrea, and here are some selected references on stochastic resetting. So thank you very much. Thank you very much for the talk. We have time for a few questions. Are there any questions in the audience or online? I can maybe ask one. I'm sure you are already thinking about it or have calculated it already. Have you looked at the extreme statistics of this resetting bridge? Yes, I went too fast there. We computed the expected maximum, for example, of this process. Again, you can compute it, and again you find a sort of R star there for which it becomes maximal, actually. So is there any connection between the expected maximum without resetting and the optimal rate with resetting? So basically, that you would have a rate that on average switches after you have reached the maximum, or something? Not really. I mean, for all the observables it's quite different; you cannot connect the R star of each of the observables so easily, basically. So it's more subtle? Yeah, it's more subtle than that, yes. Okay, so let's see if you have already considered what I was suggesting. Imagine now that your home is in a different place compared to where you reset; then, depending on whether it's in the direction of the target or not, there is this competition between enhancing and reducing. Yes, yes — this is an interesting question; let me take note of it. But you know, in all these resetting problems, I always say that resetting to any fixed position is like a Green's function for resetting to an arbitrary position. You just have to integrate over the kernel.
So essentially one can do it, but that could be an interesting effect, because we know that, for example, if you have a spread in the initial condition — this was in Ciliberto's experiments — it gives rise to a lot of interesting effects. So I'm sure here also, if you have a spread in the resetting positions, that will give rise to quite a bit of interesting physics. So there are still quite a few open problems here. Okay. So have you thought about this model, but in the presence of some sort of non-equilibrium besides the resetting? Because you look- You mean- This diffusion, which is a passive motion. So you could have a potential or an external non-conservative force, etc. In that case, you can also discuss the cost of the process. Yes, sure. Nobody has done that, so it's open. But you didn't even consider a particle with a drift? No, no. This was just the minimal model. It's two-month-old work, basically. So we just looked at these basic things. We have not really looked at the other modifications of it. Any more questions? To what extent do you think non-instantaneous resetting would affect this? Yeah, this is a good question again. So this is related to Ciliberto's experiment. In all these models you always assume that resetting is instantaneous, but of course, in reality, it's never instantaneous. So in the experiment, for example, the protocol they use is the following: they let the particle diffuse for a while, and then they switch on an optical trap, and the particle relaxes in the optical trap. During the relaxation period, you don't take any measurement. At the end, it comes back to some equilibrium position, and when it comes back to the equilibrium position, it's never a delta function. It's resetting to one fixed position, but there's a spread, basically — this is what Anja was referring to.
There's a spread, and this spread actually has a very interesting effect. This is what came out of Ciliberto's experiments, basically. So non-instantaneous resetting has important effects and it should be studied, but again, it has not been studied. I suspect that the bridge constraint plus non-instantaneous resetting would have an interesting effect. Right. Time for the last question. Yes. I mean, for d greater than or equal to 2, you always have to have a target of finite size. As you know, with diffusion you would otherwise miss a point target. So you just have to put a circle, or a sphere, of finite radius, basically. Can you see the number? Yeah. Okay. If there are no more questions, I suggest we move the discussion to the coffee break. Let's thank the speaker again. Thank you for coming.