[Opening remarks garbled in the recording.] ...I'll talk about reset, which we've already heard about; about run-and-tumble dynamics, which we've sort of alluded to before but I'll say a little bit more about; and about mathematical modelling of that with renewal-reward processes. But before I get into the details I want to introduce the various words you saw in the title of the talk. So, current fluctuations. The idea is that we're interested in currents in stochastic particle systems, but not just in expected values of currents: in their distributions, the probability of seeing a current away from the mean. To set the scene, think for now of a process without memory, a time-homogeneous Markov chain in discrete time. You can think about the current between two states in the chain, or for example between two sites on a lattice, simply by counting minus one when a particle jumps backwards across the bond between the sites and plus one when a particle jumps forwards. It's like counting the cars that go past you on the road, and if you do this for some number of time steps you'll have an integrated current. As we know, in non-equilibrium systems one typically has non-zero mean currents, so some kind of flow. But more than that, in Markov processes the distribution of the time-averaged current, that is, the time-integrated current after n time steps divided by the number of steps n, generically obeys a so-called large deviation principle.
Very loosely speaking, that means that in the long-time limit the probability that the time-averaged current takes some particular value j looks asymptotically like the exponential of minus some rate function I(j) multiplied by the number of time steps n. So this rate function quantifies how likely you are to see a fluctuation away from the mean. You can also think back to what you do in undergraduate probability courses and characterise the distribution with a generating function. So I can define a generating function as the expectation of the exponential of some conjugate parameter k multiplied by the time-integrated current. In the long-time limit that also has a scaling form with some function in the exponent; this lambda(k) is known as the scaled cumulant generating function, and you can write it more formally as a limit. These things might seem a bit formal, but actually they give you some insight into the structure of non-equilibrium statistical mechanics. In particular, the scaled cumulant generating function plays a role analogous to the free energy in equilibrium, and it's related to the rate function by the Legendre transform. If lambda(k) is differentiable, then the Legendre-Fenchel transform gives you the rate function directly. If the scaled cumulant generating function is not differentiable, the Legendre-Fenchel transform in fact only gives the convex hull of the rate function. And in general, just as in equilibrium, non-analytic points of lambda(k) correspond to phase transitions; here they correspond to so-called dynamical phase transitions. So that's the picture for Markov processes. The question now is what happens if we add some form of memory or some form of reset. We've already heard about reset in the last talk, so I can give a fairly brief introduction. Reset could be as simple as resetting a spin, resetting an internal clock, resetting some dynamics.
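These definitions can be made concrete with a minimal sketch (my own illustration, not from the talk): take the simplest possible current, a sum of i.i.d. biased plus-or-minus-one steps, where the scaled cumulant generating function is known in closed form, and recover the rate function by a numerical Legendre-Fenchel transform.

```python
import numpy as np

# Current = sum of i.i.d. +/-1 steps: forward with prob p, backward with 1-p.
# For this toy case the SCGF has the closed form
#   lambda(k) = log( p*e^k + (1-p)*e^{-k} ).
p = 0.7
lam = lambda k: np.log(p * np.exp(k) + (1 - p) * np.exp(-k))

# Rate function via the Legendre-Fenchel transform: I(j) = sup_k [ k*j - lambda(k) ]
ks = np.linspace(-5, 5, 20001)
def rate(j):
    return np.max(ks * j - lam(ks))

j_mean = 2 * p - 1          # mean current = lambda'(0)
print(rate(j_mean))          # ~0: a typical fluctuation costs nothing
print(rate(0.0))             # >0: zero net current is exponentially unlikely
```

The two printed values illustrate the general statement in the talk: the rate function vanishes at the mean current and is strictly positive away from it.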
The classic case, of course, is to take a searching particle and put it back to a fixed position, but you can also think of restoring a system to some known distribution; and there are very many examples, as was nicely explained a few moments ago. In the first part of today's talk I'm going to talk about reset dynamics and phase transitions; in the second part we'll get on to run-and-tumble processes. The idea here is a sort of cartoon picture of how bacteria move. A bacterium tends to move in a particular direction for some randomly distributed length of time, then it stops, has a kind of dizzy fit, throws its flagella around, tumbles, chooses a different direction, and then tends to move in that direction for a bit. This is actually connected to reset processes: it's a kind of reset process where the thing you're resetting is not the position of the particle but its preferred direction, and you can use very similar mathematical formalism to treat it. I'm going to be interested in a general class of run-and-tumble models consisting of some known Markov process with a preferred direction (you can think, if you want to be concrete, of a random walk, but it could be something much more complicated), punctuated by stochastic resets of that preferred direction. The parts between the resets we call runs, and the resets in this language we call tumbles. If the length of time between resets is geometrically distributed in discrete time, or exponentially distributed in continuous time, then of course the process is still Markov, but on an extended state space: it's not Markov if you just track position, but if you include the preferred direction you're back to a Markov process. If the time between resets is not geometrically or exponentially distributed, you have something more complicated.
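A minimal sketch of this class of models (my own illustration, with assumed parameter values, not code from the talk): a discrete-time walker whose preferred direction is redrawn at geometrically distributed tumble times, so that position alone is non-Markovian but the pair (position, direction) is a Markov chain.

```python
import numpy as np

rng = np.random.default_rng(0)
q, p_run, t_max, walkers = 0.1, 0.9, 5_000, 500   # tumble prob/step, run bias

x = np.zeros(walkers)                              # positions
d = np.where(rng.random(walkers) < 0.5, 1, -1)     # preferred directions
for _ in range(t_max):
    tumble = rng.random(walkers) < q
    # on a tumble step: redraw the preferred direction (symmetrically here)
    d = np.where(tumble, np.where(rng.random(walkers) < 0.5, 1, -1), d)
    # on a run step: move, biased along the current preferred direction
    step = np.where(rng.random(walkers) < p_run, d, -d)
    x += np.where(tumble, 0, step)

# With symmetric tumbles the mean displacement is zero, but the runs make
# successive steps correlated: (x, d) together are Markov, x alone is not.
print(x.mean(), x.std())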
In the second part of the talk I'm going to discuss a thermodynamic uncertainty relation for this class of models. Okay, so that's where we're going: we've done the introduction, and then we'll have these two parts. So, the first part, phase transitions. What I want to show here is that there's a connection between these relatively recent models of reset and a very old model, a fifty-year-old model now, of DNA denaturation, and that we can basically steal a lot of results from this old model and apply them to the reset model. This is work with Hugo Touchette from a few years ago. Okay, back to our reset framework. We start with a Markov chain evolving in discrete time, but now we allow the transition probabilities to have some weak time dependence, in such a way that there's still a well-defined stationary state. Again we count the integrated current after n time steps, and we assume that even with these weakly time-dependent transition probabilities, chosen in such a way, there's a generating function of the form we saw before, with the large deviation scaling. I put a zero here to indicate that this is the generating function for the process without reset. Now we add reset on top of that in the following way. At every time step, with probability f, we reset the process. That means no current flows: the current stays at whatever value we've measured so far, with no increment. The process is reset, perhaps by resetting the transition probabilities or by putting a particle back to a particular place, something like that. With probability 1 minus f the process just evolves in its ordinary way and the current is incremented. The question is: how does the generating function change when we add this reset?
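To make the setup concrete, here is a hedged sketch (my own minimal example, with illustrative parameters): if the underlying process is a biased plus-or-minus-one random walk and the reset is the simple geometric one just described, the generating function factorises exactly per step, so a Monte Carlo estimate of the scaled cumulant generating function can be checked against a closed form. The interesting physics in the talk comes from weak time dependence, which breaks exactly this factorisation.

```python
import numpy as np

rng = np.random.default_rng(1)
f, p, k, n, samples = 0.3, 0.7, 0.3, 50, 100_000  # illustrative parameters

# Each step: with probability f reset (current increment 0),
# otherwise increment the current by +1 (prob p) or -1 (prob 1-p).
resets = rng.random((samples, n)) < f
steps = np.where(rng.random((samples, n)) < p, 1, -1)
J = np.sum(np.where(resets, 0, steps), axis=1)

# Monte Carlo estimate of (1/n) log E[ exp(k J_n) ] ...
scgf_mc = np.log(np.mean(np.exp(k * J))) / n
# ... versus the exact per-step factorisation (valid for geometric reset only):
scgf_exact = np.log(f + (1 - f) * (p * np.exp(k) + (1 - p) * np.exp(-k)))
print(scgf_mc, scgf_exact)
```

For geometric reset the two numbers agree; with memoryful (non-geometric) reset statistics, the question of what replaces `scgf_exact` is precisely what the DNA mapping below answers.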
In particular, in the long-time limit there are lots of finite-time corrections that don't matter, but if we keep resetting our original process then perhaps those finite-time contributions do play a role. In particular, is there a phase transition to some regime where current fluctuations are optimally realised by trajectories that just don't reset at all? It turns out that this question is connected, perhaps surprisingly, to a very old model of DNA. So here's a schematic of DNA, and you'll remember that DNA has a helix structure with two strands, connected by bonds between pairs of monomers. If you heat up the DNA, at some point the bonds start to break, so you get these bubbles, bits of the DNA where the bonds have broken. That's called denaturation. Typically in these models, when you reach some critical temperature the whole thing just unzips all at once, and that phase transition is related to the kind of phase transition I'm interested in. So let's see that a little more concretely. The claim is that the generating function in my model maps to the partition function in this old model, the Poland-Scheraga model of DNA. At the top here you see a cartoon of the Poland-Scheraga model: the horizontal lines you can think of as the two strands of the DNA, and they're linked by bonds which in some cases are attached and in other cases broken, so you have these bubbles where the DNA is starting to pull apart. What I do now is map the space axis here to a time axis in my model. The parts of the DNA where the bonds are connected I map to time steps where there's a reset in my model, so the current stays at whatever value it's currently got (this is current on the y-axis). And the parts of the DNA where the bonds are separated I map to time steps where there's no reset, so here the current goes up a bit and then comes down.
Now we have another few time steps, three time steps with reset, and then the process starts again and the current increments. So if a segment is denaturated, the current just evolves in its ordinary way, going up or down; for example, for a random walk you can have plus one or minus one. Whereas if it's not denaturated, the current increment is just zero, because you have a reset. In the PS model you're interested in phase transitions as a function of temperature, and the order parameter is the fraction of bound monomers. In our model we're interested in phase transitions as a function of the conjugate parameter k, these dynamical phase transitions, and the order parameter, by analogy, is the fraction of steps with reset. Once you've made this mapping you can essentially carry across a lot of calculations from the PS model, and it works as follows. First I write down the current generating function for a loop of n consecutive steps without reset. That's very easy, because I already defined my generating function for the process without reset; I just multiply by the probability of having n steps without reset, and that's the thing I've called u here. Then I do the same for a period of n consecutive reset steps. Unfortunately with the scaling we've got at the moment this has disappeared off the bottom of the slide, but fortunately it's super easy: if I have n consecutive reset steps there's no current, so I just need the probability of n steps with reset. Imagine there's an f to the n down here and you won't go far wrong. So I have these pieces, the generating function for the part with reset and the part without reset, and I want to put them together to get the generating function for the whole thing.
But it's a bit tricky, because the pieces have to add up to the time I'm interested in. The trick, the same as in the PS model and in many other models, is a discrete Laplace transform, so that the total number of time steps fluctuates and we don't have to worry about the constraint on the length. So this is the thing I really want, the generating function with reset; here is its Laplace transform G tilde, and here are the Laplace transforms of the loops without reset, u tilde, and the segments with reset, v tilde, which you can write down already. Now if we put these bits together to calculate G tilde, it's actually very easy, because every history of the process consists of alternating segments: reset, no reset, reset, no reset, and so on. So you end up just doing a geometric sum, and you find very simply that G tilde has this form: you see the familiar geometric denominator, and the numerator just comes from boundary terms. Actually, you don't even have to do the inverse Laplace transform to find the long-time behaviour. You can get the scaled cumulant generating function directly: it's determined by (with my sign convention) the largest real z at which this object diverges. In the absence of a phase transition that's super easy: it diverges when the denominator is zero; let's call that value of z z-star, and the scaled cumulant generating function is then just the log of z-star. On the other hand, it can be that as you change k, coming down from infinite z, before you get to z-star you hit the point at which u tilde itself diverges. You can show you never have to worry about v tilde, but u tilde might diverge first. At that boundary point you get a crossover to a different form of the cumulant generating function, and it's very easy to see that it just corresponds to the scaled cumulant generating function for the process without reset, plus the price you pay per time step for not resetting.
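In symbols, the structure just described looks schematically as follows (my reconstruction of the slide's notation from the spoken description; the exact boundary factors depend on conventions):

```latex
% z-transforms of the two kinds of segment:
%   \tilde{u}(k,z): a reset-free loop of n steps, weighted by its
%                   probability and its current generating factor,
%   \tilde{v}(z):   a stretch of consecutive reset steps (no current).
\tilde{G}(k,z) \sim \frac{\text{boundary terms}}{1-\tilde{u}(k,z)\,\tilde{v}(z)}\,,
\qquad
\lambda(k) = \log z^{*}(k),
% where z^*(k) is the largest real z at which \tilde{G} diverges:
% either the root of \tilde{u}\tilde{v} = 1 (the resetting phase), or the
% point where \tilde{u} itself first diverges (the non-resetting phase,
% giving \lambda_0(k) plus the per-step cost of never resetting).
```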
So at the value of k where you see this crossover between the poles, you have a phase transition to a regime where the cheapest way to realise your current fluctuation is just not to reset at all, and to pay the price for doing that. (Small pause while the clicker works in an old-fashioned way... it's come back to life now; it just seems to be some kind of stochastic delay.) From this mapping you get not just the existence of phase transitions but the ability to classify them. I don't have time to go into the details, but it turns out that this depends on the corrections to the leading terms in the generating function for the sections without reset. So this is what we're used to: the part without reset, the probability of not resetting, will have some finite-time corrections, and if they have a power-law form decaying as n to the minus c, where c in general depends on the value of k, then it's this exponent that determines the nature of the phase transition. You can show that if the exponent is less than or equal to 1 you have no phase transition at all; if it's greater than 1 but less than or equal to 2 you have a continuous dynamical phase transition; and if it's greater than 2 you have a first-order phase transition, where you have a cusp, a kink, in the scaled cumulant generating function. That all comes from an exactly analogous treatment of the PS model; the only difference here is that our exponent in general depends on k. You can check this with very simple models. Here's about the simplest thing you can think of: suppose you take a random walk where the i-th jump after the last reset always has a step length drawn from a Gaussian distribution with mean 0, but with a variance that depends on the time since the reset and, as you'll see, approaches a constant. Then you can do all the calculations and compare them with numerics for the scaled cumulant generating function. For b equals 0 you just have an ordinary random walk, of course, and you have the blue line
which is perfectly smooth, nothing to see there. For b a bit bigger, so the dependence on the time since the reset is a bit stronger, you see the orange line, which at some point, at this blob, has a transition where it hits the black line; the black line corresponds to the scaled cumulant generating function without reset at all, but everything is smooth here. If b is even bigger, that's the green line: you again see a transition to the case without reset, but now with a non-differentiable point, a kink, coming in here. It all seems to work. Another thing you can do is check results from elsewhere. Here's an application that takes us back to run and tumble; run and tumble, as I said, is just resetting the direction. You can modify the previous framework (it turns out to be easier to consider a tumble and a run as combined events), and then you can do exactly the same thing. What you find is that you recover some results calculated in quite a different way by Christiaan Van den Broeck and collaborators, which predict the order of a dynamical phase transition in this run-and-tumble model depending on the dimension, and that all comes very nicely out of our framework. Okay, and then (don't worry, I see them waving a sign at me) the second part, much shorter. I want to tell you something about thermodynamic uncertainty in these kinds of run-and-tumble processes; that's work with a former PhD student, Mayank Shreshtha. So what are thermodynamic uncertainty relations? Well, they compare the value of the mean current and the scaled variance, the variance divided by time, and they give you a measure of uncertainty or precision, depending on which way up you compare these things. In the ordinary Markovian case in continuous time there is a now very well-established relation which puts a bound on this fraction, j-bar squared over the scaled variance, in terms of the entropy production. It tells you that if you want a more precise current, so you want this
variance to be smaller, you need to pay a larger thermodynamic cost. In discrete time there's a different bound, due to Proesmans and Van den Broeck, which you can check: this is an ordinary random walk, taken from the original paper, where the red line is the left-hand side of the relation and the green line on the right-hand side gives you the bound, and this works for any current. The question then is: what about run-and-tumble dynamics? You can actually do some exact things. You can treat the model as a renewal process, with the time between tumbles some random variable, capital N. Then the current at time t (it could be discrete or continuous, I don't mind too much at the moment) is just the sum of the currents from all of the different runs, counted by the number of tumbles up to time t, plus the bit of current since the last tumble. Furthermore, we assume, for the kind of models we're interested in, that the current for each run is a random variable with a multiplicative structure: it's a product of capital T, which depends only on the tumble (setting a direction, for example), and capital R, which depends only on the run. Then you can use the machinery of renewal-reward theory to get exact results for the asymptotic mean and the scaled variance in terms of a whole bunch of moments: you have to know moments of T and R and N, and cross moments, but if you know those things you can calculate the uncertainty exactly. The question is: what if you don't know those things? Can you still do something if you only know some of them? Well, yes, if you're happy to assume that the part of the current related to the run, its mean and its scaled variance, scale with the length of the run, which here is N minus 1 because you have one time step for the tumble itself. That's exactly true for random walk models, and expected to be a good approximation for other models, at least for long run lengths. If you assume that,
then you can get a simpler expression for the asymptotic mean current, and you can neglect some terms and get a bound on the scaled variance. You can then combine that with the original discrete-time Proesmans-Van den Broeck bound applied to the tumble process, which is an ordinary Markov process, to get a bound on the uncertainty in the whole model. This is very nice, because you now don't need to know the statistics of the R variable for the runs: you've just got a prefactor which depends on the statistics of the run lengths, and a fixed term that involves the tumble fluctuations. (Sorry, I see you writing something, but I am very nearly done; I have to click twice to get through every slide, that's my excuse.) You can check this with the simplest possible example: a random walk with geometrically distributed tumbles, where in each run you have a preferred direction and a probability p-prime of going in that preferred direction, and the preferred direction is set to the right with probability p at each tumble, or to the left with probability 1 minus p, with capital T just plus or minus 1. So you have a bias in the tumbles as well as in the runs. If you then plot this uncertainty quantity, the green points are from simulation, the blue is the exact result, which you can calculate in this case, and the red dashed line is our bound. The black line, by the way, is the original Proesmans-Van den Broeck bound, using the fact that this is Markov on an extended state space; it still works, but it's pretty useless apart from very close to tumble biases of 0 and 1. This is plotted as a function of the tumble bias. You can also do non-geometrically distributed run lengths, which works very nicely, and many-particle models, and continuous time, and again the bound works. But I think I am out of time, so hopefully I've convinced you that you have this nice mapping from the partition function in this old DNA model to the current generating function in a reset model, and there are various other things you can do: I saw that Francesco was loitering online; you can
think about large deviations of ratio quantities and non-homogeneous reset. And then I talked about thermodynamic uncertainty and what you learn from renewal-reward processes, but there are many other properties you can think about for these run-and-tumble processes; there's some work with Edgar and friends trying to unpick some of those, which is ongoing. But let me stop there. Thank you very much.

[Chair:] You're still on time for the break, enough for the questions; we have time for a couple of quick questions. Still shy? Any questions?

[In answer to a question:] That's basically what we did in this random walk model: the step length depends on this parameter b, which in some sense quantifies the memory since the last reset, and by tuning that you can tune the nature of the phase transition, because as you do that you change this c, this power here. In the PS model they sort of by hand change the exponent for the loops; we change the exponent that comes in here in a slightly more indirect way, by changing the time dependence of the steps since reset, and you get the same thing. The only difference with the PS model is that they always have a constant here, never a dependence on temperature, whereas we have a dependence on our conjugate parameter. Satisfied? Any other questions? Yes, Andrea.

[In answer to a question:] It was genuinely one of those moments where somehow I just made a connection. I think Derrida was explaining the PS model to me for some totally unconnected reason, and I'd been thinking about reset, and as soon as you see that it's just mapping time to space and space to time, you've essentially got everything.

[Audience exchange, partly inaudible, about running the mapping the other way, with the process stopping in its current state.] Maybe my mind just works in a perverse way, but this is a situation that, after a night in the pub, I understand; we should talk about that. Any other questions? Any questions in the
chat? Does someone see that? Okay, so if there are questions you can send Sarah an email. Sorry, I can't see the bottom of my slide; otherwise I don't have any instructions for the break. Andrea, do you want to say something, or shall we just leave the room? Then let's thank all the speakers again, and we reconvene at
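[Editor's illustration of the renewal-reward machinery discussed in the talk. This is my own sketch with assumed ingredients, not code from the speaker: geometric cycle lengths N, a tumble variable T = plus or minus 1 with bias p, and a run contribution R that is a biased walk of N minus 1 steps. The scaled-variance formula Var(C - jN)/E[N] is the standard renewal-reward central limit theorem, not a formula quoted in the talk.]

```python
import numpy as np

rng = np.random.default_rng(3)
p, pp, q = 0.8, 0.7, 0.2        # tumble bias, run bias, tumble probability

def cycles(n):
    """Sample n renewal cycles: (cycle length N, cycle current C = T*R)."""
    N = rng.geometric(q, n)                      # tumble step + run of N-1 steps
    T = np.where(rng.random(n) < p, 1, -1)       # direction set at the tumble
    R = 2 * rng.binomial(N - 1, pp) - (N - 1)    # run displacement along T
    return N, T * R

# Renewal-reward asymptotics from cycle moments:
N, C = cycles(500_000)
j = C.mean() / N.mean()                          # mean current  E[C]/E[N]
sig2 = np.var(C - j * N) / N.mean()              # scaled variance (renewal CLT)

# Cross-check against a direct step-by-step simulation up to a fixed time:
t_max, walkers = 2000, 4000
J = np.zeros(walkers)
d = np.where(rng.random(walkers) < p, 1, -1)
for _ in range(t_max):
    tumble = rng.random(walkers) < q
    d = np.where(tumble, np.where(rng.random(walkers) < p, 1, -1), d)
    step = np.where(rng.random(walkers) < pp, d, -d)
    J += np.where(tumble, 0, step)               # no current on a tumble step

print(j, J.mean() / t_max)        # asymptotic vs empirical mean current
print(sig2, J.var() / t_max)      # asymptotic vs empirical scaled variance
```

The cycle-moment predictions and the brute-force trajectory estimates agree within sampling error, which is the content of the "exact results for the asymptotic mean and the scaled variance" mentioned in the second part of the talk.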