Well, thank you very much, thanks to the organizers and thanks to the participants for joining. I have just written down the title of my talk, and I can understand that perhaps these words sound a bit strange, not really comprehensible, and one is curious about their meaning; that is of course what I will explain. But I must also admit there is a certain will, a certain happiness even, to speak about some things which are maybe not always emphasized in this project called stochastic thermodynamics. I am not really working in stochastic thermodynamics myself, but I still wanted to start with some basic remarks about the enterprise. I am, I believe, people tell me, partially also responsible for stochastic thermodynamics because, say in the period 1999-2003, I wrote papers explaining how one can find the so-called Gallavotti-Cohen type fluctuation theorems, all kinds of versions of them, due to the fact that, as I also wrote then, if one has a dynamical system or a stochastic dynamical system and one looks at the ratio of the probability of a trajectory versus the probability of the time-reversed trajectory, then very often, and that is basically what is called the condition of local detailed balance, one can identify that ratio with the physical entropy flux into the reservoir. So the time-antisymmetric part, if you wish, of the probability of trajectories can be identified with entropy flux, and that, after one or two lines, gives you fluctuation theorems and various aspects related to the time-antisymmetric fluctuation sector of nonequilibrium systems. Now, since there is a time-antisymmetric fluctuation sector, there must also exist a time-symmetric fluctuation sector.
And that is what I am speaking about today: it is this aspect of frenesy and frenetic steering, which will involve, instead of the entropic or dissipative or time-antisymmetric aspect of the nonequilibrium system, the time-symmetric aspect of the fluctuations. Okay, so let me start by somehow explaining the meaning of it all, or giving a bit of context for what we are talking about. If we speak about steering in particular, we should have in mind things like sitting at the steering wheel: we try to control, we try to manipulate the architecture or the system components to go, in a wide sense, in a certain direction. That idea has of course been entertained in various ways. Let me say that there is a step one, the primitive step: just using the existing potential differences that are available at any moment. What do I mean by that? I mean the idea of what in English, I guess, you call a marble run. You make a kind of trajectory, and you are using gravity here, a certain height: you have your particle, your marble, and it just runs down because of the potential difference. In other words, you are using gravity, or a potential difference more generally, to steer the particle. What is fun, and what I often did as a kid, was to make a way for the marbles to find ground level; and you can start to be more inventive, you can select: you have big marbles and small marbles, and you can steer them depending on how you make that runway for your marbles. With a more complicated, more professional word, we would today call that gradient flow.
So gradient flow is nothing more than using, in general, free energy differences or energy differences; it is really an energetic steering towards the goal, and thereby you can make all kinds of selections. The initial condition is at a higher energetic level, say, or a higher free-energy level, and via boundary conditions, possibly via the landscape that you have built for your runway, you get a particular outcome, a particular steering. In fact, if we look at the idea of neural networks, all the way to what today is called machine learning or deep learning, one very important example which made very explicit how equilibrium statistical mechanics is helping here is of course the Hopfield model, already 40 years old by now. What you are basically doing there is building a landscape in energy, which is of course the same thing as a free energy at low temperature, where you associate patterns to the local minima. If you think of them as "this is a cat, this is a dog, this is a horse", then presenting a somewhat blurred picture of a cat would put you somewhere on the landscape uphill, not quite in the minimum, and basically what you do is give a dynamics which is gradient flow along that free-energy landscape to hit the ideal, platonic cat; then we have association, we have pattern recovery. The Hopfield model is just an explicit interacting spin system; maybe I can as well write down the coupling parameters. We have a Hamiltonian which depends on the spins but also on the patterns, so it is the usual kind of Ising, a mean-field Ising-type model. This Hamiltonian has the following mean-field structure: you sum over all the sites in a graph, think of it as a square lattice or so.
You do the usual Ising construction: you multiply the spin at i and the spin at j, sigma_i sigma_j, but now you have a coupling between all these pairs ij, a symmetric coupling J_ij, which depends on the patterns. In particular, this J_ij, given by what is called the Hebbian rule, is something like a sum over the p patterns, J_ij = (1/N) sum over mu of xi_i^mu xi_j^mu, where xi^mu, with mu running from 1 to p, is a pattern, basically a collection of bits, counted as having components i and j. These patterns correspond to the minima in the free-energy landscape, and these binary patterns are the goal of the gradient flow, for a dynamics which is energetically steered by this Hopfield Hamiltonian. And that can be extended, not for just one layer: this can be extended to so-called layered or hierarchical associative memory networks, but it is all the time using the idea of going down a cost function, a free-energy landscape or an energy landscape, and minimizing it in a kind of gradient flow to find the minima, which have a certain stability, certainly at low temperature. That, in general, is what I call the primitive step in steering: you use the interactions, you use the available energy differences, you use gravity, like you do for the marble run, to steer yourself down the free energy. There is a step two, which in another sense was taken much earlier; that is like using fire. Just as Prometheus made it possible to start cooking and to apply thermodynamics, we learned, basically from Sadi Carnot, about the motive power of fire: how to make heat engines, to make real machines. And there you already add, in fact, nonequilibrium.
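As a small illustration of the Hebbian rule just described, here is a minimal sketch in Python. The sizes N and p, the random patterns, and the random seed are invented for the example, not from the talk; it builds J_ij = (1/N) sum_mu xi_i^mu xi_j^mu and checks that a stored pattern sits at (or very near) a local minimum of the Hopfield energy.

```python
import numpy as np

rng = np.random.default_rng(0)

N, p = 100, 3                            # N spins, p stored patterns (arbitrary)
xi = rng.choice([-1, 1], size=(p, N))    # binary patterns xi^mu

# Hebbian rule: J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling
J = xi.T @ xi / N
np.fill_diagonal(J, 0.0)

def energy(sigma):
    """Mean-field Hopfield Hamiltonian H = -1/2 sum_{ij} J_ij sigma_i sigma_j."""
    return -0.5 * sigma @ J @ sigma

# A stored pattern should be (close to) a local minimum of H:
# flipping a single spin of pattern 0 should not lower the energy.
sigma = xi[0].copy()
e0 = energy(sigma)
flips = []
for i in range(N):
    sigma[i] *= -1
    flips.append(energy(sigma) >= e0)
    sigma[i] *= -1

print(sum(flips) / N)   # fraction of single-spin flips that raise the energy
```

For p much smaller than N the crosstalk between patterns is weak, so essentially every single-spin flip of a stored pattern raises the energy, which is the sense in which the pattern is stored energetically.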
Maybe you consider, in the Carnot cycle say, reversible limits, but at least the idea is there of taking, for example, temporal changes, potentials that depend on time, effectively adding rotational forces, to get work done, for example by a machine; and that is nonequilibrium driving. Well, perhaps that can also help you to find back certain patterns. Last week, I think it was last week, I was at a very interesting workshop on signals of life, let me abbreviate the title as such. There was a talk there by Suri, and he was telling us about the enhancement of the associative memory recall that you have, for example, in the Hopfield model, by adding nonequilibrium driving. If you have these Ising-type models like the Hopfield model, you can make a stochastic Ising model, you can have a detailed balance dynamics with it that generates this gradient flow; but one can, in the spirit of adding nonequilibrium, with the patterns still stored energetically, hope that by adding nonequilibrium forces, rotational forces, so to speak adding fire, you effectively deepen the wells. So the patterns become more stable and you get to them more rapidly. That is what Suri was talking about, and you can still find his talk online if you are interested. So that is, instead of energetic steering, these ideas of, let me call it in general, making machines using extra fire and nonequilibrium, as I was also explaining last week: you get from energetic steering into what I would call entropic or dissipative steering, where you hope that the nonequilibrium driving or activity will somehow effectively deepen the wells so you can perhaps reach them even faster. What I would like to speak about today is somehow taking it all the way, and that is what I would call a third step, where the steering is kinetic.
So I have to explain that. So far, if you could follow me a little bit, you have seen that the patterns, the cat, the dog, the horse, whatever, these ideal patterns, are stored as minima of the energy or of the free energy, and you want to reach them as fast as possible. The patterns are stored as coupling parameters, basically, in the potential, in the self-energy or in the interaction potential. What I would like to do is quite change that idea and store the patterns not in the energy, not in the free energy, not in coupling parameters, but in time-symmetric dynamical activity parameters, in reactivities. That perhaps still sounds a bit mysterious, so let me start with a very elementary example, just to announce the idea; it will be very, very simple. I invite you to consider a random walk, a very simple random walk, in fact, on a ring. So think about a ring with, say, M sites and periodic boundary conditions. I have here a site, let me call it x, and here the site x+1; so I go clockwise here. The random walker is steered a little bit, and how can you steer this random walk? It starts somewhere, and I have to specify the transition rates: the transition rate, in continuous time, to hop from x to x+1, and also the rate for the opposite direction, from x+1 to x. Okay, so let me take a specific choice here. I can imagine that to every x there is associated some energy E(x). So perhaps I take something which would do detailed balance: I put an energy difference in the exponent, say a factor exp(-beta (E(x+1) - E(x))/2) in the rate to go from x to x+1.
For the reverse rate it would be the opposite sign, and beta is the inverse temperature; the energy difference is divided by two in the exponent. I hope you can read it: you recognize this as a standard detailed balance dynamics giving the Boltzmann distribution for the states on the ring. You get the canonical ensemble, namely the stationary probability to be at x is just given by exp(-beta E(x)), normalized. In other words, if beta is sufficiently large, you will typically sit in the minimum of the energy; that is kind of clear. But now let us add a driving. Since we are on a ring it is quite easy to add a driving, and let me take the amplitude to be epsilon: if I go from x to x+1 I add a factor exp(+epsilon/2), and for the reverse a factor exp(-epsilon/2); don't look at the one-half, it is not important, but epsilon is a constant. In that case there is a bias to go to the right, but still the energy, certainly at low temperature, will dominate the system. Even though you may be a bit faster in searching the landscape, you will still reach the minimum of the potential, of the energy. And now let us do something different. Let us forget for the moment about the energy; we could use it or not use it, but let me make it simpler. Let us assume I still have a bias, say, to go clockwise, but now I add a prefactor a(x) that depends on x, and for the opposite transition I have exp(-epsilon/2) but the same a(x). In other words, if I look at the a(x) here and here, they are the same. So it is really the bond which is characterized by a certain width or depth, whatever you want to call it, and it is time-symmetric: whether you cross the bond clockwise or counterclockwise, you have the same a(x).
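The detailed-balance part of this ring example is easy to check numerically. A minimal sketch, with arbitrary random energies and beta = 2 as illustrative choices: nearest-neighbour rates exp(-beta (E_y - E_x)/2) on the ring indeed give the Boltzmann weights as the stationary law.

```python
import numpy as np

rng = np.random.default_rng(3)
M, beta = 10, 2.0
E = rng.uniform(0.0, 1.0, size=M)     # site energies on the ring (arbitrary)

# Detailed-balance rates: k(x -> y) = exp(-beta (E_y - E_x) / 2) for neighbours
L = np.zeros((M, M))
for x in range(M):
    for y in ((x + 1) % M, (x - 1) % M):
        L[x, y] = np.exp(-beta * (E[y] - E[x]) / 2)
np.fill_diagonal(L, -L.sum(axis=1))   # generator: rows sum to zero

# Stationary distribution: left null vector of the generator, normalized
w, v = np.linalg.eig(L.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi /= pi.sum()

boltz = np.exp(-beta * E) / np.exp(-beta * E).sum()
print(np.max(np.abs(pi - boltz)))     # essentially zero: canonical ensemble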
Still you are biased, but there is this a(x). If the a(x) were the same for every x, then of course the stationary probability would be rotation invariant and, independent of epsilon, you would have 1/M as the probability of each site. But now suppose epsilon is sufficiently big, and ask where the stationary distribution will now sit. It turns out that a simple calculation tells you that the stationary probability to be at x goes like 1/a(x). In other words, once you are in nonequilibrium you can start to select the most dominant state: if I may use equilibrium statistical mechanics language, the stationary distribution will be sitting exactly there where the escape rate is smallest, proportional to 1/a(x). While this example is simple enough to contemplate, it teaches us a lesson: from the moment we are in nonequilibrium, and in fact far from equilibrium, as we can imagine for biological systems, where living and functioning would require it, we have another parameter to play with. We have these time-symmetric reactivities to play with, and they can be used to select where your dynamics will go, in the sense that the stationary distribution concentrates where a(x) is minimal. Note that if we had epsilon equal to zero, so under detailed balance, this a(x) would be completely irrelevant, as long as it is not zero: the probabilities of all the states would be identical, and it would not matter what profile of a(x) you have. Under detailed balance you do not see it. It is far from equilibrium that it really starts to dominate the stationary condition.
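The frenetic selection just described can be verified numerically. A sketch, assuming a ring of M = 12 sites, a strong driving epsilon = 6 and randomly drawn bond reactivities a(x), all illustrative choices: the exact stationary distribution of the driven walk is compared with the large-epsilon prediction pi(x) proportional to 1/a(x).

```python
import numpy as np

rng = np.random.default_rng(1)
M, eps = 12, 6.0                      # ring sites, strong driving (arbitrary)
a = rng.uniform(0.5, 2.0, size=M)     # time-symmetric bond reactivities a(x)

# Rates on the ring: k(x -> x+1) = a_x e^{+eps/2}, k(x+1 -> x) = a_x e^{-eps/2}
L = np.zeros((M, M))
for x in range(M):
    L[x, (x + 1) % M] = a[x] * np.exp(+eps / 2)     # clockwise over bond x
    L[(x + 1) % M, x] = a[x] * np.exp(-eps / 2)     # counterclockwise, same a_x
np.fill_diagonal(L, -L.sum(axis=1))

# Exact stationary distribution: left null vector of the generator
w, v = np.linalg.eig(L.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi /= pi.sum()

guess = (1 / a) / (1 / a).sum()       # prediction pi(x) ~ 1/a(x) at large eps
print(np.max(np.abs(pi - guess)))     # small: the low-a sites dominate
```

Note that the a(x) drop out of the stationary law entirely when eps = 0, exactly as said in the talk: it is the driving that makes the time-symmetric reactivities visible.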
Okay, so that is the main idea of frenetic steering: you are going to store patterns in the reactivities, in these time-symmetric components, so as to end up where you want to end up. This was just an example about the stationary condition, but the same applies, in fact, to the full probability of a trajectory as we had it in the beginning, because the probability of a trajectory omega is not only governed by the entropy fluxes; it is also governed, with a minus sign, by the time-symmetric part of the trajectory weight, and that is called the frenesy. All right, even though the time is also running in one direction for me here, let us be a little more specific and complicate matters slightly. [Moderator] You've got another ten minutes. [Christian] I have another ten minutes, wonderful. So if we speak about steering, then for classical particles the best way to steer them is of course to apply forces. But where will the forces come from? We will not do it like in the primitive step, using gravity or free energy or something like that. We will use another force, the statistical force. Namely, think of a probe, a particle with a certain position x_t at time t, which is coupled (these lines are just an imaginary, fanciful way of denoting the coupling) to other degrees of freedom, with configuration eta_t, also depending on time. And this configuration eta_t is supposed to be, let me say in the first instance, on a much faster timescale than the changes in the position of the probe.
So let me introduce the idea of a statistical force f(x). Suppose I have an interaction potential U(x, eta) between the probe position x and the configuration of spins eta; think of the environment as a spin configuration. If the spin configuration is much faster, there is a stationary distribution rho^x living on the etas which depends on the position of the probe, because they are coupled. So we can take the gradient with respect to x, like we do in mechanics, and average: this is just the mechanical force, but averaged over the spin environment. I repeat: a mechanical force averaged over the spin environment, and that defines an induced mean force, a statistical force, on the probe. That force is now what I will use for steering. The idea is the following. In equilibrium, let me remind you, when rho^x is just the canonical ensemble at a certain temperature, then this f(x) is in fact the gradient of a free energy, parameterized by the position x, where the free energy is essentially the logarithm of a partition function. I think that is well known: in equilibrium the mean statistical force is always derived from a potential, in particular from a free energy. If instead the etas satisfy a nonequilibrium stationary distribution, so if their dynamics is nonequilibrium, then the statistical force decomposes: it will, well, why not, have a gradient part, the gradient of something we call H, in honor of Helmholtz, who was one of the first to consider such decompositions of a force into a gradient part and a rotational part; and there will be a rotational part, which we may call the nonequilibrium part of the statistical force on the probe.
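The equilibrium statement above, that the mean statistical force is minus the gradient of a free energy, can be illustrated with a toy probe-spin model. The potential U and all parameter values below are invented for the sketch: a probe at position x coupled to a single fast two-state spin, with the spin equilibrated at fixed x.

```python
import numpy as np

beta = 1.0
etas = np.array([-1.0, 1.0])          # two fast spin states (toy environment)

def U(x, eta):
    """Toy probe-spin interaction potential (illustrative choice)."""
    return 0.5 * x**2 - 0.8 * x * eta

def mean_force(x):
    """f(x) = < -dU/dx > over the equilibrium spin distribution rho^x."""
    w = np.exp(-beta * U(x, etas))
    rho = w / w.sum()                 # canonical ensemble at fixed probe x
    dU = x - 0.8 * etas               # dU/dx for each spin state
    return -(rho * dU).sum()

def free_energy(x):
    """F(x) = -(1/beta) log sum_eta e^{-beta U(x, eta)}."""
    return -np.log(np.exp(-beta * U(x, etas)).sum()) / beta

# In equilibrium the statistical force is a gradient: f(x) = -dF/dx
x, h = 0.7, 1e-5
f_num = -(free_energy(x + h) - free_energy(x - h)) / (2 * h)
print(abs(mean_force(x) - f_num))     # ~0: the force derives from F(x)
```

With a nonequilibrium spin dynamics, rho^x would no longer be the canonical ensemble and this identity fails; the leftover, non-gradient piece is the rotational part discussed next.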
And the point is that the nonequilibrium part of the statistical force will in fact be nonzero only if there is a nontrivial frenesy. That was explained in, and here it is a good time for giving a reference, the paper together with Karel Netočný, published in the journal Chaos, in 2019 already. There it is explained how the nonequilibrium, rotational part of the statistical force essentially depends on the frenesy. I mean the following: if there is no frenesy, there is no nonequilibrium force; and if the frenesy is not a slave of the dissipation, in the sense that you are far enough from equilibrium, not just in linear order around equilibrium, in other words if there is a nontrivial frenesy which you can handle independently from the entropy fluxes, then this rotational part will exist. Moreover, whatever is parameterized by the frenesy you can use to steer, because you can start to steer the rotational part, and that also means you can generate currents. The extra thing that happens when you do steering in a nonequilibrium environment is that not only are you able to reach your preferred state by changing your dynamical activity, but you also have an extra tool. You can now store patterns, cats, dogs and horses, not only as static, particular states of your system, but also in currents. In other words, you can have a clockwise current mean a cat and a counterclockwise current mean a horse. The nonequilibrium not only allows you to use the frenesy and the frenetic parameters to steer, but also to have your patterns encoded not just as static stationary conditions, but via currents.
The first paper we wrote about that was in collaboration with Bram Lefebvre, and you can consult it in the Journal of Statistical Physics, 2023, so this year. There the first idea of frenetic steering was applied to select states, not currents yet, by encoding the patterns in the frenetic components. If you want to know the details, I would recommend you look at this reference to understand the basic idea of what it means to store patterns in the reactivities, in the dynamical activity. You will also see that the models presented there are not quite yet in the range of what we would call associative memory, in the sense of recovery or remembering or pattern recognition, but are more a matter of imposing the pattern. What we are doing right now, in the same spirit, is carrying out the idea I explained here: the patterns you want to teach are put into the frenesy to generate rotational forces, which give rise to various currents. In other words, the signals, the patterns, are now associated with, how can I say, current structures, particular currents; and these currents are obtained in the following sense: the frenesy is a function of the particular spin configuration that you have, but also of the patterns, think of these Hopfield patterns, in such a way that if your initial eta is sufficiently close to a given pattern, then it is steered towards a current corresponding to that particular pattern.
So that is, I think, all I want to say; well, I would like to say more, but this is basically the summary of the idea of frenetic steering that I wanted to give today, as a kind of new recipe or new algorithm if you wish, something I would like to add as complementary to the more traditional energetic or dissipative gradient-flow associative memory that one is using, with enormous successes of course, from the perceptron all the way to neural networks and machine learning, but which I believe is more relevant for biological systems. It would be hard for me to believe that my brain, for example, would use free energy, would use detailed balance gradient-flow dynamics, for storing or remembering patterns. Rather, I would believe that there is a background of nonequilibrium, a given epsilon like we had before, a given driving which is always there, and that allows the patterns, whatever you have learned, to be stored in information which is not so much thermodynamic but kinetic: it sits in all kinds of reactivities, rather than being stored energetically or free-energetically, as we usually have it in the paradigm of neural networks. I think I said enough; perhaps I can answer some questions. Thank you very much. [Moderator] Thank you very much. I don't think anyone has posted a question in the chat so far, so could anyone who has a question raise your hand, with an emphasis on junior people having a go. I'm looking at the list; someone has raised their hand.
[Question] Really interesting talk. I'd have to really dive into the details, but the obvious naive question, I guess, is: how would you train currents based on data, just like you train free-energy-based neural networks? [Christian] Indeed, there are always two parts, as you so well emphasize: there is the idea of training, which can be of different types, and there is the idea of recovering or remembering; so there is the training step and there is the recognition step. For the moment, it is true that we have been somewhat more obsessed by the recollection than by the training, so for the moment our training is basically the most naive training you can imagine, namely trial and error. It means the following. You should take a bit of an abstract framework here: you have a network, a graph with loops and all that. You have a random walker which is walking on that graph. It starts in a particular vertex, say, which corresponds to a spin configuration, and it starts to walk. Now you are going to see what it is doing, and you are going to increase or decrease certain reactivities based on where it goes, so that the next random walker will profit from the previous experience. So basically you take a sample of trajectories, one by one, one after the other. It is a supervised learning, and you change, after each trajectory, the reactivities, so that the walker behaves better, so that it will enter a particular loop; and that loop is where the current is sitting. The loop is carrying a current: you have a loop in a graph, it carries a current, and it stands
I mean, at least in our imagination, it stands for a particular pattern, because, as I said, in nonequilibrium you are able to associate patterns not only with a vertex in your graph, a particular spin configuration, but also with a loop, which is a dynamical object. And so you would like the learning to be such that, for a certain picture that is presented, you guide your graph to have an architecture, in terms of the time-symmetric reactivities, that steers towards that loop. Okay, these are all verbal things, and the proof of the pudding is in the eating of course, but that is what we do with supervised training: we sample trajectories and make our graph better in terms of what we see happening. [Question] It's just hard for me to imagine how you evaluate whether or not the network has entered the pattern that you would like. [Christian] There is of course a parameterization. What we want is that it happens fast, and that you stay there for a long time. These are the criteria. So you fix a particular timing, that has its own logic, but there is a certain time by which you want to reach the pattern, and a certain time you want to stay there. These are the parameters, and if that is not happening, if the reactivities are too big or too small, then you adjust them. So it is a parameterized learning, of course. [Moderator] Okay, let's move on to the next question; Peter just popped out for a second, so we've got time for a couple of questions. John, your hand is up next. [Question] Yes, that's really interesting. I was wondering: there are people who have a dynamical pattern stored via parametric resonance, where you excite at some frequency f and the response is at half that frequency, f/2, so there are two phases of the response.
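The naive trial-and-error training described in this answer can be sketched on the earlier ring example. Everything below (the target site, the learning factor 0.8, the dwell criterion, the trajectory lengths) is invented for illustration: after each Gillespie trajectory, the time-symmetric reactivity at the target bond is lowered until the walker dwells at the target long enough.

```python
import numpy as np

rng = np.random.default_rng(2)
M, eps, target = 8, 4.0, 5            # ring size, driving, site to memorize

a = np.ones(M)                        # bond reactivities, all equal at start

def simulate(a, t_max=50.0):
    """Gillespie trajectory on the driven ring; returns time spent per site."""
    x, t = 0, 0.0
    occ = np.zeros(M)
    while t < t_max:
        kp = a[x] * np.exp(+eps / 2)            # hop x -> x+1 over bond a[x]
        km = a[(x - 1) % M] * np.exp(-eps / 2)  # hop x -> x-1 over bond a[x-1]
        dt = rng.exponential(1 / (kp + km))
        occ[x] += min(dt, t_max - t)
        t += dt
        x = (x + 1) % M if rng.random() < kp / (kp + km) else (x - 1) % M
    return occ

# Trial and error: after each trajectory, if the walker did not dwell at the
# target enough, lower the time-symmetric reactivity of the target bond.
for _ in range(30):
    occ = simulate(a)
    if occ[target] < 0.5 * occ.sum():           # dwell criterion not met
        a[target] *= 0.8                        # slow the escape from target

occ = simulate(a, t_max=200.0)
print(occ[target] / occ.sum())                  # large: pattern stored frenetically
```

Each new walker profits from the previous experience only through the updated reactivities; the energies are never touched, which is the frenetic-steering point.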
And so each oscillator can store one bit of information in its phase, and you can have many of them; people actually use that to store information and solve problems. I was wondering, how does that enter here? [Christian] I admit I am very ignorant of that, I did not know it, but that sounds indeed like one aspect. We are not doing parametric resonance of course, because we are using basic statistical mechanics, but indeed I can imagine that you can store information in little currents, or in loops, or in oscillations; that is not so strange. For us it is a natural extra when you do nonequilibrium. You see, if you have just detailed balance, you will never generate currents; the only thing you can do is steer towards a particular behavior corresponding to a stationary distribution parameterized by the energy. Nonequilibrium also allows you to store in currents in a very natural way. But the idea of storing patterns in dynamical patterns, as you just said, indeed I can imagine this is something people have been using, not necessarily aligned with ideas of nonequilibrium, as for example in parametric resonance. Sure. [Moderator] Next question. [Question] Hi, can you hear me? You mentioned in the talk that the time-antisymmetric part is related to entropy and the time-symmetric part to the frenesy. For the entropy we have general expressions, like the Shannon entropy or the heat exchange, so we have good formulas to calculate it. Is there a general expression to directly calculate the frenesy? [Christian] The answer is yes. But it is true that it is non-thermodynamic: it is not something immediately related to thermodynamic quantities, not related to heat or entropy fluxes like the time-antisymmetric part is.
So do the exercise: you can start with your favorite nonequilibrium models, I hope those make some sense, and you can indeed explicitly calculate this frenesy. I do not know the references by heart, but if you go to my home page there is a kind of review article about frenesy, and there you find all kinds of models. You can calculate the time-symmetric part just like you can calculate the time-antisymmetric part. What you find, say for Markov jump processes, is that the frenesy is related, as I was saying, to escape rates. Escape rates are basically related to how long you wait until you leave a room, and this waiting time is a quantity which is certainly time-symmetric: if you wait a certain time in the forward trajectory, then in the reversed trajectory you have the same waiting time. So escape rates belong to the frenesy, together with other things which in general we can call dynamical activity: as I was saying for the ring, these activity parameters attached to a link, to a bond, in a time-symmetric way, which we can also call the traffic, the time-symmetric activity. That is what enters the frenesy. The first applications of this idea of frenesy were in response theory. If you do linear response theory around equilibrium, you get the fluctuation-dissipation relation: the observable must be seen in correlation with the excess in dissipation, and that correlation gives you the linear response around equilibrium. Away from equilibrium this fluctuation-dissipation relation is violated, but on the positive side, you can understand how it is violated by introducing this frenesy.
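For a Markov jump process this decomposition can be written down explicitly. In one common convention (up to constants), the log weight of a jump trajectory splits into an entropy flux S = sum over jumps of log k(x,y)/k(y,x), which is antisymmetric under time reversal, and a frenesy D = integral of the escape rate lambda(x_t) dt minus one half the sum of log[k(x,y) k(y,x)], which is symmetric. A sketch with invented rates and an invented short trajectory:

```python
import numpy as np

# Rates of a 3-state Markov jump process (illustrative numbers)
k = np.array([[0.0, 2.0, 0.5],
              [0.3, 0.0, 1.5],
              [1.0, 0.7, 0.0]])
lam = k.sum(axis=1)                   # escape rates lambda(x)

def path_decomposition(states, waits):
    """Split the log path weight of a jump trajectory into the entropy
    flux S (time-antisymmetric) and the frenesy D (time-symmetric)."""
    S = sum(np.log(k[x, y] / k[y, x]) for x, y in zip(states, states[1:]))
    D = (sum(lam[x] * t for x, t in zip(states, waits))
         - 0.5 * sum(np.log(k[x, y] * k[y, x])
                     for x, y in zip(states, states[1:])))
    return S, D

traj = [0, 1, 2, 0]                   # visited states
waits = [0.4, 1.1, 0.2, 0.6]          # waiting times in each state
S, D = path_decomposition(traj, waits)

# Time reversal flips the entropy flux and preserves the frenesy
Sr, Dr = path_decomposition(traj[::-1], waits[::-1])
print(np.isclose(S, -Sr), np.isclose(D, Dr))
```

The escape-rate integral and the symmetric log-activity term are exactly the waiting times and the traffic mentioned in the answer: both are unchanged when the trajectory is run backwards.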
In linear response around nonequilibrium conditions, you will find, for example, that the mobility is no longer given purely by the diffusion in your original model; it will also be related to a correlation between the current and the dynamical activity. I mean, for example: the current is like the number of steps in a particular direction, it is an oriented quantity; but you can also look at the traffic, which simply does not care whether you go to the right or to the left. If you do response theory around nonequilibrium, say for driven random walks, then you will find that there is an extra frenetic contribution to the response, encoded in the coupling of the observable with the current and that activity, and not only with the entropy. That is where the frenesy enters. And all we are doing in frenetic steering is, in a way, using that response to turn things around: by knowing the response, we can start to steer the particle. And that gives us the frenetic steering I was talking about. [Moderator] I'm going to have to butt in here; some very extensive answers to those questions, but I'm afraid I'm going to have to move on. [Christian] If you have any questions you can always email me and I will be very happy to answer. Thank you. [Moderator] Thank you very much. Thank you, Christian. We now have Pieter Rein ten Wolde; Pieter Rein, if you can share your screen. Here we go. Okay, welcome, and we await your talk on optimal finite-time copying. We can't hear you at the moment if you're talking.