So, thanks to the organizers, first of all, for this interesting meeting, and a special thanks to Andrea Vanossi, who I wished were here; I hope to meet him soon. The work I'm going to present has been done in collaboration with Dario Lucente, who was previously at the École Normale de Lyon and has now joined me at the Institute for Complex Systems in Rome, and with Angelo Vulpiani, who is in the same building but with a different institution, the University of Rome La Sapienza. The subject is the description of the Prandtl–Tomlinson model in terms of a Markov process. What is the interest in doing this? The interest is essentially that in many situations you may not have enough data to build a model, and you don't know exactly which system or process you are measuring; or maybe you are simulating something and you have too much data to collect all of it to describe the system. So you can use a probabilistic description in terms of states of the system, which of course you have to choose in a suitable way, and you obtain a more synthetic description; this model is a kind of test system on which one can try out this kind of description. Another motivation is that once you have this description, you can approach many quantities in an easier way. For instance, you can compute the entropy production of this non-equilibrium system in a simpler way than if you tried to do it with a continuous description, like the usual equation that describes the model.
And lastly, we decided to investigate a version of the model different from the most commonly studied one, namely the one with a constant driving force. So I hope to be fast enough to briefly recall the model and what a Markov chain is, and to illustrate the main results that we obtained. The model has been mentioned many times here, and yesterday Nicola Manini also gave a more extended illustration of it. The original model, at zero temperature, is a driven particle in a periodic potential, with of course a dissipative term; one can extend the model by introducing a temperature, which is usually described by uncorrelated noise with an amplitude proportional to the square root of the temperature and of the viscosity, by virtue of the fluctuation-dissipation theorem. What is important to notice is that there are actually only two free parameters in this model, because if you wish you can reduce the equation to a dimensionless form; essentially, the two parameters can be chosen as the force and the temperature. We will investigate two cases: the case of a simple periodic potential, and the case of a corrugated potential in which, at each period, you have several sub-minima. As I already said, we consider the case in which the driving force F is constant. And how does a Markov chain work? What you want to do is to describe your system in terms of a set of states. Each state has a certain probability of occurring and, in the case of discrete time, at each time step there is a probability for a transition from one state to another. The distinctive property of a Markov process is that, as one usually says, it has no memory, in the sense that the probability of reaching a given state S_i does not depend on the whole history but just on the previous state, the last visited state.
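The driven, damped, thermal dynamics just described can be sketched numerically. This is a minimal illustration, not the speakers' code, assuming a sinusoidal potential U(x) = U0 cos(2πx/a) and illustrative parameter values (units with k_B = 1):

```python
import numpy as np

def simulate(F=0.5, T=0.2, U0=1.0, a=1.0, gamma=1.0, m=1.0,
             dt=1e-3, n_steps=200_000, seed=0):
    """Euler-Maruyama integration of the Langevin equation
        m x'' = -gamma x' - U'(x) + F + sqrt(2 gamma T) xi(t)
    with periodic potential U(x) = U0 * cos(2*pi*x/a) (k_B = 1).
    Returns the sampled trajectory x(t)."""
    rng = np.random.default_rng(seed)
    x, v = 0.0, 0.0
    xs = np.empty(n_steps)
    k = 2 * np.pi / a
    noise_amp = np.sqrt(2 * gamma * T * dt)  # fluctuation-dissipation relation
    for i in range(n_steps):
        force = -gamma * v + U0 * k * np.sin(k * x) + F  # -U'(x) = U0*k*sin(kx)
        v += (force * dt + noise_amp * rng.standard_normal()) / m
        x += v * dt
        xs[i] = x
    return xs
```

At forces well above the barrier the particle slides with mean velocity close to F/gamma, which is the regime where the Markov description is easiest to check.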
So you see that the probability of reaching a given state can be expressed as a chain of conditional probabilities for the transitions from one state to another, starting from some initial state. The classical example is a random walk, in which the states are the coordinates of the particle, so the probability of going from x_{i-1} to x_i does not depend on the whole history but just on the last visited site, the last coordinate. So you can set up a Markov chain: these conditional probabilities are in fact transition probabilities, and you can express the probability of each state as a sum of the probabilities of transitions from all the possible previous states. If you express the probabilities as a vector and the transition probabilities as a matrix, you have a compact way of writing the whole process. And if, as in this case, the process is time invariant, the transition probabilities don't depend on time and you have a simple expression for finding the stationary state of the system: if you know the transition matrix, this equation gives you the stationary probabilities, which are expressed by the eigenvector corresponding to the unit eigenvalue. Let's see in practice how this can work. Consider first the case of a simple periodic potential. What one can do is to look at the system at discrete intervals, simulating it numerically, and watch what the particle is doing. One could of course choose the coordinate, but in this case it is more convenient not to take the coordinate as the state of the system but rather the transition from one minimum to another. So at each time one looks where the particle is and associates to it the closest minimum.
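The stationary-state computation mentioned above is a standard linear-algebra step; a minimal sketch, with an illustrative (not fitted) transition matrix for the three states (-1, 0, +1):

```python
import numpy as np

# Illustrative 3x3 transition matrix: W[i, j] is the probability of
# jumping from state i to state j; each row sums to one.
W = np.array([[0.05, 0.90, 0.05],
              [0.02, 0.93, 0.05],
              [0.02, 0.88, 0.10]])

# The stationary probabilities satisfy p = p @ W, i.e. p is the left
# eigenvector of W associated with the unit eigenvalue.
eigvals, eigvecs = np.linalg.eig(W.T)
i = np.argmin(np.abs(eigvals - 1.0))
p = np.real(eigvecs[:, i])
p /= p.sum()  # normalise to a probability vector
```

By the Perron–Frobenius theorem a stochastic matrix always has this unit eigenvalue, so the normalisation step also fixes the sign of the eigenvector.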
At the next time, if the particle is in the same place, one associates the state zero; if the particle went back to the previous minimum, one associates the state minus one, and plus one for forward motion. So, given the periodicity of the system, one has three possible states, forward, backward, or pinned, and from these one can build the transition matrix, whose elements can be evaluated empirically from the frequency of occurrence of the three kinds of motion. This is just an example for given values of the temperature, the force, and the height of the potential. You see that it is rather well verified that the three rows are the same, and that the dominant probability at this low force and temperature is to stay in the same minimum for more than one time step. But the first thing to do is to check whether this description reproduces the complete dynamics well. For instance, you can look at the velocity. In terms of the Markov chain the velocity has this expression, because we said that P_+ is the probability of moving forward, P_0 of staying, and P_- of going backward. So the average velocity is expressed by this quantity, to be compared with the average that you can compute from the direct numerical simulation, and the agreement is very good. So one can proceed to look at various quantities in terms of the Markov chain. Here, for instance, is the velocity as a function of the applied force and of the temperature. You see that at zero temperature, of course, you need to overcome the potential barrier to start the motion, while at higher temperatures you can move at any force. We explored this range of temperatures and this range of forces.
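The state assignment and the velocity check could look like the following sketch; `markov_states` and `velocity_from_chain` are hypothetical helper names, and mapping a sample to the nearest minimum via rounding assumes minima sit at integer multiples of the period a:

```python
import numpy as np

def markov_states(xs, a=1.0):
    """Map a sampled trajectory onto the three states -1, 0, +1:
    the change in the index of the closest potential minimum between
    consecutive samples (clipped to at most one period per step)."""
    idx = np.rint(np.asarray(xs) / a).astype(int)  # closest-minimum index
    return np.clip(np.diff(idx), -1, 1)

def velocity_from_chain(states, a=1.0, dt=1.0):
    """Average velocity predicted by the chain: v = a * (P+ - P-) / dt."""
    p_plus = np.mean(states == 1)
    p_minus = np.mean(states == -1)
    return a * (p_plus - p_minus) / dt
```

The empirical transition matrix is then obtained by counting, for each observed state, the frequencies of the state that follows it.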
You can also define a mobility for the particle, by looking at the small-force limit of the ratio of the velocity to the applied force, and you find a certain expression that we fitted empirically with this function; it is not so strange, as this exponent recurs in this model for various reasons. We also compared the results with an analytic expression that can be derived from a Fokker–Planck solution of the problem, but in the limit of overdamped motion; you can find it in the book by Risken, and you see that the overdamped case is not far from what you find in the inertial case. Finally, for this periodic potential with one minimum, we looked at the friction and at how to compute it. If you take the time average of the equation of motion, some quantities average to zero and you are left with an expression that gives you essentially the velocity as a function of the driving force and of the average value of the force due to the periodic potential. Since you expect the friction to be related to the velocity by a similar expression, you can identify the friction with the average value of the force due to the periodic potential. Also for the friction the comparison between the Markov description and the direct numerical solution is very good, and here too you can see the different behavior at zero and non-zero temperature, in the sense that at zero temperature you have static friction: you have to overcome the barrier with the force. And so, how much time do I have? In the case of the corrugated potential everything is very similar, so you can set up a transition matrix that again is periodic, in the sense that this transition corresponds also to the transition from state 0 to state -1, let's say, because the potential is periodic.
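The time-averaging argument for the friction can be written out compactly; a sketch, using the Langevin equation quoted earlier (in the stationary state the acceleration and the noise average to zero):

```latex
m\ddot{x} = -\gamma\dot{x} - U'(x) + F + \xi(t)
\;\xrightarrow{\ \text{time average}\ }\;
\gamma\langle\dot{x}\rangle = F - \langle U'(x)\rangle,
\qquad
F_{\mathrm{fric}} \equiv \langle U'(x)\rangle,
\qquad
\mu = \lim_{F\to 0}\frac{\langle\dot{x}\rangle}{F}.
```

Comparing with the free-sliding relation gamma v = F - F_fric motivates identifying the friction force with the mean force exerted by the periodic potential, as stated above.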
We repeated all the exercises and found interesting results concerning the behavior of the velocity, shown here at different temperatures. You see that at high force the behavior is very similar for different temperatures. Another interesting fact, maybe, is that the friction has an inversion, and there is a minimum in the friction: if you increase the temperature at low forces you decrease the friction, but at high forces, when you increase the temperature, the friction increases, so there is a non-trivial behavior. Again you can compute the mobility and find the same expression as for the single-minimum potential, with different coefficients. And now the last topic concerns the computation of the entropy production. This is a non-equilibrium system, so you expect an entropy production for maintaining the system in the stationary state. There are several theorems, or formulas, known under the generic name of fluctuation theorems, and one version of these, developed in particular by Lebowitz and Spohn and by Christian Maes, concerns exactly Markov processes. It shows that you can express the entropy production through the ratio of the probability of getting a given trajectory to the probability of getting the time-reversed trajectory. Of course, for an equilibrium system these two probabilities are equal and the entropy production is zero. In this case we know the probabilities, so we can write the expressions explicitly and compute them: the probability of a given trajectory is essentially determined by how many times you went forward, how many times you went backward, and how many times you stayed in the same place. N_+, N_-, and N_0 are these numbers, and you can simplify the expression and finally get this result.
By the law of large numbers you can express these numbers as proportional to N: N_+ and N_- are simply N times P_+ and N times P_-. So if you want to compute the entropy production rate, that is, the entropy per unit time, you get this expression, and you can see how it behaves: of course the entropy production increases with temperature and with force. Another application of this description is that you can find an analytic expression for the entropy production at low forces. In fact, at low forces the probabilities of going forward or backward are almost equal, so you can express them with this approximation, and after a bit of algebra you see that the entropy production is proportional to the square of the force. An interesting fact is that there is a maximum as a function of the temperature, which was not easy to predict in other ways. Similar results can be obtained for the corrugated potential, with the difference that here many backward trajectories are forbidden, and this is a problem with this way of computing the entropy: if you don't have the probabilities of the backward trajectories, you cannot say anything. So these are the results that we got. Just to summarize: it can be shown that the Prandtl–Tomlinson model is well described in terms of a Markov chain; specifically, we investigated a version of the model which, it seems to us, has not been studied so much, the case of constant force; and you can use this description to compute quantities, like the entropy production, which are not easy to compute in other ways. Thank you, everybody.

Thank you for the presentation. A question?
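The counting argument above condenses into a one-line formula; a sketch, assuming the simple three-state chain in which forward and backward steps are time reverses of each other and "stay" steps cancel in the trajectory-probability ratio:

```python
import numpy as np

def entropy_production_rate(p_plus, p_minus):
    """Entropy produced per time step for the three-state chain:
        sigma = (P+ - P-) * ln(P+ / P-).
    It is non-negative and vanishes at P+ = P- (equilibrium).
    For P+/- = p * (1 +/- c*F) at small force F it is proportional
    to F**2, consistent with the low-force result quoted above."""
    return (p_plus - p_minus) * np.log(p_plus / p_minus)
```

When backward transitions are never observed (P- estimated as zero), the logarithm is undefined, which is exactly the limitation discussed for the corrugated potential.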
Very interesting method. I was curious whether it can easily be extended to a finite contact size, so to more particles (if I understood, this is the single-particle case) and maybe even to two-dimensional systems, and so on.

Well, the specificity of the Markov description is that, how to say, you have to choose the right states to describe your system, so it depends also on what you want to look at. If you have many particles and you want to know something about all of them, of course you get a proliferation of states; but if you are interested in some average or collective property, maybe a reduced number of states is enough. So it is some kind of art to know which states to choose, depending on what you want to do.

You mentioned that some backward probabilities are somehow unavailable in this last case with multiple minima. I don't understand why.

Because you don't actually observe the backward motion: at low temperature and high force you don't have backward transitions, so you cannot say anything; you don't have the probability of the reverse trajectory.

Doesn't that mean that the probability is zero, and the entropy is infinite?

I don't know, but no, I don't think that the entropy is infinite, of course; I think this is a limit of this description.