OK. So far we have looked at the dynamics of these models mostly through numerical simulations, but maybe that is not the whole story: there is much more to understand. So, why can the dynamics of the spherical fully connected model be solved? Essentially because the variables are real variables, so you can write a Langevin dynamics for the configuration. You take the derivative of the Hamiltonian with respect to the variables, computed at time t, and then you add a random noise term, which usually you take to be a Gaussian random noise. This again is a vector, and this Gaussian random noise mimics the effect of the thermal fluctuations, the thermal noise. So essentially you have a deterministic part, such that in the absence of noise the system tries to minimize the Hamiltonian; but at non-zero temperature, on top of trying to go down in energy, you also have thermal fluctuations that make the dynamics stochastic. This noise term is usually a Gaussian variable of zero mean, with variance twice the temperature, and it is delta-correlated in time.
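In formulas, the Langevin dynamics just described (using σ for the spin variables and T for the temperature, as in the rest of the lecture) reads:

```latex
\dot\sigma_i(t) = -\frac{\partial H(\sigma(t))}{\partial \sigma_i} + \eta_i(t),
\qquad
\langle \eta_i(t) \rangle = 0,
\qquad
\langle \eta_i(t)\,\eta_j(t') \rangle = 2T\,\delta_{ij}\,\delta(t-t').
```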
This is a Langevin equation, and it means essentially that the thermal kick the system receives at any time is uncorrelated from the past. The factor of temperature here is fundamental: it guarantees that in the large-time limit, if there is no ergodicity breaking, the system samples the Gibbs probability distribution at temperature T. So this factor 2T is such that, in the long run, the system visits all configurations according to the Gibbs-Boltzmann distribution of a system in contact with a thermostat at temperature T. So let's take this as a bona fide dynamics. Obviously it's not what happens in a real system: in a real system the dynamics is quantum, but we are in the classical realm and we want to stay classical, so we keep working with classical variables. For discrete spins the time derivative is not well defined, but if the spins are spherical spins, so real variables, this differential equation is perfectly well defined. If you want to enforce the spherical constraint you need to add a term, either in the Hamiltonian or directly in the equation; usually you add it in the equation, and this μ enforces the spherical constraint: at each time you choose the right μ(t) such that σ at time t satisfies the spherical constraint. Everything depends on time, so you can solve this by discretizing time. Actually, what people did in the late 70s is to solve this model using the generating functional method, the one originally invented by Martin, Siggia and Rose, which was specialized to the case of disordered models by De Dominicis and Peliti. I don't want to explain the full method, but just to tell you that essentially you build a probability distribution over trajectories, where each trajectory depends on the noise, and from that you try to understand what happens. I will show you.
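As a concrete illustration, here is a minimal numerical sketch of this Langevin dynamics for a pure p = 3 spherical model, discretized with an Euler-Maruyama step. All names and parameter values here (`N`, `T`, `dt`, the coupling variance) are illustrative choices, not taken from the lecture; the spherical constraint is enforced by projecting back on the sphere after every step, which amounts to choosing μ(t) implicitly rather than writing it out.

```python
import numpy as np

rng = np.random.default_rng(0)

def pspin_force(s, J):
    """Deterministic force -dH/ds for the pure p = 3 model with
    H = -(1/3!) sum_{ijk} J_ijk s_i s_j s_k and fully symmetric J."""
    return 0.5 * np.einsum('ijk,j,k->i', J, s, s)

def langevin_step(s, J, T, dt, rng):
    """One Euler-Maruyama step of sdot = -dH/ds + noise, followed by
    projection on the sphere |s|^2 = N (this fixes mu implicitly)."""
    N = s.size
    noise = rng.normal(0.0, np.sqrt(2 * T * dt), size=N)
    s_new = s + dt * pspin_force(s, J) + noise
    return s_new * np.sqrt(N) / np.linalg.norm(s_new)

# Illustrative parameters (assumptions, not from the lecture)
N, T, dt = 50, 0.8, 1e-3
# Couplings of variance ~ p!/(2 N^{p-1}) = 3/N^2 for p = 3, symmetrized
J = rng.normal(0.0, np.sqrt(3.0) / N, size=(N, N, N))
J = (J + J.transpose(1, 2, 0) + J.transpose(2, 0, 1)
     + J.transpose(0, 2, 1) + J.transpose(1, 0, 2) + J.transpose(2, 1, 0)) / 6

s = rng.normal(size=N)
s *= np.sqrt(N) / np.linalg.norm(s)     # start on the sphere
for _ in range(1000):
    s = langevin_step(s, J, T, dt, rng)
```

From trajectories generated this way one could measure the correlation C(t, t') discussed below by averaging σ(t)·σ(t')/N over noise realizations.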
You want to simplify: since you are in a fully connected setting, solving N coupled equations for the full configuration is unfeasible, and you want to convert these equations into an equation for a single degree of freedom, a more complicated one. This can be done with the generating functional method. Let me first put the constraint term into the Hamiltonian: the Hamiltonian I am using is the usual one, and to plug in the spherical constraint it is enough to add one term such that, when you take the derivative with respect to σ, you get the term enforcing the constraint; you recognize that this just adds a Gaussian-like term to the measure, enforcing σ² = 1. Then, integrated over time, you can also add an external field h that couples to the spins, such that when you take a derivative with respect to h one spin comes down in the measure. So we add all these terms. Then essentially you want a probability distribution over trajectories of these spin variables, and, doing things very sketchily just to give you the idea: this probability distribution is what you get if you integrate out the noise with its Gaussian measure of zero mean and the given variance. The integration over the noise runs over all times; you have to think that at each time you have a noise, so this is an integration over all realizations of the noise along a trajectory, weighted with the Gaussian probability of the noise, and then you enforce with a delta function that the Langevin equation is solved. The Langevin equation I can rewrite as σ̇, the time derivative, plus the derivative of H with respect to σ, minus the noise, equal to zero, where in the new Hamiltonian H I have also put the spherical constraint; that will be useful for taking derivatives and computing mean values of σ. So you try to do this, and since you want to integrate over the noise you use the usual trick: you take the delta function, exponentiate it, and introduce new conjugate variables σ̂ to make the integral representation of the delta function, and at the end what you get is a probability for σ. Let me really write the expression for what is called the generating functional: it is nothing but the integral over σ of the probability over the whole trajectory (I do not write the time indices). If you use this representation for P(σ) and do some work, you can write this in terms of the original variables and the conjugate variables that enforce the equation; you integrate over the noise, and you end up with an expression where D₀ is the original variance of the noise, which is a delta function. I write it this way because I want to show you what happens when you average over the disorder. In principle this quantity Z is one, because you are just integrating a probability distribution, so it seems a rather useless expression, but it is not, because you can add some more terms. In dH/dσ you already have a term with the external field h, so from here you see you get one term i σ̂ h; for symmetry reasons you can also add a field ĥ coupled to σ. At the end you want to send all these external fields to zero, because they are not in the original problem you want to solve; they have been added only such that, when you take the derivative of Z with respect to h, one σ̂ comes down, and when you take the derivative of Z with respect to ĥ, one σ comes down. Once you write this generating functional, you have to think of it as an object that depends on h and ĥ, because all the other variables are integrated over. Thanks to this, if you are interested, for example, in the correlation between the configuration at time t and the configuration at time t′, which is one of the objects we care about, where the average in C(t, t′) is over all trajectories, so over all the ways the noise affects the dynamics, you see that it can be computed by taking the derivative twice with respect to ĥ, so that a σ comes down each time. There are i factors around; in this sketch I will not keep track of all the signs, you have to be a bit more careful than I am being now. Up to these factors, C(t, t′) is proportional to the double derivative of Z with respect to ĥ(t) and ĥ(t′), evaluated afterwards at zero fields, because those terms were introduced just for the purpose of taking derivatives. The same you can do for the response: the response R(t, t′) is the derivative of the mean value of σ at time t with respect to a real field at time t′, and since the average value of σ(t) is just the derivative of Z with respect to ĥ(t), it is proportional (again dropping the i factors) to the mixed derivative with respect to ĥ(t) and h(t′), evaluated at zero fields. This is just to convince you that once you compute this generating functional for arbitrary external and conjugate fields, by taking derivatives you can compute all the quantities we are interested in, in particular correlation functions and response functions. So the question is: can I compute this generating functional in a disordered
model, the Hamiltonian contains disorder, so I can do this for any given disordered sample, but actually I would like to compute averages over the disorder. Correlation functions and response functions are self-averaging quantities. What does that mean? If you compute the correlation function on two very large systems, the results mostly coincide. In disordered models there are other quantities that are not self-averaging, so you have to be careful: for example, as we saw, the partition function is not self-averaging, while the free energy is. Correlations and responses are self-averaging, so you can take the average of C over the disorder, and for that you can take the average of Z over the disorder. This looks strange, because Z looks like a dynamical partition function, so you may ask: why, in the dynamical approach, can we average this sort of dynamical partition function directly over the disorder? Moreover, there is one more analogy: if you work it out a little carefully, you realize that time in this expression plays more or less the role of the replica index in the replica calculation, so the analogy is strong. But still you can average Z directly, essentially because, by definition, when you set all the fields to zero Z is one: it does not fluctuate from sample to sample. So as long as you stay close to zero fields, you are safe in taking the average of Z. Another way to see it is that the probability distribution over trajectories is an exponential term with no normalization factor below: the normalization factor is Z itself, which is one, so you do not need replicas to handle a Z in the denominator and compute it correctly. There are different ways of seeing it, none more illuminating than the fact that for any sample Z is one; so really, in this case, the generating functional does not fluctuate from sample to sample, and you can take its average. This means the following. The Hamiltonian may have a deterministic part and a random part; in the model we saw there is only a random part, but in general you may have both, and you want to average over the random part. What happens when you do this in general, and so also in the models we are interested in? All this derivation of the dynamical equations through the generating functional formalism is very well done in the lecture notes by Castellani and Cavagna; you can find it there. Let me split the force dH/dσ into two terms: I call L₀(σ) the deterministic part and L_J(σ) the random part, so that in general the Langevin equation is σ̇ + L₀(σ) + L_J(σ) = noise. These three terms appear in the action. To average Z over the disorder, you have some terms which do not depend on the disorder, namely a term −½ σ̂ D₀ σ̂ and a term i σ̂ (σ̇ + L₀), and then one term which does depend on the disorder: exp(i σ̂ L_J(σ)). You want to average this over the disorder. Let me first write it more explicitly: the exponent should always be understood with an integral over times, so when I write this expression I mean σ̂ at time t times L_J computed on the configuration at time t, with all contractions (here the indices are time indices) being integrals over time. So it is not surprising that, since there is disorder here, when you take the average over J with the Gaussian measure, what you get, as usual, is the exponential of the square of this term, because the term is linear in J: you integrate something linear in J against the Gaussian measure for J, and the result is proportional to the exponential of x squared, where x is this whole term stripped of J. And when you take the square, you couple different times: you have an integral over t and an integral over t′, so the result is an effective coupling between different times which was not present before. Before, the whole exponent was an integral over a single time, so the action was local in time and only variables at the same time interacted; here, integrating out the disorder couples different times, exactly as in a replica calculation, where replicas are independent until you average over the disorder and then they start interacting. Very similar. The final result is that this expression will have many terms, but the important ones, when you do the average over the disorder, are one term σ̂ D₁ σ̂, where D₁ depends on σ, and another term σ̂ L₁(σ, σ̂). So what is happening, among many other things (I will write these terms explicitly in a moment so you can see them), is that when you integrate out the disorder, luckily enough, you get terms of the same kind as those already present in the action: a σ̂ σ̂ term, which renormalizes D₀, and a σ̂ L₁ term, which renormalizes L₀. You can prove, or assume, because otherwise you would not know what to do, that no σ σ term is generated, since none is present here. So essentially the average over the disorder induces new terms in the action, and these new terms are no longer local in time. So, the final result: let me write it for now for a p-spin
model. This is more or less the way to proceed, and now you really do the computation, with L_J being the Langevin force term of the p-spin model, so that I can show you the actual equations. But first, once you realize that the averaged generating functional is of the same kind as before, with only these interaction terms changed, you can claim that this averaged functional is the one corresponding to a new equation. This is something general, so let me write it here before going to the specific model. The new generating functional is the one that would correspond to a Langevin equation σ̇ + L₀ + L₁ = ξ, with a new Langevin term L₁ in place of L_J, and a new noise ξ whose correlator is no longer D₀ but D₀ + D₁. And I will show you that in general L₁ is no longer a local term in time. So you get two modifications of your Langevin equation from the average over the disorder. On one side, the Langevin equation gets a memory term: L₁ involves two times, and it is what you get when you take the square of the disorder term. (Yes, this expression means an integral over time: since the beginning we had to impose the delta function at each time, so when you compute P(σ) you impose a product over all times of delta functions of σ̇ + L₀ − η; once you exponentiate it you get an integral over time of σ̂ times the equation, and then you integrate over η. Everything is integrated over time, but it is local; after averaging, L₁ is no longer local in time. It is the effect of averaging which makes different times interact.) So this L₁ is now a memory term, depending on the whole history, and the noise is no longer delta-correlated: D₀ is delta-correlated in time, but D₁ is not, for the same reason that L₁ is not local. D₁ and L₁ are non-local terms in time. What happens in general is pretty similar to what happened with replicas: there, you average over the disorder and replicas start interacting; here, you average over the disorder and times start interacting. So this equation now has a memory term and is no longer a Markov process. Previously it was a Markov process, because it did not depend on the past; now it depends on the past, and moreover the noise is no longer delta-correlated, so the noise is also more complicated. If you take this formula and apply it to the spherical p-spin model, you get the following; let me write it a bit more explicitly so you see how it changes from the original equation. The derivative of σ with respect to time equals −μ(t) σ(t), plus the integral between 0 and t of dt′ Σ(t, t′) σ(t′), plus ξ(t). You see, this integral is the memory term; I will give you an explicit expression for Σ, but in the equation let me write it in this compact way. So with respect to the original equation you now have this term, which is non-local; moreover the correlator of ξ (its average is always zero) has the original term, delta-correlated in time, plus a new term which is no longer delta-correlated in time, so ξ is a correlated noise. These two effects, the memory term and the correlated noise, are the usual outcome when you average over the disorder in a stochastic equation. OK, what does it mean?
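Collecting the pieces just described, the effective single-spin equation after averaging over the disorder can be written as:

```latex
\partial_t \sigma(t) = -\mu(t)\,\sigma(t)
  + \int_0^t dt'\,\Sigma(t,t')\,\sigma(t') + \xi(t),
\qquad
\langle \xi(t)\,\xi(t') \rangle = 2T\,\delta(t-t') + D_1(t,t'),
```

where, for the pure p-spin model (as spelled out further on in the lecture),

```latex
\Sigma(t,s) = \frac{p(p-1)}{2}\,C(t,s)^{\,p-2}\,R(t,s),
\qquad
D_1(t,s) = \frac{p}{2}\,C(t,s)^{\,p-1}.
```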
Because, you see, when you compute this memory term, it plays the same role as the force that pushes the system in a specific direction in order to minimize the energy; but now this force is not computed on the configuration σ at time t, as before: it is computed on all the configurations σ(t′) for t′ between 0 and t. So essentially it takes into account the whole past trajectory in order to determine the direction in which you have to go. Originally this term was ∂H/∂σ computed at time t, something computed on the configuration at time t; now it is something integrated over all configurations at previous times. Exactly: it is no longer Markovian, because what you do at time t depends on all the history. Let me also write the explicit expression for Σ: Σ(t, s) can be written as (p(p−1)/2) C(t, s)^{p−2} R(t, s); if you want the more general mixed p-spin model, let's do it: you take the second derivative of the function f that defines the mixed p-spin model, so Σ(t, s) = f″(C(t, s)) R(t, s). And the extra part of the correlator of the noise is given by D₁(t, s) = (p/2) C(t, s)^{p−1}, which is f′(C(t, s)). So you end up with a new equation which is more complicated, but the advantage is that the disorder has been averaged out. What does that mean? This equation no longer depends on the disorder. Why? Because we assumed correlations and responses to be self-averaging, so essentially this equation is satisfied by any large enough system: it is a single equation, no longer a disordered one, more complicated, but it holds for any large enough system with that random Hamiltonian. Moreover, from this equation one can easily derive the equations for correlation and response. I will not explain all the steps, but once you have the equation for the derivative of σ, computing how the correlation changes in time is not very difficult, because you have all the ingredients. For example, if you are interested in the derivative of the correlation function with respect to the time t: since the correlation function is just the mean value of σ(t) σ(s), you take the whole expression for the derivative of σ(t) and plug it inside. At the end, what you get is something like: ∂_t C(t, s) = −μ(t) C(t, s), which is nothing but this term here, plus the integral between 0 and t of dt′ Σ(t, t′) C(t′, s), because when you correlate σ(t′) with σ(s) you get the correlation C(t′, s), plus (you have to work a little more to understand what happens to the correlation between ξ and σ) the integral between 0 and s of dt′ D(t, t′) R(s, t′). And there is the analogous equation for the response, with only the relevant terms. You see that essentially you are closing your equations on the two quantities you are interested in, correlation and response: you had an effective equation for σ, but since at the end you want correlations and responses, you can go from that equation to integro-differential equations for C and R. The only thing you still need for these equations to be closed is an equation for μ, which is still unknown, but you can get it very easily from the fact that C(t, t) is always 1 by the spherical constraint, so its total derivative with respect to t is 0, and from this you easily derive an equation for μ(t), which you find to be μ(t) = T + ∫₀ᵗ dt′ D(t, t′) R(t, t′) + ∫₀ᵗ dt′ Σ(t, t′) C(t′, t), or C(t, t′), since C is symmetric. These may look difficult, but actually you are solving exactly the dynamics of
a model by solving two integro-differential equations, where μ is completely determined. Now, remember that you have to solve for two times, so this is not an easy object: essentially you have to compute the correlation for any pair of times, and it is given by integrals over the past. So these are not equations that are easy to solve, but still you can write, in half a blackboard, the equations for any p-spin model: here I wrote the expressions for the pure p-spin model, but you can write them for any spherical mixed model in this very compact form. Now the question is: if I solve these equations, what do I get? We got equations which are exact. In which sense are they exact? I have not done all the computation explicitly, but in the computation there are some subtle points: you have to take N very large, and this is why at the end the equations close and couple the times. The equation is the same for every σᵢ: all the σ obey exactly the same equation, so you just solve for one σ; as usual, all the sites become equivalent. In a given sample the sites are different, because one coupling J is different from another, but once you average over the J's, these are N spins connected everybody with everybody, so they are completely symmetric and equivalent. So, as usual, when you average over the disorder the sites decouple, but in the computation you actually use the fact that N is large, and there is a caveat about times: the derivation holds when the size N of the system goes to infinity and the time is large, but much smaller than N, or much smaller than exponential in N. In some sense the system has to remain in a situation where self-averaging holds. For example, suppose you take a model that eventually thermalizes and falls into an ordered state; in the coarsening model we were discussing this morning, you have first a coarsening regime and eventually the system thermalizes into either the plus or the minus state. That thermalization step, the falling down into a deep minimum, is not well described by these equations. These equations describe a regime where the system is very large and times are large, but self-averaging must hold, and when the system is thermalizing the correlation function is no longer self-averaging: when you are close to the jump into the deep state, the time at which you make the jump may fluctuate a lot from trajectory to trajectory. In the first part everything is very smooth, and two large enough systems behave practically the same; but the thermalization happens through a rare fluctuation, so the time at which the two systems make the jump may be very different, and in that regime the correlation function is no longer self-averaging: the two differ. So, what can we say about these equations? Solving them is not easy, even numerically, which is very important; there are just a few works in the literature solving these equations explicitly for very long times, and it is not easy to do. But we can say something interesting analytically, and this is what I would like to show you now. First of all, these equations should be valid at any temperature, so in principle also at very high temperature. At very high temperature we do not expect aging; we expect everything to work very nicely, so if we put ourselves at very high temperature we expect the system to eventually reach equilibrium, a stationary, time-translation invariant behavior, where FDT holds. This is the solution that was computed by Crisanti, Horner and Sommers in the early 1990s. In this situation you can simplify the equations by saying that everything is just a function of the difference of times. (Sorry, I am using a different notation with respect to the previous lecture: there the first argument was already the time difference, while here the two arguments are really the two times at which you compute the correlation.) So, if TTI holds, the correlation depends only on the difference of the times, and the same for the response. These two functions of a single time are related by the fluctuation-dissipation theorem; in terms of Σ and D, it is easy to see that FDT can be written as Ḋ = −T Σ, so D and Σ also become functions of the time difference only, and the fluctuation-dissipation theorem essentially gives a relation between response and correlation, and between D and Σ. You plug this ansatz in, and obviously the equations collapse to just one equation, because once you have the correlation you also have the response. One more property you use to derive this single equation is that at high temperature you do not expect any ergodicity breaking, which means that the limit for t going to infinity of C(t) is zero: eventually you forget the initial configuration. If you use the FDT relation and that limit, you can convert the equations into a single, much simpler equation, which is this one: Ċ(t) = −T C(t) − (1/T) ∫₀ᵗ ds f′(C(t − s)) Ċ(s), where f′(q) = (p/2) q^{p−1}. This is the equation that the correlation should solve if all these hypotheses hold true. We are all very tired now, so let's have the coffee break, and I will show you what happens to this equation after the coffee. OK, we can start again (many people are missing). This equation here can also be found in the literature as the schematic mode-coupling theory (MCT) equation, and it has been used a lot as an effective description of the dynamics of systems like glass-forming liquids, systems that undergo a
very strong arrest very strong slowing down when you change the temperature and I want to show you now that actually this equation has indeed a dynamical transition where the shape the solution c of t changes dramatically and this we will see is exactly the dynamical temperature of the spherical p-spin model so well this at least is an equation in just one variable so you can even integrate it much much easier than the full equation which are in in two variables but since we are just interested in the large time behavior so I would like to understand that when and how this system is is relaxing to zero and when this condition fail which means there is ergodicity breaking so I am interested in the behavior of c of t for very large t very large time so let me take a time which is very large and if the time is very large you see here we have oh sorry I wrote this t minus s sorry this is just we are using time transition in variant this is c t minus s sorry so what do you notice in this integral is that if t is very large you can compute the main contribution from this integral comes either when s is close to zero when s is close to zero you have a contribution which is non-zero because at the beginning the correlation decays with a slope which is non-zero but this term is small because it is c of t well it is small it is c of t and what is it breaking is going to zero on the other hand when s is close to t you have that this term is large because it is c of zero is one so the correlation is large when s is close to t but then if you are reaching an asymptotic regime where c is not changing anymore the derivative is very small so what happens is that at the two extreme one of the two terms is very small the other one is different from zero in the middle you can see that both are very small so essentially this integral gets the main contribution which are small contribution because they must be compared with c of t which again is going to zero and c dot which is going 
to zero. So the contributions to the memory integral come from the two extremes of the integration range; in the middle the integrand is small. Either s is small, or t − s is small. If you use this, you can do the integral almost exactly. You can see that this integral, ∫₀ᵗ ds (p/2) C(t−s)^(p−1) Ċ(s) — it enters the equation with a prefactor −1/T, hence the signs below — has roughly two contributions.

The first contribution comes from s close to zero and much smaller than t. There you can take C(t−s) constant and equal to C(t), so that factor is essentially (p/2) C(t)^(p−1), and the remaining integral of Ċ(s) is immediate: it gives C(t) − C(0) = C(t) − 1, since C(0) = 1. So this piece comes from s much smaller than t.

Then you have a second term — let me put it here — where now t − s is much smaller than t, so s is roughly t: you can replace Ċ(s) by Ċ(t) and bring it outside, and then, changing variable to τ = t − s, which is small, you are left with the integral over τ of (p/2) C(τ)^(p−1), which is just a number. Call all of it a and don't care about its value; it's just a number.

If the integral is solved this way, you see that the term a·Ċ(t) can be taken and brought to the other side, where it just multiplies Ċ(t), while the other term is proportional to C(t) and stays on the right-hand side. Keeping track of the −1/T prefactor (sorry, there is a minus and a 1/T), you end up with an equation of this type:

(1 + a/T) Ċ(t) ≈ −T C(t) + (p/(2T)) C(t)^(p−1) (1 − C(t)).

OK, we are almost done. Now, the correlation function always decreases, so Ċ(t) must always be non-positive. But look: the whole right-hand side is just a function of C(t), so we are asking a function of C(t) to be non-positive. If you reorder the terms a little, you can write this condition as

((1 − C)/C) · (p/2) C^(p−1) ≤ T², that is, (p/2) C^(p−2) (1 − C) ≤ T².

The left-hand side is a function of C; I will draw it for you. It vanishes at C = 0 and at C = 1, and in between it does something like a bump; the precise shape depends on the value of p. What this is telling you is that the whole construction is consistent only if the temperature line T² stays above this function for every value of C, because you want the correlation to go all the way from 1 to 0. So if you take a temperature whose T² sits up here, the construction is consistent until the very end. Notice also what the plotted quantity means: the difference between T² and this curve is proportional to the decay rate |Ċ| of the correlation. So at the beginning the correlation goes down very fast, then you cross a region where the slope is much smaller, and then eventually it goes down fast again. What happens if you decrease the temperature? At a certain point you reach the temperature where the line T² first touches the curve. At that intersection Ċ = 0, and since the difference is proportional to Ċ, the correlation does not decrease anymore. So what is happening here is, first, that this condition stops being satisfied; and second, this Ċ vanishes.
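To make the two regimes concrete, here is a minimal numerical sketch (my own illustration, not from the lecture): it integrates the one-time equation above with a simple Euler scheme for p = 3, setting the constant a = 1 by assumption, since a only rescales time and does not move the fixed points.

```python
def c_dot(C, T, p=3.0, a=1.0):
    # Right-hand side of (1 + a/T) dC/dt = -T*C + (p/(2T)) * C**(p-1) * (1 - C).
    # The value of the constant 'a' is an arbitrary assumption; it only rescales time.
    return (-T * C + (p / (2.0 * T)) * C ** (p - 1) * (1.0 - C)) / (1.0 + a / T)

def relax(T, t_max=300.0, dt=0.01):
    # Euler integration of the correlation, starting from the equal-time value C(0) = 1.
    C = 1.0
    for _ in range(int(t_max / dt)):
        C += dt * c_dot(C, T)
    return C
```

For p = 3, relax(0.70) decays to essentially zero (the consistent, ergodic regime), while relax(0.55) gets stuck near C ≈ 0.72, the first point where T² touches the curve (p/2)C^(p−2)(1−C): exactly the two behaviors described above.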
So the correlation gets stuck: it is no longer going to zero. We call this value q_d, for obvious reasons, and we call the temperature where this happens T_d, also for obvious reasons. And since the expression for this function is very, very easy, let me write it down: the function I am plotting here is ((1 − x)/x) (p/2) x^(p−1), which is simpler to write as (p/2) x^(p−2) (1 − x). Essentially you just have to find the maximum of this function: you take a derivative, set it to zero and — believe me, or do it yourself — you can compute q_d and T_d very easily. What you get is

q_d = (p − 2)/(p − 1),  T_d² = p (p − 2)^(p−2) / (2 (p − 1)^(p−1)),

which is what we got from the thermodynamic computation. So what happens if I draw this correlation function at some temperature above the dynamical temperature? Ċ is always different from zero, although around q_d the decay is already getting slower. The picture you should have in mind is the following: C(t) starts from 1; suppose this is the value of q_d. If the temperature is larger than T_d, the correlation goes down, becomes a little bit slower around q_d, but still goes down to the very end. Decrease the temperature a little more and it still goes down, but it takes more time, because now the slope near the plateau is smaller. And once you reach T_d, it gets stuck: of these two curves, one is T larger than T_d and the other is T equal to T_d, where it gets stuck. When it gets stuck, all the hypotheses under which we derived the equation are wrong, and you have to look for a new solution. But since you can do this for any temperature slightly above T_d — for any such temperature the equation was valid — you can also take the limit and understand that q_d really is the value where it gets stuck. This is essentially the reason why at T_d we have to forget about the paramagnetic solution: this is ergodicity breaking.
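As a quick numerical check of these two formulas (a sketch of mine, not part of the lecture), one can compare the closed forms against a brute-force maximization of the function (p/2)x^(p−2)(1−x) on a grid:

```python
def qd(p):
    # q_d = (p - 2)/(p - 1): location of the maximum of (p/2) x^(p-2) (1 - x)
    return (p - 2.0) / (p - 1.0)

def Td2(p):
    # T_d^2 = p (p - 2)^(p - 2) / (2 (p - 1)^(p - 1)); for p = 3 this is 3/8
    return p * (p - 2.0) ** (p - 2) / (2.0 * (p - 1.0) ** (p - 1))

def Td2_numeric(p, n=100_000):
    # Brute-force maximum of ((1 - x)/x) * (p/2) * x^(p-1) = (p/2) x^(p-2) (1 - x)
    # over a uniform grid in (0, 1).
    return max(0.5 * p * (1.0 - k / n) * (k / n) ** (p - 2) for k in range(1, n))
```

For p = 3 this gives q_d = 1/2 and T_d² = 3/8, and the grid maximum agrees with the closed form to many digits.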
This means that if you start from a configuration at equilibrium — which at T_d means starting inside one of the glassy states, the spin glass states — the correlation is not able to go to zero: you remain confined in a region whose similarity with the initial configuration is q_d. And if you go even lower in temperature, the plateau value increases and everything becomes even more stuck. This is one of the main things I wanted to show you from the dynamics, because it somehow justifies the whole picture I described to you based on the TAP equations and the replica calculation: the paramagnetic solution was always there, and you had to justify why at a certain point the assumption underlying the paramagnetic solution becomes wrong. This is the justification: ergodicity is broken. And the fact that between T_d and T_c the free energy is exactly the paramagnetic one is not by chance: it is because the right computation has m = 1. You have exponentially many states and you still have not reached condensation, so the saddle point dominating the thermodynamics has m = 1, and for m = 1 the spin glass free energy equals the paramagnetic free energy. So it is not chance, there is an explanation; in that region the free energy of the right solution is the same as the free energy of the paramagnetic solution, but the paramagnetic solution must be abandoned below T_d because ergodicity is broken. OK. So, assuming TTI — time-translation invariance — that is essentially all: you have one equation, you can solve it very easily, and you understand that there is ergodicity breaking. Now, if you want to go below T_d, assuming equilibrium — assuming that everything is ergodic — is wrong, so you have to forget this simple equation in one time variable and come back to the equations in two times. Can we say something about the general solution of those equations? Luckily enough, for the spherical p-spin model, yes. If I now put myself below T_d, I have to come
back to the equations for C(t, s) and R(t, s). I really have to solve these equations, and they are not easy to solve, even numerically. But there are again some key features that can be computed analytically, and the main hypothesis that allows one to solve these equations is one that lets you decouple the regime of short times from the regime of very large times. If these two regimes were coupled, you would be forced to solve the whole dynamics in order to understand what happens at large times, and that is impossible analytically; you would have to do it numerically. The main property that allows one to make analytical predictions about the large-time behavior of these two quantities is that the large-time behavior becomes uncoupled from the short-time behavior. Whatever you do at very short times — you kick the system, the system goes crazy for a while — if you then let it relax for the rest of the time, at very long times it reaches a solution that is uncoupled from what happened at the very short times. This allows you to close the equations on the long-time part alone, and hence to make some analytical predictions. This idea of separating short times and long times was formalized by Cugliandolo and Kurchan, who derived these equations and their first solution in the large-time regime in 1993, under the name of weak long-term memory. What do I mean by weak long-term memory? It means, first of all, that there is long-term memory: it is not that the system has only short-term memory. If you had short-term memory you would be Markovian-like; no, here the equations have very long-term memory, because the kernel Σ depends on the correlation, so if the correlation decays slowly, Σ decays slowly, and the same holds for the correlation of the noise D and for the response. So if the correlation decays slowly, the noise is also long-range correlated in time — but only weakly so. You can see this if you plot the response R(t, s)
as a function of the time difference, with s smaller than t, so that s − t is negative. I want to see how much the system at time t feels what happened at time s, and you can make this plot for different times s. For a quite short time s it looks like this; the left end of the curve is at −t, which corresponds to s = 0, the initial time. If you increase s it does like this; if you increase s even more, it does like that. So what is happening? s grows in this direction, and you see there is a region of very short time differences where the response is sensibly different from zero, which is reasonable: you remember what happened a few steps before; if someone gives you a kick three steps earlier, you feel it. And the very long past? Do you remember the very long past? We do have these long-term memories, and the response is essentially what tells you how much you remember of what happened back there. And you see that, increasing s — and the limit you are interested in is t and s both much larger than one, so you care about what happens when s is large — the curve decreases: for a fixed value of the time difference, the response becomes smaller and smaller. As the system becomes older and older, it remembers less and less of what happened in the past. This is good, and it can be made precise. In particular, when you try to solve the equations for C and R, taking all the integrals and splitting each of them into a part close to one end of the integration range and a part close to the other end, the two main quantities you are interested in are integrals of this kind: you see, this is the integral of the response up to the current time t, starting from the time t − t̄; essentially you step t̄ back into the past. And the question is: if I choose t̄ large enough, say 1000 steps, am I capturing all of the
integral of the response? Can I ignore what happened before t − t̄? The answer, unfortunately, is no. What I mean is that, for any finite value of t̄, the limit for t → ∞ of ∫ from t−t̄ to t of ds R(t, s) is always strictly smaller than the limit of ∫ from 0 to t of ds R(t, s). This means that however far in the past you are willing to look, if you keep a finite memory — and this is exactly keeping a finite memory of the past, because t̄ is the length of your memory — you are disregarding something, because the window integral is always strictly smaller than the integral over the whole past. So there is long-term memory: you are not allowed to truncate your past, you have to keep all of it. But, luckily enough — and this is what actually allows you to solve the equations — it is also true that for any finite value of t̄, the limit for t → ∞ of ∫ from 0 to t̄ of ds R(t, s) is zero. What does this mean? What happens in the first t̄ steps is forgotten. So you see, it is a very delicate situation: you are not allowed to keep only a finite-time memory, and yet the very early transient, the very first steps, are forgotten; obviously, everything that matters is in the middle. So you can disregard what happens in the first t̄ steps: even though you cannot keep a finite memory, you are allowed to drop the first finite stretch of time. And if you wait long enough that the whole initial transient has taken place, you can close the equations. Of these two properties, the first is in some sense negative: it says the process is not Markovian — otherwise it would be a Markov process with a memory term of length t̄, and it is not. Still, the memory of the past is weak enough that you can solve the equations by separating the long times from the very short times in the integrals; thanks to these properties, the integrals can be done.
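Both properties can be illustrated with a toy response function (my own construction, not the lecture's): a pure aging form with h(u) = u, so R(t, s) = 1/t for 0 < s < t. The integral over the whole past stays of order one, while any finite window, trailing or initial, contributes only t̄/t:

```python
def R_toy(t, s):
    # Toy aging response R(t, s) = h'(s)/h(t) with h(u) = u, i.e. 1/t for 0 < s < t.
    return 1.0 / t if 0.0 < s < t else 0.0

def window_integral(t, s_lo, s_hi, n=10_000):
    # Midpoint rule for the integral of R_toy(t, s) over s in [s_lo, s_hi].
    ds = (s_hi - s_lo) / n
    return sum(R_toy(t, s_lo + (k + 0.5) * ds) for k in range(n)) * ds
```

With t = 10⁶ and a memory window t̄ = 100, the full integral is 1, while both the trailing window (from t − t̄ to t) and the initial window (from 0 to t̄) give only t̄/t = 10⁻⁴: you cannot truncate the past to any finite window, and yet the first finite stretch of time is forgotten.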
What you get — what Cugliandolo and Kurchan got when they solved this — is essentially what I was already telling you for coarsening processes: you can indeed split the correlation into a stationary, FDT part, which depends only on the time difference, plus an aging part, which depends on both times. In practice, let me write it this way for the moment: C(t_w + τ, t_w) = C_FDT(τ) + C_aging(t_w + τ, t_w), where t_w is the waiting time. And essentially what happens is that, when you solve the equation for the aging part, the FDT part has already become stationary: it has already reached its large-time behavior. Doing the usual plot, the FDT part is what brings you down to a value we can call the plateau value — we are doing dynamics, so let me just call it the plateau value — so FDT is responsible only for the first part of the relaxation. And when you solve for the second part, not only is the first part already stationary, but everything that happened at the very first times is, thanks to the properties above, irrelevant for this part; so you can work on the aging part disregarding the first one. The aging part is the one which ages, and you want to understand how it ages. Thanks to this, you can remove the short-time part from all the integrals, and for the aging part you can use a scaling ansatz that you eventually plug into the equations. The equations then simplify, because once you have a scaling relating the two times, they are no longer equations in two times: they are equations in a single quantity, the scaling variable built from the two times. But then you have to decide, when t diverges, whether the second time t̄ stays finite or diverges with t. This is the key point: the time is diverging, and you must decide whether the second time diverges with it or stays finite. If it stays finite, OK, we do not care about it; if it diverges with t, we do care, but we have to be careful in that situation.
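Here is a toy version of this decomposition (my own illustration, with an assumed plateau value Q = 0.5, which is q_d for p = 3, and the simplest aging function h(u) = u): a stationary exponential FDT part plus an aging part that depends on the two times only through their ratio.

```python
import math

Q = 0.5  # assumed plateau value (equals q_d for p = 3)

def C_toy(t, s):
    # Stationary FDT part + aging part with h(u) = u:
    #   C(t, s) = (1 - Q) * exp(-(t - s)) + Q * s / t
    return (1.0 - Q) * math.exp(-(t - s)) + Q * s / t
```

For a large waiting time t_w and fixed time difference τ, C_toy(t_w + τ, t_w) relaxes from 1 down to the plateau Q; while at fixed ratio t/s the aging part is the same whatever the age of the system, e.g. C_toy(2·10⁶, 10⁶) equals C_toy(2·10³, 10³).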
Essentially, if you take t̄ equal to, say, r times t, then it diverges with t even though it is smaller; this is the right scaling. It is as if you had a Markov process whose memory diverges with the time t: still intractable, and this is why you need that property in order to close the equations. And once you close the equations, the last result I want to show you is that both the aging part of the correlation and the aging part of the response can be written in scaling form. C_aging(t, s) — now both t and s are very large, and we forget about the finite times — can be written as the plateau value times a scaling function of the ratio of two functions: q_p 𝒞(h(s)/h(t)). The same holds for the aging part of the response, which can be written as h′(s)/h(s) times another scaling function, ℛ(h(s)/h(t)). So you see that while C and R are functions of two times, 𝒞 and ℛ are functions of just one quantity: thanks to this scaling we have effectively reduced the number of variables from two to one. So you can plug this into the equations, solve them, and find a relation between the calligraphic 𝒞 and the calligraphic ℛ. And what you find, in particular for the p-spin model, is that in order to be able to close the equations two things must happen. First, q_p — the p is for plateau here; let me call it the plateau value, otherwise you will think it is the p of the p-spin — must satisfy an equation that, maybe you don't remember, is exactly the equation we wrote for the overlap of the threshold states: β² · p(p − 1)/2 · q_th^(p−2) (1 − q_th)² = 1. So what we are finding is this: in the aging dynamics below T_d the system is out of equilibrium, and there is no way to make a direct connection with equilibrium thermodynamics; yet the system ages as follows. In the first part you have a relaxation within a state whose overlap is exactly the overlap of the threshold states; and then you do aging, so you are not really inside a state, but you stay very close to the threshold states.
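As a consistency check of this identification (my own sketch; I assume the threshold-overlap equation in the marginality form β² · p(p−1)/2 · q^(p−2) (1−q)² = 1, which is an assumption about the convention used): at T = T_d the threshold overlap coincides with q_d = 1/2 for p = 3, and it grows above q_d as the temperature decreases, matching the plateau of the aging dynamics.

```python
def marginality(q, T, p=3.0):
    # beta^2 * p(p-1)/2 * q^(p-2) * (1-q)^2: assumed to equal 1 at the threshold overlap.
    return (p * (p - 1.0) / 2.0) * q ** (p - 2) * (1.0 - q) ** 2 / T ** 2

def q_threshold(T, p=3.0):
    # Bisection on the branch q > (p-2)/p, where marginality() decreases in q.
    lo, hi = (p - 2.0) / p, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if marginality(mid, T, p) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At T_d = (3/8)^(1/2) ≈ 0.612 this returns q_threshold ≈ 0.5 = q_d, and at T = 0.5 it returns roughly 0.64: the plateau has moved above q_d, as described for the dynamics below T_d.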
Then there is one more property. It is a bit useless for me to state it here, because I should spend ten more minutes connecting it to thermodynamics, but essentially it tells you that the relation between correlation and response in the aging regime seems to be related to the P(q) that you compute from thermodynamics. So there is a connection between that quantity and the relation between the calligraphic 𝒞 and ℛ that you measure in the aging regime. All these results suggest the following picture, which I want to leave you with. It has been proved in the spherical p-spin model, and we hope it remains more or less valid in other models, because it is maybe the simplest — I don't want to say the only, but certainly the simplest — way to understand what happens in the large-time dynamics of disordered systems. So here is the picture we have; let me plot it in terms of energy and temperature. Actually, energies should be negative, so let me do a plot which is meaningful, otherwise you would complain, and rightly so: energies are negative. Here you have the paramagnetic energy which, if you take the derivative, is −β/2. Then we have T_d and T_c. Down to T_c the free energy is the same, so the energy is the same too; below T_c we know the thermodynamics changes, and we have to follow the lowest states. But we also have the threshold states, which meet the paramagnetic line. This is energy, not free energy — it is really the energy, what you measure when you do a Monte Carlo simulation. So what happens in the dynamics? The dynamics seems to be ergodic here: up to this temperature we visit the whole space. At T_d, if you start at equilibrium, ergodicity is broken; and if you start out of equilibrium, you really do aging: you start from very high energy and relax towards the threshold states while aging. If this is true — and in the spherical p-spin model it is true — you see that this is extremely
convenient: next time you don't have to solve these nasty dynamical equations — integro-differential equations in two times — out to very long times. You do the replica calculation on the back of an envelope (because once you know how to do replica calculations, they really fit on the back of an envelope), you compute the threshold states, and you know where the system ends up. So if someone asks you: if I do a quench to a temperature below the critical temperature, where do I go? Your prediction is: it will go to the threshold states; it will do aging on the threshold states. Now, these threshold states are not really well-defined states, because remember: above the threshold there are no minima, below the threshold there are minima, and at the threshold they do not really look like minima — they are marginal objects, objects which are very flat. Such an object is not really a minimum, because slightly above it there are no minima; it is more like a manifold, and the dynamics approaches this manifold and surfs on the threshold states. If this works, it is fantastic, because we can understand something about the out-of-equilibrium dynamics from thermodynamics — well, from a refined thermodynamics, because these are metastable states, so we have to count them by computing the complexity Σ(f); but still, it can be done. The problem is that this picture — which is fine, and you should keep it in mind because it is useful — is too simplified. There are many other models where, first, we do not know exactly how to compute the threshold states: maybe you have to break the replica symmetry twice, or infinitely many times, so it is not so easy. There are models where the dynamics seems to go below what we expect to be the threshold states — maybe the computation was not perfect. And in general, when you start playing a little with the temperature — for example, you take the system here and then you move the temperature — you expect the system to follow the threshold states, and it does not seem to do so. So what I told you for the spherical pure p-spin model is
exact; but as soon as you move away from that model — for example to the mixed p-spin model, or to the Ising p-spin model, where the variables are no longer spherical — everything becomes much more complicated, and there is still a lot, a lot of work to do in order to understand what really happens in the out-of-equilibrium dynamics of these disordered systems, and to make the connection between dynamics and thermodynamics tighter. That would help a lot, because we do not want to have to solve the dynamical equations. So I leave you with this. I have given you a lot of information; I hope something remains up here, as a basis for further study. If you have questions, come to me.