OK, so we can start the lecture. I would like to spend some time explaining a little better the physical meaning of the matrix Q_ab that we introduced in the replica calculation, and also why we call the free energy we derived yesterday the "1RSB" free energy, a name which looks mysterious because I didn't even explain what RSB means: it means replica symmetry breaking. So let me give you a flavor of a more generic replica calculation. We did a very specific one, because we started from a system where we made m copies, we coupled the m copies, and then, in order to compute the average of the log of the free energy of this already replicated system of m copies, we made n copies of the replicated system. But suppose that you do a more standard replica calculation. You start with a single system, so you take the computation we did yesterday with m = 1: it is just one system, not coupled to anybody else, and you want to compute, as usual, the average of log Z. So you have to replicate: you compute the average of Z^n, then you take the derivative with respect to n and the limit n going to zero. This means that you take the computation we did yesterday with m = 1, and you end up, as usual, with an integral of the type exp(N times an action), where the action depends on a matrix, and you have to integrate over that matrix. The matrix now is an n × n symmetric matrix, Q_ab = Q_ba, and the integral runs over all the off-diagonal elements, because the diagonal elements are usually fixed by some constraint; for example, in the spherical case all the diagonal elements are one. If you do the computation with a different type of variables, say Ising variables, the computation will differ from what we did yesterday, because you have to
integrate — or rather, for Ising variables, to sum — over spins taking values ±1, while yesterday we integrated over real variables with a spherical constraint. In general, then, this integration is over all off-diagonal elements. If you do this more standard replica calculation you end up with an action that must be extremized over a set of symmetric n × n matrices, and eventually you have to take some analytic continuation in order to take the n → 0 limit, which otherwise you cannot take. Now, at this point: yesterday we immediately proposed an ansatz for the matrix, a block matrix, because we started from a system made of n groups of m replicas each, so we had some a-priori information suggesting the block structure. But in general this is not true: if you do a computation like this and end up with an action that depends on a matrix, all the replicas play the same role, they are equivalent, there is no preferred replica. So the first ansatz one tries for this replica calculation is what is called the replica symmetric solution, usually called RS. It means that you take a matrix Q which, apart from the diagonal that is fixed, has just one value in all its elements: the matrix is constant off the diagonal. All the replicas are equivalent, so the matrix elements should not depend on the indices, and hence they are constant. If you take this ansatz and plug it into the action, the action now depends on one scalar, and you have to extremize over that scalar. This replica symmetric solution does not provide the optimal solution in many contexts, while it provides the right solution in many others; so the replica symmetric solution is in many situations the right one, and it is very simple to write down. And
now, in order to go beyond, in order to improve the solution, one has to go to a broader space of matrices, and obviously you have to break the symmetry: if you keep the replica symmetry, that is, if you force all the replicas to be equivalent, the RS matrix is the only solution you have. So in order to enlarge the space of matrices where you look for better optima of the action, you have to break the symmetry, and this is what is called replica symmetry breaking. This is essentially the idea that Giorgio Parisi came up with in 1979, and which then boosted the development of the whole spin-glass and replica-symmetry-breaking theory. You can break the replica symmetry in many ways; the one which works, and which is actually also suggested by the computation we did before, is to go to one step of replica symmetry breaking, called 1RSB — this is where the name we used yesterday comes from. In that case Q, apart from the diagonal which is fixed as usual, is divided into blocks, and you put a value q0 outside the blocks and a value q1 inside the blocks, with the division made such that each block has size m. So in this new 1RSB ansatz you have three parameters: q0, q1 and m. The model we saw yesterday, the spherical p-spin model, essentially has this structure with q0 = 0: the matrix we used yesterday was just blocks of q1. This ansatz again makes each replica statistically equivalent to the others — each row has the same number of q1's and the same number of q0's — but the symmetry which is spontaneously broken now is the one among the replicas: the replicas tend to organize themselves in a way in which they are not all equivalent. Since we solved the spherical p-spin model, we understand why this happens: the system has many states, and you can probe them by putting in a large number of replicas.
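To visualize the 1RSB structure just described, here is a minimal numerical sketch (not from the lecture itself; the function name and the values n = 6, m = 3, q0 = 0, q1 = 0.6 are illustrative assumptions):

```python
import numpy as np

def one_rsb_matrix(n, m, q0, q1, q_diag=1.0):
    """n x n one-step RSB overlap matrix: replicas are grouped into
    n // m diagonal blocks of size m; entries inside a block are q1,
    entries between blocks are q0, and the diagonal is fixed
    (q_diag = 1 for the spherical constraint)."""
    assert n % m == 0, "the block size m must divide n"
    Q = np.full((n, n), q0)
    for b in range(n // m):
        Q[b * m:(b + 1) * m, b * m:(b + 1) * m] = q1
    np.fill_diagonal(Q, q_diag)
    return Q

# the spherical p-spin case discussed above has q0 = 0:
Q = one_rsb_matrix(n=6, m=3, q0=0.0, q1=0.6)
```

Printing Q shows two 3 × 3 blocks of q1 on the diagonal and q0 everywhere else, which for q0 = 0 is exactly the block matrix of yesterday's computation.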
Suppose you take n to be a very large number, so we are putting a very large number of replicas into a system which has many minima: some replicas can fall into the same minimum and other replicas can fall into different minima. So if you use a large enough number of replicas, you can sample the structure of the free energy — of the TAP free energy, of the real free energy. Now, the problem is that you then have to take the limit n → 0, so you also have to make an analytic continuation in n, and this is the tricky part of the replica calculation. But let me say that this is the 1RSB ansatz, and obviously you can insist: there are models where you can prove that this ansatz is the right one, in the sense that it picks the optimum of the action. There are other models, like the Sherrington-Kirkpatrick model — a fully connected model with pairwise interactions and Ising variables — where this ansatz is not enough, in the sense that if you add further replica symmetry breakings you achieve a better optimum, and actually a more physical solution. For the SK model you can prove that even the 1RSB solution has unphysical properties, like an entropy becoming negative, which is impossible in a model of Ising variables. So there you further break the symmetry: for example, the 2RSB matrix is made this way — you break once, keeping the diagonal fixed as always, and then you break twice, so you have a q0 outside the blocks, a q1 inside them, and a q2 inside the sub-blocks. In general you can continue breaking; the computation becomes more and more involved, but in principle you can do it — actually Giorgio did it in the '80s, and there are now very compact ways of doing all these computations. In the 2RSB solution, for example, you have three values of the overlap and two block sizes, m1 and m2, so you have five parameters.
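The further breakings just described simply nest blocks inside blocks. A small sketch of the general k-step construction (a hypothetical helper, not part of the lecture; the sizes and overlap values are made-up numbers):

```python
import numpy as np

def parisi_matrix(n, sizes, qs, q_diag=1.0):
    """k-step RSB matrix: sizes = [m1, m2, ...] are nested block sizes
    (n divisible by m1, m1 by m2, ...); qs = [q0, q1, ..., qk] with q0
    outside all blocks, q1 inside the m1-blocks, q2 inside the m2
    sub-blocks, and so on; the diagonal stays fixed."""
    Q = np.full((n, n), qs[0])
    for level, m in enumerate(sizes, start=1):
        for b in range(n // m):
            Q[b * m:(b + 1) * m, b * m:(b + 1) * m] = qs[level]
    np.fill_diagonal(Q, q_diag)
    return Q

# 2RSB example: outer blocks of size m1 = 6, inner blocks of size m2 = 3
Q2 = parisi_matrix(12, sizes=[6, 3], qs=[0.1, 0.4, 0.7])
```

Each extra level adds one overlap value and one block size, which is exactly why the number of variational parameters grows with the number of breakings.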
When you increase the level of replica symmetry breaking you add more parameters, and so obviously you search for the optimal value of the action in a broader space. Yes — the action is symmetric in the replica indices, because the replicas were introduced just to compute the log and they all play the same role; so the action is replica symmetric. Then, if the maximum of the action is not on a replica symmetric saddle point, we speak of a spontaneous breaking of the symmetry, like in the Curie-Weiss model: the Hamiltonian of the Curie-Weiss model is invariant under the flipping of all the spins, but below the critical temperature the point that dominates the thermodynamics is not invariant, because it is either mostly plus or mostly minus. In that case we speak of spontaneous symmetry breaking because the system does it by itself. Here the problem is that people spent many years trying to understand what the broken symmetry was, because it is far less obvious. What you are saying is that the symmetry that gets broken, because of the presence of many minima, is the following: if you put in many copies of the system, they will end up in positions which are not all equivalent — they tend to cluster, to form clusters — and so there are some replicas which are closer and some replicas which are farther. OK, this was just to explain what a more standard replica calculation looks like. Essentially, as in the work we did yesterday, the computation of the action is the most difficult part; then you have to plug a specific ansatz into this expression, and you end up with a replicated free energy which depends on a few parameters — typically the first computation one does is the 1RSB one, which depends on three parameters — and you have to optimize over the parameters to see what happens. So the other thing I would like
to give you is an insight into the physical meaning of this matrix. At this moment the matrix is just a mathematical object that we introduced in order to do the average of the log, which we don't know how to do directly. Actually, the most important thing is that this matrix has a physical interpretation, and the spontaneous breaking of the replica symmetry corresponds to physical properties which are very important in these disordered systems. You can find a full discussion of replica symmetry breaking, and more explicit computations, in the nice book by Mézard, Parisi and Virasoro, "Spin Glass Theory and Beyond"; if you want to learn a little more about replica symmetry breaking and how to do the computations, you can look in that book. Now I just want to convince you of the physical meaning of the matrix Q_ab, and I will again follow the notes by Francesco Zamponi. Essentially, when we started the whole discussion about understanding a disordered system in terms of its decomposition into pure states, we said: I take the joint probability of the variables and I want to decompose it over the states, so I use the weights of the states; and for the probability distribution within a state, we said that for fully connected models it can be perfectly described in terms of local magnetizations. In the specific model that we solved, the organization of the states is very simple, because two different states are orthogonal — so we used q0 = 0 — and the only parameter we had, q, was actually the size of a state: if you put two copies in the same state, the overlap q is measuring the size of the state. But in general, if you have a more complicated structure, you may have many different overlaps, that is, similarities between states. So we introduce the overlap q_αβ, defined as — let me put
it here: q_αβ = (1/N) Σ_i m_i^α m_i^β. This overlap measures the similarity between two states. In particular, in the TAP computation we used q, defined as (1/N) Σ_i m_i², which in this notation is nothing but q_αα: it is what is called the self-overlap, the overlap of one state with itself — actually the typical overlap between a copy of the system put in one state and another copy put in the same state. For the model that we studied, this overlap does not depend on the specific state, so we used just one parameter; but in general you may have many different overlaps — states may be more or less similar — and you want to understand the probability distribution of this object. So a full, or at least a better, description of the system would be to compute the probability distribution P(q) = Σ_{αβ} w_α w_β δ(q − q_αβ): the sum over all pairs of states α and β of the weight of state α, times the weight of state β, times δ(q − q_αβ). If you manage to compute this, you have a much better description of the system, because we are counting, over all the states, the relative overlap — the similarity between the states — and weighting each state with its right weight. This is called the overlap probability distribution. Obviously this object still depends on the disorder: you compute it for one sample, and then you have to average over the samples. The simplest thing to do is to average it, and so compute the disorder average of the overlap distribution, P̄(q). Now, what I would like to convince you of is that this object is actually related to the matrix Q that we use in the saddle point computation of the replica calculation. This gives the physical connection between an object — the matrix Q, introduced just to do the average of the log — and something which is physically relevant, because it is counting how
many states you have at a given overlap. Once you compute this, you understand partially or totally the physics of the system. So, in order to show the connection, let me compute one specific observable; let's call it q̄_1. It is defined as q̄_1 = (1/N) Σ_i ⟨s_i⟩², averaged over the disorder. What am I doing? I am computing the mean magnetization of spin i, and then I square it — because otherwise, when I average over the disorder, I would get zero and no meaningful result. This is my observable q̄_1. If you use again the state decomposition, you can rewrite this in the following way: (1/N) Σ_i Σ_{αβ} w_α w_β m_i^α m_i^β. What I am using here is the decomposition: the mean value of s_i can be written as ⟨s_i⟩ = Σ_α w_α m_i^α, and since it is squared, I write it once with α and once with β. Then you take the average, and this can be rewritten as Σ_{αβ} w_α w_β q_αβ, where I am using the definition of q_αβ. Now you can introduce an integral over q with a delta function δ(q − q_αβ), which is the identity, and then you repeat this sum, Σ_{αβ} w_α w_β, with just q, because I changed variable. All this is nothing but the overlap distribution, and so this becomes q̄_1 = ∫ dq P̄(q) q, because Σ_{αβ} w_α w_β δ(q − q_αβ), averaged over the disorder, is just the definition of P̄(q). So what we see is that this observable is connected to the averaged distribution of the overlap via this relation: it is essentially the first moment of that probability distribution.
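The chain of identities just derived — q̄_1 = Σ_{αβ} w_α w_β q_αβ = ∫ dq P̄(q) q — is easy to check numerically for a single sample. A sketch with made-up state weights and overlaps (the numbers are purely illustrative):

```python
import numpy as np

def overlap_moment(w, q, k=1):
    """k-th moment of the overlap distribution from the state
    decomposition: q_k = sum_{a,b} w_a w_b (q_ab)**k, where w are the
    state weights (summing to 1) and q is the state overlap matrix."""
    return float(np.sum(np.outer(w, w) * q ** k))

# toy sample: two orthogonal states with self-overlap 0.8,
# as in the p-spin picture (weights and overlaps are invented)
w = np.array([0.7, 0.3])
q = np.array([[0.8, 0.0],
              [0.0, 0.8]])

q1_bar = overlap_moment(w, q, k=1)  # 0.8 * (0.7**2 + 0.3**2) = 0.464
```

The same function with k = 2, 3, … gives the higher moments discussed next.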
You can redo the computation for higher moments — this is why I put the subscript 1. So you define q̄_k as (1/N^k) Σ_{i1…ik} ⟨s_{i1} … s_{ik}⟩², averaged over the disorder, and by doing the same steps you realize that it is related to the distribution of the overlap as q̄_k = ∫ dq P̄(q) q^k: this higher-order correlation is just the k-th moment of that probability distribution. Now we want to do the computation of q̄_1, or q̄_k, via the replica calculation: we want to connect this object to the matrix Q_ab, so that at the end we can connect the matrix Q_ab with the averaged distribution of the overlap. If we do the computation of q̄_1 by the replica method, we start from the expression (1/N) Σ_i ⟨s_i⟩², averaged over the disorder. Now I write the Gibbs average and the sum explicitly, and since it is a square I use two replicas: I have to do the average of s_i times the average of s_i, and in the two averages I use different variables, because they are summed over — they are dummy variables. So this is equivalent to writing (1/Z²) times the sum over s¹ and s², the two replicas, of the Gibbs factor for s¹ times the Gibbs factor for s², where Z² is the normalization; and then I insert the observable, which is (1/N) Σ_i s_i¹ s_i², and everything must be averaged over the disorder. You see, I just substituted for the mean value of s_i its explicit expression — (1/Z) times the sum over all configurations of the first replica of the Gibbs measure times s_i of the first replica — and I did the same for the second replica. So I used just two replicas to compute the square of this average. Now, if I want to do this computation via the replica method, I realize that 1/Z², that is Z^(−2),
is also equal to the limit n → 0 of Z^(n−2); this is trivial. Now, Z^(n−2) I compute by introducing n − 2 more replicas: here I already have two replicas, I introduce n − 2 more, and the whole expression becomes a sum over n replicas — I am already summing over two, and I introduce the sum over the others. Z^(n−2) is just the sum over these other replicas of exp(−βH): this is simply a way of rewriting the partition function to the power n − 2 by introducing n − 2 replicas. You take this, you put it here, and essentially you get an expression very similar to what we had yesterday: the limit n → 0 of the sum over all replicas — now a is an index going from 1 to n, because I have the two replicas here and the n − 2 replicas there. Let's put first the weight, exp(−β Σ_a H(σ^a)): the Gibbs weight for these two and for the other n − 2 replicas, a sum running up to n. And finally the observable, which only depends on the first two replicas. Now you repeat exactly the same computation we did yesterday — you introduce the matrix Q and so on — and you do the whole computation; you just have this extra term, and you have to keep it with you until the very end. Doing exactly the same computation, you end up with something like q̄_1 = ∫ dQ exp(N S(Q)) Q_12: this quantity Q_12 is what you recognize when you introduce the delta function for changing variables from the spins (actually σ — I don't remember, yesterday I called them σ, today s, but it's the same) to Q. So you have a Q_12 here, and once you compute this integral by saddle point, you take the matrix Q that maximizes the action, and the observable will be nothing but Q_12 at the saddle point — because you have to remember that
the exponent is linear in n. So you have the Q of the saddle point, and then you have the exponential factor at the saddle point: but this factor goes to one in the limit n → 0, precisely because the exponent is linear in n. Essentially only this remains: q̄_1 is given by the element (1,2) of the saddle point matrix, the matrix Q which maximizes the action. This is nice, because we have found the connection: the same object q̄_1 is now written both in terms of the averaged distribution of the overlaps and in terms of the saddle point matrix Q_ab maximizing the action. Unfortunately, this expression is not very satisfying, because as long as the matrix maximizing the action is replica symmetric, Q_12 is the same as Q_34, the same as Q_{8,27}, or any other choice of indices; but if the matrix that maximizes the action is not replica symmetric, why should I choose (1,2)? I could redo the computation putting here 27 and 50, and the computation is the same. This is a well-known problem: when an action is extremized on a point that breaks a symmetry, there are many saddle points, all related by the symmetry. Take the Curie-Weiss model: when the symmetry is broken you have one minimum with positive magnetization and another minimum with negative magnetization. This is true in general: when an action which satisfies a symmetry is maximized on points — more than one — where the symmetry is broken, there are many maxima, all related by the symmetry which has been broken. In this case the symmetry is the replica symmetry: if you have one saddle point, you have all the other saddle points where the rows and the columns of the matrix are reordered as you want. This means that the right expression is not taking just one element of the matrix, but summing over all the saddle points; and if you do the sum
over all the saddle points — since the saddle points are connected by a reordering of rows and columns of the matrix — this is equivalent to taking the average over all the off-diagonal elements of the matrix, at the saddle point. At any saddle point, in fact: when you change saddle point, the rows and columns of the matrix are reordered and this quantity is invariant, so you can compute it on any saddle point. And if you redo the same computation for q̄_k — let me write it here, the same replica computation for q̄_k; let me leave the saddle point implicit, it is clear that the matrix is always the one that maximizes the action — you find that q̄_k is the limit n → 0 of the average over all the off-diagonal elements of the matrix of (Q_ab)^k. So now we have the connection. You see, the same observable has been computed from the state decomposition — which is what we want to learn: we want to understand this state decomposition, so we want to understand the function P̄(q) — and from the replica calculation, and we see clearly the connection between this function and the matrix. Since this is true for any value of k, it is true for any function, because any function can be expanded in a polynomial; so in general you can write that ∫ dq P̄(q) f(q), for any function f averaged with the mean probability distribution of the overlap, is equal to the limit n → 0 of 1/(n(n−1)) times the sum over all the off-diagonal elements of f(Q_ab). In particular, if you take as f a delta function, you essentially bring the quantity you are interested in outside the integral: if f(q, q′) = δ(q − q′), you plug it into the integral and change the integration variable to q′.
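The identity ∫ dq P̄(q) f(q) = lim_{n→0} (1/(n(n−1))) Σ_{a≠b} f(Q_ab) can be sanity-checked at integer n, where the right-hand side is just the average of f over the off-diagonal elements. A small sketch, again with illustrative values:

```python
import numpy as np

def off_diag_average(Q, f=lambda q: q):
    """Average of f over the off-diagonal elements of Q,
    i.e. (1 / (n (n - 1))) * sum over a != b of f(Q_ab)."""
    n = Q.shape[0]
    mask = ~np.eye(n, dtype=bool)  # True everywhere except the diagonal
    return float(np.mean(f(Q[mask])))

# 1RSB matrix with n = 6, block size m = 3, q0 = 0, q1 = 0.6
Q = np.zeros((6, 6))
for b in range(2):
    Q[b * 3:(b + 1) * 3, b * 3:(b + 1) * 3] = 0.6
np.fill_diagonal(Q, 1.0)

# each row has (m - 1) = 2 entries equal to q1 and (n - m) = 3 equal
# to q0, so the average is (2 * 0.6 + 3 * 0.0) / 5 = 0.24
q_bar = off_diag_average(Q)
```

Passing f = lambda q: q**k gives the integer-n version of the k-th moment formula above.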
What you get is that the averaged probability distribution of q is nothing but the limit n → 0 of the average over the elements of the matrix of δ(q − Q_ab). So what we understood, essentially, is that if you look at the matrix that maximizes the replicated free energy, the histogram of its elements is related to the averaged distribution of the overlap, which is related to the state decomposition that we are interested in computing. This tells you that this matrix — which you compute for generic n, and then you take this strange limit — when you have it in front of you, in particular when you see the specific structure of the matrix that maximizes the action, is also giving you information on the way the states are organized in physical space. So if you have a replica symmetric solution, this means essentially that P̄(q) has just one value of q. Let me now draw some of these averaged probability distributions according to the structures of the matrix Q that we have seen up to now, namely the replica symmetric and the 1RSB ones. If the replicated free energy is maximized on a replica symmetric matrix, all the off-diagonal elements are the same, and this function is a delta function: in a replica symmetric solution, P̄(q) is just a delta function on a given value — call it q1, or q0, whatever you want; it is just one number. On the contrary, the 1RSB solution is already a little more complicated, because I have two values, q1 and q0, and I have a parameter m. So how can I compute how many elements of the matrix Q are equal to q0 and how many are equal to q1, and then take the usual strange n → 0 limit? The first thing I notice is that the matrix has all rows statistically equivalent: every row has the same number of q0's and the same
number of q1's. So instead of averaging over the whole off-diagonal part, I can just look at the first row, and this is equivalent: if the matrix Q has one of the structures I showed you, the average over all off-diagonal elements can equivalently be written as the average over the first row. Now, on the first row of a 1RSB matrix we have the diagonal term, which we ignore because it is not in the sum — the sum is over a ≠ b. Among the n − 1 remaining terms, we have a fraction (m − 1)/(n − 1) of q1 elements: the first row has a 1, then q1 up to position m, and then q0 for the remaining n − m. So the q1's are m − 1 out of a total of n − 1 elements, while the q0's are n − m out of a total of n − 1. Now you have to take the limit n → 0, and what you get is that the denominator becomes −1, so the signs invert, and in the 1RSB solution the averaged distribution of the overlap reads P̄(q) = (1 − m) δ(q − q1) + m δ(q − q0): the weight 1 − m is the (m − 1)/(n − 1) term in the limit n → 0, and the weight m is (n − m)/(n − 1) as n goes to zero. So we see: replica symmetric gives just one delta function; 1RSB gives two delta functions with different weights. The nice thing is that we are really getting physical information on the system, because this is telling us that if you put two replicas at random into this system, they will have with probability m an overlap q0 and with probability 1 − m an overlap q1. This also gives a nice physical interpretation of why the parameter m — which we introduced as the number of coupled replicas in order to compute the replicated partition function — at the end of the computation yesterday was a number between zero and one.
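The counting just done, together with its formal n → 0 continuation, fits in a few lines of code (a sketch; treating n as a real variable in the second call is exactly the "strange limit" of the lecture):

```python
def row_weights(n, m):
    """Fractions of q1 and q0 entries in one row of a 1RSB matrix
    (diagonal excluded): weight(q1) = (m - 1) / (n - 1),
    weight(q0) = (n - m) / (n - 1)."""
    return (m - 1) / (n - 1), (n - m) / (n - 1)

# integer replicas, n = 6 and blocks of size m = 3:
w1, w0 = row_weights(6, 3)     # (0.4, 0.6)

# formal analytic continuation n -> 0 with 0 < m < 1:
w1, w0 = row_weights(0, 0.4)   # (1 - m, m) = (0.6, 0.4)
```

The second call reproduces P̄(q) = (1 − m) δ(q − q1) + m δ(q − q0): the q1 weight becomes 1 − m and the q0 weight becomes m.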
Remember the plot of yesterday: the value of m of the states dominating the thermodynamics in the spherical p-spin model was one between T_d and T_c, and then it decreases. Now we understand better the role of the m parameter, because we have a connection with something physically meaningful. This is telling us that between T_d and T_c, m is one, which means that this probability distribution trivializes. Let me rewrite the probability distribution in the case of the p-spin model, where q0 is zero: in the case of the spherical p-spin model that we studied yesterday with replicas, it will be just P̄(q) = (1 − m) δ(q − q1) + m δ(q) — or we call q1 = q*, the saddle point value. We put q0 = 0 from the beginning; otherwise, if you want to do a more complete 1RSB computation, you keep it different from zero, you write the saddle point equation for q0, and you realize that q0 = 0 is always a valid solution. So the solution we got yesterday does correspond to this probability distribution of the overlap. And remember that in the paramagnet the distribution of the overlap is just a delta function at zero — sorry, here I forgot a term: q0 is zero, but the delta function at zero is there — in the paramagnet, the distribution of the overlap is just a delta function at zero. This shows that between T_c and T_d, where we only have a dynamical phase transition, the probability distribution of the overlap is actually equal to the one of the paramagnet: if you put m = 1, the first term disappears and the probability distribution is the same as the paramagnet's. This means that, in practice, since the states are exponentially many, even if you put in a large number of replicas, the probability that two replicas end up in the same state is vanishingly small in the thermodynamic limit. So even if there are states, when
you put replicas in, they end up in different states; you compute the overlap, the similarity between replicas, and even below T_d you find that all the replicas are orthogonal. But this is different from saying that there is no ergodicity breaking, no structure in the space, as in a paramagnetic solution. This observable, like the free energy, is identical between the 1RSB solution with m = 1 and the paramagnetic solution, but for a completely different reason: in the paramagnetic solution there are no correlations, there is no structure in configuration space; here there are states, but there are so many of them that when you put two replicas at random they will always be in different states, and so the weight at q = 0 is one. As soon as you pass through T_c, where you have a condensation, the number of states becomes small, and if you put in two replicas you have a finite probability, 1 − m, of putting the two replicas in the same state. So even from the observation of this probability distribution you realize that there is a structure of states, because with non-zero probability the overlap is larger than zero, meaning the two replicas ended up in the same state. In this way we also justify why the m parameter turns out to be smaller than one in the computation we did yesterday. So I think I told you more or less everything I wanted about the replica calculation, because now you really have all the elements — I don't want to say to redo a computation, but at least to understand, when you read a replica calculation, why one introduces replicas and, once the solution is found, what the physical meaning is of the matrix Q_ab that maximizes the action. In this way you can also give a physical meaning to all the parameters that you find in the solution of a disordered model. Notice that some of these parameters — not all of them — also appear
in the TAP approach. So even if replicas and the TAP approach are different, at the end you can match them, and we have, let me say, a common picture which is coherent between the TAP approach and the replica calculation. OK, I think it is the right time for a five-minute break, and then we come back to this lecture. What I will do in the following three hours is to try to tell you something about the dynamical behavior of these systems, but I will try not to make any computation — we did already a lot — I will just try to give you a flavor of what happens when you take these systems and study their dynamics: actually both spin glasses and structural glasses. By "spin glass", in terms of the models that I described to you, you have to think about the SK model: for me spin glasses are usually the SK model or the Edwards-Anderson model, that is, models with a pairwise interaction, usually with Ising spins. Then the spherical p-spin — even if it is a perfectly good spin-glass model — let me put it in the category of what we call glasses, or structural glasses, or glass-forming liquids, because they have a completely different phase transition; so here we put, for example, the p-spin model. I will show you that from the dynamical point of view they behave quite differently. This is a rough classification, just to be clear that the dynamics of spin glasses and the dynamics of glasses are different: I will try to convince you that they share some common features, but other features are different, and so even some spin-glass models I put in this category of glasses, or structural glasses. In this category also fall many experimental systems that undergo a very rapid increase of the viscosity: they become extremely slow in a very small temperature range, so they practically undergo a structural arrest when
you cool them down. But the main feature common to all these disordered models, when you look at the dynamics, is a very important phenomenon called aging. To explain what I mean by aging, let me show you the outcome of a typical experiment on a spin glass or a structural glass; this is pretty common, but the first experimental results came from spin glasses. Obviously, experimental spin glasses are materials where the interactions live on a cubic lattice, so a finite-dimensional system, while the SK model has a fully connected topology; so they are different, but from the point of view of the dynamics they are pretty similar. Suppose you have this spin glass, a material with a phase transition between a high-temperature phase and a low-temperature phase, where the high-temperature phase is paramagnetic. Paramagnetic means essentially that you apply a field, the system aligns with the field, you remove the field, and it loses any magnetization. At low temperature, no: you apply a field, and what happens when you remove it depends strongly on the field's history. In particular, you can do the following experiment. You take the system and you keep the external field switched on for a time t_w, which is called the waiting time; then you switch off the field and you look at how the magnetization decays. If you are in the paramagnetic phase, independently of how long you keep the field on, the decay of the magnetization is more or less waiting-time independent. So if you look at the magnetization as a function of time, for a given waiting time, at high temperature what you observe (I'm making things simple, but okay, it's more or less like this) is that the magnetization decays in a way which is independent of the waiting time: you use different waiting times, and in each case it decays as a function of time in more or less the same way.
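This waiting-time dependence is easy to reproduce in a toy simulation. The sketch below uses a Bouchaud-style trap model as a stand-in for the experiment (my choice for illustration; the lecture's spin glass is not a trap model, and the temperature T = 0.5 and the rates are arbitrary assumptions): each particle sits in a trap of random depth and escapes with an activated rate, and below the glass temperature the longer you wait, the deeper the typical occupied trap, so the slower the subsequent relaxation.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 0.5      # temperature, below the trap-model glass temperature T_g = 1
M = 20000    # number of independent particles

def escape(depths):
    """One time step: each particle escapes its trap with probability
    min(1, exp(-E/T)); escaped particles fall into a fresh trap."""
    out = rng.random(depths.size) < np.minimum(1.0, np.exp(-depths / T))
    depths[out] = rng.exponential(1.0, size=out.sum())
    return out

def persistence(tw, t):
    """Fraction of particles that never leave their trap during the
    observation window [tw, tw + t], after aging for a time tw."""
    depths = rng.exponential(1.0, size=M)   # trap depths E ~ Exp(1)
    for _ in range(tw):                     # aging: only deep traps survive
        escape(depths)
    stay = np.ones(M, dtype=bool)
    for _ in range(t):
        stay &= ~escape(depths)
    return stay.mean()

# The older system (larger waiting time) relaxes more slowly.
p_young = persistence(tw=10, t=100)
p_old = persistence(tw=1000, t=100)
print(p_young, p_old)
```

The exact numbers depend on the seed, but p_old comes out clearly larger than p_young: the system that aged longer forgets more slowly, which is the magnetization-decay experiment in miniature.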
On the contrary, if you go below the critical temperature, what you get is that the magnetization, that is, the way the system relaxes back to its free-energy minimum in the absence of the field, depends on the time the system spent with the field switched on. What you get is a family of curves with increasing waiting time: if you keep the field on for a longer time, the relaxation of the system will be slower. This is called aging, because you can think of the time over which the magnetization decays as an age of the system; or, the other way around, the age of the system is the time you have waited: you wait, and you let the system become older. The name is better appreciated if you do an even simpler experiment, which provides essentially the same result. The first experiment is better done in an experimental lab; this other one is better done in numerical simulations. In a simulation, instead of switching on a field, waiting for the system to relax in the field, then switching the field off and seeing how the magnetization decays, we prefer to do everything without any field. So you do the following. At the initial time you prepare the system in a T = ∞ configuration, which is just a random configuration. Then, at time 0⁺, you quench it to a temperature below the critical temperature: essentially you start immediately below the critical temperature, but from a random configuration. Then you measure the correlation between the configuration at time t_w + t and the configuration at time t_w. You see, this is like an overlap: the correlation measures the similarity between the configuration at time t_w and the configuration at time t_w + t. You make this plot for several values of t_w. Notice that now there is no field: we just wait a time t_w before taking the reference configuration, and from t_w on we watch how the system moves away from this reference configuration measured at time t_w. So t_w is the age of the system when we start the measurement: we let the system evolve for a time t_w, then we start measuring, and from that moment we keep measuring how fast it decorrelates. What you see is something like this; I'm exaggerating a little, but it's for didactic purposes. These curves, again, correspond to increasing waiting times; I haven't put the labels, so I should say that I'm plotting C(t_w + t, t_w) as a function of t. Again, I'm not switching on any field, I'm just letting the system relax, and what happens is that if I wait longer, the system becomes older, and the relaxation away from the configuration at time t_w takes more time. This aging phenomenon is very interesting, because essentially it is telling you that the system is out of equilibrium, and most of the experimental disordered systems that are studied are out of equilibrium. We spent all our course (well, I missed the first lecture, and then I was slower than expected) on the thermodynamics of disordered models, but in nature it is unlikely to find a disordered system at equilibrium: they are always out of equilibrium. So we would like to understand their dynamical behavior, because if you want to describe a disordered system that you can measure in an experimental lab, or even in a numerical simulation, you would like to understand something about the dynamics, in particular the aging dynamics, the out-of-equilibrium dynamics, not the equilibrium dynamics.
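The quench protocol just described fits in a few lines of code. As a stand-in for the spin glass (which we cannot easily simulate here) I use the simplest coarsening model I know that shows aging, a 1d Ising chain with zero-temperature Glauber dynamics; the system size and times are arbitrary, but the qualitative output, that C(t_w + t, t_w) decays more slowly for larger t_w, is exactly the plot being described.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100000                      # chain length, periodic boundaries
EVEN, ODD = np.arange(0, N, 2), np.arange(1, N, 2)

def sweep(s):
    """Zero-temperature Glauber sweep (even sublattice, then odd):
    each spin aligns with the majority of its two neighbours;
    ties are broken at random."""
    for idx in (EVEN, ODD):
        nb = np.roll(s, 1)[idx] + np.roll(s, -1)[idx]
        new = np.sign(nb)
        tie = nb == 0
        new[tie] = rng.choice([-1, 1], size=tie.sum())
        s[idx] = new

def two_time_correlation(tw, t):
    """C(tw + t, tw) = (1/N) sum_i s_i(tw + t) s_i(tw) after a quench
    from a random (T = infinity) configuration."""
    s = rng.choice([-1, 1], size=N)   # random initial condition
    for _ in range(tw):               # age the system for tw sweeps
        sweep(s)
    ref = s.copy()                    # reference configuration at tw
    for _ in range(t):
        sweep(s)
    return float(np.mean(s * ref))

c_young = two_time_correlation(tw=10, t=40)
c_old = two_time_correlation(tw=100, t=40)
print(c_young, c_old)
```

For the same time difference t, the older system stays closer to its reference configuration (c_old > c_young): the correlation depends on both times, not only on their difference, which is the signature of aging.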
If you do the computation, or the experiment, at high temperature, everything is simple; well, not so simple, but at least it doesn't depend on diverging time scales. On the contrary, if you go down to where the spin glass states dominate, the behavior of the system involves diverging time scales, and you want to understand something about these dynamics. Now, solving the dynamics is very difficult. In the last lecture I will show you the only model that we are able to solve, and I will give you some insight about what we get from that solution: it is, as usual, the spherical p-spin model. But in general we are not able to solve the dynamics. So one of the more interesting ideas that grew in our community in the last decades, suggested also by the solution of the spherical p-spin model, is to try to connect the dynamical behavior of the system to thermodynamic properties. Since we are not able to solve the dynamics in general, but we are a little better at doing replica calculations to solve the thermodynamics, counting states and all that, we would like to make predictions about the dynamical behavior of the system from what we know about the structure of states: the decomposition into many states, how many there are, at which free energies they sit, and so on. This is a possible way forward, and actually the problem is still largely open: we have understood only a few things, and there is still a lot to do.

So these are features common to the two classes of models. If I now want to highlight differences, just to give you an idea of what happens in one case and in the other, the main difference concerns models of spin glasses with two-spin interactions at the critical point. If you take a model with two-spin interactions, what happens is that T_d and T_c coincide: essentially you don't have a dynamical transition before the condensation transition (the critical temperature, if you want), so there is a unique phase transition, which is continuous. From the thermodynamic point of view you have a continuous phase transition, and in terms of the replica calculation this means that all the overlaps are very small, much smaller than one, if you go slightly below T_c: it's a continuous transition, all the overlaps become non-zero, but in a gradual way. This is reflected in the dynamics: below the critical temperature you do get aging, but slightly below T_c the typical behavior of the correlation functions is that in a first regime they decay to something of the order of the overlap, so something small, and aging takes over only in a tiny region. Essentially they go from the trivial paramagnetic behavior to a non-trivial aging behavior in a continuous way, and exactly at the critical point they still decay to zero, in a power-law fashion: it takes some time, but everything grows continuously. So in models with two-spin interactions the non-trivial dynamics grows continuously, as the thermodynamics does. On the contrary, in structural glasses, as I will show you (this is one of the features that has been computed exactly in the spherical p-spin model, and also in mode-coupling theory for structural glasses), for multi-spin interactions, for p-spin models, T_c is strictly smaller than T_d, and what you find (I will show you something more specific for the spherical p-spin model) is that at T_d you have a much stronger phase transition, a much stronger ergodicity breaking. You see, in the first case the ergodicity breaking takes place in a
tiny region: essentially the system can still go very far away, because q is small. If you look at the correlation between the reference configuration and the evolving system, the evolving system can go very far from the reference configuration: q is small, which means almost all of phase space is still accessible; there is not such a strong ergodicity breaking, only a tiny part of the space is not perfectly accessible. Here, on the contrary, the ergodicity breaking is much, much stronger: what happens at T_d is that the correlation function no longer decays to zero. For T smaller than T_d you have something like this, with a really huge gap, which, by the way, we will see is equal to the q* value we computed from the thermodynamics. So you see the difference at the critical point: in one case the ergodicity breaking grows continuously, in the other it is really very drastic, and the two behaviors are definitely very different. Then, in general, you also have the dependence on t_w, as usual: in the first case the region that depends on t_w is small, in the second it is very large. At the critical point you have a stronger ergodicity breaking, and you have aging in a wider interval of correlations.

So, what can we say in general about this aging behavior? Certainly we have to abandon the idea that the correlation, and also other quantities like the response that I will define now, are time translation invariant. The two main quantities we work with when we study the dynamics of these models are the correlation, which I already defined, C(t, t_w) = (1/N) Σ_i ⟨s_i(t_w + t) s_i(t_w)⟩, and the response function: R(t, t_w) is the derivative of the mean value of s_i at time t_w + t with respect to an external field applied at time t_w. To define it, you take the Hamiltonian, which in the end has no external field, you add external fields to it, and you compute this derivative in the limit h → 0. These are the two main quantities one wants to understand in the dynamical behavior: the correlation tells you how far the system goes from the reference configuration at time t_w, and the response tells you how much the system responds at a later time if you switch on a very small field at time t_w. Usually, in the high-temperature phase the dynamics is not aging, and if there is no aging, these correlation and response functions do not depend on the waiting time: they depend only on the difference between the two times you are probing, which here is exactly t. So if there is no aging, you have what is called time translation invariance, also called TTI, which in practice means that if you redo the same measurement of the correlation or of the response now, in five minutes, or tomorrow, you will get the same answer: a system that is not aging responds and decorrelates always in the same way; if the system is aging, the answer will be different. If there is time translation invariance, then C(t, t_w) is a function of t only; let me call it C(t), with a little abuse of notation: when I write C with one argument it is the time-translation-invariant function, and likewise R(t, t_w) is a function of the time difference only. In such a situation you can prove a very important theorem called the fluctuation-dissipation theorem (FDT), which relates these two functions in the following way: R(t) = −(1/T) dC(t)/dt, the response is minus one over the temperature times the time derivative of the correlation. This means that if you evolve the system always in contact with a thermostat at temperature T, in the same setting where you do canonical computations, and time translation invariance holds, then you can prove this fluctuation-dissipation relation between the response and the correlation.
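To fix notation, the two observables and the equilibrium relation can be written compactly (this is just the standard form of the definitions given above, in the lecture's notation):

```latex
% Two-time correlation and response
C(t,t_w) \;=\; \frac{1}{N}\sum_{i=1}^{N}\big\langle s_i(t_w+t)\,s_i(t_w)\big\rangle ,
\qquad
R(t,t_w) \;=\; \left.\frac{\delta\big\langle s_i(t_w+t)\big\rangle}{\delta h_i(t_w)}\right|_{h=0}

% Time translation invariance (no aging):
C(t,t_w) = C(t), \qquad R(t,t_w) = R(t)

% Fluctuation-dissipation theorem (equilibrium + TTI):
R(t) \;=\; -\,\frac{1}{T}\,\frac{\mathrm{d}C(t)}{\mathrm{d}t}
```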
Often the FDT is also written in an integrated way: you can integrate the response, and then you get a relation between the integral of the response, which is the susceptibility, or integrated response, and the correlation. The differential form is perfectly fine, but you usually find the integrated one, so let me write it here. You introduce the integrated response χ(t) = ∫₀ᵗ R(t′) dt′, which is useful because in an experiment you cannot make an instantaneous kick: you cannot switch on a field which is really instantaneous; you switch on a field for a finite, maybe short, time. So this is the response to a field which is switched on from time zero to time t, and then you switch the field off. By the fluctuation-dissipation theorem, the integrated response multiplied by the temperature satisfies T χ(t) = C(0) − C(t): you integrate the differential relation and you get the correlation at time zero minus the correlation at time t. Strictly this should be C(0), but if you normalize the correlation properly you can always set the correlation at time zero equal to one, so T χ(t) = 1 − C(t). This is what you do when the dynamics is time translation invariant. But in general we are interested in situations where this does not hold. So can we say something in general, in a situation where time translation invariance doesn't hold? Can we make reasonable guesses about the form of these correlation and response functions in the cases we are most interested in? Actually, this is known in some solvable cases, and among the solvable cases where aging takes place there is also one which is a non-disordered model: the simple ferromagnetic model.
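The integrated relation T χ(t) = C(0) − C(t) is easy to verify numerically in the simplest time-translation-invariant setting I can think of: an overdamped particle in a harmonic well (an Ornstein-Uhlenbeck process) in contact with a thermostat at temperature T. This is an illustration, not the lecture's model, and the parameters are arbitrary; the probe field h is switched on from time 0 to time t, exactly the protocol described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Langevin dynamics: dx = -x dt + sqrt(2 T) dW, equilibrium variance T.
T = 0.7        # thermostat temperature
dt = 0.01
nsteps = 300   # probe times up to t = 3
M = 50000      # independent trajectories
h = 0.05       # small field switched on at t = 0

x0 = rng.normal(0.0, np.sqrt(T), size=M)   # start in equilibrium
x, xh = x0.copy(), x0.copy()               # unperturbed / perturbed copies

C = [np.mean(x * x0)]      # correlation C(t) = <x(t) x(0)>
chi = [0.0]                # integrated response chi(t) = <x(t)>_h / h
for _ in range(nsteps):
    eta = np.sqrt(2.0 * T * dt) * rng.normal(size=M)
    x += -x * dt + eta
    xh += (-xh + h) * dt + eta   # same noise, so the difference is clean
    C.append(np.mean(x * x0))
    chi.append(np.mean(xh - x) / h)

C, chi = np.array(C), np.array(chi)
# FDT check: T * chi(t) should equal C(0) - C(t) at every probed time.
err = np.max(np.abs(T * chi - (C[0] - C)))
print(err)
```

The mismatch err stays at the level of the Monte Carlo noise in C. Out of equilibrium, in the aging regime, this identity is precisely what breaks down.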
If you take a, say, two-dimensional or three-dimensional Ising model and you quench it, that is, you cool it down below the critical temperature, what happens is that the system starts doing what is called coarsening: it starts developing larger and larger regions where the spins are aligned. But before reaching equilibrium, where the system is either positively or negatively magnetized, it stays for a long time (not so long compared to spin glasses, which keep it up for years, while a ferromagnet thermalizes rather easily) in a situation where spins of different sign form a structure of this type: plus domains and minus domains. If you look at the system during a simulation, you realize that during this coarsening the total magnetization stays roughly zero for a long time, and only eventually does it fall into one of the two free-energy minima, either positive or negative; this intermediate state is a metastable state, and the system stays there for a long time. During this process you can observe that there is a correlation length, say the size of these domains, call it ξ(t), which is growing: this patchy scenario makes the patches larger and larger, and eventually, at long times, the system falls into a situation where everything is magnetized, with, if the temperature is non-zero, just a few islands of the opposite sign; the magnetization is different from zero. On this process, which is called coarsening, you can find a very nice review by Alan Bray; it is much better understood, because the system is non-disordered, but it already provides very good insight into what happens during aging.

In this case, since we just have two ordered phases, there is a competition between them: the plus phase and the minus phase both want to grow, because the system is happier when it reduces the boundary between the phases, but which one is going to win you don't know; both grow, both exist, and only at the very end does one of the two take over and win. We are interested in the intermediate regime, and what you can show (there are specific approximations where you can do the computation explicitly) is that in general the correlation can be split into two parts. Let me draw it for you again: as a function of t, which is always the time difference, you start from one, because at t = 0 it is the self-overlap, which is one for spin configurations and also for spherical spins; then you have a very fast decay (this is for coarsening) to a value which is m², the square of the magnetization minimizing the free energy; and then you relax further in a way which depends on t_w. I'm drawing it exaggeratedly sharp: if you look at numerical data the curves are much smoother and you have to interpolate, but for you I will draw it very sharp. So you converge to m², and then you relax in a t_w-dependent way. What you can prove, what you can compute under suitable approximations, is that the scaling of this second part is essentially governed by the growing length scale: the system is becoming slower because a length scale is becoming larger. If you want to forget a configuration where the patches are small, it's easy: you change a few things and you have reshuffled the configuration. If you want to forget a configuration where the patches are large, it's much harder, because you have to move the boundaries of the patches, and the patches (they are called domains) are much larger, so forgetting them requires more time: it requires taking a domain and killing it, or flipping it to the other sign by moving its boundaries, and if the domain is large that takes more time. So in general you can split the correlation into two parts: one part, say the stationary part, the FDT part, which depends only on t and describes the decay down to m²; and, below m², an aging part which depends on t and t_w. In the specific case of coarsening you can show that the aging part depends only on a ratio of lengths; if we want the ratio to be smaller than one, we put ξ(t_w + t) in the denominator. So in general I can write C(t + t_w, t_w) = C_st(t) + C_ag(ξ(t_w)/ξ(t_w + t)). This should give you a quite clear interpretation of why aging happens: here everything is very simple, because there are just two competing phases, you are making the domains larger, and to forget a larger domain you have to wait longer. Notice that the growth of ξ(t), which is called the correlation length or domain size, depends on the dynamics you use. If you use, for example, Glauber dynamics, or a Monte Carlo dynamics where the order parameter is not conserved, so you can change the global magnetization, then ξ(t) grows like t^(1/2). If instead the dynamics is a bit more constrained, for example what is called Kawasaki dynamics, where you are allowed to change the spins but in such a way that the total magnetization remains exactly constant (essentially you have to do cooperative spin moves, so it takes more time), then ξ(t) grows like t^(1/3), which is slower. And eventually, if you add even a tiny fraction of disorder, the growth of ξ(t) becomes logarithmic, hence very slow. So, just to give you an idea of what happens in ordered and weakly disordered models: in weakly disordered models ξ(t) becomes a function of log t.
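The growth law ξ(t) ~ t^(1/2) for non-conserved dynamics can be checked with the same kind of toy simulation: in a 1d Ising chain at zero temperature the domain-wall density behaves as 1/ξ(t), so it should decay as t^(−1/2). (Again a stand-in, the simplest non-conserved coarsening model; sizes and times are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(3)
N = 400000                      # chain length, periodic boundaries

def sweep(s):
    """Zero-temperature Glauber sweep (even, then odd sublattice):
    spins align with the majority of their two neighbours, ties
    broken at random. This non-conserved dynamics coarsens."""
    for idx in (np.arange(0, N, 2), np.arange(1, N, 2)):
        nb = np.roll(s, 1)[idx] + np.roll(s, -1)[idx]
        new = np.sign(nb)
        tie = nb == 0
        new[tie] = rng.choice([-1, 1], size=tie.sum())
        s[idx] = new

def wall_density(s):
    """Density of domain walls, proportional to 1 / xi(t)."""
    return float(np.mean(s != np.roll(s, -1)))

s = rng.choice([-1, 1], size=N)   # quench from a random configuration
t1, t2 = 50, 200
for _ in range(t1):
    sweep(s)
n1 = wall_density(s)
for _ in range(t2 - t1):
    sweep(s)
n2 = wall_density(s)

# n ~ t^(-1/2)  <=>  xi ~ t^(1/2): the fitted exponent is close to 0.5
alpha = np.log(n1 / n2) / np.log(t2 / t1)
print(n1, n2, alpha)
```

With conserved (Kawasaki) moves the same measurement would give an exponent near 1/3, and adding quenched disorder slows the growth to logarithmic, as stated above.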
It is a generic function, it depends a lot on how you put in the disorder, but it becomes logarithmic, hence slow. So these are very simple situations, because, as I was saying, there are only two phases, two states, in competition, and still we already see aging; but here we have a very nice physical interpretation of the aging behavior: if the system is older, the domains are larger, and so it takes more time to forget them. And everything depends on only one length scale, because you can prove that the size of the plus domains and of the minus domains is more or less the same; as long as you have just one length scale, this determines everything: the length scale is related to a time scale, and this time scale then fixes the scaling of the aging part in the coarsening.

What's the problem with spin glasses? The problem with spin glasses is that we don't have only two states: we have many states, infinitely many in the p-spin model, and even in the SK model, where the transition is continuous and you can prove that the complexity is very small, you will still have many states. You can imagine that if you want to repeat a picture like this, but with, say, a hundred or two hundred states competing, the situation is rather different, in the sense that the number of states also becomes relevant, and so you may expect many different length scales and time scales entering the system. And indeed, let me tell you one last thing: the outcome of a very nice experiment done in the late 90s, which should convince you that while in coarsening you are growing just one kind of order, when you do aging in spin glasses you are most probably growing different kinds of order at different temperatures. What makes the ferromagnetic model simple is that if you thermalize it at half the critical temperature, the configuration that you get is also very good at one quarter of the critical temperature, because the state is always the same: all the effort you make in relaxing in the free-energy landscape at one temperature is exploited also at another temperature, because the free-energy landscape is always the same. In spin glasses you have many different states, and the weights of the states may change with temperature, so you may have one state dominating slightly below T_c, and another state, with another kind of long-range order, dominating at a lower temperature. This has been shown with a fantastic experiment on spin glasses, where experimentalists measured a susceptibility called χ″, the out-of-phase susceptibility (they apply an oscillating field, but okay, don't worry about the details: it measures how the system responds), as a function of temperature; here there is T_c. At high temperature the out-of-phase susceptibility is zero, so essentially the system is responding in phase, not out of phase. Responding out of phase means that you are not so free to respond: you would like to respond in phase, so if the field is oscillating you would like to oscillate with the field, and χ″ measures how much you fail to follow the field, oscillating 90 degrees out of phase with it; it is something which says that you are not free to respond. So in the high-temperature phase the system is free to respond and the out-of-phase susceptibility is zero. Then at the critical point you have a rapid increase of the out-of-phase susceptibility: the system now is blocked, below T_c it is not able to respond freely, and the out-of-phase susceptibility becomes different from zero. It's like someone is pulling you, but you are in a crowd and you are not able to follow your friend: this is what happens to the spins in a spin glass. Then you enter the spin glass phase, and while you cool down the system you make a stop, and if you stop, say, for
three hours, the system relaxes, because while relaxing it is trying to rearrange in order to follow the field. So if you allow the system to relax (here there is a stop of several hours), the system relaxes, which means that it tries to adapt to the field, because now the situation is not changing: before, you were changing the temperature, so you were changing the environment, while here the environment is fixed, the field is always doing the same thing, and the system has time to adapt to it. Now you start the cooling again, always at a very slow rate, and what you observe is the following: the susceptibility goes up again and then continues on its way. What does it mean that it goes up again? It means that all the work the system did to adapt at this stopping temperature is lost: it tried to develop a long-range order, it tried to arrange the spins in order to respond properly to the field, which was always doing the same thing, and all this work, done over three hours, is lost as soon as you cool down a little further, a few moments later. It means that the long-range order here is different: all that was learned at the stop temperature is useless at a nearby temperature. This is called temperature chaos, or rejuvenation: the long-range order the system develops at one temperature is useless at a close-by temperature. But what is still more amazing is that when you now heat the system back at a constant rate, with no stop at all, it does the following: it follows the dip. It means that the information, the long-range order developed in the three hours at the stop temperature, is not lost: it is there, and when you come back to the same temperature, that long-range order is again useful to respond properly to the field, and the system uses it. So you see two effects here: one is called rejuvenation, or temperature chaos, and the other one is a memory effect, an incredible memory effect, because the system spends a lot of time elsewhere, but then it comes back and remembers perfectly that at this particular temperature it spent some time developing a long-range order. This should be quite convincing evidence that in a spin glass, a system made of many possible states, there are many possible long-range orders: it is not like the ferromagnet, where there is just one. These long-range orders compete at different temperatures, one being much more favorable than another, and so developing a given kind of long-range order at one temperature is useless at another; but at the same time the system is so well organized (and this is reminiscent of the hierarchical organization of states in the Parisi solution of the SK model) that when you come back you recover the order you had developed, which is not lost at lower temperatures. The order developed at a given temperature is destroyed by going to higher temperatures (a higher temperature washes out the order developed below it), but not by going to lower temperatures. It is like working on different length scales: at a given length scale I develop an order, then I work on the details at smaller scales, but the large-scale order is still there; if I go to higher temperature, I wash out everything I developed below. So this is a nice experiment, just to give you an idea of how complex the dynamics of these models is, and there is a lot of work to do in order to understand it. Consider that there is no finite-dimensional model which is able to reproduce this behavior: this is an open problem, and if you find a model that is able to reproduce this behavior it would be very useful, because at the moment we are not able to reproduce it in numerical simulations. Okay, thank you; I will see you in the afternoon.