We were discussing the coarse graining of dynamical systems such as maps, and we saw how, in the case of a simple asymmetric tent map, coarse graining into two cells, a left cell and a right cell, led to a matrix W of transition probabilities. I left it to you as an exercise to check that this really is the transition matrix of a Markov chain, and I hope you have convinced yourselves that the dynamics does indeed reduce to that of a Markov chain. Let me now spend a few minutes explaining in a little more detail what we mean by a Markov chain, so that the concept becomes familiar to you, because it is extremely useful. It is the discrete-time analog of the continuous-time Markov process, which we talked about a little earlier. We imagine a random variable which takes on a discrete set of values x_1, x_2, x_3, and so on, and we work in discrete time, just as in the case of maps. I label the values x_j by j = 1, 2, 3, and so on, and when the random variable takes the value x_j, I say that the system is in the state j. From now on we do not worry about the x's themselves; we just talk about the state of the system, labeled by an integer. The set of possible states could be finite or infinite, but it is discrete and countable.
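Since the two-cell reduction was left as an exercise, here is a minimal numerical sketch of it in Python. Everything specific in it is an assumption made for illustration: the peak position a = 0.3, the partition of [0, 1] at x = a, and the use of a uniform density to sample initial points (the invariant density of this tent map).

```python
import numpy as np

def tent(x, a=0.3):
    """Asymmetric tent map on [0, 1] with its peak at x = a (vectorized)."""
    return np.where(x < a, x / a, (1.0 - x) / (1.0 - a))

a = 0.3
rng = np.random.default_rng(0)
x = rng.random(500_000)             # samples from the invariant measure,
                                    # assumed uniform for this map
k = (x >= a).astype(int)            # cell now:       0 = left, 1 = right
j = (tent(x, a) >= a).astype(int)   # cell one step later

# counts[j, k] accumulates observed transitions k -> j
counts = np.zeros((2, 2))
np.add.at(counts, (j, k), 1.0)
W = counts / counts.sum(axis=0)     # column-normalize: W[j, k] ~ P(k -> j)
print(W.round(3))                   # each column is close to (a, 1 - a)
```

Each column of W summing to 1 is exactly the Markov-chain (column-stochastic) property; that both columns come out close to (a, 1 − a) reflects the fact that each linear branch of the map stretches its cell over the whole interval.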
Now we ask: what is the probability p_j(n) that at some specific instant of discrete time n the system is in the state j, and what sort of rate equation can we write for it? Because we now have a discrete set of states, the right-hand side involves a summation rather than an integration, and this probability depends on the preceding step and nothing more. The gain term is a sum over all the possibilities k of the probability p_k(n−1) that the system is in the state k, multiplied by the transition probability from the state k to the state j, which we write as W_jk (the second index is the initial state, the first index the final state; that was our definition). Having reached the state k at time n−1, the system jumps from k to j, and so contributes to p_j(n); there is also a loss term, W_kj p_j(n−1), for jumps out of the state j. The chain equation is

p_j(n) = Σ_k W_jk p_k(n−1),

and since the probabilities of transitions out of any state add up to unity (Σ_k W_kj = 1), this can equivalently be written in the gain-loss form p_j(n) − p_j(n−1) = Σ_k [ W_jk p_k(n−1) − W_kj p_j(n−1) ], which is the master equation for a Markov chain. Of course, you have to specify an initial condition: you have to say what state the system starts in; it could be in a distribution over states or in a specific state. If it is in a specific state, say the state l, at time 0, then p_j(0) = δ_jl: it is 1 if j equals l and 0 otherwise. So the task is to solve this equation with those initial conditions, and exactly what we wrote down earlier comes through: if I define a column vector p(n) whose components are p_1(n), p_2(n), and so on, then the whole business can be written as p(n) = W p(n−1), where W is the matrix whose elements can be read off from the equation above. This immediately implies that p(n) = W^n p(0), and that is the formal solution for a Markov chain. Now, there is a systematic way of classifying Markov chains, depending on what kind of matrix of transition probabilities W you have. If it is possible to go to any state of the chain from any initial state, maybe not in one step but given enough time, then we say the chain is irreducible, because there is a connection between any state and any other. If it turns out that there is a flip-flop between a few states, for instance from state 3 you go to 5 to 7 and back to 3 and so on, then you have periodic cycles in the chain; if no such periodic cycles exist, the chain is called aperiodic. In general the nontrivial cases are those which are both aperiodic and irreducible, so that all states are connected up in this fashion. Everything depends on the nth power of the matrix W, and we know that a matrix can be put in Jordan canonical form by a similarity transformation, after which raising it to the nth power is a matter of algebra; so in principle Markov chains can be solved. Some technical difficulties arise if the summation runs to infinity: then you have to worry about questions of convergence of these matrices, and about what you mean by the nth power for
arbitrarily large n. Some technicalities arise there, but otherwise this is, in principle, all there is to a Markov chain, and the whole thing is governed by the matrix W. A very important question, particularly in the context of dynamical systems, is: does the chain reach a steady state at all? Is there a stationary distribution associated with it, a quote-unquote analog of an equilibrium state, just as critical points are for flows and fixed points are for maps? The answer is yes, and it is very important. We would like to know whether, after a long time, there is an invariant probability distribution, just like an invariant measure: something that does not change at all under further time iteration. Is there a limiting form of p(n) as n tends to infinity, some kind of equilibrium distribution? If so, it means the distribution at time n does not change at all from that at n−1 as n tends to infinity, and the stationary distribution is found by equating the two sides and solving. Recall that for continuous-time Markov processes we ended up with an equation of the form ∂p/∂t = W p(t), and I pointed out that the stationary distribution corresponds to a right eigenvector of W with eigenvalue zero, because you want ∂p/∂t to vanish: you would like a nontrivial eigenvector such that when W acts on it you get zero. The right-hand side there was pretty much the same as here, except in continuous time. And what did we say then? If this sum of gain and loss terms vanishes, you have a steady distribution. There are many ways in which the sum can vanish; in general you would have to solve a set of homogeneous simultaneous equations. But one important way, which happens in many physical problems, is if each term in this
bracket vanishes separately. If the sum vanishes term by term, then you have a steady distribution which obeys a principle called detailed balance. In other words, if the steady or equilibrium distribution, let me label it p_j^eq, is independent of time, then one possible solution is such that

W_jk p_k^eq = W_kj p_j^eq.

This is the detailed balance condition, and it is sufficient to produce an equilibrium solution, because it says that the ratio of the rates, W_jk / W_kj, gives the ratio p_j^eq / p_k^eq. Of course, you could ask: that only gives the ratios of these probabilities, so how do I find the numbers themselves? The answer is normalization: the whole thing has to be normalized to unity, and that fixes the overall constant. So this is a very important subclass of possibilities, the class where you have detailed balance, and whether it obtains or not depends on the physics of the situation. I mention this because we will come back to it: we will see that there are reaction-diffusion equations where we are going to impose the detailed balance condition based on physical considerations. A question was asked: if the chain is aperiodic and irreducible and has a nontrivial equilibrium distribution, will states still recur? Yes, certainly. The system keeps jumping from one state to another; it is not that things stop. It is just like an invariant measure: the dynamics continues, it is only the distribution that does not change under time evolution. For example, take the gas particles in this room. If I assume that the collisions have sufficiently short-term memory that the process is describable as a Markov process, then look at the velocity distribution of the particles of the gas in this room: it is a Maxwellian distribution
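To make this concrete, here is a small sketch of both the formal solution p(n) = W^n p(0) and the detailed balance condition, using a hypothetical three-state nearest-neighbour ("birth-death") chain; the entries of W below are invented for illustration, and nearest-neighbour chains of this kind are exactly the sort for which detailed balance holds.

```python
import numpy as np

# Column-stochastic transition matrix, W[j, k] = P(k -> j), for a
# made-up three-state birth-death chain (only nearest-neighbour jumps).
W = np.array([[0.50, 0.25, 0.00],
              [0.50, 0.25, 0.50],
              [0.00, 0.50, 0.50]])

# Formal solution p(n) = W^n p(0), starting from the definite state 0,
# i.e. p_j(0) = delta_{j0}.
p0 = np.array([1.0, 0.0, 0.0])
p_eq = np.linalg.matrix_power(W, 50) @ p0   # long-time distribution

# Detailed balance: W_jk * p_k(eq) = W_kj * p_j(eq), term by term.
for j in range(3):
    for k in range(3):
        assert np.isclose(W[j, k] * p_eq[k], W[k, j] * p_eq[j])

print(p_eq)   # close to the stationary distribution (0.2, 0.4, 0.4)
```

The ratios fixed by detailed balance, p_1/p_0 = W_10/W_01 = 2 and p_2/p_1 = W_21/W_12 = 1, give (1, 2, 2) up to a constant; normalizing to unity yields (0.2, 0.4, 0.4), exactly as in the prescription above.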
Under equilibrium conditions, the whole idea is that in spite of the collisions going on, in spite of the dynamics going on, the Maxwellian distribution does not change: it is an invariant distribution, an equilibrium distribution. That is exactly the point. It is exactly like what we said a few steps back, that the phase-space density ρ obeys the Liouville equation, ∂ρ/∂t + {ρ, H} = 0, and the whole point of equilibrium statistical mechanics was that in thermal equilibrium ∂ρ/∂t is zero; so we discovered that those phase-space distributions had to be functions of the Hamiltonian. Which functions they were depended on the external conditions. If you kept the system in isolation, a closed system in thermal equilibrium, then the density is uniform on the energy hypersurface: that is the microcanonical ensemble. If you kept it in contact with a heat bath, such that it could exchange energy with the surroundings but not matter, then ρ was a specific thing called the Gibbs distribution, proportional to e^(−βH), where β is 1/kT, the reciprocal of Boltzmann's constant times the temperature. If you kept it as an open system, which could exchange matter as well as energy with the surroundings, then you worked in the grand canonical ensemble, and ρ involved not just the factor e^(−βH) but an extra factor depending on the chemical potential of the system and the number of particles in it. These are precisely examples of the kind of invariant distribution you deal with in physical situations. The point I am making is that the probability distribution itself is invariant under the time evolution; it is not that transitions are not occurring. They occur all the time, but they keep the entire distribution unchanged. Does that answer the question? I was also pointing out that detailed balance does not have to obtain always; there are special physical reasons in
certain systems why it does, and then you have a particularly simple solution to the problem of finding the invariant distribution, which is then the equilibrium distribution; but it does not have to happen always. Another question that came up: yes, this was for the discrete case. If I want to convert this to a rate equation, Δp/Δt, then I subtract p_j(n−1) from both sides and go to the limit in which the time step goes to zero, and I end up with the master equation in continuous time. The equation here is for a chain: it says that transitions are made at unit intervals of time. The gain term is the probability of having reached the state k at time n−1, times the probability of flipping to j in the next step; so it gives the probability that, having reached the state k at time n−1, I am in the state j at the next step. The loss term tells me how much flows out: I am already in the state j at n−1, and in the next step I move out of it. Yes, exactly: I have assumed throughout that the transition probabilities are independent of time; they are given constants. What sort of chain would it be if the W's themselves depended on time? It would still be Markov, because Markov only means one-step memory, but it would be like the non-autonomous systems we looked at in the case of dynamical systems: the statistical properties would change with time, so it would be a non-stationary random process, still Markov but non-stationary, and then of course there is no question of a stationary distribution. So I have assumed that this set of numbers is independent of n, and the n-dependence sits entirely in the p's. Yes, that is for analyzing a problem that is given to you. Then the question was raised: how does one justify varying
the time step? No, that is a good question: is the time step arbitrary or not? There are problems where the time step is given to me and the state space is discrete; then it is a Markov chain, if the Markov conditions are satisfied. However, there could be other problems which I start by modeling in terms of discrete-time dynamics, but where the dynamics actually runs in continuous time; to derive the corresponding differential equations I start by writing down difference equations and then move to the continuum limit. That is the way you derive differential equations in most physical problems in any case: you start by asking what happens over finite increments and then go to the limit in which Δt goes to zero, which in general leads to differential equations. And no, this has nothing to do with whether the process is discrete or continuous. To find the stationary distribution I do not have to set anything special: since W is independent of n, I take the limit n → ∞ and ask whether there is a nontrivial limit. If there is, that is my stationary distribution. So all I have to do is ask whether W^n p(0) has a finite limit as n → ∞, and the idea is that in most cases, if the chain is ergodic, that limit will be independent of p(0), as long as everything is connected to everything else, just as the invariant measure was independent of the initial distribution in the dynamical systems we looked at. Now, whether that is always true is a subtle question; that is a good point, so let me explain it in continuous time. We saw that for stationary Markov processes everything was decided by two quantities: one was the
probability density function (for continuous variables), and the other was a conditional density, p(x, t | x_0, t_0). We assume stationarity, we assume time is continuous, and we assume that the state space is also continuous, so these are probability density functions. The first is the one-time density, which is independent of the origin of time, so no t appears in it; the second is a function of x, t and x_0, t_0, but by stationarity I can subtract out t_0, so it is a function of just one time difference. Now the interesting question is: does the limit as t → ∞ of p(x, t | x_0) lose memory of its initial condition and become equal to p(x)? That is the question one always asks, and the statement I made was that if the system has a sufficient degree of mixing, then this is true. This is essentially what happens for the kind of Markov processes we are talking about; the fact that it is in continuous time is irrelevant, since I could rewrite it in discrete time. But is it necessary? It is not very clear why it should be so. It is not a question of the memory being short; that has already been taken care of by saying that all the joint probabilities are determined by this two-time density. But if the autocorrelation function of the variable dies away slowly enough as t → ∞, there is actually no reason why this limit should go to p(x). It is an assumption that is made in general, a sort of consistency condition, but to prove it rigorously is another story altogether. So I do not believe that this follows as soon as you say the process is Markov; I am not one hundred percent sure, and I will check, but it strikes me that this
is an independent statement, an additional input that has to be added; but I will check this. All right, let me now go back and talk about the idea of recurrence, something we deal with when we coarse-grain dynamical systems. We looked at a two-cell dynamics for the asymmetric tent map, in which the point went from the left cell to the right and back again, and so on. But let us do this in a slightly more general setting, in higher dimensions, and see that there is an extremely simple formula for the mean recurrence time, which is important to understand. It is called the Poincaré recurrence formula; it is valid for all ergodic systems, which do not have to be chaotic or anything like that, and the derivation is simple, so let me do it. We look at recurrence-time statistics. We start with a phase space of arbitrary dimensionality; we will work it out in discrete time, in terms of maps, and we could make it continuous later (there are some subtleties involved), but for ease of illustration let us look at discrete-time dynamics. I have an initial point x_0; at time 1 it jumps somewhere else, and then somewhere else, and an orbit is formed by this point x_0. Now I divide up the phase space into cells and focus on one particular cell, call it C. I assume there is an invariant measure, and I assume that the dynamics is ergodic: in other words, any set of initial conditions, any little volume element, visits all of the available phase space, given enough time. I assume nothing else, just ergodicity. And now I ask: if I start with the initial point in C, what is the first time that I come back to C? What is the probability that I come back to C at the nth time step? That is one question. The second question is: what is the mean time, in discrete time
steps of some time step τ, to come back? Then I could ask for the variance, and for the statistics in general. So I would like to discover what the statistics of recurrences to the cell C look like; that is the target. I assume there is an invariant measure, and the measure of the cell C I call μ(C); let us normalize the invariant measure of the whole phase space to unity, so that I do not have to keep dividing by it. We quoted earlier that the mean recurrence time is the time step divided by μ(C); I would like to derive this formula from first principles, and in general I would like to derive the statistics itself, not just the mean time. So how do we go about it? It turns out there is a very elegant and simple formalism, which goes as follows. First, what do I mean by the probability of a recurrence to this cell? I mean the following joint probability (in keeping with our notation, earlier times to the right and later times to the left): starting in the cell C at time 0, I leave the cell and come back to it at time n. Let me call the complement of C, the rest of the phase space other than the shaded portion, C̃. So I want the joint probability that I am in C at time 0, in C̃ at time 1 (taking the time step τ equal to 1 for the moment), in C̃ at time 2, and so on up to C̃ at time n−1, and in C at time n. It is the corresponding conditional probability that I would like to compute, and we know that a conditional probability is a joint probability divided by the absolute probability of the condition:

P(C, n; C̃, n−1; ...; C̃, 1 | C, 0) = P(C, n; C̃, n−1; ...; C̃, 1; C, 0) / P(C, 0).

Since the system has an invariant measure and everything is stationary, P(C, 0) is independent of the time argument; it is just P(C), which is nothing but the measure of the cell itself, μ(C). I use P(C) and μ(C) interchangeably; remember that the total invariant measure of the phase space is normalized to unity. So it is the joint probability in the numerator that I need to compute. Since the variable I have is the point x_0 moving around the phase space, let us write the numerator as a multiple integral over the phase space with respect to the measure dμ(x), which is ρ(x) dx if you like. Now, what should the integrand be? I am going to start at time 0 inside the cell, so let me define the so-called indicator function χ(x), equal to 1 if x is an element of C and equal to 0 if x is an element of C̃; it is like a step function, 1 if the point is inside and 0 if it is outside. Since the orbit starts inside, the first factor is certainly χ(x). And how does x evolve? Let me not use the cumbersome map notation: x_n = T x_{n−1}, where T is some operator acting on x_{n−1}. This is the map function, if you like, but written abstractly as the time-development operator which takes me from time n−1 to time n, just for ease of notation. What happens next? At time 1 the point should be outside, so you multiply by 1 − χ(Tx), and the same thing should happen at times 2, 3, up to n−1; so for these factors let me write T^k, which denotes the
same map iterated k-fold on x, in a product over k from 1 to n−1. So you start inside, you jump out, you stay out, and you come back at time step n; the last factor is therefore χ(T^n x). Formally, then, this conditional probability, the probability of a recurrence to the cell at time n, is

(1/μ(C)) ∫ dμ(x) χ(x) [ Π_{k=1}^{n−1} (1 − χ(T^k x)) ] χ(T^n x).

Although the notation looks elaborate, the reason for it is complete generality: a very complicated time evolution is taken care of by the abstract operator T. Even if T is a very complicated nonlinear map, it is still taken care of by writing this, and T^k simply stands for the kth iterate of the map, just as T^n is the nth iterate. You have to do this integral in principle, and that gives you the recurrence probability, the distribution with respect to which I will take averages; but first I have to compute it in some simple fashion. Now let me introduce the following auxiliary quantity. Define w̃_n to be the probability that, starting in the complement of the cell at time 0, I remain there up to time n−1:

w̃_n = P(C̃, n−1; C̃, n−2; ...; C̃, 1; C̃, 0),   n ≥ 1.

Every entry is C̃: I start with the representative point in the complement of the cell, and I do not leave the complement, I do not enter the cell C at all; at time n−1 I am still in the complement. The measure of that set of points is what I call w̃_n. It is clear that for n = 1 this is just P(C̃, 0), and since the origin of time does not matter, w̃_1 = μ(C̃), the invariant measure of the complement. This implies a very useful relation: μ(C) = 1 − w̃_1, because the total measure is 1, so μ(C) + μ(C̃) = 1 by definition. Let us also define w̃_0 to be identically 1; we will see why that is necessary and useful. Now, what can we say about this sequence of numbers w̃_n? It starts at w̃_0 = 1; w̃_1 is some number between 0 and 1; and as n increases the sequence must be non-increasing, because each successive event (staying out of C one step longer) is contained in the previous one. The terms are probabilities, hence non-negative, so the sequence is bounded from below by 0. There is a theorem in analysis which says that a non-increasing sequence bounded from below has a limit; for an arbitrary sequence there is no reason why a limit should exist, it could just oscillate, but here it is guaranteed. Now, what does ergodicity have to say about this limit? We know that, given enough time, any set of initial conditions has to visit the entire phase space, including C. Therefore it is quite clear that, as n tends to infinity, w̃_n → 0: the limit exists and is in fact strictly 0. That is precisely where the assumption of ergodicity enters. Now let us try to simplify the integral a little. What is the trick one would use? If the integrand consisted of only the factors 1 − χ(T^k x) and nothing more, you can easily see that the integral would be related to the sequence w̃_n, because that is precisely the joint probability that you start in C̃ and remain in C̃. Unfortunately, you also have the factor χ(x) at the beginning and the factor χ(T^n x) at the end. What would you suggest? We somehow have to convert them into factors of the same kind: add and subtract 1. That is all you do, and a remarkable formula emerges. For the moment, look at the numerator, the unconditioned joint probability, and write χ(x) = 1 − (1 − χ(x)) and χ(T^n x) = 1 − (1 − χ(T^n x)), being careful about the minus signs, and multiply out. The term in which both 1's are picked, together with all the middle factors, corresponds to starting in C̃ at time 0 and continuing in C̃ all the way up to time n. So what would that become? It would be w̃_{n+1}, because we defined w̃_n as staying in this
complement from time 0 up to n−1; with the extra factors the stay extends through time n, so you immediately get a w̃_{n+1}. Then you have the remaining terms. The bare product Π_{k=1}^{n−1} (1 − χ(T^k x)), integrated, involves the n−1 consecutive times 1 through n−1 spent in the complement, which by stationarity gives w̃_{n−1}; that takes care of the original product term. Then there is the term with the single factor 1 − χ(x) in front of the product: that is the probability of staying in the complement at times 0 through n−1, which gives w̃_n, with a minus sign. And what does the last remaining term give? That is where you need a little bit of subtlety, because this term is

∫ dμ(x) [ Π_{k=1}^{n−1} (1 − χ(T^k x)) ] (1 − χ(T^n x)),

with a minus sign. Unfortunately, I cannot write it as a w̃ immediately, because the integration is over x, which is time 0 if you like, while the indicator functions inside the integral involve Tx, T²x, and so on, with k running from 1 upwards: everything in the integrand refers to whatever happens at times 1, 2, 3, up to n. So it is trying to give you some kind of probability, but it is over Tx, T²x, and so on.
But you see, this is an invariant measure; that is the whole point of an invariant measure: if you apply the operator T to x, the measure does not change at all. So I can change variables from x to Tx and nothing happens, and once I do that, the times 1 through n become the times 0 through n−1, and this term becomes w̃_n immediately. That gives you another factor of w̃_n, so the two middle terms contribute −2w̃_n in all. Dividing by μ(C) to get the actual recurrence probability, we collect all these results and write the final answer: the conditional probability we are interested in has now been rigorously shown to be

R_C(n) = ( w̃_{n+1} − 2 w̃_n + w̃_{n−1} ) / μ(C),

where μ(C) = 1 − μ(C̃) = 1 − w̃_1 = w̃_0 − w̃_1, using the definition w̃_0 = 1. This is the probability of a recurrence after a time interval n. We still have to check normalization, but you are actually guaranteed that this is a non-negative number: the numerator is precisely the manifestly non-negative integral we started from, so R_C(n) ≥ 0. Equivalently, the numerator is like the second difference of the sequence w̃_n, and its non-negativity says that this decreasing sequence, plotted as a continuous function, is concave upwards as it tends to its limit. So this is in fact the exact probability distribution of recurrence times. Now you can ask: what is the mean time to recurrence? Let me call the mean time to recurrence to the cell C t_C. But first I need to show that R_C(n) is normalized: we must make sure that the summation of R_C(n) over all allowed values of n is 1, for otherwise there is no guarantee that recurrence is a certain event. It had better be equal to 1, and we have to make sure of this.
So what is this equal to? What's the least value of n for which this is valid? From n = 1. So pull out the common factor 1/μ(C) = 1/(w̃_0 − w̃_1); the sum of the numerators is then (w̃_0 − 2w̃_1 + w̃_2) + (w̃_1 − 2w̃_2 + w̃_3) + (w̃_2 − 2w̃_3 + w̃_4) + ⋯. It's quite evident that w̃_2, for example, appears with coefficients +1, −2, +1 and gets cancelled; so do w̃_3, w̃_4, and so on. Everything gets cancelled except w̃_0 − w̃_1, so this whole thing collapses to (w̃_0 − w̃_1)/(w̃_0 − w̃_1) = 1. So it's evident that this is a normalized probability distribution. Now we can find the average recurrence time. Since we are using n for the time, call it ⟨n⟩_C, the mean for the cell C: ⟨n⟩_C = Σ_{n=1}^∞ n r_C(n). What is this equal to? Again this is simple to see: it's 1/(w̃_0 − w̃_1) multiplied by 1·(w̃_0 − 2w̃_1 + w̃_2) + 2·(w̃_1 − 2w̃_2 + w̃_3) + 3·(w̃_2 − 2w̃_3 + w̃_4) + ⋯. And this obligingly cancels out: the coefficient of w̃_1 is −2 + 2 = 0, that of w̃_2 is 1 − 4 + 3 = 0, that of w̃_3 is 2 − 6 + 4 = 0, and so on; everything cancels out and you are left with ⟨n⟩_C = w̃_0/(w̃_0 − w̃_1). But w̃_0 is 1 by definition, and w̃_0 − w̃_1 is 1 minus the measure of the complement, which is μ(C). Therefore this whole thing is finally 1/μ(C), in time steps of 1; had we used a time step τ, it would be τ/μ(C). This is the Poincaré recurrence theorem, proved rigorously: when you have coarse-grained dynamics of this kind, if the system is ergodic and has an invariant measure, then you are guaranteed that the mean time of recurrence to
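The 1/μ(C) result can also be seen directly in a simulation. A rough sketch, using an asymmetric tent map like the one mentioned earlier in the lecture; the break point a = 0.61 and the cell C = [0, 0.3) are assumed values for illustration. For a full-branched tent map Lebesgue measure is invariant, so μ(C) = 0.3 and the mean recurrence time should come out near 1/0.3 ≈ 3.33:

```python
import random

def tent(x, a=0.61):
    """Asymmetric tent map on [0, 1]; Lebesgue measure is invariant
    (the two branch preimages of an interval have total length equal
    to the interval's length)."""
    return x / a if x < a else (1.0 - x) / (1.0 - a)

def recurrence_time(x0, lo=0.0, hi=0.3, nmax=100_000):
    """First n >= 1 with T^n(x0) back in the cell C = [lo, hi)."""
    x = x0
    for n in range(1, nmax + 1):
        x = tent(x)
        if lo <= x < hi:
            return n
    raise RuntimeError("no recurrence within nmax steps")

random.seed(1)
# Start uniformly in C (uniform = invariant measure restricted to C):
times = [recurrence_time(random.uniform(0.0, 0.3)) for _ in range(20_000)]
mean = sum(times) / len(times)
print(mean)  # expect ~ 1/mu(C) = 1/0.3, the Kac/Poincare mean recurrence time
```

Note that the starting points must be drawn from the invariant measure restricted to the cell; the theorem is a statement about that ensemble average, not about any single trajectory's return time.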
any cell is just the reciprocal of the invariant measure of that cell. That's a very useful piece of information in a very general statement. We went through a little bit of formalism, but it is a very general statement: the assumptions were minimal and the argument completely rigorous. We just assumed ergodicity and the existence of an invariant measure, and this statement follows at once. If you have time steps which then go to zero continuously, things become a little more subtle, because it's not very clear that this formula can be translated directly: if I put a τ here instead of a 1 and let τ go to 0, then formally any cell of finite measure would have zero recurrence time, which is absurd. That's because of a flaw in the argument, which doesn't take one possibility into account: we counted as a recurrence events where the system simply stays in the cell itself, because we started at n = 1. So really you should start by saying it goes out and then comes back, and then taking the continuous-time limit is a little trickier. You have to subtract the measure of all those events where the system starts at time 0 in the cell and remains there at time 1; that's a fake recurrence with a recurrence time of 1. If you subtract that off — and such an improved formula does exist — then the result changes slightly. But apart from that technicality, this is a very general result, useful in many cases: it gives you a quick order-of-magnitude estimate of how long a system takes to recur. This is used in principle even in statistical mechanics, in large systems. It turns out that the mean Poincaré recurrence time of a very large system can be shown to grow like the exponential of the number of degrees of freedom, which is why the macroscopic world appears
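The "fake recurrence" point is easy to see numerically: the Kac mean counts n = 1 events where the system never left the cell at all. A sketch comparing the two notions of return on a slow circle rotation (the rotation angle and the cell are assumed toy values; rotation by an irrational angle is ergodic and preserves Lebesgue measure):

```python
import random

ALPHA = 0.0141421356237   # rotation step (assumed; irrational => ergodic)
LO, HI = 0.0, 0.1         # the cell C, with mu(C) = 0.1

def step(x):
    """Circle rotation x -> x + ALPHA (mod 1); Lebesgue measure invariant."""
    return (x + ALPHA) % 1.0

def kac_time(x, nmax=100_000):
    """First n >= 1 with the orbit in C; counts 'stayed in C' as a recurrence."""
    for n in range(1, nmax + 1):
        x = step(x)
        if LO <= x < HI:
            return n
    raise RuntimeError("no recurrence")

def genuine_time(x, nmax=100_000):
    """First return to C counted only after the orbit has actually left C."""
    left = False
    for n in range(1, nmax + 1):
        x = step(x)
        if not (LO <= x < HI):
            left = True
        elif left:
            return n
    raise RuntimeError("no recurrence")

random.seed(2)
starts = [random.uniform(LO, HI) for _ in range(50_000)]
kac = sum(kac_time(x) for x in starts) / len(starts)
gen = sum(genuine_time(x) for x in starts) / len(starts)
print(kac)  # ~ 1/mu(C) = 10, even though most events are n = 1 "stays"
print(gen)  # substantially larger: the orbit must leave and come back
```

Because the step ALPHA is much smaller than the cell width, most Kac "recurrences" are just the orbit lingering inside C for one more step; the genuine leave-and-return time is an entirely different, much larger number, which is exactly why the naive τ → 0 limit of τ/μ(C) fails.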
irreversible to us. Even if the statistical properties didn't change and everything went on as before without aging, even then the time for the system to recur would become exponentially large in the number of degrees of freedom. And if you have 10^23 degrees of freedom, then this is more than astronomically large: e^(10^23) is so large that it doesn't matter whether I measure time in units of seconds or microseconds or ages of the universe; it's essentially the same impossibly large number. So that's the reason why macroscopically things appear to be irreversible: even in principle, if you didn't have any dissipation — if the system were conservative, with no irreversibility built into its dynamics — the recurrence times would be impossibly, unrealistically large, and that's why you don't see recurrences in practice. Okay, I don't want to get into the details of macroscopic irreversibility here; it's a subject by itself, but this result is used in deducing these orders of magnitude. Okay, so let me stop here, and we will continue next time with a slightly different topic.
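The remark about units is a one-line calculation: on a logarithmic scale, changing the time unit only shifts log T by an additive constant, which is utterly negligible against 10^23. A sketch (taking T = e^N with unit prefactor as an assumed normalization, and a rough value for the age of the universe):

```python
import math

N = 1e23                            # degrees of freedom, order of Avogadro's number
log10_T = N / math.log(10)          # log10 of the recurrence time T = e^N, in time steps

# Switching units from seconds to ages of the universe (~4.35e17 s, rough value)
# only subtracts a constant of about 17.6 from log10(T):
shift = math.log10(4.35e17)
rel_change = shift / log10_T

print(log10_T)    # ~ 4.34e22 -- the number of decimal digits in T
print(rel_change) # ~ 4e-22   -- the unit choice is completely irrelevant
```

So T has about 4 × 10^22 decimal digits, and the most drastic change of time unit imaginable alters that digit count by less than one part in 10^21.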