Yeah, we were examining the elementary properties of Markov processes. Let me continue with that, give a little more of an introduction to some elementary Markov processes, and then go on to applying whatever we have learnt to dynamical systems, specifically to coarse-grained dynamics.

If you recall, I defined a Markov process as one in which the conditional probability for an event to occur at a certain instant of time depends only on what happened at the immediately preceding instant. This short-term memory had the consequence that the entire family of probability densities (or probabilities) for a Markov process is expressible in terms of a single conditional probability. This means the system is completely known: all statistical averages can be found once I give you an equation for this conditional probability, or conditional probability density in the case of processes in continuous time. This density obeys a certain master equation, which we wrote as

∂P(x, t | x0)/∂t = ∫ dx′ [ w(x′ → x) P(x′, t | x0) − w(x → x′) P(x, t | x0) ],

where w(x′ → x) is the transition rate to go from x′ to x and the integral runs over all intermediate states x′. The first term is a gain term and the second is a loss term; this is what is called the master equation. I further pointed out that if you took these quantities and wrote them out as a kind of column vector, you ended up with a matrix equation in the case in which the values of x form a discrete set, the integration being replaced by a summation. This kind of equation is not easy to solve: it is linear in P, but it is an integro-differential equation, and it is not altogether trivial to handle, although there are now well-established techniques for making considerable progress with it. The input information, of course, has to be the transition rates from one state to another.

The same thing can be written in discrete time for a discrete set of states. Let us say the variable x takes on only a discrete set of values x1, x2, x3, and so on, labelled by an integer j, and say the system is in state j if the random variable has the value x_j. What, then, is the probability P_j(n+1) that the system is in state j at time n+1? (Let us put the time index inside the argument, because we are going to write this as a matrix equation.) It is the probability that at time n the system was in some state k, multiplied by the transition probability w(k → j) of jumping from k to j in one step, and summed over k; in the notation used earlier, "from x′ to x" becomes "from k to j". So

P_j(n+1) = Σ_k w(k → j) P_k(n),

or equivalently, using the normalization Σ_k w(j → k) = 1,

P_j(n+1) − P_j(n) = Σ_k [ w(k → j) P_k(n) − w(j → k) P_j(n) ].

The first term on the right is the gain and the second is the rate of loss.
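This one-step evolution is easy to play with numerically. Below is a minimal sketch; the 3-state column-stochastic matrix is made up for illustration and is not from the lecture.

```python
import numpy as np

# Hypothetical 3-state chain: entry W[j, k] is the probability of
# jumping from state k to state j, so every column sums to 1.
W = np.array([[0.5, 0.2, 0.1],
              [0.3, 0.7, 0.4],
              [0.2, 0.1, 0.5]])

P0 = np.array([1.0, 0.0, 0.0])   # start with certainty in state 1

# Step-by-step evolution: P(n+1) = W P(n)
P = P0.copy()
for _ in range(10):
    P = W @ P

# The same thing in one shot: P(10) = W^10 P(0)
P_direct = np.linalg.matrix_power(W, 10) @ P0
```

Because each column of W sums to 1, total probability is conserved at every step.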
So again this is in the form of a rate equation: the first term contributes a gain to the probability of state j, and the second is the rate of loss. If you write the P_j together as a column vector P, this is a matrix equation of the form P(n+1) = W P(n). Since W is a constant matrix, the transition rates between the different states being given once and for all, it is immediately obvious that P(n) = W^n P(0), and all you have to do is calculate the matrix W^n. Now, calculating the n-th power of a matrix for large n, if the state space has a large number of dimensions, is not altogether trivial: what you do is try to diagonalize the matrix or, if you cannot, at least bring it to Jordan canonical form, after which you can take its n-th power without too much difficulty. So finding P(n), given any initial distribution P(0), is a problem in matrix algebra.

Let us look at a few examples. Here is an extremely simple example in continuous time, just to get used to these ideas, in which the state space contains just two possible values. We have a variable which switches between two values, say x1 and x2, at random instants of time. What would this look like if I plotted it? As a function of t, the variable x stays at the value x1 for a while, then flips to x2 (just for illustration's sake, let me take x2 to be negative; it does not matter), continues there for a while, flips back to x1, stays for a random interval of time, goes back to x2, and so on. So there is a state 1 and a state 2, and there is a certain transition rate per unit time of switching from state 1 to state 2; call that λ1. The rate of switching back from 2 to 1 is λ2, in general different from λ1. Suppose this process goes on for a very long time.

Now, what are dp1/dt and dp2/dt equal to? It is evident that at any instant of time p1(t) + p2(t) = 1: the system must be in one of the two states, and it flips back and forth between them. If the system is in state 1, it makes a transition to 2 at the rate λ1, and that is a loss term for p1, because you are switching out of 1 into 2; the gain comes from switching from 2 back to 1. So

dp1/dt = −λ1 p1 + λ2 p2,

and for p2, obviously,

dp2/dt = λ1 p1 − λ2 p2.

This is as it should be, because the transition matrix in this case is

W = ( −λ1    λ2 )
    (  λ1   −λ2 ),

and, as I promised, the sum of the elements of each column is 0. This has the immediate consequence that d(p1 + p2)/dt = 0, so p1 + p2 is a constant, equal to 1 if you normalize initially; it remains 1 for all time. The job now is to find the steady-state properties of this system. By the way, this Markov process, in which the variable flips back and forth between two possible values, is called a dichotomous or dichotomic Markov process, very often abbreviated as DMP, and it is very popular in modeling a large number of situations. It is evident that I can eliminate either p1 or p2 and obtain a second-order differential equation for the one that remains.
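As a quick numerical check of these coupled equations, here is a sketch that integrates dp/dt = W p with a simple Euler step, for the assumed rates λ1 = 2 and λ2 = 3 (illustrative values, not from the lecture):

```python
import numpy as np

lam1, lam2 = 2.0, 3.0                  # assumed switching rates 1->2, 2->1
W = np.array([[-lam1,  lam2],
              [ lam1, -lam2]])          # each column sums to zero
p = np.array([1.0, 0.0])                # start with certainty in state 1

dt, nsteps = 1e-4, 100_000              # integrate up to t = 10
for _ in range(nsteps):
    p = p + dt * (W @ p)                # Euler step of dp/dt = W p

# Column sums of W vanish, so p1 + p2 is conserved at every step,
# and by t = 10 the distribution has relaxed to a null vector of W.
```

The vanishing column sums of W make the Euler scheme conserve p1 + p2 exactly, mirroring d(p1 + p2)/dt = 0.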
One gets the same equation, in fact, for both, and the solution then depends on the initial conditions: it is the sum of two exponentials, governed by the eigenvalues of this matrix, which are fairly straightforward to find. We can in fact compute e^{Wt} and write down the full answer in a simple form, but first let me ask: what is the average value of x? This is an ongoing process for a very long time, so you could ask for the equilibrium, or stationary, probability distribution. Exactly as in the case of dynamical systems, in the stationary state the time derivative should vanish, so you want W p = 0. Calling the components p1^eq and p2^eq, this column vector tells me the equilibrium distribution between states 1 and 2. In other words, I have to find a column vector that is annihilated by W. Because the sum of each column of W is 0, it is evident that the uniform row vector (1 1) is a left eigenvector with eigenvalue 0: act with it on W and you get the null vector. What I want is the corresponding right eigenvector (the ket vector) with eigenvalue 0, and that gives me the equilibrium values.

But you can actually guess the answer on physical grounds. What is the average time the system spends in state 2 before it jumps out? Call it τ2, the mean residence time in state 2, and similarly τ1 for state 1. Clearly, the faster the system gets out of a state, the smaller the residence time in that state: λ1 is the rate of switching out of state 1, so τ1 = 1/λ1 is the mean time spent in state 1, and likewise τ2 = 1/λ2. If the process goes on for a very long time, you could add up all the intervals spent in state 1, divide by the total elapsed time, and take the limit as the total time goes to infinity; similarly for state 2. These relative fractions of time spent in the two states are directly proportional to the equilibrium probabilities. Therefore

p1^eq = τ1/(τ1 + τ2) = λ2/(λ1 + λ2),   p2^eq = τ2/(τ1 + τ2) = λ1/(λ1 + λ2).

That is physically obvious, and it is trivial to check that if you write down the column vector with entries τ1 = 1/λ1 and τ2 = 1/λ2, W acting on it gives 0.

What would the average value of the process be? Exactly as you would expect,

⟨x⟩ = (x1 τ1 + x2 τ2)/(τ1 + τ2) = (λ2 x1 + λ1 x2)/(λ1 + λ2).

It sits somewhere between x1 and x2; whether it is closer to one or the other depends on the transition rates. Similarly, for the mean square you just replace x1 and x2 by x1² and x2². And you can find the correlation function; that is the crucial thing we would like to find, the most important thing of all. So in particular I would like to find the correlation function, and let me call this C(t).
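Before turning to C(t), the residence-time picture can be checked by direct simulation: draw alternating, exponentially distributed residence times and measure the fraction of time spent in each state and the time average of x. A sketch, with the assumed illustrative values λ1 = 2, λ2 = 3, x1 = 1, x2 = −0.5:

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2 = 2.0, 3.0          # assumed rates for 1 -> 2 and 2 -> 1
x1, x2 = 1.0, -0.5             # assumed levels of the two states

# The process alternates between the states; residence times are
# exponential with means tau1 = 1/lam1 and tau2 = 1/lam2.
n_flips = 200_000
t1 = rng.exponential(1.0 / lam1, n_flips)   # stretches spent in state 1
t2 = rng.exponential(1.0 / lam2, n_flips)   # stretches spent in state 2

T_total = t1.sum() + t2.sum()
p1_sim = t1.sum() / T_total                 # fraction of time in state 1
x_mean_sim = (x1 * t1.sum() + x2 * t2.sum()) / T_total

p1_exact = lam2 / (lam1 + lam2)             # tau1 / (tau1 + tau2)
x_mean_exact = (lam2 * x1 + lam1 * x2) / (lam1 + lam2)
```

The simulated fraction of time in state 1 should approach τ1/(τ1 + τ2), and the time average of x should approach (λ2 x1 + λ1 x2)/(λ1 + λ2).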
In equilibrium, the correlation function is defined as

C(t) = ⟨ (x(0) − ⟨x⟩)(x(t) − ⟨x⟩) ⟩.

What do you think this is going to be for this process? You can compute it without too much difficulty, but let us first write down the formal expression. For a continuous process it would be an integral over the initial value x0 and over the value x at time t:

C(t) = ∫ dx0 ∫ dx (x − ⟨x⟩)(x0 − ⟨x⟩) P(x, t | x0) P^eq(x0),

where P(x, t | x0) is the conditional probability density and P^eq(x0) is the stationary density of the process (since it is a stationary process, there is no time argument there). That is the quantity I want to average, and that is the distribution I average over. Of course, here we have the discrete values: each integral becomes a sum over x1 and x2, and instead of the densities we write down the probabilities themselves. It is a straightforward calculation which you can complete easily.

Let us look at what these probabilities are in the simplest instance. To make life a little simple, take x2 = −x1, so that the process is completely symmetrical: it flips back and forth between the values +c and −c. For algebraic simplicity, suppose also that the two transition rates are equal to a single constant λ, so that the mean residence time on either side is the same, τ = 1/λ: the average duration in each state is the reciprocal of a single common rate λ of switching from 1 to 2 and from 2 back to 1.

Then it is easy to compute. I write P(t) = e^{Wt} P(0), whatever my initial distribution, and I need to exponentiate W. Let me write

W = −λ I + λ σ1,   where σ1 = ( 0  1 )
                               ( 1  0 ).

Now e^{A+B} is not e^A e^B unless A and B commute, but here one piece is a multiple of the unit matrix, which commutes with everything; that is the reason I chose this special case. So

e^{Wt} = e^{−λt} e^{λt σ1}.

When I exponentiate σ1, things are very simple, because σ1² = I: in the power series, the even powers collect into I (1 + (λt)²/2! + (λt)⁴/4! + …) and the odd powers collect into σ1 (λt + (λt)³/3! + (λt)⁵/5! + …). These two series are just cosh(λt) and sinh(λt), so

e^{λt σ1} = cosh(λt) I + sinh(λt) σ1,

and the exponentiation is finished:

e^{Wt} = e^{−λt} ( cosh(λt)  sinh(λt) )
                 ( sinh(λt)  cosh(λt) ).
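The closed form for e^{Wt} is easy to verify numerically. The sketch below compares it with the exponential computed from the eigendecomposition of Wt (W is real and symmetric in this symmetric case, so numpy's eigh applies); λ and t are arbitrary assumed values.

```python
import numpy as np

lam, t = 1.5, 0.7                         # assumed rate and time
I = np.eye(2)
sigma1 = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
W = -lam * I + lam * sigma1               # generator of the symmetric DMP

# Closed form derived above:
closed = np.exp(-lam * t) * (np.cosh(lam * t) * I + np.sinh(lam * t) * sigma1)

# Independent check: exponentiate W t via its eigendecomposition
# (W is real symmetric, so eigh applies).
evals, evecs = np.linalg.eigh(W * t)
numeric = evecs @ np.diag(np.exp(evals)) @ evecs.T
```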
Acting with this matrix on P(0) gives the exact solution of the coupled differential equations, which we circumvented by simply writing the matrix solution down right away. Written out explicitly,

p1(t) = e^{−λt} [ cosh(λt) p1(0) + sinh(λt) p2(0) ],
p2(t) = e^{−λt} [ sinh(λt) p1(0) + cosh(λt) p2(0) ].

You begin to see immediately that this gives exactly what you would intuitively expect. If I start with the system in, say, the upper level, then p1(0) = 1 and p2(0) = 0, and

p1(t) = e^{−λt} cosh(λt) = (1/2)(1 + e^{−2λt}),
p2(t) = e^{−λt} sinh(λt) = (1/2)(1 − e^{−2λt}).

At t = 0 these are 1 and 0; as t increases, p1 drops from 1 towards 1/2 asymptotically, while p2 builds up from 0 towards 1/2. And what is the equilibrium distribution in this case? Just half and half: it is clear, because the average durations in the two states are equal. That is precisely what the probabilities tend to as t → ∞, and it is a check that there is sufficient mixing in this system: the conditional probability tends to the stationary probability as t → ∞, and the memory of the initial condition is gone. That is exactly what has happened here.

It is not hard to show what the correlation function becomes. In this symmetric case the average value of x is 0, so we do not even have to subtract the average. Since the process takes the values +c and −c, a factor c² appears when the two factors are multiplied together, so C(0) = c², and computing the averages with the conditional probabilities above gives

C(t) = c² e^{−2λt}.

The process is exponentially correlated, with correlation time (2λ)^{−1}; that is roughly the time scale on which the memory acts. This is not surprising, because there is just one time scale in the problem, so you end up with something proportional to λ^{−1}. There are many, many processes which are exponentially correlated Markov processes. In general there could be more time scales, and the correlation could be a sum of exponentials, but when a process is exponentially correlated with a single correlation time, something very special happens. The DMP is a classic instance of such a two-state process. You can add an asymmetry to it, you can change the rates and so on, but the essential physics does not change; it is very common and very popular in modeling. Clearly, it models anything which is either on or off, a two-state system. You can also make a three-state system, with quiescent states and active states and so on; those are generalizations of these two-state processes. Okay, we could make things a little more complicated, and this brings me to a very popular class of jump processes.
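The exponential correlation can be confirmed directly from the conditional probabilities: C(t) = Σ_ij x_i x_j P(x_i, t | x_j) p_eq(x_j), with the conditional matrix read off from e^{Wt}. A sketch with assumed values λ = 1 and c = 2:

```python
import numpy as np

lam, c = 1.0, 2.0                     # assumed rate and levels +c, -c
x = np.array([c, -c])
p_eq = np.array([0.5, 0.5])           # equilibrium of the symmetric DMP

def cond_prob(t):
    """Matrix of P(x_i at time t | x_j at time 0) for the symmetric DMP."""
    e = np.exp(-2.0 * lam * t)
    return 0.5 * np.array([[1 + e, 1 - e],
                           [1 - e, 1 + e]])

def corr(t):
    """C(t) = sum_ij x_i x_j P(i, t | j) p_eq(j); the mean is zero here."""
    return x @ cond_prob(t) @ (x * p_eq)

ts = np.array([0.0, 0.5, 1.0, 2.0])
C = np.array([corr(t) for t in ts])   # should equal c^2 exp(-2 lam t)
```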
In these jump processes, the variable X takes on a continuous set of values, and there is some equilibrium state, some equilibrium probability density P^eq(x). Consider a Markov process whose transition rate (the transition probability per unit time to go from x′ to x) is modeled in the following way: there is a time scale λ^{−1}, and

w(x′ → x) = λ P^eq(x).

So the jump rate is independent of the initial state; the process jumps instantly, with a rate that depends only on the final state, and in a manner directly proportional to the stationary distribution itself. This occurs in many physical situations. I urge you to solve the master equation with this assumption. P^eq is a normalized density, so its integral over all x is 1, and it turns out you can actually solve for the conditional probability density P(x, t | x0) exactly and show that the process is exponentially correlated. It generalizes the idea of the two-state process to a continuum of states in a specific direction. So once again, I leave you to work this out and show that the correlation is an exponentially decaying one.

We will talk about Markov processes and similar considerations again, but let us now change horses and go back to dynamical systems, to see how a specific coarse-grained dynamics is in fact a Markov process. This is what I would like to do: to show you that the dynamics is exactly that of a Markov chain, the sort of thing we examined here. Just to make things a little less accidentally degenerate, let us look at the tent map on the unit interval once again, but let us make it an asymmetric tent map, so that we do not have artificial degeneracies; there is a specific reason why I do so. Instead of a tent with its peak at one half, let it rise on [0, a] and come down over the rest of the interval, where a is some number less than one half. The map function is

x_{n+1} = f(x_n) = x_n/a              for 0 ≤ x_n ≤ a,
x_{n+1} = f(x_n) = (1 − x_n)/(1 − a)  for a ≤ x_n ≤ 1.

The slope of the rising piece is 1/a, because f must reach 1 when x_n = a; the falling piece starts at 1 and goes to 0, so it is (1 − x_n)/(1 − a), with slope −1/(1 − a). (If a were one half, both slopes would have magnitude 2 and we would recover the symmetric tent map.) This map is fully chaotic, and what I am going to do is partition the interval into two cells: the natural thing is to take L = [0, a] as the left cell and R = [a, 1] as the right cell, and similarly on the x_{n+1} axis.

What is the invariant density of this map? We write the Frobenius-Perron equation once again:

ρ(x) = ∫₀¹ dy ρ(y) δ(x − f(y))
     = ∫₀^a dy ρ(y) δ(x − y/a) + ∫_a^1 dy ρ(y) δ(x − (1−y)/(1−a)),

breaking the map up into its two branches. In the standard procedure we convert this to a functional equation and see if we can guess a solution. In the first integral the delta function fires at y = ax, and pulling the factor 1/a out of its argument puts an a in the numerator, giving a ρ(ax). Similarly, in the second integral the factor (1 − a) comes up, and the argument of ρ becomes y = 1 − (1 − a)x. So the functional equation reads

ρ(x) = a ρ(ax) + (1 − a) ρ(1 − (1 − a)x).

What is the guessed solution? The fact that the prefactors a and 1 − a add up to 1 suggests putting ρ equal to a constant, and indeed ρ(x) = 1 is the unique normalizable, non-negative solution of this equation. So the density is uniform once again: it does not matter whether the tent map is symmetric or asymmetric, it is uniformly spread out.

Now I am going to do the following. I am going to ask for the transition matrix to go from one cell to the other, write this matrix down completely for the chain, and then test whether this is a Markov process by checking whether the two-step probabilities are given by the square of the one-step matrix, or more generally whether the n-step probabilities are given by the n-th power of the one-step transition matrix. If that can be computed on both sides and verified, then the coarse-grained dynamics is as good as a Markov process. We need several steps to do this, so let us do it carefully.

First, what is the measure of the left cell? That is clearly the integral of the invariant density over the cell,

μ(L) = ∫₀^a dx ρ(x) = a,

since ρ = 1; that is just the definition of the invariant measure. Next I need to calculate the transition probabilities: for instance, the probability that, being in the left cell L at time 0, I am again in L one step later. I need to compute this number.
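The uniform invariant density can also be seen numerically by histogramming a long orbit of the asymmetric tent map. A sketch with the assumed value a = 0.4; floating-point orbits of piecewise-linear maps deserve some caution in general, but for these slopes the histogram comes out flat empirically.

```python
import numpy as np

a = 0.4                         # assumed peak position, a < 1/2

def f(x):
    """Asymmetric tent map: slope 1/a on [0, a], slope -1/(1-a) on [a, 1]."""
    return x / a if x <= a else (1.0 - x) / (1.0 - a)

rng = np.random.default_rng(1)
x = rng.random()                # generic initial point
for _ in range(100):            # discard a short transient
    x = f(x)

n_iter = 200_000
orbit = np.empty(n_iter)
for i in range(n_iter):
    x = f(x)
    orbit[i] = x

# With invariant density rho(x) = 1, every bin of a normalized
# histogram over [0, 1] should be close to 1.
hist, _ = np.histogram(orbit, bins=10, range=(0.0, 1.0), density=True)
```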
It is a conditional probability, P(L, 1 | L, 0), at the end of one time step. Similarly, I need to calculate P(L, 1 | R, 0), P(R, 1 | L, 0) and P(R, 1 | R, 0), and then I am going to put these in the form of a matrix: my one-step transition matrix. Now, how do you define these quantities? Remember that x itself is continuous, but I have broken the interval up into cells, and I am now looking at the symbolic dynamics of the cells: the states of my system are L and R. How would I write these probabilities down from first principles? I have to be a little careful here. I start with the joint probability of being in the left cell at time 1 and in the left cell at time 0. As you will see intuitively, this is

P(L, 1; L, 0) = ∫_L dx0 ∫_L dx1 ρ(x0) δ(x1 − f(x0)).

Since x is continuous, we are talking about probability densities: ρ(x0) is the invariant density of the starting point x0, integrated over L, and δ(x1 − f(x0)) is the conditional probability density for moving from x0 to x1. It says that if you give me an x0, I am guaranteed to go to the point x1 = f(x0). Their product is the joint probability density for x0 at time 0 and x1 at time 1, and it is integrated over the regions you are interested in. So this is in fact the definition of the joint probability.

What is it equal to? The cell L is (0, a), so the integral is ∫₀^a dx0 ∫₀^a dx1, with ρ(x0) = 1 for this map (so you do not need to put that in), and the delta function is δ(x1 − x0/a), because x0/a is the left branch of the map. One way to do this integral would be to convert the delta function into one over x0 and then do the x1 integration, but I can do it directly, provided I keep track of when the delta function fires. Draw the region of integration, the square (0, a) × (0, a), together with the delta-function constraint x1 = f(x0) = x0/a: for every x1 between 0 and a, the delta function fires provided x0 lies between 0 and the intercept, and that intercept is a², because a²/a gives the value a. Restricting the x0 integration to this region (otherwise the delta function does not contribute), the x1 integration gives 1 and the integral becomes ∫₀^{a²} dx0 = a². That is the joint probability P(L, 1; L, 0).
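The same counting can be done by brute force: sample initial points from the uniform invariant density and tabulate which cell they land in after one step of the map. A sketch with the assumed value a = 0.4; the estimated conditionals should reproduce the values being computed here.

```python
import numpy as np

a = 0.4                                   # assumed peak position

def f(x):
    """Vectorized asymmetric tent map."""
    return np.where(x <= a, x / a, (1.0 - x) / (1.0 - a))

rng = np.random.default_rng(2)
x0 = rng.random(1_000_000)                # invariant density is uniform
x1 = f(x0)

in_L0 = x0 <= a                           # cell L = [0, a]
in_L1 = x1 <= a

# Estimated conditional probabilities P(cell at 1 | cell at 0)
p_LL = np.mean(in_L1[in_L0])              # expect a
p_RL = np.mean(~in_L1[in_L0])             # expect 1 - a
p_LR = np.mean(in_L1[~in_L0])             # expect a
p_RR = np.mean(~in_L1[~in_L0])            # expect 1 - a
```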
By definition, the joint probability is the conditional probability multiplied by the absolute probability: P(L, 1; L, 0) = P(L, 1 | L, 0) P(L, 0). And P(L, 0) is just the invariant measure: it is the stationary probability of being in L (close your eyes and put your finger on the interval; what is the probability you are there?), which is μ(L) = a. Therefore

P(L, 1 | L, 0) = a²/a = a.

Next, P(R, 1 | L, 0): the same strategy, but now the x1 integration runs over R. For every x1 in R, the delta function contributes provided x0 runs from a² to a, so the joint probability is ∫_{a²}^a dx0 = a − a² = a(1 − a); dividing by μ(L) = a to get the conditional probability,

P(R, 1 | L, 0) = 1 − a.

Similarly, for P(L, 1 | R, 0) I integrate over the rectangle with x0 in R and x1 in L, and the only non-trivial contribution comes from the part of the right branch lying below a. The right branch is x1 = (1 − x0)/(1 − a), and this equals a when 1 − x0 = a(1 − a), that is, at x0 = 1 − a + a². So the joint probability is the length of the interval from 1 − a + a² to 1, which is a − a² = a(1 − a); dividing by the measure μ(R) = 1 − a of the right cell,

P(L, 1 | R, 0) = a.

Finally, for P(R, 1 | R, 0) the relevant interval runs from a to 1 − a + a², of length (1 − a)²; dividing by 1 − a,

P(R, 1 | R, 0) = 1 − a.

So, with the columns labelled by the initial cell, the one-step transition matrix is

T = (  a      a   )
    ( 1−a    1−a  ),

and the sum over each column is 1. T is therefore a stochastic matrix: a matrix with non-negative elements, each of whose columns adds up to 1, is called a stochastic matrix, because of its connection to these probabilities. (Had the row sums been equal to 1 instead, the uniform vector would have been a right eigenvector.) Such a matrix clearly has 1 as an eigenvalue, with the uniform left eigenvector (1 1), and the corresponding right eigenvector is not hard to find: it is related to the equilibrium distribution, in this case to (a, 1 − a) itself. So this gives me the transitions at one time step; T is related to the W I wrote down earlier by the addition of the unit matrix. The question now is: can the transition matrix at two time steps be written as the square of the one-step transition matrix? That is, is the matrix of probabilities P(cell at time 2 | cell at time 0) equal to T²? I leave you to verify this. If it is, then you have reason to believe that this is a Markov chain.
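Before doing the two-step integrals analytically, the test can be previewed numerically: estimate the matrix of probabilities P(cell at step 2 | cell at step 0) from the second iterate of the map and compare it with T². A sketch, again with the assumed value a = 0.4:

```python
import numpy as np

a = 0.4                                   # assumed peak position

def f(x):
    return np.where(x <= a, x / a, (1.0 - x) / (1.0 - a))

# One-step matrix: column = initial cell (L, R), row = final cell (L, R)
T = np.array([[a,     a    ],
              [1 - a, 1 - a]])

rng = np.random.default_rng(3)
x0 = rng.random(1_000_000)                # stationary (uniform) start
x2 = f(f(x0))                             # second iterate of the map

in_L0 = x0 <= a
in_L2 = x2 <= a
two_step = np.array([[np.mean(in_L2[in_L0]),  np.mean(in_L2[~in_L0])],
                     [np.mean(~in_L2[in_L0]), np.mean(~in_L2[~in_L0])]])
# For a Markov partition this should agree with T @ T
```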
Because then all you are doing is taking the one-step transition matrix and raising it to higher and higher powers. Now, what would you do to compute the two-step matrix? I first compute the joint probability

P(L, 2; L, 0) = ∫_L dx0 ∫_L dx2 ρ(x0) δ(x2 − f^{(2)}(x0)),

where ρ(x0) = 1 in this case and f^{(2)} is the second iterate of the map. I play the same game, compute these numbers (which are very easy to do in this case), and check whether the resulting matrix equals the square of the one-step matrix. If so, then by induction I know the same is true at the n-th step as well. I leave you to verify that this is indeed so. This is a Markov chain, which is equivalent to saying that the partitioning we have done of the phase space of this map is a Markov partition: the symbolic dynamics of the system has been reduced to a Markov chain in discrete time.

Therefore I can use the entire machinery of Markov chains to solve various problems here. For instance, I could ask questions like: if I start in the cell L, what is the mean time for me to come back to L? The system undergoes excursions from L to R, stays in L for a while, and so on; if it leaves L after a time step, what is the mean time for it to come back, and what are the statistics of these recurrence times? The entire machinery of Markov chains, for which well-defined answers to such questions exist, can be used to study the dynamics of this system. You do not have to go back to the map any more, because that dynamics has now been transferred to the properties of this Markov chain; the transition matrix already carries all the information you need. We will study this a little further, because I would like to show you that there is a very specific behavior of the recurrence statistics. Namely, since the system is ergodic, you know that it will come back to whichever cell it leaves, given enough time. The questions of interest are then: how long does it take to come back, what are the statistics of the recurrences, are successive recurrences independent of each other or not, and so on. Well-defined answers to these exist, and we will use this as a sort of case history to study the matter a little deeper. I will do that next time.
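As a numerical preview of those recurrence statistics: for this particular partition the two columns of T are equal, so each new symbol is drawn with probabilities (a, 1 − a) independently of the current one, and the mean recurrence time to L should be 1/μ(L) = 1/a (Kac's lemma for a stationary ergodic chain). A sketch with the assumed value a = 0.4:

```python
import numpy as np

a = 0.4                        # assumed peak position of the tent map
rng = np.random.default_rng(4)

# For this partition both columns of T equal (a, 1-a), so the symbol
# at each step is L with probability a regardless of the current cell.
n = 1_000_000
symbols = rng.random(n) < a    # True means the orbit is in cell L

visits = np.flatnonzero(symbols)
gaps = np.diff(visits)         # recurrence times to L, in steps
mean_recurrence = gaps.mean()  # Kac's lemma predicts 1/a
```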