Let me go over once more some points about recurrence that we discussed with regard to maps and coarse-grained maps, and then give an example, using the tent map which we have already studied in some detail, as an illustration of the general principles involved. Recall that we had a phase space whose total measure was 1, and in this phase space we defined cells; let C be some particular cell. We then looked at the statistics of the recurrence of a typical phase trajectory to this cell in the coarse-grained dynamics. To recall the results we proved: if μ(C) is the invariant measure of the cell, we asked for the measure of that set of events in which a typical trajectory starts in C at time 0 and returns to C for the first time at time n, and we called that the recurrence distribution. It was given by a formula in terms of a quantity W̃_n, which is a stay probability of the complement: the measure of all events in which the system is in the complement C̃ at time 0 and remains there through time n − 1. In terms of W̃_n, the probability (and now I will use the word probability itself) of returning to C for the first time at time n, given that you started in C at time 0, is what I will write as F(C, n | C, 0). I denoted it by R last time, but this is a more expressive notation: you start in cell C at time 0 and return to cell C at time n for the first time, without having come there before, having sojourned the whole time in between in the complement of C. The random variable here is n, and we are trying to find its distribution. It was given by

F(C, n | C, 0) = [W̃_(n−1) − 2 W̃_n + W̃_(n+1)] / μ(C).

This is the formula we derived on very general considerations, and there were just two assumptions in the derivation. The first was that an invariant measure exists, i.e. the system has settled down to some stationary probability distribution, denoted by μ. The second was that the system is ergodic: starting anywhere, the trajectory wanders over the whole of phase space, eventually returns to a neighbourhood of the starting point, and does so repeatedly. The system is not periodic, which would be a trivial kind of recurrence; it is ergodic, and therefore we speak of recurrences not to exactly the same phase-space point but to some cell containing the initial point. Notice that the formula has the structure of a second derivative in time, a second difference, acting on what is itself a stay probability: W̃_n is the probability of staying in the complement without moving out of it into C. This is typical of a general sort of result: first differences of stay probabilities give you escape probabilities, and second differences give you return probabilities. To start in C and return to C, you must first leave C and then come back; so it is like a second derivative in time. The first difference would be something like W̃_n − W̃_(n−1), and the difference of the difference is a second finite difference. This can be formalized, of course, but intuitively it is clear what is happening: you have a certain cell C and you would like the statistics of first return to it; you have to go out, stay outside for some time, and then come back.
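As a small numerical sketch of this second-difference formula (in Python; the geometric form W̃_n = (1 − a)^n and the value a = 0.3 are assumptions chosen purely for illustration, since any decaying stay-probability sequence would do):

```python
def first_return_distribution(W_stay, mu_C):
    """F(n) = (W_stay[n-1] - 2*W_stay[n] + W_stay[n+1]) / mu_C, n = 1, 2, ...
    i.e. the second finite difference of the stay probabilities of the
    complement, divided by the invariant measure of the cell."""
    return [(W_stay[n - 1] - 2 * W_stay[n] + W_stay[n + 1]) / mu_C
            for n in range(1, len(W_stay) - 1)]

# Illustration with an assumed geometric stay probability W_stay[n] = (1-a)^n
a = 0.3
W_stay = [(1 - a) ** n for n in range(12)]
F = first_return_distribution(W_stay, mu_C=a)
# F[n-1] then reproduces the geometric law a * (1-a)^(n-1)
```

Here `first_return_distribution` is a hypothetical helper name; the point is only that a stay-probability sequence plus the cell measure determines the whole first-return distribution.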
That sojourn outside the cell is the key picture: you start in C, you escape, you stay out, and then you come back. So it is not surprising that exit-time or escape-time distributions are like the first difference of stay distributions, while return or recurrence distributions are like the second difference in time, like the rate of change of the escape-time distribution. The structure is, post facto, by hindsight, physically quite appealing. Remember also that we defined W̃_0 = 1, and that W̃_1 (just set n = 1) is simply the invariant measure of C̃ itself: W̃_1 = μ(C̃) = 1 − μ(C).

I would now like to apply this general formula to a specific problem: the asymmetric tent map, a one-dimensional map with a parameter a that I can make as small or as large as I please. For this map we already discovered a coarse-graining of the interval [0, 1] into two cells, a left cell L from 0 to a and a right cell R from a to 1. This led to a transition matrix W whose entries are the one-step probabilities P(L, 1 | L, 0), P(L, 1 | R, 0), P(R, 1 | L, 0) and P(R, 1 | R, 0), and it was easily worked out to be

W = ( a     a   )
    ( 1−a   1−a ),

with the columns labelled by the starting cell and the rows by the destination cell. My claim was that this two-cell partition is a Markov partition, in the following sense. Denote the cells by i, j, and so on; then the probability P(j, n | i, 0) of being in cell j at time n, given that you started in cell i at time 0, is guaranteed to equal the (j, i) element of the nth power of the one-step transition matrix, (W^n)_ji. Of course, you can independently compute this conditional probability by writing it as a joint probability divided by the absolute probability of starting in cell i at time 0; that involves a delta function containing the nth iterate of the asymmetric tent map, so computing the nth iterate and then the probability is a nontrivial problem. But I leave it to you to verify, and it is not difficult to do here, that this probability is indeed given by the corresponding matrix element of W^n, which proves that it is a Markov partition; this is a consequence of the Markov property. Once you have this, we can start computing the probabilities, which is very straightforward: with just two cells, take C = L and C̃ = R, and we can very easily compute the sojourn probabilities. Now, what is the most interesting property of W as I have written it? What are its eigenvalues? As you can see, the sum of each column is equal to unity; that tells you there is a uniform left eigenvector (1, 1) with eigenvalue 1. What is the other eigenvalue? The determinant of the matrix is 0, so the other eigenvalue must be 0.
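The one-step entries of W can be checked by a short simulation (a Python sketch; the choice a = 0.3, the sample size, and the helper names are all arbitrary assumptions). Since the invariant density is uniform, we sample x uniformly and apply the map once:

```python
import random

def tent(x, a):
    # Asymmetric tent map on [0, 1]: slope 1/a up on [0, a], then down on [a, 1].
    return x / a if x < a else (1 - x) / (1 - a)

a = 0.3
cell = lambda x: "L" if x < a else "R"
random.seed(1)

counts = {(i, j): (0) for i in "LR" for j in "LR"}
for _ in range(200000):
    x = random.random()                  # sample from the uniform invariant density
    counts[(cell(x), cell(tent(x, a)))] += 1

from_L = counts[("L", "L")] + counts[("L", "R")]
from_R = counts[("R", "L")] + counts[("R", "R")]
p_L_given_L = counts[("L", "L")] / from_L   # should be close to a
p_L_given_R = counts[("R", "L")] / from_R   # should also be close to a
```

Both estimates come out close to a, matching the first row (a, a) of the matrix above.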
There are just two eigenvalues here, 1 and 0, so for this 2 × 2 matrix the characteristic polynomial is λ(λ − 1) = 0, and since the matrix satisfies its own characteristic equation it immediately implies W² − W = 0, i.e. W² = W. The matrix is idempotent: all its powers are the matrix itself, independent of a. Whatever a may be, raising W to any power gives back W. That at once tells you that the probability of being in L at time n, starting in L at time 0, is the (1, 1) element of W^n, which is the same as the (1, 1) element of W itself; so P(L, n | L, 0) = a for every n ≥ 1, and similarly for the other elements. It is extremely simple in this case, and we can compute all the numbers. What is W̃_n? You can write it in terms of these probabilities and work it out, but you can see it almost intuitively. For illustration, take C to be the left cell L = (0, a) and C̃ the right cell R = (a, 1). Then W̃_n corresponds, mind you, to the complement: it is a stay probability in R. You start in R at time 0 and stay there up to time n − 1, and each step contributes a factor 1 − a, so W̃_n = (1 − a)^n. Now take the second difference in this problem. First, 1/μ(C): the invariant density is uniform, so the measure of the left cell is just a, the length of that interval. The recurrence distribution is therefore F(L, n | L, 0) = (1/a) [(1 − a)^(n−1) − 2(1 − a)^n + (1 − a)^(n+1)].
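The idempotence is trivial to verify numerically (a minimal Python sketch; the value a = 0.3 and the helper name `matmul` are arbitrary):

```python
a = 0.3
W = [[a, a], [1 - a, 1 - a]]

def matmul(A, B):
    # Plain 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

W2 = matmul(W, W)          # equals W: the matrix is idempotent

Wn = W                     # hence W^n = W for every n >= 1,
for _ in range(9):         # so P(L, n | L, 0) = (W^n)[0][0] = a for all n
    Wn = matmul(Wn, W)
```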
power n-1 that gives you something I've left out yeah that's fine so this is equal to 1-a to the power n-1 divided by a and multiplied by 1-1-a the whole square so it's an extremely simple formula and it tells you something interesting it says the probability of return to this cell C at time n for the first time starting at cell C at 0 the first the recurrence probability is given by this geometric factor it's just a times 1-a to the power n-1 similarly we can ask for recurrence to the right so let's do that let's write this f left n left 0 is equal to a times 1-a to the n-1 and similarly f rn r0 equal to but this is not hard to guess what this is going to be yeah a and 1-a just interchange between left and right so this is equal to 1-a times a to the power and now let's check normalization because the system is ergodic we have to be sure that there is recurrence that's going to happen so if I sum this over n I better get 1 and indeed I do as you can see if I sum from 1 to infinity yeah why the invariant density is 1 yes no no no this is not a question of no that's not true the chance of it coming back the probability that's it l given this this doesn't diminish this is still equal to a that's still equal to a now what is this correspond to this is equal to the probability that if you start in l at 0 you come back to l for the first time at time n for the first time and of course you'll keep coming back over and over again so this probability summed over n must give you unity if the return is a sure event we know it's a proper random variable and indeed so because if I sum this from n equal to 1 to infinity this is just a geometric series so it's a divided by 1 minus 1 minus a which is unity so it's certainly normalized this is certainly true summation n equal to 1 to infinity f n 0 is equal to 1 as required what's the mean time of return what's the mean time that it takes to get back n l say to the left this is equal to a summation n equal to 1 to infinity n times 
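We can check the geometric law by simulating the equivalent two-state chain (a Python sketch; simulating the chain rather than the map itself relies on the Markov-partition property discussed above, and a = 0.3 and the sample size are arbitrary assumptions):

```python
import random

a = 0.3
random.seed(2)

def step():
    # From either cell, the next cell is L with probability a (columns of W).
    return "L" if random.random() < a else "R"

def first_return_time():
    # Start in L at time 0 and count steps until the chain is back in L.
    n, cell = 1, step()
    while cell != "L":
        n, cell = n + 1, step()
    return n

samples = [first_return_time() for _ in range(100000)]
frac1 = sum(1 for n in samples if n == 1) / len(samples)   # estimates F(1) = a
frac3 = sum(1 for n in samples if n == 3) / len(samples)   # estimates a*(1-a)**2

# normalization check of F(n) = a*(1-a)**(n-1), truncated where the tail is negligible
norm = sum(a * (1 - a) ** (n - 1) for n in range(1, 400))
```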
What does this work out to? It is a Σ_{n=1}^∞ n(1 − a)^(n−1), an arithmetic-geometric series. Note that n x^(n−1) is the derivative of x^n, so the sum is a (d/dx) Σ_{n=1}^∞ x^n evaluated at x = 1 − a. The geometric sum is x/(1 − x), and differentiating gives [(1 − x) + x]/(1 − x)² = 1/(1 − x)². Putting x = 1 − a gives 1/a², and multiplying by the prefactor a gives n̄_L = 1/a. And that is exactly what it should be, because we know (this is Kac's result) that the mean recurrence time must equal 1 over the measure of the cell: n̄_L = 1/μ(L). The smaller a is, the longer the mean first-return time; that was the reason for choosing a general a here rather than a = 1/2, so that I can tune it. You can see immediately that as a becomes smaller and smaller it takes longer and longer: the mean time of return to a specific point, which is a set of measure 0, will diverge, and that is exactly what is happening. Similarly, n̄_R = 1/μ(R) = 1/(1 − a). But once you have the full distribution you can do more: you can find the variance of the first-return time, and indeed all its statistics, completely. What sort of distribution is this? It is a kind of geometric distribution. I could also write it as a e^{(n−1) ln(1 − a)}, and since ln(1 − a) is a negative number, you can see that it decays exponentially in time. This is very typical of these processes: whenever you have a hyperbolic system, with no stable fixed points, a map whose slope has magnitude greater than unity everywhere.
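A one-line numerical check of the mean return times (Python; the sums are truncated at a point where the geometric tails are negligible, and a = 0.3 is again an arbitrary choice):

```python
a = 0.3

# Mean first-return time to L: sum over n of n * a * (1-a)^(n-1) -> 1/a = 1/mu(L)
mean_L = sum(n * a * (1 - a) ** (n - 1) for n in range(1, 500))

# Interchanging a and 1-a gives the right cell: mean return time 1/(1-a) = 1/mu(R)
mean_R = sum(n * (1 - a) * a ** (n - 1) for n in range(1, 500))
```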
Such a map is said to be uniformly hyperbolic, expanding everywhere, at all points, and it is a characteristic of such systems that the distribution of recurrence times decays exponentially in time. That is exactly what this one does, because I could write it as proportional to e^{−n ln[1/(1 − a)]}: an exponential decay. We can do more, we can go further and even find limit laws, and let me explain what I mean by that. You could ask: if the trajectory comes back once, will it keep coming back, and if so, what are the statistics of successive returns? The other question you could ask is this. I said this is a Markov partition, and I left it to you to verify that the probability of starting in cell i at time 0 and being in cell j at time n is the (j, i) matrix element of the one-step matrix W raised to the power n. What is the guarantee that the partition is Markov? Apart from explicit verification, do we have any general rules? Yes: for one-dimensional piecewise linear maps of this kind (and this map is piecewise linear), if the cells are such that their boundary points are fixed points (everything is unstable here, of course), or lie on periodic orbits, or are preimages of periodic points, then the partition is guaranteed to be Markov. That is true in this case. Where are the boundary points of our partition? They are 0, a and 1, with L = (0, a) and R = (a, 1). Now 0 is already an unstable fixed point of this map. And where does the point a go? If I start at a, after one iteration I am at 1, and at the next iteration I am at 0, which is an unstable fixed point.
So the point a is a preimage of the unstable fixed point at 0. The general statement is: take an interval of this kind and partition it, not necessarily into two cells; call them C1, C2, C3, C4 and C5. If the end points of the cells, the boundaries between them, are either unstable periodic points, or parts of unstable periodic orbits, or preimages of such points (meaning that after a few iterations they fall onto such points), then you can prove that the partition is a Markov partition. Of course you can refine this partition further: I could make the cells smaller and smaller, looking at points which fall onto a after one iteration, and so on and so forth. Partitioning further in this fashion is called refining the Markov partition, and you are guaranteed that the refined partition stays Markov. So it is a useful piece of information: it is worth trying to partition a phase space into a Markov partition, because then you can use the machinery of Markov chains to analyze the dynamics, as we did here, indirectly, in some sense. Once you have a Markov process, you also have another property which I did not mention, a renewal property, which is the following. Suppose I want to compute the probability of reaching some state j of the Markov chain at time n, starting at some state i at time 0, and let us assume it is a stationary Markov chain. Then we can write a chain condition for it as follows: you start at i at time 0, move to some state k for the first time at time n′, and then start at k and move to j in the remaining time n − n′ (stationarity means only the difference n − n′ matters). Summing over the intermediate time,

P(j, n | i, 0) = Σ_{n′=1}^{n} P(j, n − n′ | k, 0) F(k, n′ | i, 0).

That is the renewal condition.
It says: start at i at time 0, reach some other state k for the first time at time n′, then in the remaining time end up at state j, and sum over all the intermediate times n′. The F here is the first-passage probability, the first time you pass from i to k, so you must not have reached k from i at any earlier time than n′. Now, the theory of Markov processes gives you a formula for P: given the one-step transition probability, the n-step transition probability is obtained by raising the matrix to the power n. But the quantities F are precisely the quantities we want for recurrences: the first-passage and first-return probabilities. In our case I would like to know what happens if I start at C and come back to C, so I would like to put i and k equal to each other. What does this suggest to you? Is it possible to find this set of quantities F, given the set of quantities P? Think of n as time: the right-hand side is a function of t′, summed over t′, with t − t′ appearing inside. If you make the steps small enough it looks like P(t) ~ ∫₀ᵗ dt′ P(t − t′) f(t′). How would you solve that sort of equation for f? It is a convolution, exactly, so you take a Laplace transform. That is the natural way to solve this problem; in discrete time you would use the corresponding transform (a generating function). And in this case it leads exactly to the formula we wrote down on general considerations: the second-difference formula.
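In discrete time the convolution can also be inverted directly, step by step, which is equivalent to dividing the generating functions (a Python sketch; the function name is hypothetical, and we feed in the occupation probabilities P(L, n | L, 0) = a for n ≥ 1 that follow from the idempotent matrix, with a = 0.3 an arbitrary choice):

```python
def first_return_from_occupation(P):
    """Invert the renewal equation  P[n] = sum_{m=1}^{n} F[m] * P[n-m]
    (with P[0] = 1) recursively for the first-return probabilities F[n]."""
    F = [0.0] * len(P)
    for n in range(1, len(P)):
        F[n] = P[n] - sum(F[m] * P[n - m] for m in range(1, n))
    return F

a = 0.3
P = [1.0] + [a] * 14        # P(L, n | L, 0) = a for every n >= 1
F = first_return_from_occupation(P)
# F[n] reproduces the geometric first-return law a * (1 - a)^(n - 1)
```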
This is called a renewal equation; it is an example of a renewal equation for a Markov chain, and it is a very convenient way of finding first-passage time distributions, which is exactly what we are interested in: the first-return time distribution. So this is one more way of doing it, instead of going through the general formula; we could have done it this way as well in this case, because the partitioning was a Markov partition. Yes, that is exactly the statement I made: there is a general way of proving that for a piecewise linear chaotic map, if you partition the interval such that the boundary points either lie on unstable periodic orbits or are preimages of such points, the partitioning is guaranteed to be Markov; this can be established on general grounds. Once you have established that, you use the machinery of Markov chains to solve the problem. You do not have to go back to the dynamics of the actual map itself: all you have to do is compute the transition matrix W for the situation, and everything else is given in terms of W. It is not that you can solve everything, but a great deal of information about the system is obtained once you have managed to find a Markov partition. There is, however, no guarantee for higher-dimensional maps, or for maps that are not piecewise linear, that it is easy to find such a partition; that is not true. The implication goes one way: if you have a Markov partition, you are in good shape, but how to find such a partition is not known in general. So it is not that we chanced upon the Markov partition here; it is that the tent map, symmetric or asymmetric, is a paradigm of chaos in a certain sense, like the Bernoulli shift, and for such systems a great deal of information can be obtained.
Most of the rigorous results proved in ergodic theory have to do with what are formally called Axiom A systems: first of all there is an invariant measure, and at almost every point of phase space there is an expanding direction, or, in more technical terms, the stable and unstable manifolds intersect transversally rather than tangentially. These are mathematical technicalities. What we are trying to do, at a much lower level, is simply to look at very simple low-dimensional models, one-dimensional models specifically, and use a little of the machinery of Markov chains to show that the actual dynamics can be subsumed in the dynamics of a coarse-grained Markov partition. That is our modest aim here. Let me rephrase the question just asked: if I refine this partition further and further, which I can do by taking successive preimages of the boundary point and writing the dynamics in terms of more cells, then in principle I can go to a limit in which I perhaps even mimic the actual dynamics itself. Yes, that is of course the eventual aim in principle, but it is not easily achievable in practice, except in model systems of this kind. Still, if you can prove something is true in principle, then at least you have some assurance that the results obtained by this kind of analysis are reliable in some sense; so it is at that level, no more. In fact, let us answer the earlier question directly: what happens with successive recurrences, and can we gather some general information from them? We have seen the probability of coming back for the first time at time n; let us now ask the following question.
Keep in mind that the first-return distribution is geometric, i.e. exponential in time. Now ask: starting in L at time 0, what is the probability that the first return to L occurs at some time n₁ (calling it n₁ just for ease of notation), the next return after that at some time n₂, the one after that at n₃, and so on up to n_r? This is asking for the probability of r successive returns, or recurrences, to L at these specified instants of time. How do I compute this? It is not hard to do, because once we have the machinery, the coarse-grained dynamics acts exactly like a stationary Markov chain, and it is not hard to show that this joint probability is again of precisely the same kind: it is a^r (1 − a)^(n_r − r). Writing F(n) for the first-return distribution a(1 − a)^(n−1), it factors as F(n₁) F(n₂ − n₁) ⋯ F(n_r − n_(r−1)). What does that suggest to you? The probability of r recurrences at the specific times n₁, n₂, …, n_r is just the probability of a recurrence at time n₁, multiplied by the probability of a recurrence after a further interval n₂ − n₁, and so on. It suggests that the successive recurrences are statistically independent of each other; that is the reason these probabilities simply factor in this fashion. And that is telling you how random the system is, in the sense that everything has completely factored into functions of the time intervals alone, nothing more. So in these systems, successive recurrences are statistically independent of each other.
That is not always going to be true, but in this instance it is so, essentially due to the lack of memory in the Markov chain. You could next ask: given a certain amount of time n, what is the probability of exactly r recurrences in that interval? We already have the expression for recurrences at specified instants, which is suggesting the answer: each recurrence carries a factor a and each non-recurrence step a factor 1 − a, giving a^r (1 − a)^(n−r); and since you do not care in what sequence the recurrences happen, there is a binomial coefficient C(n, r) in front. The probability is C(n, r) a^r (1 − a)^(n−r): the binomial distribution. Now a final question. Suppose the time becomes very long, n → ∞, and a becomes smaller and smaller. What happens to this distribution? It becomes a Poisson distribution. In the limit n → ∞, a → 0 such that the product n a stays finite, say lim n a = λ, the binomial distribution C(n, r) a^r (1 − a)^(n−r) tends to the Poisson distribution λ^r e^(−λ) / r!. You should not be surprised at this: the first recurrence, as you can see, is essentially geometrically distributed, and successive recurrences are independent, so the number of recurrences in a long interval is Poisson distributed.
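The binomial-to-Poisson limit is easy to see numerically (a Python sketch; the values λ = 2 and r = 3 are arbitrary illustrative choices):

```python
from math import comb, exp, factorial

lam, r = 2.0, 3
poisson = lam ** r * exp(-lam) / factorial(r)

for n in (50, 500, 5000):
    a = lam / n                                    # keep n * a = lambda fixed
    binom = comb(n, r) * a ** r * (1 - a) ** (n - r)
    # binom approaches the Poisson value as n grows with n*a held fixed
```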
This is the reason for saying that if you have an ergodic dynamical system with chaos, on the attractor or on an invariant set with some invariant measure, and you take a sufficiently small cell and ask for the distribution of recurrences to that cell, it is Poisson distributed in this fashion. This is generic, typical behavior. It changes drastically once there is stickiness in the system, as in the intermittency we looked at. You can intuitively see that the whole thing goes out of the window, because the system tends to stay in a cell where there is a marginally unstable fixed point, and the recurrence-time distributions are then very different from this exponential behavior: they start decaying like power laws, various power laws, and the limit distributions are not Poisson any more; they could become Gaussians, they could become other stable distributions, and so on. This is a characteristic way in which you detect the existence of such intermittency or stickiness in the dynamics. After all, everything finally depends on the long-time behavior of the sequence W̃_n through the formula F(n) = [W̃_(n−1) − 2W̃_n + W̃_(n+1)]/μ(C); in our simple case W̃_n went like e^(−n times something or other). What would you say if W̃_n went like 1/n², for example? What would F(n) go like for large n? The formula is like a second derivative, so F(n) should go like 1/n⁴ in that case. If on the other hand W̃_n went like 1/n, then F(n) goes like 1/n³, and so on. So you immediately have the possibility of very slow decay. Of course, you are then no longer guaranteed a Markov chain or a Markov partition or anything like that; you have only the general formula, and the dynamics can be as correlated as you like, which is what happens in the case of intermittency, or in systems that are not hyperbolic everywhere, the more typical situation.
In practice those are the typical cases. So the final comment one can make is this: while we have these beautiful generic properties for classes of dynamical systems, in practice, in real life, a given dynamical system does not have this beautiful mathematical behavior. The system is not uniformly hyperbolic everywhere, in general, and then you have different kinds of recurrence statistics: slow decays, power laws and so on, which give you some indication of what kind of sticky regions are present in the phase space. This is where I would like to end this little discussion of recurrence statistics per se, and of this kind of coarse-grained dynamics; we will look at some other examples subsequently. Now we move on to other topics. We talked a little about Markov chains and defined some of their properties; in fact we looked at Markov processes in continuous time even more than at Markov chains, but we reverted to chains when we came to this coarse-grained dynamics because it was in discrete time. The essential property you need to remember is the one-step memory: just the preceding step matters, nothing more. The statement I made was that, in a continuous state space, the conditional probability density p(x, t | x₀) that the variable lies between x and x + dx at time t, given that it started at x₀, determines a stationary Markov process completely. If you take the limit t → ∞, this tends to a stationary probability density p(x), which gives you the mean value, the mean square value, and all the moments, all independent of time: in general the nth moment would not depend on time and would equal ∫ dx xⁿ p(x). But all two-time and higher-order joint probabilities are also given in terms of this conditional density.
The question then is: what kind of equation does this quantity satisfy? We saw that, to start with, it satisfies a chain condition, the Chapman–Kolmogorov equation, but that in addition, under suitable conditions, it satisfies a master equation. Let us write that master equation now. For the moment let me drop the x₀; we will impose it as an arbitrary initial condition. Then p(x, t) satisfies an equation of the form

∂p(x, t)/∂t = ∫ dx′ [ w(x | x′) p(x′, t) − w(x′ | x) p(x, t) ],

where w(x | x′) is the transition probability density per unit time from x′ to x. This was the master equation we wrote down. Now, I did not mention ways of solving this master equation. It is not trivial, because as you can see it is an integro-differential equation; and the initial condition on it could, for instance, be p(x, 0) = δ(x − x₀), which takes care of the conditional probability. How do we solve this kind of equation? You have to know what the kernel, the transition density, is like. But there is one case in which you can reduce it to a differential equation, which is in general easier to solve, in some sense, than an integral equation of this kind. Recall that w(x | x′) is the probability density per unit time that, starting in the state x′, you reach a value between x and x + dx. What could this be a function of? In general it is a function of both the starting state and the end state. But suppose we write it as a function of the starting state x′ and of the jump, ξ = x − x′.
ξ, where ξ = x − x′: a function of this and a function of that. This is entirely equivalent to saying it's a function of x and of x − x′. Now let's assume that, as a function of this jump, all the moments of w exist. That is, assume the quantities

aₙ(x′) = ∫ dξ ξⁿ w(x′; ξ)

exist for all non-negative integers n; these are of course functions of the starting state. Under those conditions you can reduce the master equation to a differential equation. All we have to do is substitute this form of w in the master equation; there is then an integration which is doable, and you end up with

∂p(x, t)/∂t = Σₙ₌₁^∞ [(−1)ⁿ/n!] (∂/∂x)ⁿ [aₙ(x) p(x, t)].

Unfortunately it's again an infinite sum; you have to pay that price. You also have to worry about convergence and so on, and I am going to slur over those points, but you can show that the master equation is equivalent to this differential equation. It is, however, a differential equation of infinite order, because you have partial derivatives with respect to the state variable x of all orders in it. This expansion of the master equation is called the Kramers-Moyal expansion, and you will see in a minute that you are actually familiar with at least part of it. Now let's make a further assumption about these moments. What would you expect physically? On general grounds you would say that w is a function of the starting state and of the jump, and that the higher and higher moments of the jump could be expected to get smaller and smaller. If these aₙ become smaller and smaller numerically as n increases, then I could perhaps truncate this equation at some stage, approximate it at the first stage, the second stage, and so on. It turns out that there
is a systematic way to do this, and it turns out that the most common truncation is at the second stage. You end up with an equation which reads

∂p/∂t = −∂/∂x [a₁(x) p] + (1/2) ∂²/∂x² [a₂(x) p].

This is typical of what you get for such Markov processes, and it is called the Fokker-Planck equation, or the forward Kolmogorov equation; there are several names for this equation. It is a partial differential equation, first order in time and second order in the state variable. Now, you are familiar with one example of such an equation. Which equation is that? By the way, the first term is called the drift term and the second the diffusion term. The ordinary diffusion equation that you write down for p is an example of such a situation: that equation, if you recall, was ∂p/∂t = D ∂²p/∂x². That corresponded to the case where you had no drift, free diffusion of some kind, with a constant D that came out in front. So you begin to suspect that Brownian motion, which leads to this diffusion equation, is a Markov process. Indeed it is; it's called a Wiener process, and it's a special case of this more general situation. Processes where the master equation can be reduced from the integro-differential level to this level are called, in the mathematics literature, diffusion processes, as a generalization of the original diffusion equation. And you could ask: can I write an equation for X itself? Yes, you can write what's called a stochastic differential equation for X, which is equivalent to this equation for the corresponding probability density. If I look at the motion of particles in this room at some constant temperature T, and I assume these are classical particles undergoing random collisions with each other, nothing else, then it turns out you can write such an equation for the probability distribution of the velocity of a particle, and that equation looks like this:
∂p(v, t)/∂t = γ ∂/∂v [v p] + (γkT/m) ∂²p/∂v², for any one Cartesian component v of the velocity. I will explain what γ is in a minute; k is Boltzmann's constant, m is the mass of a particle, and T is the absolute temperature. Here p(v, t) is the probability density that the velocity of a particle is between v and v + dv at time t. You have to give me some initial condition, and the initial condition on this problem would be an arbitrary one, so you could start by saying p(v, 0) = δ(v − v0), for example. I can plot p(v, t), the solution to this equation: it's a Markov process, it's a diffusion process, and it obeys a master equation of this kind. Now, what would you say is the distribution at t = 0? Well, it's a delta function at v0. And what is the distribution as time goes on? This γ here is the friction constant in the system; it's related to the viscosity of the medium. What would you say happens as t tends to infinity? Again, remember that this is the probability distribution of v at time t, starting with some v0, with that initial condition. I have this gas in equilibrium at some absolute temperature T, and I look at some particle whose velocity at t = 0 is v0; I let time evolve, and I ask for its probability density. What would you expect as t tends to infinity? It would certainly lose memory of v0. So what do you think will happen to p(v, t)? It should go to the Maxwellian, the equilibrium distribution, once again. Right. And what would that be, this p_eq(v)? This is one Cartesian component of the velocity, so p_eq(v) is proportional to e to the power minus
mv²/2kT, that is, e to the minus the energy over kT; for a free particle the energy is mv²/2. And it should be normalized, so there is a factor √(m/2πkT) which normalizes this distribution:

p_eq(v) = √(m/2πkT) e^(−mv²/2kT).

So I would expect that the delta-function distribution at t = 0 drifts and spreads and goes into this as t tends to infinity. Let's check that this is true. That's easily done, because the distribution as t goes to infinity is independent of time, so I can write the equation with total derivatives for the equilibrium distribution. Let's do that quickly: I expect

0 = γ d/dv [v p_eq] + (γkT/m) d²p_eq/dv².

The γ cancels out, and we can take out one d/dv and write this as

d/dv [ v p_eq + (kT/m) dp_eq/dv ] = 0,

which implies the quantity in brackets is a constant. What can this constant be? Well, I know p_eq must have finite moments, therefore as v tends to plus or minus infinity it must vanish faster than any inverse power of v. So I want v p_eq to go to 0 and dp_eq/dv to go to 0 as v → ±∞; since the constant is independent of v and must vanish at infinity, the only constant you can have is 0. But then that's a trivial equation to solve: it says

dp_eq/dv = −(mv/kT) p_eq,

and what's the solution to that? The Maxwellian written above is the solution. So indeed this is happening; this is tailored to happen. So we have an example of such a Markov process, namely the molecules in this room: their velocity represents a Markov process, a diffusion process on top of it, and in continuous time. That's the simplest direct physical example of a Markov process, under suitable assumptions. We've made a large number of assumptions here; this is a specific model of randomness we've assumed, but it's a very satisfactory one.
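The relaxation of the velocity distribution to the Maxwellian can be seen numerically. The Fokker-Planck equation for the velocity is equivalent to the Langevin stochastic differential equation dv = −γv dt + √(2γkT/m) dW. The sketch below, in units where kT/m = 1 and with arbitrary illustrative values for γ and v0 (none of these numbers come from the lecture), starts an ensemble of particles at the same velocity v0 (the delta-function initial condition) and integrates with the simple Euler-Maruyama scheme; after many relaxation times 1/γ, the sample mean should be near 0 and the sample variance near kT/m, as the equilibrium Maxwellian predicts.

```python
import math
import random

# Langevin equation equivalent to the velocity Fokker-Planck equation:
#   dv = -gamma * v * dt + sqrt(2 * gamma * kT/m) * dW
# Illustrative parameters (arbitrary choices, in units where kT/m = 1):
gamma = 1.0       # friction constant
kT_over_m = 1.0   # equilibrium variance <v^2> = kT/m
v0 = 3.0          # delta-function initial condition: all particles start at v0
dt = 0.01
n_steps = 1000    # total time t = 10, i.e. ten relaxation times 1/gamma
n_particles = 20000

random.seed(42)
sigma = math.sqrt(2.0 * gamma * kT_over_m * dt)  # noise amplitude per step
vs = [v0] * n_particles
for _ in range(n_steps):
    # Euler-Maruyama update: deterministic drift toward 0 plus Gaussian kick
    vs = [v - gamma * v * dt + sigma * random.gauss(0.0, 1.0) for v in vs]

mean_v = sum(vs) / n_particles
var_v = sum(v * v for v in vs) / n_particles - mean_v ** 2
print(f"mean velocity: {mean_v:+.3f}  (Maxwellian predicts 0)")
print(f"velocity variance: {var_v:.3f}  (Maxwellian predicts kT/m = {kT_over_m})")
```

Starting every particle at v0 = 3, three standard deviations out in the eventual equilibrium distribution, makes the loss of memory of the initial condition easy to see: the ensemble mean decays like v0 e^(−γt) while the variance grows to kT/m.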