In the previous lecture, we saw what a Markov process is, how to construct transition probabilities and the transition probability matrix, and how to use these transition probabilities to calculate the occupancy probabilities of the states. As the process moves on, we could calculate the occupancy probabilities for the next step from the probabilities calculated in the previous step. This is of course perfectly valid: you can go on doing it, say from the weather today to the weather tomorrow, and from there to the weather the day after tomorrow, every time through the transition matrix for a single step. This is called single-step marching, using the single-step transition probabilities. However, if you want to calculate, say, the weather one week from now from the occupancy probabilities of rainy and sunny days today, it is actually not necessary to go through seven separate one-step calculations. One can instead construct a multi-step transition probability directly. This is made possible precisely by the Markovian nature of the process, and it is what we will learn today. It is not an approximation; the method follows from what is called the Chapman-Kolmogorov equation. First let us see how to construct a two-step transition probability. By that we mean a matrix whose elements give the rule for making a transition from today to the day after tomorrow, or in general from the nth step to the (n+2)th step, in one jump spanning two days. The derivation follows from the basic Markovian assumption we are making, namely the rule for constructing the occupancy probabilities from the information available at the previous step.
Thus we have seen, for example, that we can construct the occupancy probability at the (n+1)th step in a state j by summing over all the states (there are supposed to be S states) on which the occupancy probabilities are defined, via the transition probability element p_{jk}, read "to j from k". This is the rule we adopt, given that the system was in state k at the nth step:

w_{n+1}(j) = Σ_{k=1}^{S} p_{jk} w_n(k).

Just to familiarize ourselves with the notation: w_n(k) is the occupancy probability of state k at step n. The equation is very easily interpreted: the probability of the system being in state j tomorrow comes from all the states it could occupy today, summing over the state probabilities with the appropriate transition probabilities into state j. We can actually go back by one step: replacing n with n−1, the nth-step probability w_n(k) of the system being in state k would similarly have come from the (n−1)th-step distribution via the appropriate transition probabilities p_{ki}, "to k from i":

w_n(k) = Σ_{i=1}^{S} p_{ki} w_{n−1}(i),

where the sum now runs over all indices i. This is made possible by the fact that the transition probabilities we defined do not depend on the number of steps, and the process is Markovian.
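The one-step update rule above can be sketched numerically. The 3-state matrix and starting vector below are illustrative values of my own, not from the lecture; the convention matches p_{jk} = P(to j | from k), so each column of the matrix sums to 1.

```python
import numpy as np

# Illustrative 3-state transition matrix (assumed numbers, not from the
# lecture).  Column k lists the probabilities of moving *to* each state
# *from* state k, so every column sums to 1.
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])

w0 = np.array([1.0, 0.0, 0.0])   # system starts in state 1 with certainty

def step(P, w):
    """One-step update: w_{n+1}(j) = sum_k p_{jk} w_n(k)."""
    return P @ w

w1 = step(P, w0)   # occupancy probabilities after one step
w2 = step(P, w1)   # ... and after two single steps
print(w1, w1.sum())   # the updated vector still sums to 1
```

Because the columns of P sum to 1, normalization of the occupancy vector is preserved automatically at every step.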
Now, plugging this into the previous expression, we can write the (n+1)th-step occupancy probability in a state j as a double sum:

w_{n+1}(j) = Σ_{k=1}^{S} p_{jk} w_n(k) = Σ_{k=1}^{S} Σ_{i=1}^{S} p_{jk} p_{ki} w_{n−1}(i),

where w_n(k) has been replaced by Σ_i p_{ki} w_{n−1}(i). Since both sums run from 1 to S, we can just as well interchange them:

w_{n+1}(j) = Σ_{i=1}^{S} Σ_{k=1}^{S} p_{jk} p_{ki} w_{n−1}(i).

Since w_{n−1}(i) does not depend on k, it can be taken out of the inner sum, and the whole thing can be written as

w_{n+1}(j) = Σ_{i=1}^{S} w_{n−1}(i) { Σ_{k=1}^{S} p_{jk} p_{ki} }.

Now we can identify the term within the curly brackets as a transition probability itself, because once it multiplies the occupancy probability of state i at step n−1, the system transits from that state directly to the jth state at the (n+1)th step. Supposing we define the term in the curly brackets by a new expression,

p_{ji}(2) = Σ_{k=1}^{S} p_{jk} p_{ki},

then the occupancy probability at the (n+1)th step in state j can simply be written as

w_{n+1}(j) = Σ_{i=1}^{S} p_{ji}(2) w_{n−1}(i),

where w_{n−1}(i) is the probability that at the (n−1)th step the system was in state i. The curly bracket has been renamed with a new concept, p_{ji}(2), which therefore represents a direct two-step transition probability: "to j from i in two steps". It is called the two-step transition probability.
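The collapse of the double sum can be checked directly in code: computing w_{n+1}(j) by the explicit sums of the derivation gives the same answer as two successive one-step updates. The 3-state matrix and the occupancy vector are illustrative values, not from the lecture.

```python
import numpy as np

# Illustrative 3-state matrix (columns sum to 1; assumed numbers).
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])
w = np.array([0.2, 0.5, 0.3])    # some occupancy vector at step n-1
S = P.shape[0]

# Two-step occupancy by the explicit double sum of the derivation:
# w_{n+1}(j) = sum_i w_{n-1}(i) * ( sum_k p_{jk} p_{ki} )
w2_sum = np.zeros(S)
for j in range(S):
    for i in range(S):
        inner = sum(P[j, k] * P[k, i] for k in range(S))  # p_{ji}(2)
        w2_sum[j] += inner * w[i]

# The same thing via two successive one-step updates.
w2_steps = P @ (P @ w)
print(np.allclose(w2_sum, w2_steps))   # → True
```

The inner sum over k is exactly the element p_{ji}(2) defined above, which is why the two routes agree.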
The relationship above, which expresses the two-step transition probability as a sum over products of single-step transition probabilities, is in fact the Chapman-Kolmogorov equation. Expressed simply, it just says that the probability of transition from i to j in two steps, the element of the two-step transition matrix, is the sum over all possible states k present in the system of the probability of transition from i to k multiplied by the probability of transition from k to j. It looks apparently simple, but it has a lot of implications in stochastic processes. Since the elements p_{jk} make up a matrix, you can easily see that the sum over k is the standard rule for multiplying matrices: the p_{ji}(2) are the elements of the matrix P², the product of the matrix with itself. In other words, the two-step transition probability matrix is P(2) = P². This is a very important development in our understanding, because once we have the one-step transition probability matrix P, we just construct all the elements by squaring it, and that automatically gives us the two-step transition probability matrix. In general, one can go on and construct, say, the m-step transition probability matrix, which will therefore be P^m, understood in the matrix-multiplication sense. This, we can say, is the Chapman-Kolmogorov (CK) equation reinforced for the m-step transition matrix. Pictorially, what we had earlier was a transition in a single step from some state k to state j, given by a single jump with the single-step transition element.
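The matrix form of the Chapman-Kolmogorov equation, P(m+n) = P(m) P(n), is easy to verify numerically. Again the 3-state matrix is an illustrative one of my own, not from the lecture.

```python
import numpy as np

# Illustrative column-stochastic matrix (assumed numbers): column k lists
# the probabilities of moving *to* each state *from* state k.
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])

P2 = P @ P                         # two-step matrix: P(2) = P^2
P7 = np.linalg.matrix_power(P, 7)  # e.g. the weather a week ahead, in one jump

# Chapman-Kolmogorov in matrix form: P(m+n) = P(m) P(n), here m=2, n=3.
lhs = np.linalg.matrix_power(P, 5)
rhs = np.linalg.matrix_power(P, 2) @ np.linalg.matrix_power(P, 3)
print(np.allclose(lhs, rhs))   # → True
print(P2.sum(axis=0))          # each column of P^2 still sums to 1
```

That the columns of every power P^m sum to 1 is what makes P^m a legitimate m-step transition matrix.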
In a two-step transition, by contrast, the system starting from state i transits to some intermediate state k, and from k it transits to j, and this happens over all possible values of k. That is the pictorial representation of the two-step transition. In other words, I can now straight away construct the occupancy probabilities as a column vector W_n, obtained by operating on the column vector of the initial state with the n-step transition matrix defined via the rules we derived above: W_n = P^n W_0. An interesting question now arises: what happens if one goes on increasing n? Will the system attain stationarity or not? That is, in the limit n → ∞, what happens to P^n? Supposing stationarity exists, then the sequence of vectors W_1, W_2, ..., W_n should reach some stationary distribution vector, let us denote it W_∞, and that state is such that when you operate on it further, it is invariant with respect to further changes. Stationarity thus implies that an asymptotic state W_∞ exists which, when operated on with the transition matrix, remains unchanged: P W_∞ = W_∞, or, writing I for the identity matrix, (P − I) W_∞ = 0. This is basically a matrix eigenvalue equation, and the steady state will therefore be the eigenvector corresponding to the eigenvalue 1. Normally we write det(P − λI) = 0; here λ must be 1, and 1 should always be one of the eigenvalues. There will be others, which may not be physical, but an eigenvalue 1 always exists, and the corresponding eigenvector is the vector W_∞. In fact, by operating with P many times one can easily solve a given problem.
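Both routes to the stationary state can be sketched: extracting the eigenvector of P for the eigenvalue 1, and simply applying P many times to an arbitrary starting vector. The 3-state matrix is an illustrative example, not from the lecture.

```python
import numpy as np

# Illustrative column-stochastic matrix (assumed numbers).
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])

# Route 1: eigenvector of P for eigenvalue 1, normalised to sum to 1.
vals, vecs = np.linalg.eig(P)
idx = np.argmin(np.abs(vals - 1.0))   # eigenvalue 1 always exists
w_inf = np.real(vecs[:, idx])
w_inf = w_inf / w_inf.sum()           # occupancy probabilities sum to 1

# Route 2: operate with P many times on any normalised starting vector.
w = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    w = P @ w

print(np.allclose(w, w_inf))          # → True: both give W_infinity
print(np.allclose(P @ w_inf, w_inf))  # → True: invariant under P
```

Route 2 converges here because the other eigenvalues of this matrix have modulus less than 1, so their contributions die out as P is applied repeatedly.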
We basically discussed a two-dimensional case, a matrix with just four elements of which two are independent, in the weather problem; but one can have any number of states, and the problem may become more difficult as the number of elements increases. Nevertheless, the problem is now transferred to a purely numerical one of extracting the eigenvalues and eigenvectors of a matrix. However, if you want more information on the way the system transits to that state from an initial state, one has to actually solve the equations, which gives us more insight. Just to understand how the system transits to the asymptotic state from a given initial state, we continue with the weather problem to illustrate the method. So, for the transient solution, let us discuss the dichotomous weather problem and see what solutions it leads to for the steady state. As we noted in the previous lecture, we have basically two independent transition probabilities. One is the probability of transiting to the sunny state today given that it was sunny yesterday, p_{SS}; for brevity we now call it P1. Then obviously the probability of transiting to the rainy state given that it was sunny yesterday is 1 − P1, which we call Q1. Like in that example, P1 was 0.4, so Q1 would be 0.6. Similarly, let us define the probability of transiting to the rainy state from a previous rainy state in a single step as P2; conservation of probability then demands that the probability of transiting to the sunny state from a rainy state yesterday is 1 − P2, which is Q2. Now, for brevity, let us denote the occupancy probabilities compactly: w_n(S), the probability of the system occupying the sunny state S (there are two states, S and R), we simply denote S_n; so S_n is that probability at the nth step.
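The two-state setup can be written out concretely. P1 = 0.4 is the value quoted from the earlier lecture; P2 = 0.7 is an assumed value of my own, since the lecture leaves it unspecified here.

```python
import numpy as np

# Weather matrix in the lecture's notation: P1 = p(S|S) = 0.4 (from the
# lecture), so Q1 = 1 - P1 = 0.6.  P2 = p(R|R) = 0.7 is an assumed
# illustrative value, giving Q2 = 1 - P2 = 0.3.
P1, P2 = 0.4, 0.7
Q1, Q2 = 1 - P1, 1 - P2

# Columns: from-sunny, from-rainy; rows: to-sunny, to-rainy.
P = np.array([[P1, Q2],
              [Q1, P2]])

w0 = np.array([1.0, 0.0])   # certainly sunny today: S_0 = 1, R_0 = 0
w1 = P @ w0                 # tomorrow: [S_1, R_1]
print(w1)                   # S_1 = P1 = 0.4, R_1 = Q1 = 0.6
```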
Then, similarly, w_n(R), which is 1 − w_n(S) because the system has to be in either the rainy state or the sunny state (it is a very simple problem), we denote R_n. Now the one-step transition probability equation reads

S_{n+1} = P1 S_n + Q2 R_n:

the sunny state tomorrow will come from a sunny state today with probability P1, and it will come from a rainy state today with probability Q2. The other equation can similarly be written as

R_{n+1} = Q1 S_n + P2 R_n.

These are basically the one-step transition laws. These equations obviously imply S_{n+1} + R_{n+1} = S_n + R_n, because P1 + Q1 = 1 and Q2 + P2 = 1; and since your starting vector itself is normalized, this should always remain so. You can go on backwards and show that the sum is just one all the time. So if you solve for one of them, you have the answer for the other. Since it is a simple 2 × 2 system, we can eliminate R_n. From the second equation, R_n = (R_{n+1} − Q1 S_n)/P2, which, using R_{n+1} = 1 − S_{n+1}, becomes (1 − S_{n+1} − Q1 S_n)/P2. Substituting this into the first equation and taking the S_{n+1} terms to the left side, we get a factor (P2 + Q2)/P2 multiplying S_{n+1} on the left, while on the right we are left with a coefficient (P1 P2 − Q1 Q2)/P2 multiplying S_n plus a constant term Q2/P2. So P2 cancels, and note that P2 + Q2 = 1. Moreover, since Q1 = 1 − P1 and Q2 = 1 − P2, we can easily see that P1 P2 − Q1 Q2 = P1 + P2 − 1.
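The reduction to a single scalar recurrence can be sanity-checked against the full 2 × 2 matrix update. P1 = 0.4 is from the lecture; P2 = 0.7 is an assumed illustrative value.

```python
import numpy as np

# Reduced scalar recurrence S_{n+1} = alpha*S_n + beta, with
# alpha = P1 + P2 - 1 and beta = Q2, versus the full matrix update.
P1, P2 = 0.4, 0.7          # P1 from the lecture; P2 assumed
Q1, Q2 = 1 - P1, 1 - P2
alpha, beta = P1 + P2 - 1, Q2

P = np.array([[P1, Q2],    # columns: from-sunny, from-rainy
              [Q1, P2]])

w = np.array([1.0, 0.0])   # start sunny: S_0 = 1
s = 1.0
for _ in range(10):
    w = P @ w                  # full two-component matrix step
    s = alpha * s + beta       # reduced scalar step for S_n alone
print(w[0], s)                 # the two agree to machine precision
```

The agreement confirms that, thanks to normalization (R_n = 1 − S_n), one scalar equation carries all the information of the 2 × 2 system.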
So we now have an equation which simply becomes

S_{n+1} = (P1 + P2 − 1) S_n + Q2 for all integer n ≥ 0.

We can solve this easily by noting that it has the form S_{n+1} = α S_n + β, where α = P1 + P2 − 1 and β = Q2. You can proceed by induction. Putting n = 0 gives S_1 = α S_0 + β. Similarly, putting n = 1 gives S_2 = α S_1 + β, which can be expanded into α² S_0 + β(α + 1). We can try one more term: S_3 = α S_2 + β, and substituting for S_2 gives α³ S_0 + β(α³ − 1)/(α − 1). This last identity holds because we would have got the term β(α² + α + 1), and that geometric sum we rewrote in this closed form. This now makes it very easy to write the most general term, which takes the shape

S_n = α^n S_0 + β (α^n − 1)/(α − 1).

Explicitly, therefore, we have solved the problem:

S_n = (P1 + P2 − 1)^n S_0 + (1 − P2) [1 − (P1 + P2 − 1)^n] / (2 − P1 − P2),

where β = Q2 = 1 − P2, and we have rewritten α − 1 as −(2 − P1 − P2) so that the term remains positive in this way of writing. Since P1 and P2 are each probabilities, P1 + P2 is less than 2 (except in the trivial deterministic case), hence the denominator is positive. So this is the most general solution to the two-state problem that we discussed, and some interesting properties of it we will see in the next lecture. Thank you.
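The closed-form solution can be checked against direct iteration of the recurrence. P1 = 0.4 is from the lecture; P2 = 0.7 and the starting value S_0 = 1 are assumed illustrative values.

```python
# Closed-form solution of S_{n+1} = alpha*S_n + beta versus direct
# iteration.  P1 = 0.4 as in the lecture; P2 = 0.7 and S0 = 1.0 assumed.
P1, P2, S0 = 0.4, 0.7, 1.0
alpha, beta = P1 + P2 - 1, 1 - P2   # beta = Q2 = 1 - P2

def S_closed(n):
    """S_n = alpha^n S_0 + (1 - P2)(1 - alpha^n)/(2 - P1 - P2)."""
    return alpha**n * S0 + (1 - P2) * (1 - alpha**n) / (2 - P1 - P2)

# Direct iteration of the recurrence for comparison.
s, iterates = S0, [S0]
for _ in range(20):
    s = alpha * s + beta
    iterates.append(s)

print(all(abs(S_closed(n) - iterates[n]) < 1e-12 for n in range(21)))
# Since |alpha| < 1, the alpha^n term dies out and S_n approaches the
# stationary value (1 - P2)/(2 - P1 - P2).
print(S_closed(50), (1 - P2) / (2 - P1 - P2))
```

Note that the stationary value (1 − P2)/(2 − P1 − P2) read off from the closed form is exactly the eigenvalue-1 result of the matrix treatment, since |α| < 1 whenever the chain is not deterministic.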