Now we are going to restrict our study of Markov chains to a special class called homogeneous Markov chains. Before that, let us introduce a definition. Suppose at step n you are in state i, and in the next step you move to state j. We denote the probability of this one-step jump by p_ij(n) = P(X_{n+1} = j | X_n = i), and we call it the transition probability at step n. In the way we have defined discrete-time Markov chains, this probability need not be the same for every n. For instance, the probability of going from state i to state j from day 10 to day 11 need not equal the probability of going from i to j from day 20 to day 21. But if the probability of jumping from one state to another in the next step happens to be independent of n, we call the chain time-homogeneous. Keep in mind that time is indexed 1, 2, 3, and so on, and "step n" and "time slot n" mean the same thing. The quantity p_ij(n) always refers to a one-step jump; n only records at which step the jump occurs. If you take n = 10, it asks: given you are in state i at the 10th step, what is the probability you are in state j at the 11th step? If you take n = 100, it asks the same question about the 100th and 101st steps. If these probabilities are the same irrespective of which step you are at, we call the chain a time-homogeneous Markov chain.
So this quantity no longer depends on n, and we denote it simply by p_ij. Now, using these elements I can construct an |S| x |S| matrix P whose entries are the p_ij's. What does this mean? The i-th row of P lists the probabilities of jumping from state i to each of the different states; that row describes all the one-step jumps out of state i. This matrix P is called the transition probability matrix. Here S is the state space, which we allow to be countable. If S is finite, P is a finite-dimensional matrix with finitely many rows and columns; if S is countably infinite, P is a large matrix with countably many rows and columns. Are all of you able to imagine how this transition probability matrix looks and behaves? Take one row, say row i, and add all the elements in it, that is, sum p_ij over j in S. What does this sum to? It sums to 1, and of course all the entries p_ij are non-negative, because they are probabilities. A matrix P with these two properties is called a stochastic matrix. Now, what if instead I fix a column j and sum over i, that is, I add down a column rather than across a row? Will that also sum to 1?
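As a small numerical sketch (the 3-state chain and its probabilities here are made up for illustration, not from the lecture), we can check the two defining properties of a stochastic matrix:

```python
import numpy as np

# Hypothetical 3-state transition probability matrix:
# row i lists the probabilities of jumping from state i to each state j.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

# Entries are probabilities, hence non-negative.
assert (P >= 0).all()
# Each row sums to 1: from state i you must jump to some state.
assert np.allclose(P.sum(axis=1), 1.0)
```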
No. When I fix a row i and add all the elements in that row, I am asking: given I am in state i, what is the probability that I jump to some state j, considering all possible j's? Since I must jump to one of the states, that probability is 1. The column sum asks the opposite question: fixing the destination j, what is the total of the probabilities of reaching j from the different states i? Picture it like this: here is state j, and all the possible states i have arrows pointing into it. From a given state i, the probability of jumping to one of the possible destinations is 1; but now I am asking the other way around, the probability of arriving at j from the various starting states. Is this sum 1? No. Can it be greater than 1? Yes: suppose from one particular state the probability of coming to j is greater than a half, and from another state it is also very likely to come to j; those two probabilities alone already add up to more than 1. The sum can also be 0, trivially, if no state ever jumps to j. So if you take one row of a stochastic matrix, it forms a probability vector, but if you take one particular column, that need not be a probability vector, and that is why a column can add up to more than 1.
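To see the row/column asymmetry concretely, here is a sketch with a hypothetical 2-state chain in which both states favour state 0 (the numbers are made up for illustration):

```python
import numpy as np

# Hypothetical 2-state chain: both states jump to state 0 with high probability.
P = np.array([
    [0.9, 0.1],
    [0.8, 0.2],
])

# Each row is a probability vector:
assert np.allclose(P.sum(axis=1), 1.0)
# But the columns are not: column 0 sums to 1.7 > 1, column 1 to 0.3 < 1.
col_sums = P.sum(axis=0)
assert np.isclose(col_sums[0], 1.7) and np.isclose(col_sums[1], 0.3)
```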
Each entry of the matrix is between 0 and 1, so a column sum is at least 0, and 0 is trivially possible; the point is that it can add up to more than 1. Now, is every transition probability matrix of a time-homogeneous Markov chain a stochastic matrix? Yes: if you take any of its rows, it adds up to 1, and the entries p_ij are non-negative, which are exactly the two conditions we need. Let me state it formally: if the summation of p_ij over j in S equals 1 for every i, and every p_ij is non-negative, then the transition probability matrix is a stochastic matrix. Can this go the other way around as well? Yes: if you take any stochastic matrix, it will correspond to the transition probability matrix of some Markov chain. A stochastic matrix is the general concept, any matrix with non-negative entries whose rows add up to 1, and what we are showing is the property that the transition probability matrix of a time-homogeneous Markov chain is always a stochastic matrix. Okay, let us revisit the example we did earlier. What values does Y_n take? It takes values 0, 1, 2, all the way up, if n is sufficiently large. So now let us try to construct its transition probability matrix, which is of size |S| x |S|.
So there are going to be countably many rows, but let us try to understand how one row looks. Take the first row, which corresponds to i = 0. What does i = 0 mean? It means we have observed the value 0, and we are asking about P(Y_{n+1} = j | Y_n = 0); the i here is 0. If I now vary j, I should get the whole row. The matrix has rows indexed by i = 0, 1, 2, ... and columns indexed by j = 0, 1, 2, ...; this is how the stochastic matrix looks, because it is the matrix of p_ij for all possible values of i and j, with i indexing the row and j indexing the column. So what is the (0, 0) entry? It is the probability of remaining at 0, which is 1 - p; the (0, 1) entry is p; and all the other entries in that row are 0. Now take i = 1: can Y_{n+1} be 0? No, so that entry is 0; the probability that j is 1 again is 1 - p, and the probability of moving to 2 is p. The same pattern continues for i = 2 and beyond. So how does this matrix look? Each row is the previous row shifted one element to the right: 1 - p on the diagonal, p just to its right, and 0 everywhere else; you keep shifting like this and fill the entire matrix. Now, instead of writing out the transition probability matrix like this, one can often show the same information pictorially in what is called a transition diagram: we draw the states as circles, 0, 1, 2, and so on up to some state n.
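The banded, shifted-row structure just described can be sketched in code. Since the true state space {0, 1, 2, ...} is infinite, the sketch below truncates it to finitely many states, with the last state made absorbing purely so that every row still sums to 1; the truncation and the function name are illustrative assumptions, not part of the lecture.

```python
import numpy as np

def counting_chain_matrix(p, n_states):
    """Truncated transition matrix of the counting chain Y_n:
    stay at i with probability 1 - p, move to i + 1 with probability p."""
    P = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        P[i, i] = 1 - p       # remain at the same count
        P[i, i + 1] = p       # increment the count by one
    P[-1, -1] = 1.0           # absorb at the boundary (artifact of truncating)
    return P

P = counting_chain_matrix(p=0.3, n_states=5)
# Each row is the previous row shifted one place to the right,
# and every row still sums to 1, so P is stochastic.
assert np.allclose(P.sum(axis=1), 1.0)
assert np.isclose(P[0, 0], 0.7) and np.isclose(P[0, 1], 0.3)
```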
Now, if you are in state 0, what is the probability of coming back to state 0? It is 1 - p, so we draw a self-loop at 0 labelled 1 - p, and an arrow from 0 to 1 labelled p; all the other transitions out of 0 have probability 0, so we do not draw them. Similarly, state 1 remains at 1 with probability 1 - p and goes to 2 with probability p, and in general state n stays at n with probability 1 - p and goes to n + 1 with probability p. If a transition has probability 0, you can either write the 0 or simply omit that link. You can see that this transition diagram carries exactly the same information as the transition probability matrix. Okay, next. Now I am going to define one more object, p_ij^(n). Notice that instead of writing n in parentheses after p_ij, I am writing it as a superscript, and this has a different meaning: by p_ij^(n) I mean the probability P(X_n = j | X_0 = i). What is the difference between this notation and the earlier one? Earlier, p_ij(n) was the probability of going from state i to state j in one step, in one jump, at step n. Here you are not jumping from i to j in one step; you want to reach j in n steps starting from the origin: at time 0 you are in state i, and you are asking for the probability that at the n-th step you are in state j. You can verify that under the homogeneous Markov property, P(X_{m+n} = j | X_m = i) is the same for any m: it does not matter at which time you start; what matters is the number of steps you are looking at.
So, let us not confuse the two; for the time being let us just say this: at time 0 you are in state i, and we ask for the probability that after n steps in the future you are in state j. We denote this by p_ij^(n), and we collect all these entries into a big matrix P^(n). Now let us try to understand the relation between the matrix P and the matrices P^(n). Take n = 1: is P^(1) = P? For n = 1 we are asking about a one-step jump, so this falls back to the definition of P. The matrix P^(n) is called the n-step transition probability matrix. Now, what we are going to show is that, beyond P^(1) = P, we actually have P^(n) = P^n, where P^(n) in parentheses denotes the n-step transition probability matrix and P^n means P multiplied by itself n times. So the n-step transition probability matrix can be expressed completely in terms of the one-step transition probability matrix. Let us see why this is true. Take some states i, j in S and some non-negative integers n and l, and consider p_ij^(n+l). By definition, this is the probability that X_{n+l} = j starting from X_0 = i; I have just replaced n by n + l in the definition. Now I am going to use the usual trick of summing over all possible intermediate states: p_ij^(n+l) = the sum over k of P(X_n = k, X_{n+l} = j | X_0 = i).
So what did I do? I just brought in this extra state X_n and summed over all possible values k it can take. Then I applied the chain rule of probability to write it in product form: the sum over k of P(X_{n+l} = j | X_n = k, X_0 = i) times P(X_n = k | X_0 = i). Now use the Markov property on the first term: does P(X_{n+l} = j | X_n = k, X_0 = i) depend on X_0 = i? No; since we have already conditioned on X_n = k, it is independent of X_0, so it equals P(X_{n+l} = j | X_n = k). Now focus on the second term: by definition, P(X_n = k | X_0 = i) is the probability of starting from state i at the 0-th round and being at state k at the n-th round, which is p_ik^(n). And the first term asks: starting at the n-th step in state k, what is the probability of being in state j at the (n + l)-th step? How many steps am I jumping here? l steps. So this term is p_kj^(l): if we fix n = 0, it is exactly going from k to j in l steps, and starting instead at the n-th round and jumping to the (n + l)-th round is the same jump of l rounds. In other words, in this definition I could write m and n + m for any m and still get the same value; what matters is the number of jumps, and this is the point we have to argue using our time-homogeneity property. So what we have shown is p_ij^(n+l) = the sum over k of p_ik^(n) times p_kj^(l): going from i to k in n steps and then from k to j in l steps, summed over all intermediate k. Now, if you think of this as a matrix product, the right-hand side is exactly the (i, j)-th element of P^(n) P^(l). Is this true?
Yes: if you take the matrix P^(n), the n-step transition probability matrix, and the matrix P^(l), the l-step one, form their product and look at the (i, j)-th element, it is exactly this sum. Why? When I compute the (i, j)-th entry of a product, I multiply the i-th row by the j-th column, summing over all possible intermediate states k, and that is exactly the expression above. So if every (i, j)-th element of P^(n+l) equals the (i, j)-th element of this product, then P^(n+l) = P^(n) P^(l). Good; we are able to split the transition probability matrix in this fashion, and iterating it will lead us to the result. The l here was arbitrary; I could choose any l I like. So I take l = 1 and treat n as n - 1: then P^(n) = P^(n-1) P^(1). By definition, P^(1) = P, so I can do the same business on P^(n-1) that I did here, and keep going. Eventually I end up with P^(n) = P^(0) P^n. So what is P^(0)? You start from a state and, after zero steps, you remain in that same state; let me define what P^(0) is.
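We can sanity-check both the splitting identity P^(n+l) = P^(n) P^(l) and the conclusion P^(n) = P^n numerically. The 3-state matrix below is a made-up example; the Monte Carlo loop estimates p_02^(4) by simulating the chain and compares it against the (0, 2) entry of the matrix power P^4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state chain.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

n = 4
Pn = np.linalg.matrix_power(P, n)  # claim: this equals the n-step matrix P^(n)

# Splitting identity: P^(n+l) = P^(n) P^(l), here with n = l = 2.
P2 = np.linalg.matrix_power(P, 2)
assert np.allclose(Pn, P2 @ P2)

# Monte Carlo estimate of p_02^(4): start at state 0, run 4 steps,
# and count how often we end in state 2.
trials = 50_000
hits = 0
for _ in range(trials):
    state = 0
    for _ in range(n):
        state = rng.choice(3, p=P[state])
    hits += (state == 2)
estimate = hits / trials
assert abs(estimate - Pn[0, 2]) < 0.02  # agreement up to sampling noise
```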
P^(0): its (i, j)-th element is the probability that X_0 = j given X_0 = i; that is the meaning of P^(0). Of course, in the iteration you could also stop at P^(1) = P and never write the 1 as 1 + 0, in which case you do not need P^(0) at all; but I still want to define it. Why is P^(0) the identity matrix? If you are in state i at time 0, can you simultaneously be in some other state j at time 0? No: this probability is 0 if i is not equal to j, and 1 if i equals j. Because of this, P^(0) is simply the identity matrix. So both ways are correct: we can just stop at P^(1), or, to make things consistent, define P^(0) to be the identity, so that P^(n) = P^n. Let us stop here. In the next class we will look into another property of Markov chains, called the strong Markov property.