So, I will continue the discussion with transient and absorbing probabilities, that is, with reducible Markov chains. Look at this transition diagram. There are four states. From state 1 you can return to 1, go to 2, or go to 3; once you reach 3, the probability of returning to itself is 1, so staying in 3 is a certain event, and once the process enters 3 it stops there. Similarly, from 2 you can go to 1, to 2 itself, or to 4, and 4 is an absorbing state, so the process again stops there. So, states 1 and 2 are transient and 3 and 4 are absorbing; you can see this immediately just by looking at the transition diagram. Over a short period, states 1 and 2 will be visited, because they are transient: the process may loop at 1, move between 1 and 2, or loop at 2 for a while, but the moment it transitions to 3 or 4, it stops. So, the transient states are visited for a while, but ultimately the process enters state 3 or 4 and stays there, and the process ends. Using this, I want to talk about absorbing probabilities. The first question that arises is: which absorbing state will be entered? Of course, here again we will only talk in terms of probabilities: with what probability will each absorbing state be entered?
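Before doing any algebra, one can see this absorption behaviour empirically. Below is a quick simulation sketch, assuming the transition probabilities used later in the lecture (p11 = p12 = 1/4, p13 = 1/2; p21 = p22 = p24 = 1/3): every run started in a transient state eventually lands in state 3 or 4, and the fraction absorbed in 3 approaches the absorbing probability computed later.

```python
import random

# Transient states 1 and 2 with their outgoing transitions (state, prob);
# the absorbing states 3 and 4 have no outgoing entries.
P = {1: [(1, 0.25), (2, 0.25), (3, 0.50)],
     2: [(1, 1/3), (2, 1/3), (4, 1/3)]}

def run(start, rng):
    """Follow the chain from `start` until it hits an absorbing state."""
    state = start
    while state in P:                     # loop while still transient
        targets, probs = zip(*P[state])
        state = rng.choices(targets, weights=probs)[0]
    return state

rng = random.Random(0)
runs = [run(1, rng) for _ in range(10000)]
print(runs.count(3) / len(runs))  # close to a13 = 4/5
```

Every simulated trajectory terminates, which is the defining feature of a reducible chain with only transient and absorbing states.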
Now, just to point out the difference between the absorbing probabilities and the steady state probabilities, look at this example. We will answer the question by computing absorbing probabilities, but before I describe the method, let me give you a feeling for what we are talking about. If you are in state 1, the probability of transitioning to 3 seems higher than of transitioning to 4. Starting in 1, the transition to 3 can happen in one step, with probability 1/2, or the process can loop at 1 and then transition, contributing 1/4 × 1/2. So the two-step probability of being in 3 starting from 1 is 1/2 + 1/4 × 1/2 = 1/2 + 1/8 = 5/8. But if you are in 2, you cannot reach 3 in one step; the only two-step path is 2 to 1 and then 1 to 3, and its probability is 1/3 × 1/2 = 1/6. So already in two steps, reaching 3 is far more likely from 1 (5/8) than from 2 (1/6), essentially because 1 is directly connected to 3. This is what we will bring out through the computation of the absorbing probabilities: the probability of ending up in 3 should come out higher when you start in state 1 than when you start in state 2.
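The two-step numbers above can be checked by squaring the full one-step matrix. A minimal sketch with exact fractions, assuming the four-state example's probabilities as read off the diagram:

```python
from fractions import Fraction as F

# One-step transition matrix, states 1..4 (rows/cols 0..3);
# 1 and 2 are transient, 3 and 4 are absorbing.
P = [
    [F(1, 4), F(1, 4), F(1, 2), F(0)],     # from state 1
    [F(1, 3), F(1, 3), F(0),    F(1, 3)],  # from state 2
    [F(0),    F(0),    F(1),    F(0)],     # state 3 absorbs
    [F(0),    F(0),    F(0),    F(1)],     # state 4 absorbs
]

def matmul(A, B):
    """Multiply two square matrices of Fractions."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)   # two-step transition probabilities
print(P2[0][2])     # P(in 3 after two steps | start in 1) = 5/8
print(P2[1][2])     # P(in 3 after two steps | start in 2) = 1/6
```

Raising P to higher powers the same way shows the rows converging toward the absorbing probabilities, rather than toward a common steady state.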
So, in other words, what we are making a case for is that the absorbing probabilities are not independent of the starting state. Unlike the steady state probabilities of an ergodic process, the absorbing probabilities depend on the starting state. Steady state probabilities, as we saw, were independent of where the system started: we simply solved π = πP, and it did not matter where the system was, because all the rows of the limiting matrix became identical. But the absorbing probabilities depend on where you start, and this is what we want to establish. When we compute them, we will show that the probability of absorption in 3 is higher starting from state 1 than from state 2, and similarly the probability of absorption in 4 is higher starting from 2 than from 1; taking three or four transitions, you can already see the numbers separating. So, let me make these definitions: a_ij is the probability of reaching the absorbing state j from state i, where i is a transient state. We want to compute this. One way is to use the first passage probabilities: a_ij is the sum over n from 1 to infinity of f_ij(n), where f_ij(n) is the probability of reaching j from i for the first time in n steps. But this is not computationally efficient, because we would have to compute all the higher powers of the transition matrix to obtain the f_ij(n). So, we use an alternate method, and the argument should look familiar, because we have already used this kind of argument: if you want to compute a_ij, then either the transition to j takes place in one step,
in which case the probability is p_ij, the one-step probability of going from the transient state i to the absorbing state j; or the process first moves from i to another transient state k, with probability p_ik, and from k it is ultimately absorbed in j, with probability a_kj. So, a_ij = p_ij + Σ_k p_ik a_kj, where the sum runs over the transient states k. A complete set of linearly independent equations is obtained by applying the same argument to each transient state: for every transient state we write this equation for the probability of absorption in j. Solving for the a_ij's then answers the question of which absorbing state will be entered, in the sense that, starting from a given state, each absorbing state is entered with a certain probability. Now let us take the transition diagram of our example. From transient state 1, absorption in state 3 can happen in one step (p13), or by first moving to the transient state 2 (p12 a23), or by following the loop back to 1 (p11 a13). So the equation for a13 is a13 = p13 + p12 a23 + p11 a13. Similarly, for a23: a23 = p23 + p22 a23 + p21 a13. In our case p23 = 0, because there is no arc from 2 to 3, so from 2 you either follow the loop back to 2, contributing p22 a23, or
you go from 2 to 1 and then from 1 eventually to 3, contributing p21 a13. These are the two equations. Substituting the p_ij's, we obtain a13 = 1/2 + 1/4 a13 + 1/4 a23 and a23 = 1/3 a13 + 1/3 a23 (the p23 term being 0). The unknowns are a13 and a23, so we have two unknowns and two equations and can solve. Bringing a13 to the left in the first equation gives 3/4 a13 = 1/2 + 1/4 a23; doing the same in the second gives 2/3 a23 = 1/3 a13, which immediately gives a23 = (1/2) a13. Substituting for a23 in the first equation: 3/4 a13 = 1/2 + 1/4 × 1/2 a13, that is, (3/4 − 1/8) a13 = 1/2, so 5/8 a13 = 1/2 and a13 = 4/5. Once a13 = 4/5 is known, a23 = 2/5. Notice that already from the relation a23 = (1/2) a13 one could have concluded what I was trying to show: the probability of reaching the absorbing state 3 from 2 is smaller than the probability of reaching 3 from 1. This validates the intuitive feeling from the diagram, because 1 has a direct connection to 3, while from 2 you need at least two transitions to reach 3. So a23 = 2/5, which is less than a13 = 4/5.
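The two simultaneous equations above can be solved mechanically. A small sketch with exact fractions, using the example's probabilities and Cramer's rule on the rearranged system (I − Q)a = r:

```python
from fractions import Fraction as F

# One-step probabilities from the example's diagram.
p11, p12, p13 = F(1, 4), F(1, 4), F(1, 2)
p21, p22, p23 = F(1, 3), F(1, 3), F(0)

# a13 = p13 + p11*a13 + p12*a23  and  a23 = p23 + p21*a13 + p22*a23
# rearrange to:
#   (1 - p11) a13 - p12 a23 = p13
#   -p21 a13 + (1 - p22) a23 = p23
det = (1 - p11) * (1 - p22) - p12 * p21
a13 = (p13 * (1 - p22) + p12 * p23) / det
a23 = ((1 - p11) * p23 + p21 * p13) / det
print(a13, a23)  # 4/5 2/5
```

Exact rational arithmetic avoids any rounding question when checking that a23 is exactly half of a13.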
Now, certainly a13 + a23 will not be equal to 1, because starting from 1 you are not sure of reaching the absorbing state 3; you may also reach the absorbing state 4. So these two probabilities need not add up to 1. But if you fix the initial state, then the system must enter one of the absorbing states: if the system is in state 1, then a13 + a14 = 1, and if it is known to be in state 2, then it must ultimately transition to 3 or 4, so a23 + a24 = 1. Knowing this, I can immediately compute, since a13 and a23 are already known: a14 = 1/5 and a24 = 3/5. Here again the same point is validated: a14 < a24, so the probability of being absorbed in 4 is higher starting from 2 than starting from 1. I could also have obtained a14 and a24 directly, by writing down equations for them just as I wrote them for a13 and a23, and solving. So, this is the method for computing the absorbing probabilities, and as we have seen, these probabilities depend on where the system starts, unlike the steady state probabilities, which are independent of the starting state. Now let us generalize. Call A the matrix of these absorbing probabilities: its rows correspond to the transient states i and its columns to the absorbing states j. In our example, A is 2 × 2.
So, in this case the matrix is 2 × 2, and if I write down the equations for a14 and a24 as well, I get all the absorbing probabilities at once. The whole system can then be written in matrix form as A = R + QA. Here R is the submatrix of the transition matrix P consisting of the one-step probabilities p_ij from a transient state to an absorbing state; it has the same dimensions as A, with rows corresponding to the transient states and columns to the absorbing states. Q is the submatrix of transition probabilities from a transient state to another transient state, because in the argument you either went from i to j in one step (those probabilities sit in R), or you went from i to some transient state k and from k eventually to j. The columns of Q and the rows of A both correspond to the transient states, so the product QA is compatible, and A = R + QA is the system of linear equations for the absorbing probabilities in matrix form. Bringing QA to the left gives (I − Q)A = R, and so A = (I − Q)^{-1} R, provided (I − Q)^{-1} exists.
Now, A represents the probabilities of reaching an absorbing state from a transient state, and when we set up this equation it is a valid one, so there must be a solution, in fact a unique one; therefore (I − Q)^{-1} exists. I am giving the argument this way; normally one first shows analytically that the matrix is non-singular and hence that the solution exists. But here we know that the system will ultimately settle into one of the absorbing states, so these probabilities are well defined, the solution exists, and hence I − Q must be invertible. The elements of (I − Q)^{-1} also have interesting interpretations, and as the questions keep arising we will go on answering them, looking at these interpretations in between. The second question that arises is: how many times will a transient state be occupied before absorption occurs? We are dealing with a reducible Markov chain, so we want to know, on the average, how many transitions take place before absorption occurs and the system stops, and we ask this for each transient state. The number of times a transient state is occupied is a random variable, and it depends on the starting state. So, let us define E_ij as the mean number of times that transient state j is occupied, given that the initial state is i.
So, remember how we computed M_ij, the mean first passage time from i to j in the ergodic case; we used the same style of argument while computing the f_ij's, and again for the absorbing probabilities a_ij. We use it here once more to write down the equations relating the various E_ij's, the mean numbers of times that transient state j is occupied given that the initial state is i. Here both i and j are transient. When i ≠ j, every visit to j must come after a first transition to some transient state k, so E_ij = Σ_k p_ik E_kj, where the sum runs over the transient states k: p_ik is the one-step probability of moving to k, and E_kj is the mean number of visits to j starting from k. That is one set of equations. The other is for i = j: the initial occupancy of i itself counts as one visit, so E_ii = 1 + Σ_k p_ik E_ki, with the same argument continuing from k. In matrix form, since these relations hold for all transient i and j, E has the same dimensions as Q, and the relationship is E = I + QE. So E = (I − Q)^{-1}. This is what I wanted to say: the elements of (I − Q)^{-1} can be related to the process and given a meaning. Let us look at the entries of (I − Q)^{-1} for the example we have been discussing, with four states, two transient and two absorbing.
So, in that case R = [[1/2, 0], [0, 1/3]]: the rows correspond to the transient states 1 and 2 and the columns to the absorbing states 3 and 4, so 1/2 is the one-step probability from 1 to 3 and 1/3 is the one-step probability from 2 to 4. Q is the matrix whose rows and columns both correspond to the transient states; referring back to the diagram, the loop at 1 has probability 1/4, you can go from 1 to 2 with probability 1/4, from 2 to itself with probability 1/3, and from 2 to 1 with probability 1/3. So Q = [[1/4, 1/4], [1/3, 1/3]]; you can verify these numbers from the transition diagram. Then I − Q = [[3/4, −1/4], [−1/3, 2/3]], and (I − Q)^{-1} = [[8/5, 3/5], [4/5, 9/5]]. The matrix A = (I − Q)^{-1} R comes out to [[4/5, 1/5], [2/5, 3/5]], which agrees with the calculation we had already done: a13 = 4/5 is higher than a23 = 2/5, and a24 = 3/5 is higher than a14 = 1/5. But now let us look at the elements of (I − Q)^{-1} itself, which is our matrix E. Look at E21, for example: it is 4/5. This is the mean number of times that state 1 will be occupied before absorption occurs, given that the initial state of the system was 2.
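The matrix computations just described fit in a few lines. A sketch with exact fractions, assuming only the Q and R read off the example's diagram (the 2 × 2 inverse is written out by hand):

```python
from fractions import Fraction as F

Q = [[F(1, 4), F(1, 4)],    # transient -> transient: 1->1, 1->2
     [F(1, 3), F(1, 3)]]    #                         2->1, 2->2
R = [[F(1, 2), F(0)],       # transient -> absorbing: 1->3, 1->4
     [F(0),    F(1, 3)]]    #                         2->3, 2->4

def inv2(M):
    """Inverse of a 2x2 matrix of Fractions."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

IQ = [[1 - Q[0][0], -Q[0][1]],
      [-Q[1][0], 1 - Q[1][1]]]
E = inv2(IQ)    # fundamental matrix E = (I - Q)^{-1}: [[8/5, 3/5], [4/5, 9/5]]
A = [[sum(E[i][k] * R[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]      # A = (I - Q)^{-1} R: [[4/5, 1/5], [2/5, 3/5]]
```

Each row of A sums to 1, reflecting that from a fixed transient state the process is certain to be absorbed somewhere.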
So, starting from 2, the process will eventually enter an absorbing state, but in the meantime the mean number of times that state 1 is visited before absorption occurs is 4/5. It would be interesting to take any physical process that can be modeled as such a reducible Markov chain, with only transient and absorbing states, and give meaning to these numbers. Now, if you add up E_1j over the transient states j, you get the mean total number of transitions before absorption occurs, given that the system initially occupied state 1: before absorption, the process occupies either state 1 or state 2 at each step, so in this example the sum is E11 + E12, the mean number of times state 1 is occupied plus the mean number of times state 2 is occupied. This total gives the mean number of transitions before absorption occurs, starting from state 1.
So, E21 was a particular element: starting from 2, the mean number of times state 1 is occupied before absorption occurs. Now we are asking: starting from 1, what is the mean total number of transitions before absorption occurs? It is E11 + E12, because either state 1 or state 2 is occupied before absorption, and this comes out to 8/5 + 3/5 = 11/5. Again, these are mean numbers of transitions, not probabilities. Similarly, starting from 2, the mean number of transitions before absorption is E21 + E22 = 4/5 + 9/5 = 13/5. So, on the average, absorption occurs somewhat sooner starting from state 1 than from state 2. As we look at the elements of (I − Q)^{-1}, the row sums answer the question of how many transitions occur, on the average, before absorption, for each starting transient state; I wanted to emphasize once more these interpretations of the elements of (I − Q)^{-1}. Now, a different question: suppose you specify an absorbing state, and you want to know how many transitions will occur before the specified absorbing state is occupied.
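The row-sum interpretation is easy to verify directly. A self-contained sketch, again assuming the example's Q and inverting the 2 × 2 matrix by hand:

```python
from fractions import Fraction as F

# Transient-to-transient submatrix from the example's diagram.
Q = [[F(1, 4), F(1, 4)],
     [F(1, 3), F(1, 3)]]

# E = (I - Q)^{-1}; E[i][j] is the mean number of visits to transient
# state j+1 before absorption, starting from transient state i+1.
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
E = [[d / det, -b / det], [-c / det, a / det]]

# Row sums = mean total transitions among transient states before absorption.
print(sum(E[0]))  # from state 1: E11 + E12 = 11/5
print(sum(E[1]))  # from state 2: E21 + E22 = 13/5
```

The comparison 11/5 < 13/5 confirms that, on average, absorption happens a little sooner from state 1 than from state 2.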
So, this also gives you an idea of how long the process will continue, because once an absorbing state is occupied, the process is over, or rather it simply stays in that same state. Of course, this question arises only when there is more than one absorbing state; if there is only one, you know the process will ultimately reach it and stop. With a single absorbing state, the question can be answered by the mean first passage time: starting in the transient state i, you ask how many transitions occur on the average before you reach state j. But if there is more than one absorbing state, the final transition occurs to only one of them, and it is not known in advance to which one; once one absorbing state is reached, the other will never be visited, and therefore the mean first passage time to a specified absorbing state can be infinite. So, with more than one absorbing state, we need further computations to answer the question of how many transitions occur before the specified absorbing state is occupied. What we compute in that case is the conditional mean first passage time: the mean first passage time from i to j, given that passage to the specified absorbing state j actually occurs.
So, the conditional mean first passage time is what we need to compute. Given that the process will be absorbed in state j, define m_ij as the conditional mean first passage time from i to j. This m_ij is different from the M_ij of the ergodic process, the mean number of transitions for going from i to j when all states were recurrent. Here you occupy the transient state i, and j is an absorbing state. Recall that a_ij, from the matrix A we computed, is the probability of absorption in j starting from i. Conditioning on the first transition: either passage to j occurs in one step, contributing 1 × p_ij, or the process first moves to some state k with probability p_ik and then passes from k to j, contributing p_ik a_kj (1 + m_kj), where a_kj is the absorption probability from k to j and m_kj the conditional mean first passage time from k. Collecting terms, the equations come out as a_ij m_ij = a_ij + Σ_k p_ik a_kj m_kj. Writing such an equation for each transient state i gives a system that we can solve for the conditional mean first passage times. Now, over which k does the sum run? Can k be an absorbing state?
You can immediately see that k cannot be an absorbing state: once the process is in an absorbing state other than j, it can never subsequently transition to j, and if it enters j itself, the passage is already complete. So k is a transient state: you transition from the transient state i to a transient state k, a_kj is the probability of absorption in j from k, and m_kj is the conditional mean first passage time from k to j. The sum Σ_k p_ik a_kj m_kj therefore runs over all transient k, including k = i itself via the loop. This gives an equation relating m_ij to the other conditional mean first passage times to the absorbing state j. Note also: if passage to state j is certain to occur, then all the a_ij's for that fixed j equal 1, and these equations transform into exactly the mean first passage time equations we wrote for the ergodic process. That is the case of a single absorbing state, where the ordinary mean first passage time equations remain valid. The whole idea is that when there is more than one absorbing state, we must compute these conditional mean first passage times.
So, let us now take the example we have been following all along, the four-state example in which 1 and 2 are transient and 3 and 4 are absorbing, and compute m24: you are in state 2 and want the conditional mean first passage time to the absorbing state 4. To recall the structure of the equation a_ij m_ij = a_ij + Σ_k p_ik a_kj m_kj for an absorbing state j: a_ij is the absorption probability from i to j, m_ij the conditional mean first passage time, and the passage either occurs in one step or goes first to a transient state k, from which the absorption probability to j is a_kj and the conditional mean first passage time is m_kj. The state k cannot be j, because j is absorbing and once you transition to j the passage is done; k is a transient state, possibly i itself. And again, if passage to state j is certain, all these absorbing probabilities are 1,
and in that case the equations reduce to those for computing the mean first passage time of an ergodic process, which we have already done. Now let us use these equations to compute the m_ij's for our example; the question we asked was how many transitions occur before the specified absorbing state, here state 4, is occupied, so I will compute m24 and also m14. Writing out the equations: a24 m24 = 1 × a24 + p21 a14 m14 + p22 a24 m24, because from 2 you either transition to 4 in one step, or move to the transient state 1 and then pass from 1 to 4, or follow the loop at 2. Similarly, for m14: a14 m14 = 1 × a14 + p11 a14 m14 + p12 a24 m24, because from 1 you either transition to 4 directly, loop back to 1, or move to 2. These are the conditional mean first passage time equations for our example. Now we substitute; note that we substitute the a_ij's as well as the p_ij's.
So, remember we had computed the absorbing-probability matrix A for this example; if you look back at it, the numbers are given to you: a_14 = 1/5 and a_24 = 3/5, and from the transition matrix P, p_21 = 1/3, and so on. You have the matrix A, the transition matrix P and the transition diagram, so if you just travel back a few frames you have all these numbers. Substituting them, I get these two equations, and solving them easily gives m_24 = 93/45 and m_14 = 153/45 (the slide says 43, but the denominator should be 45). So, as I was saying, if you now look at m_14 + m_24, this equals (93 + 153)/45 = 246/45, which is about 5.47. So, on average, the mean number of transitions required, given that the process is absorbed in state 4, is 246/45. Now, I have tried to look at these processes in different ways and give you interpretations, because given a physical process there are a lot of such questions. If one has to plan, one has to know; for example, for reducible Markov chains, that is, processes which will terminate in a short time, one wants to have an idea of these numbers so that one can plan accordingly. Let us hope that in future, after having gone through this, when you come across such situations or processes you can look at them in a more meaningful way. So, I will now discuss the exercises. Let us look at question 1: a particle moves on a circle. By the way, these questions I have taken from the book by Hillier and Lieberman, the reference to which will be given to you at the end of the course.
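As a quick check on this arithmetic, here is a small Python sketch that solves both 2-by-2 systems exactly with rational arithmetic. The one-step probabilities among the transient states are an assumption reconstructed from the numbers quoted in the lecture (they reproduce a_14 = 1/5, a_24 = 3/5 and p_21 = 1/3), so treat this as an illustration of the method rather than a transcript of the slide.

```python
from fractions import Fraction as F

# One-step transition probabilities for the four-state example; the exact
# values are an ASSUMPTION reconstructed from the numbers quoted in the
# lecture (they give back a_14 = 1/5 and a_24 = 3/5).
p = {(1, 1): F(1, 4), (1, 2): F(1, 4), (1, 3): F(1, 2), (1, 4): F(0),
     (2, 1): F(1, 3), (2, 2): F(1, 3), (2, 3): F(0),    (2, 4): F(1, 3)}

# Absorbing probabilities a_i4 solve:
#   a_14 = p_14 + p_11 a_14 + p_12 a_24
#   a_24 = p_24 + p_21 a_14 + p_22 a_24
# Rearranged into a 2x2 linear system and solved by Cramer's rule:
det = (1 - p[1, 1]) * (1 - p[2, 2]) - p[1, 2] * p[2, 1]
a14 = (p[1, 4] * (1 - p[2, 2]) + p[1, 2] * p[2, 4]) / det
a24 = ((1 - p[1, 1]) * p[2, 4] + p[2, 1] * p[1, 4]) / det

# Conditional mean first passage times into 4 solve:
#   a_i4 m_i4 = a_i4 + sum over transient k of p_ik a_k4 m_k4
det_m = (a14 * (1 - p[1, 1])) * (a24 * (1 - p[2, 2])) \
        - (p[1, 2] * a24) * (p[2, 1] * a14)
m14 = (a14 * a24 * (1 - p[2, 2]) + p[1, 2] * a24 * a24) / det_m
m24 = (a14 * (1 - p[1, 1]) * a24 + p[2, 1] * a14 * a14) / det_m

print(a14, a24)  # 1/5 3/5
print(m14, m24)  # 17/5 31/15  (i.e. 153/45 and 93/45)
```

The totals match the lecture's numbers: m_14 + m_24 = 246/45, roughly 5.47 transitions on average.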
So, a particle moves on a circle through points that have been marked 0, 1, 2, 3, 4 in clockwise order, and the particle starts at point 0. So, let us see: here is a circle, and you have 0, 1, 2, 3 and 4 on it; the idea is that at each step the particle either moves forwards or backwards. At each step it has probability 0.5 of moving one point clockwise and probability 0.5 of moving one point counter-clockwise, so both probabilities are the same. You remember the random walk we were looking at: when we said the probability is half, we also showed that it is an ergodic process, because in that case all the states are recurrent. Same situation here. Now, let X_n denote the particle's location on the circle after step n; X_n is a Markov chain. We have already seen why: the probability of transitioning in the next step depends only on where you are, not on how you reached there. So, construct the one-step transition matrix; you can do it, and it will be a 5 by 5 matrix. Then determine the n-step transition matrix P^n for n = 5, 10, 20, 40, 80. I have gone up to 80 because, either in MATLAB, with which you must be familiar, or by writing a small program of your own, you can iteratively multiply and obtain P^5, P^10, P^20 and so on, just to familiarize yourself. Then determine the steady-state probabilities of the states of the Markov chain. This may become a little tedious, because the transition matrix is 5 by 5, so you will have 5 variables and 5 equations; but since all the transition probabilities are one half, the system is easy to solve.
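For part (b) of question 1, a minimal sketch of the iterative matrix-power computation, in plain Python rather than MATLAB (states 0 through 4 as in the exercise):

```python
# Symmetric random walk on a 5-point circle: build the one-step matrix P
# and raise it to a power by repeated multiplication.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(P, n):
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        R = matmul(R, P)
    return R

n = 5
# From each point the particle moves one step clockwise or one step
# counter-clockwise, each with probability 0.5 (indices taken mod 5).
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][(i + 1) % n] = 0.5
    P[i][(i - 1) % n] = 0.5

P80 = matpow(P, 80)
print([round(x, 6) for x in P80[0]])  # every row approaches [0.2]*5
```

Since this chain is doubly stochastic, irreducible and aperiodic (the cycle has odd length, so loops of length 5 and 2 coexist), every row of P^80 is already indistinguishable from the uniform steady-state vector (0.2, 0.2, 0.2, 0.2, 0.2).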
So, you should get the answer quickly. Then, describe how the probabilities in the n-step transition matrices obtained in part (b) compare to these steady-state probabilities as n grows large. The point is that once you have found the steady-state probabilities and you also have, say, P^40 and P^80, you will see that the rows of P^80 are very, very close to the steady-state probabilities; in fact, you can already see the pattern at P^40. Now, question 2 simply asks you to determine the period of each of the states of the Markov chain that has the given transition matrix; here again you have a 5 by 5 matrix. Are all the states periodic? Well, you find out. To determine the periods you will have to compute P^2, P^3 and so on, and here also you can make use of the same program that you wrote for question 1(b). Now, question 3: a transition matrix P is said to be doubly stochastic if the sum over each column also equals 1. You know that for any transition matrix the rows must add up to 1; now I am giving you the additional condition that the columns add up to 1 as well, that is, the sum of p_ij over i = 0 to M is 1 for every j. Suppose further that such a chain is irreducible and aperiodic and consists of M + 1 states. Show that the steady-state probabilities in such a case are pi_j = 1/(M + 1) for j = 0 to M; you do not have to do any computations, you will be able to show it right away.
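For question 3, the whole verification is one line: substitute pi_j = 1/(M + 1) into the steady-state equations and use the unit column sums (this is the standard argument, written out):

```latex
\pi_j \;=\; \sum_{i=0}^{M} \pi_i\, p_{ij}
      \;=\; \sum_{i=0}^{M} \frac{1}{M+1}\, p_{ij}
      \;=\; \frac{1}{M+1} \sum_{i=0}^{M} p_{ij}
      \;=\; \frac{1}{M+1}, \qquad j = 0, 1, \dots, M
```

These pi_j are nonnegative and sum to 1, so by uniqueness of the steady-state distribution of an irreducible aperiodic chain, they are the steady-state probabilities.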
So, this one you can do almost orally, but write down a few steps to make your argument complete. Now, question 4: a computer is inspected at the end of every hour and is found to be either working (up) or failed (down). If the computer is found to be up, the probability of its remaining up for the next hour is 0.9. If it is down, the computer is repaired, which may require more than one hour; whenever the computer is down, regardless of how long it has been down, the probability of its still being down one hour later is 0.35. So, your unit of time is one hour, and you have to write down the transition matrix. Construct the one-step transition matrix for this Markov chain, and find mu_ij, the expected first passage time from state i to state j, for all i and j. This you will be able to do; the question rests on computing first passage times, which we have discussed quite thoroughly. Now, question 5, which I told you about in the lecture, is based on the gambler's ruin problem, so I thought I would leave the computations to you and just explain what the problem is. A gambler bets 1 dollar on each play of a game (on the slide a space is missing between "bets" and "dollar 1"). Each time he has probability p of winning, and probability q = 1 − p of losing, the dollar bet. He will continue to play until he either goes broke or nets a fortune of t dollars. Now, let X_n denote the number of dollars possessed by the gambler after the n-th play of the game. Then X_{n+1} will be X_n + 1 with probability p, because he has one more dollar if he wins, and X_n − 1 with probability q = 1 − p otherwise.
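Going back to question 4 for a moment before continuing with the gambler: the expected first passage times there can be found by hand, and the following sketch (with my own labelling, state 0 = up, state 1 = down) shows the arithmetic:

```python
# Question 4 sketch: a two-state chain, one transition per hour, with the
# probabilities given in the exercise.
p = [[0.90, 0.10],   # up: stays up 0.9, goes down 0.1
     [0.65, 0.35]]   # down: repaired 0.65, still down 0.35

# First passage times solve mu_ij = 1 + sum over k != j of p_ik mu_kj.
mu_01 = 1 / (1 - p[0][0])      # up -> down: mu_01 = 1 + p_00 mu_01  =>  10 hours
mu_10 = 1 / (1 - p[1][1])      # down -> up: mu_10 = 1 + p_11 mu_10  =>  20/13 hours
mu_00 = 1 + p[0][1] * mu_10    # mean recurrence time of "up"
mu_11 = 1 + p[1][0] * mu_01    # mean recurrence time of "down"
print(mu_01, mu_10, mu_00, mu_11)
```

As a consistency check, mu_00 = 15/13 and mu_11 = 15/2 are exactly the reciprocals of the steady-state probabilities pi_up = 13/15 and pi_down = 2/15.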
Now, of course, he continues playing only while X_n is strictly between 0 and t, and X_{n+1} = X_n when X_n = 0 or t: if he has no money then he cannot bet, so he continues in the same state, that is, he continues to be broke; and if he has earned t dollars then again he does not play, because he has made his fortune. Now, X_n is a Markov chain; we have already discussed this. The gambler starts with x_0 dollars, where x_0 is a positive integer less than t; as I said, you can take x_0 = i. Construct the one-step transition matrix of the Markov chain; you will see that from each interior state the chain moves only one step forward or one step backward, and the other entries are 0. Next, find the classes of the Markov chain; these I have already told you. Then let t = 3 and p = 0.3; if the target is earning up to 3 dollars, the states are 0, 1, 2 and 3. I have asked you to find the first passage probabilities f_10, f_1t, f_20 and f_2t, and then the same quantities when p = 0.7. This will not be difficult at all: the formulae for the first passage probabilities are given to you, so once you write down the transition matrix you should be able to complete the problem. In the lecture I had said that I would ask you to compute the probability that the gambler ends up with t dollars (in the lecture it was, I think, rupees n or dollars n; it does not matter). That is, what is the probability that he earns the fortune that he wants? That has not been asked in question 5, but you should be able to set up the equations.
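For part (a) of question 5, a minimal sketch of the one-step matrix construction for a general target t (an illustration of mine, not the book's notation):

```python
# Gambler's ruin, states 0..t: 0 and t are absorbing, and from any
# interior state i the chain moves to i+1 with probability p and to
# i-1 with probability q = 1 - p.
def gambler_matrix(t, p):
    q = 1.0 - p
    P = [[0.0] * (t + 1) for _ in range(t + 1)]
    P[0][0] = 1.0          # broke: stays broke
    P[t][t] = 1.0          # fortune reached: stops playing
    for i in range(1, t):
        P[i][i + 1] = p
        P[i][i - 1] = q
    return P

P = gambler_matrix(3, 0.3)
for row in P:
    print(row)
```

For t = 3 and p = 0.3 this gives the tridiagonal matrix with rows (1, 0, 0, 0), (0.7, 0, 0.3, 0), (0, 0.7, 0, 0.3), (0, 0, 0, 1), from which the classes and the first passage probabilities follow.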
So, essentially, define P_i as the probability that the gambler reaches the fortune t starting with i dollars; P_{i+1} and P_{i-1} you can interpret the same way. Then, conditioning on the first play, P_i = p P_{i+1} + q P_{i-1}. Writing the left side as (p + q) P_i, since p + q = 1, you get from this equation an iterative relationship between the successive differences for different values of i, and from that you will be able to find the probability P_i. There, of course, you will use the boundary conditions: if you have zero dollars, the probability of making your fortune is zero, so P_0 = 0, and if you have earned t dollars, then P_t = 1. Using these boundary conditions you will be able to solve for the P_i's. Please do this; I have not asked it in the problem, but you can certainly do it. Now, question 6, which is again a simple one: a leading brewery on the west coast, labelled a, has hired an OR analyst to analyze its market position. It is particularly concerned about its major competitor, labelled b, another brewery, and they want to find out how good or how bad a competitor this other brewery is. The analyst believes that brand switching can be modelled as a Markov chain using three states, with states a and b representing customers drinking beer produced by the aforementioned breweries, and state c representing all other brands. So, in the transition matrix given below, the entry from a to a, that is, the probability that somebody drinking brewery a's beer continues with brewery a, is 0.7, but he may switch to b with probability 0.2 or to other brands with probability 0.1.
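For the record, the iterative relationship above telescopes into the well-known closed form P_i = (1 − (q/p)^i) / (1 − (q/p)^t) when p ≠ q, and P_i = i/t when p = q = 1/2. A small sketch with question 5's numbers (the function name is mine):

```python
# Gambler's-ruin absorption probability: with r = q/p, the probability of
# reaching the fortune t before going broke, starting from i dollars.
def prob_fortune(i, t, p):
    q = 1.0 - p
    if abs(p - q) < 1e-12:     # fair game: P_i = i/t
        return i / t
    r = q / p
    return (1 - r ** i) / (1 - r ** t)

# Question 5's numbers (t = 3):
print(prob_fortune(1, 3, 0.3))   # 9/79  ~ 0.1139
print(prob_fortune(2, 3, 0.3))   # 30/79 ~ 0.3797
print(prob_fortune(1, 3, 0.7))   # 49/79 ~ 0.6203, better odds when p = 0.7
```

So for p = 0.3 the gambler who starts with 1 dollar makes his fortune with probability 9/79, which means f_1t = 9/79 and f_10 = 70/79.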
Similarly, you can explain the row b entries, and then row c is for other brands: they switch to a with probability 0.1, to b again with probability 0.1, and continue with the brand they are already using with probability 0.8 (the decimal point is missing on the slide; you can make the entry 0.8). So, now, what are the steady-state market shares for the two major breweries? We want you to find pi_1, pi_2 and pi_3, though the answers asked for are pi_1 and pi_2. That is, when the process has gone on for some time and you think the choices have all stabilized, you want to know the steady-state market shares for the two major breweries.
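Question 6 then reduces to solving pi = pi P for a 3 by 3 matrix. Rows a and c are quoted in the text, but the transcript does not state row b, so the row b below is a placeholder ASSUMPTION; take the method (here, power iteration), not the numbers, from this sketch:

```python
# Steady-state market shares by power iteration: start from any
# distribution and repeatedly multiply by P until it stabilizes.
P = [[0.7, 0.2, 0.1],   # brand a (given in the text)
     [0.2, 0.6, 0.2],   # brand b -- HYPOTHETICAL values, not from the slide
     [0.1, 0.1, 0.8]]   # other brands (given in the text)

pi = [1 / 3] * 3
for _ in range(2000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

print([round(x, 4) for x in pi])  # pi_1, pi_2, pi_3: long-run market shares
```

Because the chain is irreducible and aperiodic, the iteration converges to the unique steady-state vector regardless of the starting distribution; pi_1 and pi_2 are the two breweries' long-run shares.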