So, in the last class we started motivating the need for conditioning on a random time. When we first defined a Markov chain, we conditioned on a deterministic time, and we said that the future is independent of the past given the state at that deterministic time. Then we discussed that we are not always going to condition on a deterministic time; sometimes we will be faced with conditioning on a random time, and then we wanted to see whether the Markov property still holds. We saw an example where, when we condition on a random time, the Markov property need not hold. If it does hold, we say that the Markov chain satisfies the strong Markov property, which basically says that

P(X_{t+s_1} = j_1, ..., X_{t+s_m} = j_m | X_0 = i_0, X_1 = i_1, ..., X_t = i) = P(X_{s_1} = j_1, ..., X_{s_m} = j_m | X_0 = i),

as if from that time onwards the Markov chain begins afresh. The question here is: what is t? Here t is my random time. I hope you all remember the notation. The indices i_0, i_1, ..., i are states; s_1, s_2, ..., s_m are time indices with s_1 < s_2 < ... < s_m (they are not subsets, they are time indices); and j_1, j_2, ..., j_m all belong to the state space S. So this is basically saying: whatever random time t you condition upon, you look s_1 steps further from t, then s_2 steps further from t, and so on.
So, now if this t is a random time, the question is: for which random times does this condition hold? We have already seen a case where, for an arbitrary random time, this need not be the case. If you recall our example, where my random time t was one step before the time of my second visit to state j, this was not the case. So for an arbitrary random time this property need not hold, but is there any special type of random time for which it does hold? It so happens that if a random time satisfies the stopping time criterion, then this property holds. Now the question is: what is a stopping time? Let me write the definition here. A random time t is said to be a stopping time for a process (right now I am not saying this process is a Markov process or anything; it is an arbitrary process) if for every n >= 0 there exists a function f_n : S^{n+1} -> {0, 1} (remember S is the set of states, so f_n takes n+1 elements of S and maps them to either 0 or 1; f_n is just a binary-valued function of n+1 inputs) such that

1{t <= n} = f_n(X_0, X_1, ..., X_n).

What is the question being asked of the random time t? The question is whether t <= n. To say yes or no to this, I have made it an indicator: the left-hand side is 1 if the condition t <= n holds and 0 if it does not. So the left-hand side is just a yes-or-no answer, and the requirement is that I can answer this question using the function f_n, based only on the first n+1 observations I have made.
So, what is this saying? To answer the question of whether my random time t at the sample point omega is less than or equal to n, all I need to know is X_0(omega), X_1(omega), ..., X_n(omega). This is just an expansion of the condition: if there exists some function f_n which, when you pass it the n+1 observations you have made about the state at the sample point omega, answers whether t(omega) <= n, then we have what we need. Let us look at some examples. Fix a state j, and define the random time t to be the time of the first visit to state j. Now, if I define t like this and I want to answer whether t(omega) <= n on a particular sample point, is it enough if I pass on to you this much state information, up to time n? Let us take a sample path. Say the process starts at i_0, then goes to another state, call it i_1, then to i_2, and so on up to some state i_n, and suppose one of these states is j, appearing before n, say in the second slot. If I have shown you this sequence, can you answer the question yes or no? You see that j has occurred in the second slot itself. So if, say, n = 10, then whether t(omega) <= n, that is, whether my first visit to state j has happened within 10 slots, is true, and looking at this sequence is enough for you to answer. Whenever this works, I am going to call this random time a stopping time.
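A minimal sketch of this first-visit example in Python (the state labels and the sample path below are hypothetical, just for illustration): the function f_n decides whether t <= n using only the first n+1 observations, which is exactly the stopping time requirement.

```python
def first_visit_time(path, j):
    """Return the index of the first visit to state j, or None if j never appears."""
    for n, state in enumerate(path):
        if state == j:
            return n
    return None

def f_n(prefix, j):
    """Answer 'is t <= n?' for t = first visit to j, using only X_0, ..., X_n."""
    return 1 if j in prefix else 0

# Hypothetical sample path: the first visit to state 'j' happens at index 2.
path = ['i0', 'i1', 'j', 'i3', 'j', 'i5']
t = first_visit_time(path, 'j')

for n in range(len(path)):
    # f_n agrees with the indicator 1{t <= n} at every n: a stopping time.
    assert f_n(path[:n + 1], 'j') == (1 if t <= n else 0)
```

The point is that f_n never peeks beyond index n of the path, yet always answers correctly.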
Remember, t is a random variable: it is a map from omega to R. If you are interested in a sample point and, on that sample point, you want to ask whether the random time is less than or equal to n, you look at the sample path you have observed for that sample point and check whether j has occurred by time n; if that is the case, the answer is yes. But that is one case. Now suppose none of the first states is j, and j first occurs only later, say at time n + 3. When I ask this question I am only going to show you the path up to time n; the values of the sample path at times n+1, n+2, n+3 are not told to you. Can you still answer yes or no? Yes: since j has not occurred in the first n+1 observations, t cannot be less than or equal to n; it has to be greater than n. So your answer is no. Either way, as long as I give you the samples up to time n, you will be able to answer the question either yes or no. Let us look at another example. Now let t be the time of the last visit to state j; this means that after this occurrence of j, j is not going to occur again at all. Is the meaning of this random time clear to you? Now, if I ask the same question, can you answer it based on your first n+1 observations? No: j might have occurred somewhere in between, but it may still occur after time n; that occurrence is not necessarily the last occurrence of j. So you cannot answer the question either yes or no.
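To see concretely why the last visit fails the definition, here is a small sketch (the two finite paths are hypothetical, assumed for illustration): both paths agree on their first n+1 states, yet the answer to "t <= n?" differs between them, so no function f_n of the first n+1 observations can decide the event.

```python
def last_visit_time(path, j):
    """Index of the last visit to j in a finite path (None if j never occurs)."""
    visits = [n for n, s in enumerate(path) if s == j]
    return visits[-1] if visits else None

# Two hypothetical finite paths that agree on their first 4 states (n = 3):
path_a = ['i', 'j', 'i', 'i', 'i', 'i']   # last visit to 'j' at time 1 -> t <= 3 holds
path_b = ['i', 'j', 'i', 'i', 'j', 'i']   # last visit to 'j' at time 4 -> t <= 3 fails

n = 3
assert path_a[:n + 1] == path_b[:n + 1]   # same first n+1 observations...
assert (last_visit_time(path_a, 'j') <= n) != (last_visit_time(path_b, 'j') <= n)
# ...yet 't <= n?' has different answers, so the last visit is not a stopping time.
```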
So, in this case this random time t is not a stopping time. We have seen another random time, where we defined t to be one step before the second visit to state j. Is that going to be a stopping time? Check that. Now, those who said no, can you tell me why that was not a stopping time? It is because only when I also have the observation at time n+1 can I confirm whether t <= n; until then I cannot say yes or no. Now the theorem, the result, basically says that the Markov property holds if I tell you a priori that t is a stopping time with respect to my process {X_n}, with respect to my Markov chain. That is, we are going to say that

P(X_{t+s} = j | X_0 = i_0, ..., X_t = i) = p_{ij}^{(s)}.

Even though t is a random time here, I can write this probability: given that I have observed the chain all the way up to t and I have noticed that I am in state i at time t, the probability that I subsequently go to state j in the next s steps is simply p_{ij}^{(s)}. What is p_{ij}^{(s)}? It is the s-step transition probability. So we are saying that as long as this t satisfies the stopping time criterion, this property holds, whether t is deterministic or random. That is the whole point: earlier we used to condition on an n which was deterministic; now that n is replaced by the random quantity t.
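As a rough numerical illustration of the theorem (not part of the lecture; the 2-state transition matrix P below is an assumed example), one can simulate the chain, stop at the first visit to state 0 (a stopping time), take s more steps, and compare the empirical frequency of ending in state 0 with the s-step entry p_{00}^{(s)}:

```python
import random

# Hypothetical 2-state DTMC with transition matrix P (states 0 and 1).
P = [[0.7, 0.3],
     [0.4, 0.6]]

def step(state):
    """One transition of the chain from the given state."""
    return 0 if random.random() < P[state][0] else 1

def matpow(P, s):
    """Naive s-step transition matrix P^s."""
    n = len(P)
    R = [[1.0 if i == k else 0.0 for k in range(n)] for i in range(n)]
    for _ in range(s):
        R = [[sum(R[i][m] * P[m][k] for m in range(n)) for k in range(n)]
             for i in range(n)]
    return R

random.seed(0)
s, trials, hits = 2, 200_000, 0
for _ in range(trials):
    x = 1                  # start in state 1
    while x != 0:          # run until t = first visit to state 0 (a stopping time)
        x = step(x)
    for _ in range(s):     # then take s more steps past the stopping time
        x = step(x)
    hits += (x == 0)

empirical = hits / trials
exact = matpow(P, s)[0][0]   # p_{00}^{(s)}: here 0.7*0.7 + 0.3*0.4 = 0.61
print(empirical, exact)      # the two should be close: the chain restarts afresh
```

The empirical frequency matches p_{00}^{(s)} up to Monte Carlo error, exactly as the strong Markov property predicts.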
Now this is a random time; a priori I am not telling you which small n I am conditioning on. The hypothesis on t is that it is a stopping time: to verify the event {t <= n}, all I need to know is the first n+1 observations. If that is the case, then whatever this random time t is, and whatever values it takes, if you condition on it and look at what happens in the next s steps, that transition is simply governed by the s-step transition probability matrix. That is what we are saying. Earlier, our definition was this: if n is a fixed deterministic quantity, then the probability that I go to state j at time n + s, given the whole history up to n, depends only on X_n and not on the previous states, and we called that p_{ij}^{(s)}. Now we are saying that instead of fixing this n, it could be a random quantity t. We have already seen that t cannot be arbitrary for this property to hold: remember, when we defined t to be one step before the second visit to a particular state j, this was not the case; the conditional probability did not equal p_{ij}^{(s)}. But now we are saying that if this random time has the special property we called a stopping time, then it is true, even though t is random. To apply this, all I want you to guarantee is that t is a stopping time; to verify that, you check whether the event {t(omega) <= n} can be decided from the first n+1 observations. Once you have done that, you have satisfied the hypothesis of the theorem, and then
the property holds. To apply this you need to have a stopping time; that is something we have to have verified before we apply the theorem. Okay, now how do we prove this? Why is it true? To prove it, let us first reorganize. The left-hand side is a conditional distribution; I will write it as a joint probability divided by a marginal, which is just the definition of conditional probability. So what I need to show is

P(X_{t+s} = j, X_0 = i_0, ..., X_t = i) / P(X_0 = i_0, ..., X_t = i) = p_{ij}^{(s)}.

Now let us start looking into the numerator. Here t is a random quantity, a random time, and we have specifically assumed further that it is a stopping time; let us try to exploit that property. What I will do is this: t is a random quantity, but it is integer-valued; it can take the values 1, 2, 3, and so on. So I will sum over all possibilities. Notice that I have brought in the event {t = n} and added over all possible values of n, and this is still an equality:

P(X_{t+s} = j, X_0 = i_0, ..., X_t = i) = sum over n of P(X_{t+s} = j, X_0 = i_0, ..., X_n = i, t = n).

Now let us try to unwind it further. I am going to group each term into three parts: first the probability of {X_0 = i_0, ..., X_n = i}; then the probability of {X_{n+s} = j} conditioned on that; and then the probability of {t = n} conditioned on the previous two, that is, on X_0 through X_n and on X_{n+s} = j. Why write it like this? Because inside the sum, n is a deterministic quantity: the middle factor is the probability that I go to state j at time slot n + s given that at time n I am in state i and all the previous states are also
given. But since we are assuming that the sequence {X_n} is a DTMC, what is that middle factor going to be? It is p_{ij}^{(s)}. Now let us focus on the third factor. What have I given? I have given my DTMC all the way up to time n, and further I have given its value at time n + s, and I have to compute the probability that t = n. Now, if I apply my stopping time property to answer whether t = n, what do I need to know? I need to know the path only up to time n. Does the event {t = n} depend on X_{n+s}? No. So I can get rid of that term from the conditioning: to answer whether my stopping time equals n, all I need to know is the sample path up to time n, so that conditioning on X_{n+s} vanishes. Now let us reorganize a bit. I am going to pull p_{ij}^{(s)} outside the summation because it is independent of n:

p_{ij}^{(s)} · sum over n of P(X_0 = i_0, ..., X_n = i) · P(t = n | X_0 = i_0, ..., X_n = i).

Now focus on the sum: can I write the product of these two factors as a joint probability? Yes, it is P(X_0 = i_0, ..., X_n = i, t = n), and this is summed over all possible values of n. Recall that when I went from the first step to the second step, I added the summation because I introduced the event {t = n}; now I want to reverse that process. So if I remove the summation, what is this quantity going to be? The summation will not be there, but what is going to remain?
What remains is P(X_0 = i_0, ..., X_t = i). So, are we done? We have this quantity; take it to the left-hand side, and the ratio is exactly equal to p_{ij}^{(s)}, which is what we wanted to show. Notice that we crucially used the fact that my random time is a stopping time: it is because of that that I could drop X_{n+s} from the conditioning and pull my p_{ij}^{(s)} term out in this fashion. So whenever we are not certain from which point we want to look at the future, that is, whenever the point at which we are going to look is a random quantity, and we want to use the standard Markovian properties we have, we should first ensure that the random time we are conditioning on is a stopping time.
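The chain of equalities in the proof can be summarized in one display (same notation as in the lecture):

```latex
\begin{align*}
&P(X_{t+s}=j,\; X_0=i_0,\dots,X_t=i) \\
&\quad= \sum_{n\ge 0} P(X_{n+s}=j,\; X_0=i_0,\dots,X_n=i,\; t=n) \\
&\quad= \sum_{n\ge 0} P(X_0=i_0,\dots,X_n=i)\,
        P(X_{n+s}=j \mid X_0=i_0,\dots,X_n=i)\,
        P(t=n \mid X_0=i_0,\dots,X_n=i,\; X_{n+s}=j) \\
&\quad= \sum_{n\ge 0} P(X_0=i_0,\dots,X_n=i)\; p_{ij}^{(s)}\;
        P(t=n \mid X_0=i_0,\dots,X_n=i)
        \qquad \text{(Markov property; stopping time drops } X_{n+s}\text{)} \\
&\quad= p_{ij}^{(s)} \sum_{n\ge 0} P(X_0=i_0,\dots,X_n=i,\; t=n)
 \;=\; p_{ij}^{(s)}\, P(X_0=i_0,\dots,X_t=i).
\end{align*}
```

Dividing both sides by P(X_0 = i_0, ..., X_t = i) gives the claimed conditional probability p_{ij}^{(s)}.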