So, last class we started our study of CTMCs, and we defined the notion of sojourn times: how much time my CTMC is going to stay in a state before it leaves it. We showed that this time is exponentially distributed with a parameter a_i, where a_i is state specific: it depends on which state i you start looking at. So, basically we had this random variable Y_t, the remaining time after t before the chain leaves its current state. Note that in this definition Y_t is not state specific: I start looking at my CTMC at time t and just see what is the minimum time before it leaves whatever the current state is. Then we conditioned on knowing that at time t I am in a particular state i, and asked: what is the probability that it is going to take at least u more units of time before the chain moves? We had shown this to be

P(Y_t > u | X_t = i) = e^{-a_i u},

for some a_i which we said can be between 0 and infinity.

Now suppose I condition this random variable at time t = 0, that is, on X_0 = i. I am saying that at the 0th instant I am in state i, and I am asking when I am going to leave my state: how long am I going to stay? So what is Y_0 going to give you in this case, what will be the distribution of Y_0? It is basically the first time you leave state i: Y_0 = min{t > 0 : X_t ≠ i}, the first time you are in a state different from i, so in a way it tells you when your state is changing. This is the same as our random variable T_1. You remember how we defined T_1: T_1 is the instant at which the first renewal happens. Here, if you interpret a renewal as a change of state, then T_1 is exactly the time given by Y_0 conditioned on X_0 = i, that is, the time at which you leave state i, because I have conditioned on i here. Ok, fine.

So now you see that the amount of time you are going to stay, or the rate at which you are going to transit to other states, is governed by this parameter a_i. Based on the value of a_i we can classify the states of my CTMC. Let us see what the possible cases are. (By the way, sometimes I write the process as X(t) and sometimes simply as X_t; we will assume both indicate the same thing.) If a_i takes a value at the boundary, either 0 or infinity, we are going to call the state absorbing or instantaneous: specifically, when a_i = 0 we call state i absorbing, and when a_i = infinity we call it instantaneous.
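To make this survival formula concrete, here is a minimal Python sketch (my own illustration, not from the lecture) that simulates sojourn times in a state i with a made-up rate a_i and checks the empirical tail probability against e^{-a_i u}:

```python
import numpy as np

rng = np.random.default_rng(0)

a_i = 1.5            # hypothetical rate of state i, chosen just for illustration
n_samples = 100_000

# Sojourn time in state i:  Y_0 | X_0 = i  ~  Exp(a_i)
Y = rng.exponential(scale=1.0 / a_i, size=n_samples)

for u in [0.5, 1.0, 2.0]:
    empirical = np.mean(Y > u)        # fraction of sampled sojourns exceeding u
    theoretical = np.exp(-a_i * u)    # P(Y_0 > u | X_0 = i) = e^{-a_i u}
    print(f"u={u}: empirical {empirical:.4f}  vs  e^(-a_i u) {theoretical:.4f}")
```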
So, let us see this. Suppose a_i = 0. What is this telling you? It is telling you that whatever u you take, the probability above is e^0 = 1. However large you take u, the probability that Y_t is larger than u, that is, that you continue to stay in that state, is 1. That makes sense, and that is why we call such a state absorbing: the chain is basically stuck in whichever state it entered; however far ahead you look, it is never leaving.

Now suppose you put a_i = infinity. Then for whatever value of u this probability is 0: take u arbitrarily close to 0, and once a_i is infinity, the probability that your Markov chain continues to stay in that state for u more units of time is just 0. That means there is no way my Markov chain continues to stay in that state; as soon as it enters, it leaves instantaneously. That is the reason, when a_i = infinity, we call the state instantaneous. And anything in between, when a_i is strictly finite and nonzero, we call the state stable. Ok.

Now, in our DTMC study we also had a classification of states: transient versus recurrent, with recurrent further split into null recurrent and positive recurrent, and there also we had the notion of an absorbing state. But here in the CTMC we have another classification of states, in terms of this parameter a_i associated with each state: whether it is 0, infinity, or something in between these two.

Henceforth we are going to focus on CTMCs whose states are well behaved: my CTMC is not changing state instantaneously, which would be a highly unstable scenario where at every instant the CTMC takes a different state. I do not want such a case, so henceforth I am restricting to CTMCs in which such states do not arise. Is this clear? We are just saying that my CTMC does not change its state at an unbounded rate; it will change its state, but at some bounded rate. So, in the absorbing case, once it hits state i it is not changing its state at all, the rate of change is 0; in the stable case it is still changing its states, but at some bounded rate; whereas in the instantaneous case, a_i = infinity, it would be changing its states at an unbounded rate. We are assuming that my CTMC does not have such states. Ok.

Next we are going to study what these CTMCs, which have a pure jump structure, look like, and what are the things I need to characterize them. Ok.
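As a tiny illustration of this three-way classification, here is a helper (again my own sketch, not part of the lecture) that labels a state from its rate a_i:

```python
import math

def classify_state(a_i: float) -> str:
    """Classify a CTMC state by its sojourn-rate parameter a_i in [0, inf]."""
    if a_i == 0:
        return "absorbing"      # P(Y > u) = 1 for all u: the chain never leaves
    if math.isinf(a_i):
        return "instantaneous"  # P(Y > u) = 0 for all u > 0: it leaves immediately
    return "stable"             # 0 < a_i < inf: it leaves at a bounded, nonzero rate

print(classify_state(0.0), classify_state(2.7), classify_state(math.inf))
```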
See, in a DTMC we always looked at our Markov chain at a time index which was discretized, and we knew at which discrete times we were observing it. Here those times are themselves random. In a discrete-time Markov chain you looked at some given discrete times, not necessarily periodic, but fixed in advance. But when you look at a CTMC, a jump is happening at some time, the next jump is happening at another time, and these times are random. If you focus on the events where the state is changing, you are not interested in anything in between: as long as my CTMC continues to stay in a particular state, fine; but when it changes, that is the event of interest to you, and that is happening at random times. So often in this case we have to condition on random times, and naturally we will ask the same question as we asked for DTMCs: does my CTMC satisfy a strong Markov property? For DTMCs we also looked at whether the chain still satisfies the Markov property when we condition on random times; if it does, we call that the strong Markov property. We want to see whether a similar thing holds here.

For that, we first define what a stopping time is for a CTMC. This definition is exactly the same as in the DTMC case: a random time T is a stopping time if there exists a function f such that, to answer the question whether T <= t, I only need to look at my random process up to time t, nothing beyond it.

Now, let T_n be the instant of the nth jump. I am using the terminology "jump" here to mean that at this time my Markov chain is changing its state: it is leaving its current state and going to something else. So T_n is the instant at which my Markov chain changes its state for the nth time. Is T_n a stopping time? Yes: if I need to know whether the nth jump has happened before time t, you just look at your Markov chain till time t and count how many times it has changed state, and you know the answer. So you can always have such a function f which will say yes or no, and T_n here happens to be a stopping time. You can also see that if your random time T happens to be just some given deterministic t, then it is also a stopping time, as we saw in the DTMC case.

So now with this we define the strong Markov property, like we did for DTMCs, as follows. We are saying that if T is a stopping time, then given that you have observed your CTMC up to time T, the probability that the chain takes value i_1 at time T + t_1, value i_2 at time T + t_2, and so on, is the same as if you start looking at your Markov chain from the beginning: as if the chain starts at time 0 in the state it occupied at T, and subsequently takes state i_1 at time t_1 and state i_2 at time t_2. It is just like, at the random time, whenever it happens, I push it to the origin and then pretend I am looking at my Markov chain from the origin.
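To see concretely why T_n is a stopping time, here is a toy sketch of the function f from the definition (assuming, purely for illustration, that we observe the path as a finely sampled sequence of states on [0, t]):

```python
def nth_jump_by(states_up_to_t, n):
    """Decide whether T_n <= t from the path observed on [0, t] only.

    states_up_to_t: states of a finely sampled path on [0, t], in time order;
    this is everything we know about the process up to time t.
    """
    changes = sum(1 for a, b in zip(states_up_to_t, states_up_to_t[1:]) if a != b)
    return changes >= n   # the nth state change has happened by time t

print(nth_jump_by([0, 0, 1, 1, 2], n=2))  # True: two jumps seen by time t
```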
So, if you are going to condition on a random time, that is fine as long as it is a stopping time with respect to that Markov chain: you can pretend as if you are starting from the beginning. Ok, fine. So this is the definition of the strong Markov property we are going to assume.

Now, you see that even though my time is continuous, I am basically looking at discrete events. Because I am focusing on pure jump CTMCs here, I am going to look at the events where the state is changing. These are discrete events, because things are not happening instantaneously; they happen after some time, and this time is positive with probability 1. So you see events that can be indexed with discrete indices: for example, the first transition, the second transition, the third transition could be happening at different random times, but you still index these events by discrete numbers 1, 2, 3, and they form a countable index set. Based on this, out of my continuous-time Markov chain I can extract an underlying embedded chain. Ok, so let us see what that is.

So, let T_n be the instant of the nth jump, the sequence of jump times, and let X_t be my continuous-time process. I am going to sample this process at this sequence of random times: I define X_n = X(T_n), and I am going to look at this sequence (X_n). It is just like substituting t = T_n, exactly. Say you look at your CTMC as a function of time; for simplicity, suppose you started from state 0. You stay there till some point, let me call that realization t_1, the realization of my random variable T_1. For the time being assume it is a counting process: after t_1 you have jumped to the value 1, you stay there some time, then make a jump at t_2, maybe take a long time in the next state, and then make another jump, like this. Once you tell me these realizations, I am going to sample the process at them. In a counting process it so happens that every time there is a jump we always count 1 more; but it need not always be a counting process, it could be something else, for example passing through arbitrary states. In that case the sampled values are the different states you transit to, and that is what is captured by this jump process X_n.

So this is a discrete-time process embedded in my continuous-time Markov chain: it is an embedded process, and we are going to call it the jump chain. Whether it is a DTMC we will see; we now have a chain which is discretely indexed, and by looking at the properties of this jump process we can say something about the properties of the CTMC. That is what we are going to study next.
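This embedded structure also gives the standard way to simulate a pure jump CTMC: hold in state i for an Exp(a_i) time, then move according to the jump chain. A minimal sketch, with a made-up two-state example (the rates and jump matrix below are assumptions for illustration, not anything from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state example: sojourn rates a_i and a jump-chain matrix.
a = np.array([1.0, 2.0])          # a_i: rate of leaving state i (both states stable)
P_jump = np.array([[0.0, 1.0],    # from state 0, always jump to 1
                   [1.0, 0.0]])   # from state 1, always jump to 0

def simulate_ctmc(x0, horizon):
    """Return jump times T_n and embedded states X_n = X(T_n) up to `horizon`."""
    t, x = 0.0, x0
    T, X = [0.0], [x0]
    while True:
        t += rng.exponential(scale=1.0 / a[x])    # sojourn in state x ~ Exp(a_x)
        if t > horizon:
            return T, X
        x = int(rng.choice(len(a), p=P_jump[x]))  # next state from the jump chain
        T.append(t)
        X.append(x)

T, X = simulate_ctmc(x0=0, horizon=10.0)
print(list(zip(np.round(T, 3), X)))
```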
One remark: we are not making a right-continuity assumption here. Whatever the value of X_t is at time T_n, you just define that to be X_n. If the path is right continuous, X_n picks up the new value after the jump; otherwise it would have taken the value just before. Either way, we are just going to call the resulting sequence (X_n). Ok, fine.

So, next: I now have a jump sequence here which is discretely indexed, and it is defined on the same state space. My initial CTMC is defined on a state space S, and my jump chain is also defined on the same state space. Ok. Now, on this I want to define a transition probability. How can I possibly do that? With this chain X_n I can ask: what is the probability that the chain takes value j given that it started from a particular state? You can always ask this question: if i and j belong to my state space, what is this probability? It could simply be the probability that X_1 = j given X_0 = i, like we did in the DTMC case. Ok.

So fine, but I have not started with this; I have started with my CTMC, for which I know P(t). What is P(t)? For every t it gives you the matrix of transition probabilities for your original CTMC. So how are you going to express this in terms of my original P(t)? See, going from X_0 to X_1: in how much time is this happening according to my original CTMC? In T_1 units of time, right. So what I am really asking is: what is the probability that X(T_1) = j given that X(0) = i? This was phrased for the embedded process, but now I have written it in terms of my continuous-time process; so using my P(t) I can find this probability. Depending on the distribution of this T_1 and the corresponding values of the matrix P(t), you can go and compute this value. Ok, fine.
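If you have a realization of the embedded sequence X_n (for instance from the simulate_ctmc sketch above), these one-step probabilities can also be estimated empirically by transition counts; a rough sketch of that idea (my own illustration, not the lecturer's derivation):

```python
import numpy as np

def empirical_jump_matrix(embedded_states, n_states):
    """Estimate P(X_1 = j | X_0 = i) for the jump chain from an observed
    embedded sequence X_0, X_1, ... by simple transition counts."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(embedded_states, embedded_states[1:]):
        counts[i, j] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Normalize each row; leave all-zero rows (unvisited states) as zeros.
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

print(empirical_jump_matrix([0, 1, 0, 1, 0], n_states=2))
```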