Now we are going to state a relation that in a way combines two things: what is the next state I am going to jump to, given that I have already spent a certain amount of time in my current state. So far we only asked which state j you land in when the jump happens, starting from state i. But you may also ask: I am in this state, I have already spent this much time here, and after this, what is the probability that I go to state j? Let us formalize that question. Suppose you are given that the jumps happened at times t_0, t_1, ..., t_n, and you are told the states taken at those jumps: x_0 is the state at time t_0 and x_n is the state at time t_n of your continuous-time Markov chain. You understand the meaning of conditioning like this: these are the instants at which my jumps happened, and these are the corresponding states taken when the jumps happened. Now, given that n jumps have happened, you ask: what is the probability that the next jump, at T_{n+1}, takes at least u more units of time, and that the jump I then make is to state j? The claim is that this probability is nothing but the probability e^{-a_i u} of staying in the current state i for at least u units of time (which is what we have shown earlier), multiplied by the probability p_ij of then going from that state to state j.
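To make the relation concrete, here is a small Monte Carlo sketch. The three-state chain, its rates a_i, and the jump matrix p_ij are illustrative choices, not from the lecture: one cycle draws a sojourn time Exp(a_i) and then a next state from row i of P, and the empirical frequency of the event {sojourn > u, next state = j} is compared against e^{-a_i u} p_ij.

```python
import math
import random

# Hypothetical 3-state chain: holding rates a_i and jump matrix p_ij
# (illustrative numbers only; note p_ii = 0, as derived later in the lecture).
a = [1.0, 2.0, 0.5]
P = [[0.0, 0.7, 0.3],
     [0.5, 0.0, 0.5],
     [0.4, 0.6, 0.0]]

def jump_step(i, rng):
    """One cycle from state i: (sojourn time ~ Exp(a_i), next state ~ row i of P)."""
    hold = rng.expovariate(a[i])
    nxt = rng.choices(range(len(P)), weights=P[i])[0]
    return hold, nxt

rng = random.Random(0)
i, j, u, n = 0, 1, 0.8, 200_000
hits = 0
for _ in range(n):
    hold, nxt = jump_step(i, rng)
    if hold > u and nxt == j:
        hits += 1

estimate = hits / n
theory = math.exp(-a[i] * u) * P[i][j]   # e^{-a_i u} * p_ij
print(estimate, theory)                  # the two agree to Monte Carlo accuracy
```

The product form shows up directly: the sojourn time and the jump target are drawn independently, which is exactly the decoupling the relation asserts.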
In a way, what this says is the following. When I defined p_ij, I defined it over one full life cycle. Here you are asking the further question that, starting from state i at the nth jump, you go to state j after spending at least u units of time in i. The relation says: if you want that extra conditioning — not only do I go from i to j, but I also want to ensure that before going to state j I spend at least u units of time — then multiply by that staying probability. In a way it is saying, by design, that these two events are independent, because the answer is just the product of the two. So let us quickly write down a couple of steps of the proof. We want P(T_{n+1} - T_n > u, X_{n+1} = j | T_0 = t_0, ..., T_n = t_n, X_0 = i_0, ..., X_n = i). First, this is a CTMC, so we already know it satisfies the Markov property: once I know when the nth jump happened and the state taken there, I do not need anything that happened before to answer questions about the future — and this is a question about what happens after the nth jump. So I can condition only on the event at the nth jump instant, namely T_n (a random quantity whose value is given to me) and the state there; note that X_n is, in our notation, nothing but x(T_n). Now I use the strong Markov property, together with the time homogeneity property: I can shift whatever happened at the instant T_n so that it happens at the origin, and start looking from that point onwards.
So I can take T_n to happen at time 0, in which case the quantity becomes P(T_1 > u, X_1 = j | X_0 = i). The whole Markov chain has been shifted to the origin, and from the origin I am asking: I start in state i, I should spend at least time u before I jump to the next state, and my next state should be j. By the chain rule I can write this as P(T_1 > u | X_0 = i) · P(X_1 = j | X_0 = i, T_1 > u). Now, what is the first quantity? It says you are in state i at the beginning and you spend at least u units of time there before you make a jump. That probability is e^{-a_i u} — that is the definition we have. Good, we already have this term, so let us deal with the second one. Instead of X_1 I write x(T_1), since X_1 is nothing but x(T_1); so the term is P(x(T_1) = j | x(0) = i, T_1 > u). I have been told that T_1 is at least u, so I can split T_1 into two parts: T_1 = u + Y, where Y is the residual time. What have I used here? Being told that T_1 > u means I know the chain all the way up to time u: my CTMC continues to stay in state i at least until u, and after that the residual time Y is how much longer it takes before the state changes.
I can write T_1 as these two components precisely because T_1 has already been told to be larger than u. Fine; so that first part has been dealt with, and I now have to deal with the remaining probability, P(x(u + Y) = j | x(s) = i for all s ≤ u). Here I exploit the homogeneity property: u is a fixed quantity for me, and if I shift the whole process by the amount u, the probability should not change. Shifting everything by u, this becomes P(x(Y) = j | x(0) = i), where Y is now the time of the first jump out of state i of the shifted process. This is all just manipulation, but now you see: given x(0) = i, that residual time is exactly a fresh T_1 — the minimum time you need before you leave the state. So the quantity is P(x(T_1) = j | x(0) = i), which according to our definition is nothing but p_ij. What we have basically done here is decouple the two: the probability of staying in the state for at least u units of time, and after that, the probability of jumping to another state, which is still governed by p_ij. We can make some more observations here. Let us see what happens when a_i > 0. What is a_i? It is the rate that governs the amount of time you spend before you leave state i.
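The steps above can be collected into one chain of equalities (a sketch, writing Y for the residual time after u):

```latex
\begin{align*}
&\Pr(T_{n+1}-T_n > u,\ X_{n+1}=j \mid T_0,\dots,T_n,\ X_0,\dots,X_{n-1},\ X_n=i) \\
&\quad= \Pr(T_1 > u,\ X_1 = j \mid X_0 = i)
   && \text{(strong Markov + homogeneity)} \\
&\quad= \Pr(T_1 > u \mid X_0 = i)\,
       \Pr(X_1 = j \mid X_0 = i,\ T_1 > u)
   && \text{(chain rule)} \\
&\quad= e^{-a_i u}\,
       \Pr\big(x(u + Y) = j \mid x(s) = i \text{ for } s \le u\big)
   && (T_1 = u + Y) \\
&\quad= e^{-a_i u}\,
       \Pr\big(x(T_1) = j \mid x(0) = i\big)
   && \text{(shift by } u\text{)} \\
&\quad= e^{-a_i u}\, p_{ij}.
\end{align*}
```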
Suppose a_i > 0; that means the probability that you stay in state i for at least some positive amount of time is positive — the state is not instantaneous (the instantaneous case is a_i = ∞, and note that a_i > 0 still allows the possibility a_i = ∞). Now suppose I start from state i and look at the state after the first jump. What is P(X_1 = j | X_0 = i) when j = i? Here the convention of right continuity matters: by time T_1, by definition, I have already been assigned the new state — I have moved to the next state. So if j = i this probability is 0, because by the definition of T_1 I have left that state and gone to another one; only for j ≠ i can it be positive. So, for j = i this quantity is 0; and if a_i is positive and finite — a stable state, so that the factor e^{-a_i u} is not degenerate — the only possibility is that p_ii = 0. This comes straight from the definition of p_ij. Why is that? Because T_1 is, by definition, the instant at which a jump has happened.
If I am looking at remaining in the same state i at the jump, that is not going to happen — that is by definition. The same argument applies here: p_ij is 0 if j = i and can be a non-zero quantity only if j ≠ i. Now let us understand this: should it hold for all values of a_i, whether the state i is stable, absorbing, or instantaneous? Do I really need to worry about whether a_i > 0? Suppose my state i is instantaneous. What is T_1 going to be? By definition, if a_i = ∞ then T_1 = 0 almost surely. You might at first expect P(X_1 = i | X_0 = i) to be 1 in that case — but no, it is still 0, because even though T_1 is essentially 0, you are leaving that state very, very fast; at the jump instant you are already in a different state. So for j = i it is 0 again, which is consistent. Now let us consider the case where my state is absorbing. When is a state absorbing? When a_i = 0. In that case, what is T_1 going to be? It is going to be infinity.
So I will be looking at the chain staying in state i forever, and the question is whether the relation is still consistent when j = i: even after an arbitrarily large amount of time I look for a different state, but that change of state happens only at infinite time — so this is also fine. We do not really need to worry about whether a_i is strictly positive: for any a_i, we have p_ii = 0. Let us also check that the relation itself is consistent. In the relation, set u = 0: then P(T_{n+1} - T_n > 0, X_{n+1} = j | T_0 = t_0, ..., T_n = t_n, X_0 = i_0, ..., X_n = i). Because u = 0, the factor e^{-a_i u} shrinks to 1, and by our definition we are left with p_ij. Now suppose I am in the case of a pure jump process: in a pure jump process I am not leaving my state instantaneously, so there is a positive gap between the nth and (n+1)th jumps, and T_{n+1} - T_n > 0 is automatic. So this probability is simply P(X_{n+1} = j | X_0 = i_0, ..., X_n = i), which, after shifting the process at T_n to the origin, is exactly what we had before: the (n+1)th jump takes state j given the whole history up to the nth jump, which is exactly p_ij. So now, with this definition of p_ij for the embedded process — which we called the jump process — let me call P = (p_ij) its transition probability matrix. Is it a DTMC?
So, for my continuous-time Markov chain I have derived an embedded chain with this transition probability matrix P = (p_ij), defined as above. With these p_ij's — we are assuming time homogeneity throughout — is my embedded chain, my jump chain, a DTMC? You need to check this, and indeed it is true: you can show that if you focus on the underlying jump process, the chain of jump states is a Markov chain with respect to this transition probability matrix P. Henceforth we are going to simply call it the EMC, the embedded Markov chain. So even though we looked at a continuous-time Markov chain, what we actually did is extract a discrete version of it, which is called the embedded Markov chain. And what we are going to focus on here is, in a way, renewals. What are the renewals here? At a particular state, I spend some amount of time and then jump to another state; after spending a certain amount of time there, I again jump to another state. Between two jumps you can think of a cycle, and you can think of the jumps themselves as renewals. So if you look at the sequence of jump instants, from it you can derive your life cycles — the lifetimes — the way we did earlier. Now, what are the distributions of these life cycles? Do they have any distributions? Take my CTMC, with the jump events happening at certain points in time.
So this first interval defines my U_1, the next one my U_2, and the next my U_3, and each of them I can think of as a cycle. Now, what governs the length of such an interval? Suppose at the start of this interval I am in state i — what governs the length of U_3? Does it have an underlying distribution? We just said it: this is the sojourn time, the way we defined it — U_3 is the amount of time before you make a change of state. So U_3 is the sojourn time, and we have just said that it has an exponential distribution — but with what parameter? With parameter a_i, which is state dependent: it depends on that state i. So what we have here are renewals, but the length of a renewal cycle depends on the state from which it started. When we defined the actual renewal process, how did we define it? We considered a sequence U_1, U_2, U_3, ... which were mutually independent and also identically distributed. Here, if you look at the sequence U_1, U_2, U_3, are they mutually independent? They are: once you know the state you are in, whatever came before in the Markov chain does not matter, and the next sojourn time is independent of the past. Are they identically distributed? They are not, because the distribution of each cycle's length depends on the state in which that cycle starts.
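This cycle structure is easy to see in simulation. A minimal sketch with two hypothetical states A and B that alternate (the rates a_A = 1 and a_B = 4 are illustrative): the sojourn times are mutually independent draws, but their means differ by state, so the cycles are not identically distributed.

```python
import random

# Hypothetical rates and jump matrix (illustrative, not from the lecture).
a = {'A': 1.0, 'B': 4.0}
P = {'A': {'B': 1.0}, 'B': {'A': 1.0}}   # two states that simply alternate

def simulate_cycles(n_jumps, rng):
    """Return the list of (state, sojourn) cycles of the jump process."""
    state, cycles = 'A', []
    for _ in range(n_jumps):
        u = rng.expovariate(a[state])    # sojourn in `state` ~ Exp(a_state)
        cycles.append((state, u))
        state = rng.choices(list(P[state]), weights=list(P[state].values()))[0]
    return cycles

rng = random.Random(1)
cycles = simulate_cycles(100_000, rng)
mean_A = sum(u for s, u in cycles if s == 'A') / sum(1 for s, u in cycles if s == 'A')
mean_B = sum(u for s, u in cycles if s == 'B') / sum(1 for s, u in cycles if s == 'B')
print(mean_A, mean_B)   # close to 1/a_A = 1.0 and 1/a_B = 0.25
```

The empirical means differ by state, exactly the state-dependent Exp(a_i) behaviour described above.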
So I have a renewal-type process here, but unlike our renewal process, where all the life cycles had an identical distribution (except possibly the first cycle), here the distribution depends on the state you start in. Now let us focus on the Poisson process. We know that a Poisson process is a CTMC; we showed in the first class of these lectures on CTMCs that a Poisson process satisfies the Markov property. What is the state space of your Poisson process? The natural numbers: you are just counting events — it is a counting process — so the state space has to be all the natural numbers. Now, if I have a Poisson process with parameter λ, what are the distributions of my life cycles? Exponential with parameter λ. So what are the a_i's in my Poisson process? a_i = λ, and it does not depend on which state you are in. So you can verify that a Poisson process with rate λ means a_i = λ for all i. The Poisson process is thus a special CTMC in which all the a_i's equal the common parameter λ; with different parameters a_i you get a more general CTMC, and depending on the sequence of a_i's you can get different continuous-time Markov chains. The theory of CTMCs we are going to develop very much in parallel with what we did for DTMCs: to understand the properties of a CTMC we are going to look at the properties of its DTMC, because we know DTMCs well — we have already studied all their properties. So let us see whether there is a notion of transience and recurrence in a CTMC as well, and how to define it. Is there a notion of communicating classes in a CTMC?
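A Poisson process can be generated exactly this way: every sojourn is Exp(λ) regardless of the current state, and each jump moves the count up by one. A small sketch (λ = 3 and the horizon are illustrative values) checks that the mean number of jumps in [0, t] comes out near λt, as it should for a Poisson(λt) count.

```python
import random

lam, t_end = 3.0, 10.0   # rate and horizon (illustrative values)

def poisson_count(lam, t_end, rng):
    """Count jumps in [0, t_end]: every sojourn is Exp(lam), state -> state + 1."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)   # a_i = lam for every state i
        if t > t_end:
            return n
        n += 1

rng = random.Random(2)
counts = [poisson_count(lam, t_end, rng) for _ in range(20_000)]
mean = sum(counts) / len(counts)
print(mean)   # close to lam * t_end = 30
```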
Do you think there should be an analogous version of communicating classes for the CTMC? Let us see. What is the notion? i → j will have the same interpretation as in the DTMC: i → j means j is reachable from i. We say j is reachable from i if there exists some t > 0 such that p_ij(t) > 0, and we say i and j communicate if j is reachable from i and i is reachable from j. This is exactly what we had in the DTMC, except that earlier we wanted some positive integer n with p_ij(n) > 0, and now we want p_ij(t) > 0 for some t > 0. As in the DTMC, one can argue that this relation is an equivalence relation, and it partitions the states of my CTMC; we define the communicating classes as the resulting equivalence classes — the sets of states that all satisfy this equivalence relation with each other. Now here is the theorem: the communicating classes of a CTMC are the same as the communicating classes of the underlying embedded Markov chain. We know that a CTMC and its embedded Markov chain have the same state space; the theorem says that whatever communicating classes the embedded Markov chain has are also the communicating classes of the CTMC. So if I know the underlying embedded Markov chain — a DTMC — well, I already know this about my CTMC. I am not going to prove this; there is a brief proof in the book, just look into it. The simple intuition is that if, in your DTMC, you can reach state j from state i, then in some finite number of steps you can do so with some positive probability;
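Because the classes agree with those of the EMC, they can be computed purely from the jump matrix P: build the directed graph with an edge i → j whenever p_ij > 0, and group together states that are reachable from each other. A sketch on a hypothetical 4-state matrix (not from the lecture):

```python
# Communicating classes of the embedded chain = classes of the CTMC.
# Illustrative 4-state jump matrix: 0 <-> 1 and 2 <-> 3 communicate.
P = [[0.0, 1.0, 0.0, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.0, 0.0, 1.0],
     [0.0, 0.0, 1.0, 0.0]]

def reachable(i, P):
    """Set of states reachable from i (including i itself), via p_uv > 0 edges."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v, p in enumerate(P[u]):
            if p > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

n = len(P)
reach = [reachable(i, P) for i in range(n)]
# i and j communicate iff each lies in the other's reachable set.
classes = {frozenset(j for j in range(n) if j in reach[i] and i in reach[j])
           for i in range(n)}
print(sorted(sorted(c) for c in classes))   # [[0, 1], [2, 3]]
```

Note that state 1 can reach states 2 and 3, but not the other way round, so {0, 1} and {2, 3} stay in separate classes even though one leads into the other.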
that means some finite number of jump intervals suffices, and that finite number of intervals translates, in the jump process, to some finite time in your CTMC. So you can come up with a finite time within which you can go from state i to j with positive probability. That is made a bit more formal in the proof; just look into it. So now we know that a CTMC has an embedded Markov chain with associated transition probabilities p_ij which I can derive. Now, to define my CTMC, what parameters do I need? What do you feel, so far, are the characteristics of my CTMC? I have P(t): my CTMC has a state space, an associated transition probability matrix P(t) for every t, and, let us say, an initial distribution. From P(t) I can derive P = (p_ij). In addition to this, was there any other characteristic of your CTMC? Yes — associated with each state there is another parameter, a_i. Depending on your a_i's your CTMC can be different; and it is not that the a_i's are independent of the P(t)'s — the P(t)'s determine what the a_i's look like — but the a_i's are one of the important characteristics. You just saw that for a Poisson process a_i is simply λ; if the a_i's are different, that gives a different CTMC. And we also proved a probability relation in which the a_i's and p_ij's govern the transition from a given state to another state after spending a certain amount of time in that state. So in a way the p_ij's and a_i's summarize the information I have in P(t), and using them I can possibly describe my CTMC. Now, this is the question we want to make more precise. Given a CTMC — P(t), the state space, the initial distribution — I can find these parameters; my CTMC defines these characteristics.
Now the other question: if you just give me this matrix P and these a_i's, is there a CTMC that they define? You understand the question? Think of these as two sets of parameters. We initially said — and for concreteness let me also write the initial distribution, which is known — that from P(t) and the initial distribution we can derive these parameters. We said that the P(t)'s together with the initial distribution of x(0) completely define the finite-dimensional distributions; once I have an understanding of the finite-dimensional distributions, I know how my CTMC behaves, and (P, (a_i)) are features extracted from this. Now the question is: is this a complete characterization of my CTMC? If I just give you these parameters, can you uniquely recover a CTMC with these parameters? In that case, instead of giving P(t), which I have to specify for every t, I just specify a few parameters: the transition probability matrix P and the a_i parameters. Is that sufficient to completely characterize my CTMC? I just want to state this in two minutes so that I finish what I planned today, and we can start with a fresh topic next class. It so happens that the answer involves the following quantity: ψ = Σ_{n≥0} (T_{n+1} - T_n), the sum of your lifetime intervals — for n = 0 this gives the length of the first interval, for n = 1 the second, and so on; call this sum ψ. If ψ spans the entire time line, then from these parameters I can recover my CTMC — a complete characterization. In a way, what are these summands? They are the lifetimes, and they can be characterized in terms of my a_i's and p_ij's.
If I know which state I am going to start in, which states I jump to, and how much time I spend in those states, I can compute T_n and T_{n+1} using these parameters; so P and the a_i's define the distributions of these lifetimes. And if it so happens that ψ is infinite with probability 1, then P — my transition probability matrix — together with the a_i's, which I will just call A, is enough to reconstruct my CTMC. I will just leave it to you to read why, if this happens, you are able to reconstruct your CTMC using these parameters, and otherwise why you cannot: if ψ is finite with positive probability, you will not be able to reconstruct your CTMC in a unique way from these parameters alone; but if ψ is infinite with probability 1, these parameters are enough to reconstruct your CTMC in a unique fashion. A pure jump process is called regular if ψ = ∞ with probability 1. Why does this matter? If ψ = ∞, the lifetimes span the entire time line, and then, looking at this jump description, I can define the process x(t) for every possible t; had ψ been finite, I would know the process only up to a finite time, not everywhere. But if ψ = ∞ with probability 1, then with probability 1 I can define x(t) at every point t. That is what the regularity condition requires — ψ = ∞ with probability 1 — and we will call such a process regular and focus on those. Let us take an example: if I have a Poisson process with rate λ, is it a regular process?
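The role of ψ can be illustrated on a hypothetical pure-birth chain that always jumps i → i+1 (this example is not from the lecture): if the rates grow like a_i = 2^i, then E[ψ] = Σ 2^{-i} = 2 < ∞, so ψ itself is finite almost surely — the chain "explodes" in finite time and is not regular, which is precisely the situation where P and the a_i's alone cannot pin the process down beyond time ψ.

```python
import random

def sample_psi(rate_of, n_terms, rng):
    """Partial sum of the first n_terms sojourn times; sojourn in state i ~ Exp(rate_of(i))."""
    return sum(rng.expovariate(rate_of(i)) for i in range(n_terms))

rng = random.Random(3)
m = 20_000
# Explosive rates a_i = 2**i: the expected total lifetime is about 2, not infinity.
avg = sum(sample_psi(lambda i: 2.0 ** i, 60, rng) for _ in range(m)) / m
print(avg)   # near sum_{i<60} 2**-i ~ 2.0, so psi stays finite: not regular
```

Adding more terms changes nothing: the tail Σ_{i≥60} 2^{-i} is negligible, so all infinitely many jumps fit inside a finite time horizon.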
Just take the expectation: E[ψ] is the expectation of the sum of the differences, and after some argument I can interchange the infinite summation and the expectation. Each expected difference is then the expected value of an exponential random variable with rate λ, which is 1/λ. So you are adding 1/λ infinitely many times, which means E[ψ] is infinite — and in fact ψ = ∞ with probability 1. So a Poisson process with λ strictly positive is already a regular process. Now, what else could be a regular process? It so happens that any pure jump CTMC which takes values in a finite state space is always a regular process. Why is that? Let me first write the statement it follows from: for a pure jump process, if there exists some ν > 0 such that a_i ≤ ν for all i, then the process is regular. Remember, these quantities a_i, which govern your sojourn times, can be anything between 0 and ∞, and based on that we said states are either absorbing, stable, or instantaneous. But if the a_i's happen to be uniformly upper bounded, then it must be the case that your process is always regular. This needs a couple of lines of proof, which we skip; just understand that as long as my a_i's are all bounded, it is going to be a regular process. For example, in the Poisson case: for a Poisson process with rate λ, a_i equals what? a_i = λ for all i, so λ is a uniform upper bound, the condition already holds, and we again conclude that the Poisson process is a regular process. Now, as a corollary to this:
Like most of the things I am going to discuss for CTMCs from now on, we will just state the results and understand them with respect to the Poisson process, even though they hold for more general continuous-time processes. So, as a corollary: a pure jump CTMC with a finite state space is regular. Is this corollary obvious now? My CTMC takes only finitely many states, and it is a pure jump CTMC, so a_i is not allowed to be infinity: it can be 0 or some positive but finite value. In that case I can always come up with a uniform bound ν on finitely many a_i's, appeal to the previous theorem, and claim that my CTMC is regular. You should quickly go through the proofs of these results in the book; they are not lengthy, just a couple of lines each. The point is that you need to know these properties, so that you are free to use them in whichever problem you like — but before applying a result you must make sure all its hypotheses are satisfied; only then can you appeal to it. For example, if you want to apply this theorem, you first need to tell me what the uniform bound ν is, and only then can you conclude the process is regular. The theorem just says "there exists", but if you are going to use it, you need to demonstrate the existence of such a value ν. Another result: if the embedded Markov chain of a pure jump CTMC is recurrent, then the CTMC is regular. Does this theorem make sense, intuitively at least?
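For a finite state space, the witness the lecturer asks for is explicit: ν = max_i a_i. A trivial sketch (the rates are made up; a_i = 0 is allowed since absorbing states are fine, but ∞ is ruled out by the pure-jump assumption):

```python
# Hypothetical finite set of rates; every a_i is finite by the pure-jump assumption.
a = [0.0, 1.5, 3.2, 0.7]   # a_i = 0 (absorbing) is allowed
nu = max(a)                # the explicit uniform bound the theorem demands
assert all(ai <= nu for ai in a)
print(nu)   # 3.2 -- exhibiting this nu lets us appeal to the theorem: regular
```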
Take your CTMC and look at its embedded Markov chain; if that embedded Markov chain happens to be recurrent, then your CTMC is regular. Does that make sense? Once my embedded Markov chain is recurrent, I am going to revisit my state infinitely often — that is the definition of recurrence. I keep revisiting that state again and again in my embedded Markov chain. If I keep coming back to that state, that means in the continuous-time Markov chain I first leave it and then come back, and this happens repeatedly; so in some way the process must be spanning the entire time line. You come back to that state j at some point, then leave it again, and whenever you leave it, you are bound to come back, because it is a recurrent state. Because this happens, the sojourn times in that state add up infinitely many times, and since it is a pure jump process you can see that ψ is going to be infinite with probability 1. This is just the broad-level idea, and it can be made a bit more formal. So in any of these cases — you can show a uniform bound on the a_i's, or your state space is finite, or you can show that the embedded Markov chain is recurrent — it is automatic that your process is regular. In that case, all you need is the information (P, (a_i)) to generate your process x(t); it uniquely represents the process with the underlying parameters.