Now we want to move to continuous time Markov chains. So far we studied discrete time Markov chains: time was discrete, we allowed n = 1, n = 2, and so on, and we only looked at the chain at those specific discrete instants. We also allowed ourselves only a countable set of states, and that is why we called it a chain: the process keeps moving from one state to another state, as if forming a chain. But now I may not want to look at things only at particular time instants; I may want to watch my process continuously. So now I want to understand what the Markov property is in continuous time. We have defined the Markov property, and the Markov chain, when time is discrete; now we want to see how to deal with the case where time is not discretely indexed but runs continuously. We already know what a continuous random process is: it is nothing but a collection of random variables indexed continuously, one for every time. Now we want to understand the continuous time Markov chain, so, definition. As usual, we are going to assume that the state space S is discrete. The defining condition is

P(X(t + s) = j | X(u), 0 <= u <= s) = P(X(t + s) = j | X(s)),   for all s, t >= 0 and all states j.

What is this saying? The probability you are interested in is that the process takes the value j at time t + s, given the process all the way up to time s. So this condition says: you have been told your whole trajectory up to time s, and you want to know the probability that, t more time units later, at time s + t, the process is in state j. Picture the process taking values continuously; here is s, and you have been given the whole path up to s, and from this you want to know the value the process takes at time s + t. This probability depends only on the value the process has taken at time s. I really do not care about anything before time s: if you are asking about something beyond s, I only need to know what has happened at time s, and that is exactly what the condition says. This is the same as what we did for the Markov chain in discrete time. There we said the future depends only on the current state and I do not care about my past; here it says the same thing, past I do not care, just tell me what is happening currently, but now it is the extension to the continuous time version, where t and s are both continuous-valued.

Now, we have already seen some continuously indexed processes, right? One such process comes from renewal theory: the counting process m(t), which is defined continuously in t. And how is m(t) distributed? For the Poisson process we said m(t) is Poisson distributed with mean lambda t. But for a general renewal process, with interarrival times u_i of some given mean, could we say anything that explicit about what m(t) looks like? So we had specifically singled out the Poisson process. By the way, the Poisson process is a continuous time process. How did we define a Poisson process?
The Poisson process is a counting process, we said, and it gives the number of arrivals, the number of some events happening, up to time t. So there also we had a sequence of continuously indexed random variables. Fine. Now let us see whether this Poisson process is a CTMC: it is a continuous time random process, and we want to see whether it satisfies this definition. Let N(t) be a Poisson process with rate lambda. We know that at any time t, N(t) is the number of arrivals, the number of events, that have occurred in the interval (0, t]. All these values are integers; N(t) is a counting process, so at every time it counts how many things have happened in (0, t], and it takes values 0, 1, 2, 3, and so on.

Now I ask: the number of arrivals up to time t + s, given the history up to s, does it depend only on N(s) and not on all the N(u), u <= s? Specifically, let us assume N(s) = i. When I condition on N(u) for all u <= s, I have described my process all the way up to s: I have told you every state the process has taken up to time s. So say at time s it has taken a particular state i, and I want to see whether this probability depends only on the state taken at time s and does not depend on what happened before it. Is that true? Intuitively, yes: whatever happened up to s has already been counted, my count just accumulates after that point, and I am only going to watch what more arrives. But let us write that formally; this was the intuitive idea, and we have to bring in the definition of the Poisson process to argue it.

So instead of asking for the event that N(t + s) = j given N(s) = i, I ask the equivalent question: what is the probability that the increment over the interval (s, t + s] is j - i? It is the same thing: instead of saying we go directly from i to j, I say that between s and t + s exactly j - i arrivals have happened. And with this we are essentially done. What is the property of a Poisson process? The Poisson process has independent increments. So what does that give us? Here the number of arrivals in the interval (0, s] is i, and the number of arrivals between s and t + s is j - i; the intervals (0, s] and (s, t + s] are disjoint, so these counts are independent. That is,

P(N(t + s) = j | N(u), u <= s, N(s) = i) = P(N(t + s) - N(s) = j - i | N(u), u <= s) = P(N(t + s) - N(s) = j - i).

So you already see that we have eliminated the dependence of this probability on everything that happened before time s. Now just run the same steps in reverse, conditioning only on N(s) = i, and you get that this equals P(N(t + s) = j | N(s) = i), and you are done. So by the definition of the Poisson process you already have the Markov condition. What is the crucial point we used? That the increments over disjoint intervals are independent. If they are not, you cannot do this.
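To make the independent-increments argument concrete, here is a minimal simulation sketch, in Python, assuming numpy is available; the rate lambda and the times s and t are arbitrary choices. It builds Poisson paths from exponential interarrival times and checks empirically that the increment over (s, s + t] looks the same no matter what value N(s) took.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, s, t = 2.0, 3.0, 1.5          # rate and the two time points (arbitrary choices)
n_paths, n_gaps = 200_000, 60      # 60 interarrivals is far more than needed up to s + t

# Build each path from i.i.d. Exp(lam) interarrival times; the arrival
# epochs are their cumulative sums.
arrivals = np.cumsum(rng.exponential(1.0 / lam, size=(n_paths, n_gaps)), axis=1)
N_s = (arrivals <= s).sum(axis=1)              # N(s): arrivals in (0, s]
inc = (arrivals <= s + t).sum(axis=1) - N_s    # N(s + t) - N(s)

# Independent increments: conditioning on N(s) = i should leave the
# distribution of the increment unchanged (mean lam * t = 3.0 throughout).
print("unconditional mean:", inc.mean())
for i in (4, 6, 8):
    print(f"mean given N(s) = {i}:", inc[N_s == i].mean())
```

The conditional means do not drift with i, which is exactly the elimination of the history that the argument above carries out.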
Say something happens in one interval, and the size of the increment in the next interval depends on what happened in the previous interval; then the increments are not independent, and in that case I actually need to know what happened before, before I can say what is the probability of what happens in this interval. So there is no Markov property there. But in the Poisson process the increments over disjoint intervals are independent, and this is what allows me to say what is happening in the current interval without needing to know what happened before. So that is fine.

Now, most of what we are going to see next is very similar to what we did for the DTMC. When we started with the DTMC we defined the one-step transition probability matrix, and then we said how to write the joint finite dimensional distributions in terms of that transition probability matrix. Let us see how to do all those things for continuous time Markov chains. First of all, for simplicity, we are going to assume time homogeneity, just as we did in the DTMC. If you ask the question: at time s you are in state i, what is the probability that you are in state j at time s + t, then you are looking at the interval from s to s + t, whose length is simply t. We are going to assume that these probabilities depend only on the length of the interval, not on the point where you started. It does not matter which s you put in: as long as the length of the interval is the same, the transition is governed by the same probability. That is all we mean by time homogeneity. So we write

p_ij(t) = P(X(t + s) = j | X(s) = i),

and we denote by P(t) the matrix whose entries are p_ij(t), that is, the whole collection of these probabilities over all states i and j, for a given t. Is this P(t) a stochastic matrix? Yes, every row is going to sum to 1. And, as in the earlier case, we define P(0) to be I, the identity matrix: in zero time you just stay in your own state, you do not move to any other state, which is why the diagonal entries are all 1 and everything else is 0. And then, we are going to skip the proof, but you can verify that the transition probability matrix for time t + s splits as

P(t + s) = P(t) P(s).

This is similar to what we did for the DTMC, where we could split the transition probability matrix for a sum of two discrete times into the product of those for the individual times; we are just doing the same thing here, and it follows by the same steps, conditioning on the intermediate time and applying the Markov property. A quick numerical check for the Poisson process is sketched below.
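Here is that check for the one CTMC we have verified so far, the Poisson process, whose transition function we know explicitly: p_ij(t) is the probability of j - i arrivals in time t. This is a Python sketch assuming numpy; the rate and the truncation level M for the countable state space are arbitrary choices, so we compare entries away from the truncation boundary.

```python
import numpy as np
from math import exp, factorial

lam, M = 1.0, 40   # rate, and a truncation level for the countable state space

def P(t):
    # Truncated transition matrix of the Poisson process:
    # p_ij(t) = P(exactly j - i arrivals in time t), zero for j < i.
    mat = np.zeros((M + 1, M + 1))
    for i in range(M + 1):
        for j in range(i, M + 1):
            k = j - i
            mat[i, j] = exp(-lam * t) * (lam * t) ** k / factorial(k)
    return mat

t, s = 0.7, 1.3
err = np.abs(P(t + s)[:20, :20] - (P(t) @ P(s))[:20, :20]).max()
print(err)   # essentially zero: the semigroup property P(t+s) = P(t)P(s) holds
```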
Now here is the contrast. In our DTMC we had written P(n). What is P(n)? In the DTMC case it is the n-step transition probability matrix, and we had written it in terms of the one-step transition probability matrix by raising it to the power n. Does a similar analogy hold for the CTMC? When you take a time t here, what is the interpretation? It is an interval of length t, and there you are interested in transitions at every instant. With n, you looked at n time steps; but in an interval (0, t) a transition is possible at every instant, which means uncountably many transition epochs are possible in it. So it does not make sense to write P(t) as some matrix raised to a power. It is not even clear how to define a one-step transition probability here: my transition can happen at any instant starting from 0. So it does not make sense to represent the transition probability matrices here in terms of a one-step transition probability matrix. Earlier, in a time homogeneous DTMC, it made sense to give everything, all the n-step transition probabilities, in terms of the one-step transition probabilities; but when you move to the CTMC you have to specify the transition probability matrix P(t) for every t >= 0. We will see why it has to be of a particular form and not anything else.

So we are saying that P(t + s) splits as a product. One of you asks: is the exponential the only function that has this property? Take t + delta t, whatever you like, just define s as delta t. We will see in a couple of minutes, in a different setting, that a certain distribution arising here is indeed forced to be exponential; whether each entry of P(t) itself has to be exponential in t is a separate question that we will revisit. As I said, I skipped the proof of the splitting: you have to go through the same steps as in the DTMC case, where we split P(n1 + n2) = P(n1) P(n2) for any n1, n2; here too you split the probability by conditioning on the two parts and apply the Markov property, and you get it. Fine.

So this P(t), one matrix for every t, is why we call it the transition probability function: it has to be specified for every t, and for every t it is a matrix, so I am just going to call the whole family a transition probability function. Now you can verify how the finite dimensional distributions are written in terms of it. After applying the Markov property repeatedly, the conditional joint distribution is

P(X(t1) = i1, X(t2) = i2, ..., X(tn) = in | X(0) = i0) = p_{i0 i1}(t1) p_{i1 i2}(t2 - t1) ... p_{i(n-1) in}(tn - t(n-1)).

So you are looking at your CTMC taking value i1 at time t1, i2 at time t2, and so on up to in at time tn, given that initially it started in state i0; you simply split it by conditioning and apply the Markov property repeatedly, and then each factor is nothing but the probability that you go from state i0 to i1 in time t1, then from state i1 to i2 in time t2 - t1, and so on. But this is a conditional distribution. If you are interested in the plain joint distribution, which is what we call the finite dimensional distributions (FDDs), you just bring in the initial distribution: multiply the above by P(X(0) = i0) and sum over i0. So if you know your initial distribution, you can write this down; a small numerical illustration follows.
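As that illustration, here is a Python sketch, assuming numpy, that computes one finite dimensional probability for the Poisson process started at 0 via the factorization above, and cross-checks it by Monte Carlo; the rate, the times t1, t2 and the states i1, i2 are arbitrary choices.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(1)
lam = 2.0

def pois(k, mean):
    # P(exactly k arrivals when the expected count is `mean`)
    return exp(-mean) * mean ** k / factorial(k) if k >= 0 else 0.0

# FDD via the Markov factorization, starting from X(0) = 0 with probability 1:
# P(N(t1) = i1, N(t2) = i2) = p_{0,i1}(t1) * p_{i1,i2}(t2 - t1)
t1, t2, i1, i2 = 1.0, 2.5, 2, 5
fdd = pois(i1 - 0, lam * t1) * pois(i2 - i1, lam * (t2 - t1))

# Monte Carlo cross-check on simulated Poisson paths.
arrivals = np.cumsum(rng.exponential(1.0 / lam, size=(200_000, 60)), axis=1)
N1 = (arrivals <= t1).sum(axis=1)
N2 = (arrivals <= t2).sum(axis=1)
print(fdd, np.mean((N1 == i1) & (N2 == i2)))   # the two numbers should agree
```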
We always said that to complete the characterization of your process, you need to give a full description of its finite dimensional distributions, right? To give the finite dimensional distributions for the DTMC, what did we say we needed? We said we need the one-step transition probability matrix and the initial distribution. Here also, to give a complete description of your process, you need to give your initial distribution and then your transition probability function.

So far, what we have been doing is this: we have a continuous time Markov chain, we have described the Markov property, and we have seen how to write down its finite dimensional distributions. Now, in a CTMC, once you hit a state, you are continuously watching the process, at every instant. You want to know how much time the process is going to stay in that state before it jumps to a new state. If it is a CTMC, it may be that it hits some state, stays in that state for some time, then goes to a new state, stays there for some time, then moves to another state; or it may be a very shaky random process that is not staying in any state for any appreciable amount of time, coming to one state and immediately switching to another, never spending much time anywhere. We want to capture this. This is anyway a continuous time process, I am watching it continuously; in discrete time you only worried about your Markov chain at particular instants, but here I have to completely describe how it is behaving at every time instant. Maybe one way to do that is to give an alternative characterization by saying: if I am at this state at a particular time, I am going to stay in this state for this much more time. That is a random quantity, but if you can give its distribution, maybe you can alternatively characterize your continuous time Markov chain. Those quantities are the sojourn times. The sojourn time is the amount of time you are going to spend in a state before you move out of it. We define, for any time t,

Y(t) = inf{ s > 0 : X(t + s) != X(t) }.

Can you parse what is happening here? I define this quantity Y(t) for any given time t. X(t) is whatever state the process is in at time t; when I say t, I do not care which state I am in, whatever that state is, it is some random state. I look at the times s > 0 at which X(t + s) takes a state other than X(t), that is, the times at which I have actually left my state, and among all of them I take the smallest. So Y(t) is how much longer I stay in the state I am in at time t, and it is a random quantity, because X(t) and X(t + s) are themselves random. This is what we are going to call the sojourn time.
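Before stating what the distribution of Y(t) must be, here is a quick empirical look at it for the one CTMC we have in hand, the Poisson process: a Python sketch assuming numpy, with the rate and the inspection time t chosen arbitrarily. It measures the time from the inspection instant until the next arrival, which is exactly when N changes state.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, t_inspect = 2.0, 3.0          # rate and inspection time (arbitrary choices)

# Simulate paths and record Y(t): the time from t_inspect until the
# next arrival, i.e. until the counting process leaves its current state.
arrivals = np.cumsum(rng.exponential(1.0 / lam, size=(200_000, 60)), axis=1)
idx = (arrivals <= t_inspect).sum(axis=1)            # index of first arrival after t
Y = arrivals[np.arange(arrivals.shape[0]), idx] - t_inspect

# Empirical survival function of Y versus exp(-lam * u).
for u in (0.2, 0.5, 1.0):
    print(u, (Y > u).mean(), np.exp(-lam * u))
```

The empirical survival probabilities line up with e^(-lam u), so for the Poisson process the sojourn time at any inspection time is exponential with the same parameter lam in every state; the theorem that follows says this exponential form is forced for any CTMC.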
So the sojourn time is the remaining amount of time you are going to spend in the state you occupy at time t, and it is defined for every t. You look at the process at some time t and you see when is the earliest it is going to change its state; that gives you the sojourn time.

Now comes this theorem. What is it saying? Take any CTMC, focus on a particular state i, and suppose you are given that at time t you are in state i. Then

P(Y(t) > u | X(t) = i) = e^(-a_i u)   for all u >= 0,

for some a_i which is a positive real number. What does Y(t) > u mean? It is the event that the process stays in state i for at least u more time units. So what is the distribution of Y(t) conditioned on X(t) = i? It is exponential, with some parameter a_i which is a function of the state your process has taken at time t. So, once you know your state space, you have an a_i for every state i; and if at some arbitrary time t you realize that you are in a particular state i, then the amount of time the process continues to stay in that same state is distributed exactly according to this exponential distribution with parameter a_i.

In a way, what we are saying is: if your continuous time process has this Markovian property, it must be the case that the amount of time it stays in any particular state, at any arbitrary time, is memoryless, and the exponential distribution is memoryless, we know that, right? And what is the Markovian property? It says, in a way, that I do not care what has happened before; my current state alone governs what comes next. There also, given the current state, nothing depends on what happened before. So we would not be surprised by such a result: if you already know that you are in a particular state at time t, it must be the case that the further time you spend in that state is exponentially distributed.

So, fine, let us quickly go through the argument. Let me define the quantity

g(u) = P(X(s) = i for all s in [t, t + u] | X(t) = i),

that is, given that I am in state i at time t, the probability that I continue to stay in state i throughout the interval [t, t + u]. Now expand g(u + v) by the chain rule: I split the total time from t to t + u + v into the part [t, t + u] and then [t + u, t + u + v]. By definition, the first factor, the probability that given X(t) = i I stay in the same state i throughout the interval from t to t + u, is nothing but g(u). Earlier I wrote this for the full length u + v; now I have split off just the piece of length u.
And what is the other part? For the other part I apply the Markov property: I already know that X(s) = i throughout the interval [t, t + u], and I do not care what happened before that; in particular I know X(t + u) = i, and now I ask the question, what is the probability that it continues to stay in state i again. And what is the length of this remaining interval? It is v. So by time homogeneity it must be g(v). So I could split this probability as

g(u + v) = g(u) g(v),

and, as one of you said, it is only a property of the exponential distribution that lets us do this. So it must be the case that g(u) = e^(-a_i u) for some a_i. What we have basically shown is: given that the state at time t is i, the amount of time the process continues to stay there is exponentially distributed, and that is the theorem.

But you may say, fine, this is the distribution of the amount of time we continue in the same state; what about the earlier question, the one about P(t + s)? What is P(t + s)? It is the matrix of probabilities that, given some state at one time, you are in some other state t + s time units later, and we expressed it as the product of the two matrices P(t) and P(s). Do you feel that the only possibility for this to happen is that all these probabilities are exponentials? Compare the interpretations: what is the interpretation of p there, and what is the interpretation of g here? g_i(t) is about continuing to stay in state i, without leaving: say we put the starting time equal to 0 and look at the interval [0, t], then g_i(t) is the probability of remaining in i throughout [0, t], which is exactly the thing I was asking about. An entry p_ii(t), on the other hand, is the probability of being in state i at time t, which allows leaving and coming back in between. So they are not the same thing, and you need to convince yourselves whether it is really necessary that each p_ij(t) in this matrix is exponential, or whether that is the only way, just because the matrix splits multiplicatively.

And there is one step we still need to check: that the exponential is the only distribution whose survival function can be split like g(u + v) = g(u) g(v); that is the property we used. We have not come across the proof of this; it requires a couple of steps, so maybe you need to figure it out yourselves, or we will just put it as an assignment question. So there are two things: argue that the functional equation forces the exponential, and check whether the splitting of P(t + s) forces each p_ij to be exponential or not. Let us check that next time. Let us stop here.
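As a numerical companion to that assignment question, not a proof, here is a Python sketch, assuming numpy, that checks the multiplicative property g(u + v) = g(u) g(v) on the empirical survival function of two candidate sojourn distributions; the distributions and the points u, v are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
u, v = 0.5, 0.8

def g(samples, x):
    # Empirical survival function: fraction of samples exceeding x.
    return (samples > x).mean()

expo = rng.exponential(1.0, size=n)      # Exp(1): should satisfy g(u+v) = g(u)g(v)
unif = rng.uniform(0.0, 2.0, size=n)     # Uniform(0, 2): should not

for name, smp in (("exponential", expo), ("uniform", unif)):
    print(name, g(smp, u + v), g(smp, u) * g(smp, v))
```

For the exponential samples the two numbers agree, both near e^(-1.3); for the uniform ones they visibly differ, which is the failure of memorylessness.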