So, today we are going to look at an interesting phenomenon related to Markov chains, namely time reversible Markov chains. Let us first set up what we mean by all this. Suppose you are given the transition probabilities p i j and the stationary state probabilities pi i. Let us assume the pi i's are all positive, because if some pi i is 0 then we can simply drop that state from the process; it is of no interest. So, this is an ergodic process, the pi i's are its stationary probabilities, and the system has been running for some time; we are looking at the stationary regime. Now consider the sequence in reverse order: X n, X n minus 1, X n minus 2, and so on. That is, at this point of time you are looking backwards at the process. We will show that this reversed sequence is also a Markov chain. This is the interesting part: a Markov process is going on, at some point you look backwards at its transitions, and the reversed sequence also has the Markov property, so it too is a Markov chain. How do we show this? Suppose the current time is m plus 1 and the system is occupying state i, and the entire future X m plus 2, X m plus 3, and so on is given. We want the conditional probability that X m equals j, that is, the state occupied one period before, given this future history and given X m plus 1 equal to i.
So, this is the situation when you are at time m plus 1 and you are looking backwards. What we want to show is that this conditional probability equals the probability of X m equal to j given only X m plus 1 equal to i. That means it depends only on the current state occupied by the process; since we are looking backwards, the states that lie ahead in time, X m plus 2, X m plus 3, and so on, do not matter. If I show this, it implies that at any time the backward process is also a Markov process. So, the present time is m plus 1, and we are given that X 0, X 1, X 2, ... is a Markov chain with transition probabilities p i j and stationary probabilities pi i. Now, the Markov property tells us that the conditional distribution of X m plus 2, X m plus 3, and so on, given the present state X m plus 1, does not depend on X m or on anything before it: whatever state is occupied at time m plus 2 depends only on X m plus 1, X m plus 3 depends only on X m plus 2, and so on. In other words, given the present state X m plus 1, the future X m plus 2, X m plus 3, ... is independent of the past X m, X m minus 1, X m minus 2, and so on.
So, the conditional distribution of X m plus 2 depends on the present state X m plus 1 and is independent of X m; similarly, the conditional distribution of X m plus 3 depends on the state occupied at time m plus 2 and is independent of X m plus 1 and X m, and so on. Now, independence is a symmetric relation: if X i is independent of X j, then X j is independent of X i. Therefore, since given X m plus 1 the future X m plus 2, X m plus 3, ... is independent of X m, I can say the reverse as well: given X m plus 1, X m is independent of X m plus 2, X m plus 3, and so on. Hence the conditional probability we wrote above reduces to the probability of X m equal to j given X m plus 1 equal to i, and we immediately conclude that the backward process is also a Markov process. So, forward or backward, the process has the Markov property. Now let us define the backward or reverse transition probabilities. Let q i j be the probability of X m equal to j given X m plus 1 equal to i. So, currently you are in i, and you are asking for the probability of having occupied state j just one period before. By the conditional probability formula, this is the probability of X m equal to j and X m plus 1 equal to i, divided by the probability of X m plus 1 equal to i.
Writing the joint probability in the numerator as a conditional probability, this becomes the probability of X m plus 1 equal to i given X m equal to j, times the probability of X m equal to j, divided by the probability of X m plus 1 equal to i. We already have the forward transition probabilities and the stationary probabilities, so this is p j i into pi j divided by pi i, since the probability of being in state i at time m plus 1 is the stationary probability pi i. That means, given the transition probabilities and the stationary probabilities, we can always compute the reverse transition probabilities: q i j equals p j i into pi j upon pi i. So the backward process is a Markov process whose transition probabilities are available from the forward chain, and once you specify its transition matrix the backward process is completely determined. Now, the special case is when q i j equals p i j, that is, when the reverse transition probabilities are the same as the forward transition probabilities. If q i j is p i j, then the equation q i j equals p j i into pi j upon pi i reduces to pi i p i j equals pi j p j i. Look at the left hand side: pi i is the probability of being in state i, and p i j is the probability of transitioning from i to j, so pi i p i j is the rate at which the process transitions from i to j.
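To make the computation concrete, here is a minimal sketch of the formula q i j = p j i pi j / pi i. The two-state matrix P and its stationary vector pi below are my own illustrative values, not numbers from the lecture; note that every two-state chain happens to be reversible, so here Q comes out equal to P.

```python
# Sketch (illustrative example, not the lecture's): reverse transition
# probabilities q_ij = pi_j * p_ji / pi_i of a stationary Markov chain.

def reversed_chain(P, pi):
    """Return Q with q[i][j] = pi[j] * P[j][i] / pi[i]."""
    n = len(P)
    return [[pi[j] * P[j][i] / pi[i] for j in range(n)] for i in range(n)]

# A two-state chain; pi solves pi = pi P (check: 2/3*0.7 + 1/3*0.6 = 2/3).
P = [[0.7, 0.3],
     [0.6, 0.4]]
pi = [2/3, 1/3]

Q = reversed_chain(P, pi)

# Each row of Q is a probability distribution, as it must be.
for row in Q:
    assert abs(sum(row) - 1.0) < 1e-12
```

Since a two-state chain is always time reversible, Q equals P here; with three or more states the reversed chain can genuinely differ from the forward one.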
Similarly, pi j p j i is the probability of being in state j times the probability of transitioning from j to i, that is, the rate at which the process transitions from j to i. So, when these two are equal, the Markov chain is said to be time reversible: the forward transition rate and the backward transition rate between every pair of states are exactly the same. It is something like saying that if you play a tape of the process, you will not be able to tell whether it is playing backwards or forwards. If the tape has captured a Markov chain with the time reversibility property, then whether you play it forward or backward it looks exactly the same, because the rate of transitioning in either direction is the same. And this equation, pi i p i j equals pi j p j i for all i and j, is the necessary condition for time reversibility: if you have a set of transition probabilities and a set of state probabilities, which we will show are the stationary probabilities, and they satisfy this equation for all i, j, then the Markov process has the time reversibility property. To repeat the picture: whether you stand at state i and look backwards at the transition from j, or stand at j and transition forward to i, the rate is exactly the same.
So, this is what the equation says, pictorially as well. This is a necessary condition, and now we will show that the converse is also true, and then look at some examples of time reversible Markov chains and the advantages they possess. So, suppose the p i j's are given and s is a vector of probabilities such that the reversibility condition s i p i j equals s j p j i is satisfied for all i, j. We are looking at the converse: if a transition matrix and a probability vector together satisfy the time reversibility condition, then the given Markov process is a reversible Markov chain and s is the vector of stationary probabilities. In other words, a probability vector satisfying the reversibility equations for the given transition probabilities can be nothing else but the vector of stationary state probabilities. This is a convenient result, because earlier we said that if the pi i's are the stationary probabilities, the p i j's are the transition probabilities, and these conditions are satisfied, then we have a reversible Markov process; now we are saying, conversely, that any probability vector which, along with the given transition probabilities, satisfies the time reversibility equations has to be the stationary probability vector. So, suppose I start from the reversibility equations and sum them over i: summation over i of s i p i j equals summation over i of s j p j i.
Now, since s j does not depend on i, take it outside: the right hand side becomes s j times the summation over i of p j i. But these are transition probabilities, so you are summing the elements of row j of the transition matrix, and that must add up to 1. Therefore summation over i of s i p i j equals s j, and writing this down for all j gives the matrix equation s equals s P. Remember that when the components of s also add up to 1, this equation has a unique solution, and that unique solution is the vector of stationary probabilities. Therefore, whenever we can find a vector s that, together with the transition probabilities of a Markov process, satisfies the reversibility equations, s must represent the stationary probability vector. Now of course we want to look at some examples of reversible Markov chains, and through these examples we will see that computing the state probabilities becomes very easy: you do not have to apply matrix methods or iterative methods to solve for s. Remember, to find the state probabilities we had to solve a system of linear equations, and when the number of states is very large that becomes very tedious. So, the idea is this: let us look at an undirected graph with five nodes and the edges connecting them, and assign transition probabilities to the edges; for example, from node 1 the two edges go to nodes 2 and 3.
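The converse can be checked numerically. The sketch below (my own example, not the lecture's) takes the random walk on a three-node path graph, verifies the detailed balance equations s i p i j = s j p j i, and then verifies that the same vector satisfies s = s P.

```python
# Sketch: a probability vector satisfying the reversibility equations
# is automatically stationary.  Example chain chosen by me.

def satisfies_detailed_balance(P, s, tol=1e-12):
    n = len(P)
    return all(abs(s[i]*P[i][j] - s[j]*P[j][i]) < tol
               for i in range(n) for j in range(n))

def is_stationary(P, s, tol=1e-12):
    n = len(P)
    return all(abs(sum(s[i]*P[i][j] for i in range(n)) - s[j]) < tol
               for j in range(n))

# Random walk on the path graph 1 - 2 - 3 (degrees 1, 2, 1).
P = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
s = [0.25, 0.5, 0.25]    # d_i / sum of degrees = 1/4, 2/4, 1/4

assert satisfies_detailed_balance(P, s)
assert is_stationary(P, s)
```

The point of the converse is exactly this: verifying detailed balance is a local, pairwise check, whereas solving s = s P directly is a global linear system.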
I give the edges out of each node equal probabilities, and that is why it is called a random walk: you can wander around the graph, and if, for example, node 2 has 4 edges incident on it, then picking any one of them to traverse is equally likely. So p 2 1, p 2 3, p 2 4 and p 2 5 are all equal to 1 by 4: at node 2, traversing the edge to 1, 3, 4 or 5 is equally likely. Similarly, p 3 1 equals p 3 2 equals half, and p 4 2 is 1 and p 5 2 is 1: from node 4 you have no choice, you must go to 2, and from 5 also you can only go to 2. Now, let me define d i as the degree of node i: here the degrees are 2 for node 1, 4 for node 2, 2 for node 3, and 1 each for nodes 4 and 5. By this definition, p i j is simply 1 upon d i, since we are saying the incident edges are equally likely; at node 1, for instance, two edges are incident, so each transition probability is 1 by 2. Then d i into p i j equals d i into 1 upon d i, which is 1, and likewise d j into p j i equals 1. For example, on edge 1 2, with i equal to 1 and j equal to 2: p 1 2 is half and d 1 is 2, so the product is 1; and d 2 is 4 and p 2 1 is 1 by 4, so that product is also 1. This holds for all i, j, so the necessary condition d i p i j equals d j p j i is satisfied. That means this random walk, where at any node you may traverse any incident edge and go on wandering around the graph, is a Markov process and it is time reversible: however long the process has gone on, wherever you are, continuing the traversal forwards or looking back at your traversals before is the same process.
There is no change, because, as we interpreted it, the rate of going forward and the rate of going backward are exactly the same; you can verify these conditions for all the other nodes and edges using the numbers written down. So this is an example, and now we want to convert the d i's into probabilities: the probability of traversing an edge i j is equally likely over all edges i k incident on i. Let me generalize this discussion and describe the process of writing down the state vector and the transition probabilities. Take an undirected graph with n nodes, and define the probability vector s by setting its i th component to d i upon sigma d i. You take the degrees we defined and normalize by their total, summation d i, the sum of the degrees of all the nodes. In our five-node example the degrees are 2, 4, 2, 1 and 1, so summation d i is 10, which is twice the number of edges: when you add up all the degrees, each edge is counted at both its endpoints, so the total is always twice the number of edges in the graph. With this normalization, s i equal to d i upon sigma d i, the s i's are probabilities, because sigma s i equals sigma d i divided by sigma d i, which is 1.
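The degree formula can be sketched in code. The edge list below is my reading of the lecture's five-node graph (degrees 2, 4, 2, 1, 1); the stationary probabilities come straight from s i = d i / sigma d i, with no linear system to solve.

```python
# Sketch of the lecture's five-node random walk (edge list is my
# reconstruction): stationary probabilities from node degrees alone.

edges = [(1, 2), (1, 3), (2, 3), (2, 4), (2, 5)]
nodes = sorted({v for e in edges for v in e})

deg = {v: 0 for v in nodes}
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

total = sum(deg.values())          # = 2 * number of edges = 10
s = {v: deg[v] / total for v in nodes}

# Transition probabilities of the walk: p_ij = 1/d_i over incident edges.
adj = {v: [] for v in nodes}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
P = {u: {v: 1/deg[u] for v in adj[u]} for u in nodes}

# Detailed balance holds edge by edge: s_i * (1/d_i) = 1/total on both sides.
for u, v in edges:
    assert abs(s[u]*P[u][v] - s[v]*P[v][u]) < 1e-12

print(s)   # {1: 0.2, 2: 0.4, 3: 0.2, 4: 0.1, 5: 0.1}
```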
So this is a probability vector, and it satisfies the reversibility equations, because p i j is defined as 1 upon d i: at node i, whatever the degree of the node, the probability of traversing each incident edge is the same, so p i j equals 1 upon d i, and similarly p j i equals 1 upon d j. As we saw in the example, you can easily verify that s i p i j equals s j p j i, since dividing each d i by the same total does not change the equations. So this probability state vector and these transition probabilities satisfy your time reversibility equations, and wandering around an undirected graph, this random walk, can be looked upon as a time reversible Markov chain. And notice that one did not have to go through any hassle of solving a system of linear equations to compute the stationary state probabilities; a simple formula gives you the way to compute them. This is what we really wanted to show: because of the property of time reversibility, things become very simple. So our conclusion is that the s i's and p i j's satisfy the reversibility equations, and hence the s i's must be the stationary probability vector. Now you can generalize: you can talk of a generalized random walk, where weights are attached to the edges, with a non negative weight w i j for each edge i j.
In the directed case, when the links have a direction, the usual convention is to call them arcs; undirected links are called edges, so edge is the right word here. So these are edges i j, the w i j's are non negative, and w i j is 0 if the edge i j is not present. Now we generalize the concept: the probability of traversing an edge i j when at node i is proportional to w i j. I have shown the weights in the figure; for example, edge 1 2 has weight 1 and edge 2 5 has weight 2, and so on. Since these weights are arbitrary non negative numbers, they cannot themselves be probabilities, so I will have to normalize them and define p i j as w i j divided by the total weight at node i. One more thing I should have spelled out in the plain random walk case: there the state probabilities are given by s i equals d i upon sigma d i, and remember that the stationary state probabilities also represent the fraction of time the system spends in each state. So here the system spends more time in a state with higher degree: the higher the d i, the higher the value of s i, because the normalizing factor is the same. The magnitude of s i is determined by the magnitude of d i, and so the system spends more time in a state which has more edges incident on it. And of course, I should also spend a little more time on the analogy we used when we said that this is a Markov process.
Here, essentially, the nodes are the states of the system, the states the system can occupy, and the links give the possible transitions: you can transition from 1 to 2, or from 1 to 3, and so on. That is the analogy. Now let us talk about the generalized random walk, where weights are attached to the edges and the weight is 0 whenever the edge is not present. We define the probability of traversing an edge as w i j divided by the total weight of the edges incident on that node, sigma over k of w i k. For example, at node 1 the total weight is 5, so p 1 2 is the weight of edge 1 2, which is 1, divided by 5; p 1 3 is 2 by 5, since that weight is 2; p 1 4 is 0, since there is no edge; and p 1 5 is 2 by 5. Now we have to define the state probabilities and show that the generalized random walk is again a Markov process, with the nodes as states and the edges giving the states to which you can transition, and that with a suitably defined stationary probability vector the necessary conditions for reversibility are satisfied. Then this is another, more general, example of a reversible Markov process. So, with weights attached to the edges, we define p i j as w i j upon sigma over k of w i k; the notion of every edge being equally likely has been replaced by weights.
So p i j is the weight of the edge i j divided by the total weight of the edges incident on node i. You can see that if all the w i j are the same, this reduces exactly to the plain random walk: p i j becomes 1 by the degree of the node. So this is the generalization of the random walk. Now, rearranging, w i j equals p i j into sigma over k of w i k, and we impose the condition that w i j equals w j i. Taking the same equation at node j, w j i equals p j i into sigma over k of w j k, since from j you are transitioning to i and you add up the weights of all the edges of the kind j k incident on node j. Setting these equal, p i j into sigma k w i k equals p j i into sigma k w j k, and this is your reversibility equation, because the p i j's are the transition probabilities; I just have to define the corresponding state probabilities and this will give me a time reversible Markov process. As before, the s i's will be proportional to sigma over k of w i k, and we normalize: let s i be the total weight at node i, sigma k w i k, divided by the total weight, summation over i and k of w i k. Then, by the result we proved earlier, the s i's are the stationary probabilities.
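A sketch of the weighted construction. Node 1's weights match the numbers quoted in the lecture (weights 1, 2 and 2 on edges 1 2, 1 3 and 1 5, total 5); the remaining weights are my own illustrative choices.

```python
# Sketch of the generalized (weighted) random walk.  Only node 1's
# weights follow the lecture; the rest are assumed for illustration.

W = {(1, 2): 1.0, (1, 3): 2.0, (1, 5): 2.0, (2, 3): 1.0, (2, 5): 2.0}
# Symmetrize: w_ij = w_ji.
W.update({(j, i): w for (i, j), w in list(W.items())})

nodes = sorted({v for e in W for v in e})
wsum = {i: sum(w for (a, b), w in W.items() if a == i) for i in nodes}

# p_ij = w_ij / sum_k w_ik ;  s_i = (sum_k w_ik) / (sum_{i,k} w_ik)
P = {(i, j): w / wsum[i] for (i, j), w in W.items()}
grand = sum(wsum.values())               # twice the total edge weight
s = {i: wsum[i] / grand for i in nodes}

# Detailed balance: s_i p_ij = w_ij / grand = w_ji / grand = s_j p_ji.
for (i, j), w in W.items():
    assert abs(s[i]*P[(i, j)] - s[j]*P[(j, i)]) < 1e-12

print(P[(1, 2)], P[(1, 3)], P[(1, 5)])   # 0.2 0.4 0.4, as in the lecture
```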
So, essentially the same concepts go through: you can take a general case, assign any set of non negative weights to the edges, define the corresponding transition probabilities, and you get a generalized random walk. You can easily see that it is reversible, in the sense that the process can go on, but if you start going backwards it is the same process repeated; forward or backward makes no difference. This gives us a feeling for time reversible Markov processes, and the converse will help fix ideas better. What we are saying now is that any reversible chain is of this form: given a reversible chain, you can associate with it an undirected graph, give weights to the edges, and then recover the corresponding transition probabilities and state vector, and this is simple. So, suppose you are given a reversible chain; then there are some s i's and p i j's which satisfy the reversibility equations s i p i j equals s j p j i. We can construct an undirected graph whose nodes are the states, with an edge i j wherever p i j is positive; wherever p i j is 0, the corresponding edge is absent. Now let me define the weights on the edges: let w i j be s i p i j. By the reversibility equation this equals s j p j i, which by the same definition is w j i, so immediately the weights are symmetric: w i j equals w j i.
Using this construction, we now want to compute the transition probabilities, which we will show can be done in terms of the w i j's: the probability that X n is j given that X n minus 1 is i. Since we have constructed an undirected graph with well-defined weights w i j equal to w j i, the generalized random walk defines this transition probability as w i j upon sigma over k of w i k, exactly as before. (Mind the direction: you are transitioning from i to j, so it is w i j divided by the total weight at i.) Now substitute w i j equals s i p i j in the numerator, and in the denominator sum s i p i k with respect to k; whether the summation index is called j or k is immaterial, it is a dummy variable. Since s i does not depend on k, it comes out of the sum, and sigma over k of p i k equals 1, because you are summing the components of a row of the transition matrix. So the denominator is s i, the s i cancels, and you are left with p i j. As I said earlier and repeat here, the s i's are not 0: if some s i were 0, the probability of being in that state would be 0, and we could remove that state and work with a reduced process; so the cancellation is meaningful only because the s i's are positive. Therefore, the walk's transition probabilities are exactly the p i j's.
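The converse construction can be sketched directly: set w i j equal to s i p i j, check that the weights are symmetric, and recover p i j by dividing by the row sums. The small chain below is my own example, not one from the lecture.

```python
# Sketch of the converse: from a reversible chain (s, P), build
# symmetric weights w_ij = s_i * p_ij and recover P from them.

P = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
s = [0.25, 0.5, 0.25]            # stationary, satisfies detailed balance
n = len(P)

W = [[s[i]*P[i][j] for j in range(n)] for i in range(n)]

# Symmetry w_ij = w_ji is exactly the detailed-balance equation.
for i in range(n):
    for j in range(n):
        assert abs(W[i][j] - W[j][i]) < 1e-12

# Recover the transition probabilities:
#   w_ij / sum_k w_ik = s_i p_ij / s_i = p_ij   (rows of P sum to 1).
P_back = [[W[i][j] / sum(W[i]) for j in range(n)] for i in range(n)]
assert all(abs(P_back[i][j] - P[i][j]) < 1e-12
           for i in range(n) for j in range(n))
```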
That means, given the chain, I can assign the weights by w i j equals s i p i j, and once I have these weights I can define the transition probabilities in terms of them, w i j upon sigma k w i k. I can also recover the s i's, just reverting back to the process: summation over j of w i j equals summation over j of s i p i j, which equals s i. So the s i's are proportional to the row sums of the weights, summation over j of w i j, and since they have to be probabilities we normalize by the total sum of weights. So the construction is complete: given any Markov process satisfying the time reversibility equations, I can associate a random walk with it, define the weights, and recover the transition probabilities and the state probabilities. Therefore, any time reversible Markov chain can be modelled as a random walk on a weighted undirected graph, and this is very convenient: you do not have to solve a system of equations, so everything simplifies. But of course, it is only a small class of Markov processes that satisfy the time reversibility condition. Before ending, I should also mention an example of a non-reversible Markov chain; this one is taken from Berstein's lectures on Markov chains at Harvard. He considers the worldwide web: you can imagine each web page as a state of the system, so web pages are states, and the hyperlinks are the edges.
Again you can picture this as a graph, but now a directed graph. Just take 4 or 5 web pages, it does not matter: if you are at page 1, you can go from there to page 2 or to page 3. These are the hyperlinks: remember, when you are searching for some word and you open a page, it shows you hyperlinks to other pages. This is of course a very small example, because there are millions and millions of web pages, all connected, and any page you open links you to hundreds or thousands of others; there is also a whole ranking algorithm on top, but in any case, the idea is that you can picture this as a directed graph: each node is a web page, and the pages reachable from a node are indicated by directed links. For example, from 1 you can go to 3, but you cannot go from 3 to 1; similarly, you can go from 1 to 2, but not from 2 to 1; at 4 you can go both ways, from 1 to 4 and 4 to 1, and from 4 to 2 but not from 2 to 4. You can immediately see that this will not be a time reversible process: wherever p i j is positive but p j i is 0, the equation pi i p i j equals pi j p j i cannot hold. Of course, there are algorithms for computing the state probabilities here too, interesting ones that again avoid solving a system of equations, and there are ways of computing the transition probabilities as well. So, for the worldwide web example: browsing the web is a Markov process, because the probability of where you go next depends only on the page you are on, not on how you reached it.
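The failure of reversibility is easy to exhibit numerically. The three-page link structure below is my own toy example, not the one drawn in the lecture: the vector pi is stationary, but detailed balance fails on a one-way link.

```python
# Sketch: a directed "web graph" chain is Markov but not reversible,
# because a one-way hyperlink gives p_ij > 0 with p_ji = 0.

# Pages 0, 1, 2 with links 0->1, 0->2, 1->2, 2->0 (chosen uniformly).
P = [[0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0],
     [1.0, 0.0, 0.0]]
pi = [0.4, 0.2, 0.4]             # solves pi = pi P

# pi really is stationary ...
for j in range(3):
    assert abs(sum(pi[i]*P[i][j] for i in range(3)) - pi[j]) < 1e-12

# ... but detailed balance fails: there is a link 0 -> 1,
# yet no link back from 1 to 0, so pi_0 p_01 != pi_1 p_10.
assert pi[0]*P[0][1] > 0 and pi[1]*P[1][0] == 0
```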
So, it is easy to picture that a search on the web will be a Markov process, but it will certainly not be a reversible Markov chain. With this I would like to end the discussion on discrete-time Markov processes, and now we will talk about continuous-time Markov processes, and then go on to specialized continuous-time Markov processes. Having looked at discrete-time stochastic processes with the Markovian property, and having spent quite a bit of time on the properties and characteristics of such processes, we will now look at continuous-time processes, which are also very important, especially the Markovian ones. Through a series of lectures I want to show you particular kinds of Markovian continuous-time processes, the transition from discrete-time processes to continuous-time processes, and how the Markovian property translates when you consider time as varying continuously instead of in discrete steps. We describe a continuous-time process as {X(t), t ≥ 0}, where X(t) is the random variable: as t varies continuously over the non-negative reals, you get different values. We say it is a continuous-time stochastic process taking on values in the set of non-negative integers, so the values can be 0, 1, 2, and so on. The process is a Markov process if, for all s, t ≥ 0, non-negative integers i, j, and values x(u) for u varying between 0 and s, the probability that X(t+s) = j, given that X(s) = i and the whole past history, depends only on the present state. So at time s the system is occupying state i, and the non-negative integers describe the state it is occupying.
So, the value of X(t) tells you the state that the system is occupying at time t. The Markov property says: the probability that X(t+s) = j, given that X(s) = i and that X(u) = x(u) as u varies from 0 to s, is equal to the probability that X(t+s) = j given only that X(s) = i. That means given all the past history, that is, the states which the system occupied from time 0 to s, the past is redundant: the probability depends only on the present state, and is independent of how you reached state i at time s. Whatever happened between 0 and s is immaterial. In words: the conditional distribution of the future X(t+s), given the present X(s) and the past, depends only on the present and is independent of the past. Now, if this probability is also independent of s, recall the condition we stated for stationarity in the discrete case: the probability that X_{n+1} = j given X_n = i is equal to the probability that X_1 = j given X_0 = i. So it does not matter whether you consider this conditional probability at time 1 or at time n+1; in that case we said the process is stationary, because it is independent of the time. The same property is being carried over here.
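The two defining properties just described can be summarized compactly (writing X(t) for the state at time t, as in the lecture):

```latex
% Continuous-time Markov property: the future depends only on the present
P\{X(t+s)=j \mid X(s)=i,\; X(u)=x(u),\ 0 \le u < s\}
  \;=\; P\{X(t+s)=j \mid X(s)=i\}.
% Stationarity (time homogeneity): the right-hand side does not depend on s,
% so it may be written simply as a function of i, j, and t.
```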
So, essentially you are saying that it does not matter whether s is the 10th day, the 15th day, or day 0: if on day s the system is occupying state i, then the probability that X(t+s) = j is the same no matter what value s takes. If this probability is independent of s, then we say that the continuous-time Markov process is stationary. Now let us consider the finite case: the process is continuous in time, but it can occupy finitely many states, i varying from 0, 1, up to m. We now associate a random variable T_i, the amount of time the process spends in state i, that is, the time for which it continues to be in state i. How do we describe this T_i? Suppose the system enters state i at time t' = s. Then for any fixed t > 0, the event T_i > t occurs only if the process has been continuously in state i, that is, if X(t') = i for all t' in the interval from s to s + t. So at time s it entered state i, and you want to know for how long it will continue in that state: for all values of t' between s and s + t, the value should continue to be i. This is the random variable we want: the amount of time the process spends in state i. The Markov property with stationary probabilities implies that the probability that T_i > t + s, given T_i > s, is the same as the probability that T_i > t, because I can take the entry time s to be 0; then it simply says the process started in state i at time 0, and the probability is independent of when you consider it.
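Written out, the memoryless property of the holding time, and the check that an exponential survival function satisfies it, look like this (the rate λ is the usual exponential parameter, not fixed in the lecture):

```latex
P\{T_i > t+s \mid T_i > s\} = P\{T_i > t\},
\qquad\text{and with } P\{T_i > t\} = e^{-\lambda t}:\quad
\frac{P\{T_i > t+s\}}{P\{T_i > s\}}
  = \frac{e^{-\lambda(t+s)}}{e^{-\lambda s}}
  = e^{-\lambda t}.
```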
So, that means only the duration of occupying state i is important; it does not matter at what point of time you are considering it. The fact that the process has already been in state i for some time gives no extra information: T_i is memoryless. So when you take a continuous-time process and impose the Markovian property, it actually translates into saying that this random variable T_i is memoryless. Now if you remember, when we talked of the negative exponential distribution, we showed that any random variable which has a negative exponential distribution is memoryless, and this is exactly that property. The converse, that any distribution having the memoryless property has to be negative exponential, I did not prove, because the course level was such that I could not; maybe later on, in an advanced course, you can see how that property is proved. In any case, since this is a Markovian process, T_i is memoryless, and therefore T_i will have a negative exponential distribution. Next I will be talking about Poisson processes and then birth and death processes; they are very interesting and model a lot of practical situations, at least approximately, since of course you cannot always model the real situation very accurately. I will be referring to a particular birth and death process as M/M/1: the arrival process is Markovian, and so is the departure process. Suppose you are at a counter, a bank counter or a post office counter: people are coming in, they get serviced, and then they leave the system. You want to model that situation.
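The memoryless property of an exponential holding time can be checked empirically. This is a quick Monte-Carlo sketch (the rate 0.5 and the times t, s are made-up illustrative values): it estimates P(T > t + s | T > s) and P(T > t) from samples and finds them equal up to sampling noise.

```python
import random

random.seed(0)
rate, t, s, n = 0.5, 1.0, 2.0, 200_000

# Draw n exponential holding times with the chosen rate.
samples = [random.expovariate(rate) for _ in range(n)]

# Unconditional probability P(T > t).
p_t = sum(x > t for x in samples) / n

# Conditional probability P(T > t + s | T > s): restrict to survivors past s.
survivors = [x for x in samples if x > s]
p_cond = sum(x > s + t for x in survivors) / len(survivors)

# Memorylessness: the two estimates agree up to Monte-Carlo noise.
assert abs(p_cond - p_t) < 0.02
```

Both quantities estimate e^{-0.5} here, so having already waited s units of time tells you nothing about how much longer the process will stay in the state.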
So, here we describe such processes by the M/M/1 notation, which refers to the arrival process: you can show that if the arrival pattern is Poisson, then the inter-arrival times will be exponential, and so the inter-arrival times have the Markovian (memoryless) property. The service times, under the conditions that we will impose, will also be shown to follow an exponential distribution. So we call it M/M/1: Markovian arrivals, Markovian service, and 1 server. This is the connection, and this is why it was important that I talk first about discrete-time Markov processes and then continuous-time Markov processes, which have the same Markovian property; in this case it can be written down as the memoryless property, which implies that T_i has a negative exponential distribution. The birth and death processes that we will consider will have exactly this structure: the inter-arrival times follow a negative exponential distribution, and the service times also follow a negative exponential distribution. Then we can very easily describe the system as M/M/1 with 1 server; of course, you can also consider more than 1 server, and we will derive a lot of interesting results for the parameters associated with such systems.
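To preview the arrival side of the M/M/1 model, here is a small simulation sketch: with Poisson arrivals at rate lam, the inter-arrival times are exponential with mean 1/lam and variance 1/lam². The rate lam = 2.0 is a made-up value for illustration.

```python
import random

random.seed(1)
lam, n = 2.0, 100_000

# Inter-arrival times of a Poisson(lam) arrival stream are i.i.d.
# exponential with rate lam.
gaps = [random.expovariate(lam) for _ in range(n)]

mean_gap = sum(gaps) / n
var_gap = sum((g - mean_gap) ** 2 for g in gaps) / n

# Sample moments match the exponential ones up to Monte-Carlo noise:
assert abs(mean_gap - 1 / lam) < 0.01       # mean close to 1/lam = 0.5
assert abs(var_gap - 1 / lam ** 2) < 0.02   # variance close to 1/lam^2 = 0.25
```

The memorylessness of these gaps is what makes the queueing analysis tractable: at any instant, the time to the next arrival has the same distribution regardless of how long ago the last arrival occurred.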