So, I am going to talk about discrete stochastic processes. Rather than spend time first defining stochastic processes in the abstract, I would prefer to give you examples, and then we will try to come to a conclusion; hopefully, by then you will have formed your own definition of a stochastic process. Here we are going to talk first about discrete stochastic processes.

Now, let us look at one example. A shop selling watches keeps a particular brand of ladies' watch, and we let D_i denote the demand for this brand in the i-th week. Say our planning horizon is 3 weeks, so D_1 is the demand for this particular brand of watches in the first week, D_2 in the second week, and D_3 in the third week. The D_i are random variables, because the demands are not certain; otherwise the shopkeeper's job would be very easy. One simplification has been added here: the D_i are independent and identically distributed. So the D_i are not known with certainty, but they have the same distribution and are independent. That means the demand in the first week is independent of the demand in the second week, and independent of the demand in the third week. Let N_i denote the number of watches on hand at the end of the i-th week. So, say by Saturday evening the man takes stock of what he has on hand; N_i is the number of watches of this particular brand he has. That means N_1 at the end of the first week, N_2 at the end of the second week, and N_3 at the end of the third week. Now, orders placed for watches on Sunday evening are delivered before the shop opens on Monday morning.
So, this could be Sunday evening or Saturday evening, whatever it is; the point is that before the new week begins, on Monday morning before the shop opens, the watches are delivered, whatever the ordering policy. Now, suppose the ordering policy followed by the shop owner is as follows: if no watches are in stock, order four watches. That means if by Saturday evening he realizes that he does not have any watch of this particular brand, then he orders four watches, and they are delivered by Monday morning. So if N_i is 0, order four watches. If N_i is 1, that is, if he has one watch in stock at the end of the week, then he orders two watches. And finally, if he has two or more watches left over at the end of the week, he does not order any. So this is his policy. And of course, sales are lost when the demand exceeds the number of watches in stock: if there is more demand than you have watches, you lose those sales.

Now let us look at what the position will be in the following week. N_{i+1} denotes the number of watches on hand at the end of the (i+1)-th week. How will you compute N_{i+1} given N_i? If N_i is 0, then at the end of the i-th week you had no watches; you ordered 4 and they were delivered by the time the (i+1)-th week started, so you begin that week with 4 watches on hand. Then there is the demand D_{i+1}, which you meet as far as possible; so at the end of the (i+1)-th week you have 4 - D_{i+1} watches.
And if the demand is more than this, then of course you take the max, because you cannot have a negative number of watches: either you have 4 - D_{i+1} watches, if D_{i+1} is less than 4, or you have no watches left at the end of the next week. So when N_i = 0, N_{i+1} = max(4 - D_{i+1}, 0). Similarly, if N_i is 1, he orders 2 watches, so he begins the week with 1 + 2 = 3 watches, and N_{i+1} = max(3 - D_{i+1}, 0); whichever number is positive, that is the number you take. And if N_i is greater than or equal to 2, you do not order any watches, so the watches on hand at the beginning of the (i+1)-th week number N_i, and N_{i+1} = max(N_i - D_{i+1}, 0).

So you see, the situation at the end of the (i+1)-th week depends on your situation at the end of the i-th week and on the demand. The state occupied by the system at the end of the (i+1)-th week is given by N_{i+1}, and N_i is the current state. So you can say that N_{i+1} depends on just N_i and D_{i+1}: the state in which you were at the beginning of the (i+1)-th week and the current demand.
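The recursion above is easy to turn into code. Here is a minimal sketch of the shop owner's policy and a short simulation; the starting stock of 2 and the uniform demand distribution on {0, ..., 4} are my own assumptions for illustration, since the lecture leaves the demand distribution unspecified.

```python
import random

def next_stock(n_i, demand):
    """One step of the reorder policy from the lecture:
    order 4 if stock is 0, order 2 if stock is 1, else order nothing."""
    if n_i == 0:
        order = 4
    elif n_i == 1:
        order = 2
    else:
        order = 0
    # Sales are lost when demand exceeds stock, so stock never goes negative.
    return max(n_i + order - demand, 0)

rng = random.Random(1)
stock = 2                      # hypothetical starting stock
history = [stock]
for week in range(3):          # the 3-week planning horizon
    d = rng.randint(0, 4)      # i.i.d. demand, assumed uniform on {0,...,4}
    stock = next_stock(stock, d)
    history.append(stock)
```

The sequence `history` is one realization of N_0, N_1, N_2, N_3; rerunning with a different seed gives a different realization, which is exactly the randomness the process describes.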
So this shows the dependence: the variable N_{i+1}, which tells us the state of the system at the end of each week, depends on the two random quantities N_i and D_{i+1}. This is one example, and now I can give you a partial definition: the N_i, indexed by the number of the week, form a discrete stochastic process. N_1, N_2, N_3 are three random variables, and you are giving them an index which is discrete. The unit of time can be anything; here it is a week, but it could be a month, an hour, or whatever. So this is a random phenomenon indexed by discrete time periods, and therefore we call it a discrete stochastic process.

The next question to ask is: why study this? I have stated one or two questions that the shop owner may want answered, but of course many other questions can be raised. For example, the shop owner is interested in the long-term loss of sales due to his reordering policy. He loses sales whenever the demand exceeds the stock, so if there is a mechanism by which he can estimate, even if not exactly compute, the number of sales lost in the long term, he can judge whether he really has a good reordering policy or not. He may also want to know the effect of changes he makes in his reordering policy.
He may want to change some of the order quantities and then ask whether that would make the situation better, for example, whether it would reduce his long-term loss of sales. I am using the words "long term" because it takes a while for any system to settle down. Most of the time, when we analyze a stochastic process, we talk about its long-term behavior: whatever the disturbances and perturbations, they all settle down after a while, and only then do you look at the system, because it is very difficult to model such a process while a lot of initial perturbations are still present. So he wants to know the long-term loss of sales due to his reordering policy, and, if he makes any changes, how that will affect his revenues; essentially, he is finally interested in the revenue that he gets.

Now let us look at another example, which is probably simpler. An automobile manufacturing company has the policy of assigning its white-collar employees — those who work in its offices — to one of three sections. The three sections are production, HR (human resources), and sales. We will now look at this model example, which will give you another feeling for stochastic processes. The owner of the automobile manufacturing company assigns the white-collar employees to these three sections, and there is no set pattern for reassignments, at least none that the employees know; there must be something in the minds of the owners about how they reassign.
So, since no pattern for the reassignments is known, one does not know to which section he or she will be assigned next. After you have been in one section for a while, you know that you may be transferred, but you do not know to which section. The next assignment may depend on the current assignment: it is possible that wherever you are right now has a bearing on where you will be next. So let X_i denote the section assigned during the i-th 6-month period. Now look at one employee's profile. The time period is 6 months: when you get assigned to a section, it is for a 6-month period, and at the end of it there is another reassignment — you may stay in the same section or get transferred to another one. So the whole process of sections being assigned to a particular employee can be described by the sequence X_1, X_2, and so on, for as long as your planning horizon runs. X_1 tells you the section the particular employee is in for the first 6 months, X_2 the section he is in during the second 6-month period, and so on. Let us number the three sections: section 1 is production, section 2 is HR (human resources), and section 3 is sales. So each X_i can take three possible values, one for each section. If you like, take the sequence up to X_10.
That means, over the five years, the sequence X_1, X_2, ..., X_10 tells you the sections to which the particular employee has been assigned. This assignment of sections to an employee is a discrete stochastic process, indexed by the periods 1, 2, 3, and so on. So now you get the idea: the process is evolving over time, and there is uncertainty about the state the system will occupy after each time period is over. This uncertainty about the whole process is why we call it a stochastic process.

Now, for this particular company, an employee may ask the following questions. If an employee is working in sales, what is the probability that after two assignments he will be working in sales again? Or, if the employee is currently in production, how many months must pass on average before he enters HR? As with the first example, I am stating some questions; you can add more. A third one: if the employee has been with the company for four years, how many times on average would he have been assigned to HR? And what percentage of an employee's assignments will be in sales? These are the questions, and there are many more. Why would they be important? Because a prospective employee who is going to join the company can ask questions like these to judge his prospects in the company; basically, he would like to know whether he will be professionally satisfied with the company or not. If he comes to know that he will be in sales most of the time, then of course he may not want to stay with the company, because he may not be interested in sales.
I am just giving an example, but many such questions can be asked. Now, the N_i of the first example and the X_i of the second example are not independent random variables, as you can see. In the first example, N_i was the number of watches of the particular brand left in stock at the end of the i-th week, and we saw that N_{i+1} depends on the demand in the following week and on the ordering policy; so you cannot say that N_1, N_2, N_3, and so on are independent random variables — there is some relationship among them. Similarly for the X_i: however the owners of the company decide to reassign the sections, where you are, and perhaps how long you have been in a particular section, will have a bearing on where you will be next. So you can feel that these random variables are not independent, and therefore any kind of computation about them will not be easy.

Now, after these two examples, we can attempt to define a stochastic process: any random process for which time can be measured discretely, and which can be represented as a sequence of random variables, is called a discrete stochastic process. Or, very simply, it is a sequence of random variables indexed by time. So a stochastic process evolves over time, and you then want to look at its behavior.
Now you see that if you want to answer any of the questions I have posed, even in the earlier example, you would need to know a joint distribution: for example, if your planning horizon is 5 years, you may need the joint distribution of X_1, X_2, ..., X_10, since they are not independent. You cannot say that the joint density function of X_1 to X_10 is the product of the individual density functions; you would have to find it. And if your planning horizon is much bigger, you can simply throw up your hands and say that you cannot compute the joint density function of so many random variables. So we really need to look at methods by which we can simplify the analysis of such a process, or conditions under which we can try to answer questions like these.

For the automobile company, we can describe the profile of an employee diagrammatically. The horizontal axis gives the time period: period 0 is the start of the planning horizon, then come the first six-month period, the second, the third, and so on. On the vertical axis you have the three sections to which the person can be assigned: 1 is production, 2 is HR, and 3 is sales. For example, the diagram may show that in the first six-month period he was with HR, the second section; that in the second six-month period he went to sales; and that after that, when one year was over, he was in sales again in the next six-month period.
From the diagram you can also see, for example, a stretch where from some period onward he continued for two consecutive periods in the production section. So this is how you can describe the profile of an employee in the manufacturing company diagrammatically.

Now let us introduce some more terminology. The set of all possible values the random variable X_i takes is called the state space. Whatever situation the system or process is in is described by a point of the state space, and normally we label the possible values with numbers; for example, here I numbered the three sections 1, 2, 3. This is easier, because otherwise you cannot keep writing out the possible values the state space contains — they may be quite different things — so we distinguish them by numbers. So "X_n = i" means the system is in state i at time period n; that is the value of X_n, where the X_i are the random variables describing the process. When the system changes from one state to another, the change is called a transition. As the two simple examples already show, many real-life processes are stochastic, because there are elements of the process that cannot be determined with certainty. And we have also seen that even in such simple examples the X_i are not independent: there is some dependence among the random variables.
Therefore, as I was saying earlier, it is very difficult to have the joint density function of all the random variables that describe the states the system can occupy over a long time period, and so you cannot just analyze or answer questions about the process directly. Markov suggested the following simplification: the transition from one section to another may depend only on the current section occupied — and here I should stress the word "only". When we say "depend", we mean that the probability with which the process transitions from one section to another depends only on where you are right now, the current section. In the watch shop example, the value of N_{i+1} depends on the values of N_i and D_{i+1} only: the way I described it, N_{i+1} was the max formula I wrote down, computed only from N_i and D_{i+1}. So, according to Markov's definition, that process already satisfies the Markovian property. In the section assignment problem, what we are saying is that the probability distribution of X_i depends only on the value of X_{i-1}, and in turn X_i affects the value of X_{i+1} with certain probabilities. This is the only kind of dependence we are allowing; you can call it a simplification.
This makes the analysis of stochastic processes which satisfy Markov's property quite tractable, and as we go on we will see that we can answer almost all the questions I wrote down at the beginning about the automobile manufacturing company — the kind of questions an employee may be interested in — provided we assume that the section assignment process satisfies the Markov property. Any stochastic process which satisfies the Markov property is called a Markov chain or a Markov process; I will use the two terms synonymously. Now, with the Markov property satisfied by a process, we only need to compute the joint or conditional probability mass function — remember, I am talking about discrete processes, so joint or conditional PMFs — of neighbouring X_i's. This is a great simplification: when you have just two variables, you can easily compute their joint or conditional PMF, and with that we are able to analyze the process over the long term.

To state Markov's property formally: consider the probability that X_{n+1} = j, that is, at time n+1 the system occupies state j, given the entire past history X_0 = i_0, X_1 = i_1, ..., X_{n-1} = i_{n-1}, X_n = i. If you do not assume the Markov property, then to compute this probability you would need the entire past history. Markov's property simplifies it:

P(X_{n+1} = j | X_0 = i_0, X_1 = i_1, ..., X_{n-1} = i_{n-1}, X_n = i) = P(X_{n+1} = j | X_n = i).
So it is the current state of the system that determines, with some probability, where the system will be in the next time period. These are known as one-step transition probabilities, and we will call them p_ij. I am writing nothing else here because I am making one more simplification:

P(X_{n+1} = j | X_n = i) = P(X_1 = j | X_0 = i) = p_ij.

That is, if the system was in state i, the probability that it is in state j in the next period is the same whether you are considering the change from time period n to time period n+1 or the change from the initial state to the first period; over the long period that the process runs, this probability does not change. That is why the word "stationary": we are saying that the one-step transition probabilities are stationary. Essentially, the explanation is that, whatever process you consider, after the initial perturbations the system has settled down and the process has become stationary, so the transition probabilities are not affected by the time period at which you consider the transition. As we go on, we will look at a lot of real-life situations where assuming that your transition probabilities have the stationarity property is not very unrealistic. I will continue by defining these probabilities, showing how to compute them, and then what you can do with them once you have them.
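The two assumptions just made — the next state depends only on the current state, and the same transition probabilities apply at every step — can be seen directly in a small simulation. The matrix below is purely hypothetical, since the lecture keeps the p_ij symbolic; states are numbered 0, 1, 2 rather than 1, 2, 3 only to match Python indexing.

```python
import random

def simulate_chain(P, start, steps, rng):
    """Simulate a Markov chain with stationary one-step transition
    probabilities P (states numbered 0, 1, ..., n-1).
    Markov property: the next state is sampled using only the current state.
    Stationarity: the same matrix P is used at every step."""
    states = list(range(len(P)))
    path = [start]
    for _ in range(steps):
        current = path[-1]
        # Row P[current] is the conditional PMF of the next state.
        nxt = rng.choices(states, weights=P[current])[0]
        path.append(nxt)
    return path

# Hypothetical matrix for sections 0 = production, 1 = HR, 2 = sales.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.4, 0.5]]
path = simulate_chain(P, start=0, steps=10, rng=random.Random(42))
```

Each run of `simulate_chain` produces one employee profile X_0, X_1, ..., X_10 of the kind drawn in the diagram earlier.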
So let us start looking at how we continue with the analysis of the process: what quantities we require to describe it before we can answer the questions related to it. Let X_0 be the present assignment — time 0 in our notation. The value of X_0 tells us the present assignment of the employee, and we are interested in his next assignment, that is, the value of X_1 in the next time period. Suppose X_0 = 1, that is, the man is currently in production; then X_1 can be 1, 2, or 3, since he can be assigned to any of the three sections. So the probabilities have to be given to us. If he starts with X_0 = 1, in production, then the probability that he is again kept in production, X_1 = 1, we call p_11. As I told you, these are one-step transition probabilities, and we are assuming stationarity, so it does not matter whether it is P(X_{n+1} = 1 | X_n = 1) or P(X_1 = 1 | X_0 = 1); the probability is p_11. Similarly, the probability that he moves to HR is p_12, and the probability that he moves to sales is p_13. These are the first-step transition probabilities if you know that X_0 = 1. But if X_0 = 2, the transition probabilities will of course be different: now p_21 is the probability of going from 2 to 1, that is, he is in HR and is assigned to production.
These transition probabilities describe the assignment process to us, whatever it is. So the three transition probabilities p_21, p_22, p_23 are given to us, and then, if X_0 = 3, meaning he is currently in sales, his probability of going to production is p_31, of going to HR is p_32, and of remaining in sales is p_33. (I will not say "one-step transition probabilities" every time, but that is understood.) These three numbers give you the probabilities of transitioning from state 3 to each of the three states. Now, of course, we will see that this is not a complete description of the process, and as we go along we will find out what more we need; but let us first look at this. The nine first-step transition probabilities can be written as a 3-by-3 matrix. Remember, if the number of states is N, there are N^2 transition probabilities, because you can go from any one state to any of the N states; so you can always record all of them in an N-by-N matrix. Here, since there are three states — the three sections — I can record all nine first-step transition probabilities in a 3-by-3 matrix. The matrix P is called the transition matrix.
Now, since the man must transition from, say, production to one of the three sections — he either stays in production, goes to HR, or goes to sales, because after every six months the assignment is announced — these three probabilities must add up to one. Another way of saying this is that p_11, p_12, p_13 describe the conditional PMF of X_1 given that X_0 = 1 — remember, we talked about this when discussing conditional probabilities and conditional expectation — and since these three numbers form a conditional PMF, they must add up to 1. The same argument shows that the second and third rows must also add up to 1: p_21 + p_22 + p_23 = 1 and p_31 + p_32 + p_33 = 1. And since the entries are probabilities, they must be non-negative. So any square matrix A with all entries non-negative and every row adding up to 1 qualifies to be a transition matrix; we can say that there must be a stochastic process which can be associated with such a matrix.

Another way of looking at this process — diagrams always help fix ideas, and I think they also help in understanding the process — is to describe the three states by the nodes of a graph: production, HR, and sales. An arc from 1 to 2 represents transitioning from 1 to 2. Of course, I have not entered all the probabilities, but they can be written on the arcs.
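The qualification just stated — non-negative entries, every row summing to 1 — is easy to check mechanically. Here is a small sketch; the numerical matrix is again a hypothetical choice, since the lecture leaves the p_ij unspecified.

```python
def is_transition_matrix(P, tol=1e-9):
    """A square matrix qualifies as a transition matrix iff every
    entry is non-negative and every row sums to 1 (each row is a
    conditional PMF of the next state given the current state)."""
    n = len(P)
    for row in P:
        if len(row) != n:                 # must be square
            return False
        if any(p < 0 for p in row):       # probabilities are non-negative
            return False
        if abs(sum(row) - 1.0) > tol:     # each row must sum to 1
            return False
    return True

# Hypothetical transition matrix for the three sections.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.4, 0.5]]
```

Here `is_transition_matrix(P)` returns True, while a matrix with a negative entry or a row summing to 1.1 would fail the check.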
So the arc from 2 to 1 is the one-step transition from HR to production. A loop at a node means you stay there: the loop at 1 is the transition from 1 to 1, so you do not go anywhere, you continue in the same state. In this way you can write down the probabilities on the diagram: p_11 on the loop at 1, p_22 and p_33 on the other loops, p_23 and p_32 on the arcs between 2 and 3, p_31 and p_13 on the arcs between 3 and 1, and so on. This diagram helps you see how the transitioning takes place: you can go from 1 to 2, then from 2 to 3, come back from 3 to 2, or stay at 2 — you can play around and trace many possibilities. Of course, you can do this when the number of states is small; if the number of states is large, drawing a picture like this may not be a very good alternative, and we will have to look at other ways of handling the process. But it makes things look interesting: the picture is there and you can see how the process evolves over time, going from one state to another through the arcs.

So we have described the first-step transition probabilities, including through a diagram. Now suppose we want to look at X_2, the random variable that describes the state of the system at time 2. Again I will show it through the diagram. Say you start in state 1, that is, production, and after 2 transitions you are back in production. What would that mean? One possibility is that you start in production, transition to production again in the next period — that is, you stay where you are — and then in the next step you again transition to production.
So that path describes one possibility, which I have written down here. Another is that you start from production, go to HR, and then transition back to production: that is your second path. I am talking in terms of paths because, when you go for two-step transition probabilities, you will have to compute the probabilities of these paths. Finally, you could start from production, go to sales, and come back to production: your third path. We will continue this discussion in the next lecture. Now look at the probability involving X_2; on the board the arrows should run from X_0 to X_1 and then from X_1 to X_2. For the first path, you start at X_0 = 1, transition to production at time 1, and again transition to production from the first period to the second. The Markov property tells us that we just need one-step transition probabilities: the transition from time 0 to time 1 and the transition from time 1 to time 2 are independent, so I can write the probability of this path as the product of the transition probability from 1 to 1 in the first period and again from 1 to 1 in the second. And here the second property we use is stationarity. So the Markov property and stationary transition probabilities together tell us that the probability of transitioning from 1 to 1 in two periods along the first path is p_11 squared. We will continue with this kind of computation, show you very interesting results, and you will see how far our analysis of a stochastic process satisfying the Markov property can go — always assuming, of course, that the stationarity conditions are met.
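Summing the probabilities of the three paths 1→1→1, 1→2→1, 1→3→1 is exactly the (1,1) entry of the matrix product P·P, which is a preview of where this computation is heading. A minimal sketch, using the same hypothetical numbers as before (the lecture keeps the p_ij symbolic):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical one-step transition matrix (illustration only).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.4, 0.5]]

P2 = matmul(P, P)   # two-step transition probabilities

# The (1,1) entry of P^2 is the sum over the three paths
# 1->1->1, 1->2->1, 1->3->1, the first term being p_11 squared.
paths = P[0][0] * P[0][0] + P[0][1] * P[1][0] + P[0][2] * P[2][0]
```

With these numbers, `paths` works out to 0.25 + 0.06 + 0.02 = 0.33, and it agrees with `P2[0][0]` exactly, which is the point: the path-by-path computation and the matrix square are the same thing.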