So, in this lecture I will be talking about convolution. Like the MGF, this is one of the tools we will use to compute the distribution function and the density function for functions of random variables; mostly, convolution is used for computing the distribution of sums of random variables. The definition says that if X and Y are independent random variables, the distribution function of X + Y is said to be the convolution of the distribution functions of X and Y. Let F_{X+Y}, F_X and F_Y denote the distribution functions of X + Y, X and Y respectively; by the notation it is clear which is which. For the discrete case the definition is as follows. When X and Y are both discrete independent random variables with pmfs p_X and p_Y, we write P(X + Y = t) = sum over x of p_X(x) p_Y(t - x). So you fix the value of X at x, then Y must take the value t - x, and the summation is over all x for which p_X(x) is positive and p_Y(t - x) is also positive; since each term is a product of these two probabilities, whenever one of them is 0 that particular term contributes nothing. A simple definition, but we will see how to apply it. Similarly, for the continuous case, when X and Y are independent continuous random variables, let T denote the sum of the two random variables, that is, T = X + Y.
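The discrete convolution formula above can be sketched directly in code. This is a minimal illustration, not part of the lecture; the pmfs are given as plain dictionaries mapping value to probability, and the example pmfs (two fair 0/1 coins) are my own illustrative choice.

```python
def convolve_pmf(px, py):
    """pmf of X + Y for independent X, Y: P(X+Y=t) = sum_x px(x) * py(t-x)."""
    pt = {}
    for x, p1 in px.items():
        for y, p2 in py.items():
            # fixing X = x forces Y = t - x; accumulate the product
            pt[x + y] = pt.get(x + y, 0.0) + p1 * p2
    return pt

# Illustrative pmfs: two independent fair coins counted as 0/1.
px = {0: 0.5, 1: 0.5}
py = {0: 0.5, 1: 0.5}
pt = convolve_pmf(px, py)   # pmf of the sum: {0: 0.25, 1: 0.5, 2: 0.25}
```

The double loop only ever adds terms where both factors are defined and positive, which is exactly the restriction on the summation range mentioned in the lecture.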
Then the distribution function of T at t is F_T(t) = P(T <= t), and by conditioning on X = x we can write this as F_T(t) = integral from -infinity to infinity of P(X + Y <= t | X = x) f_X(x) dx, just as in the discrete case we fixed the value of X and the corresponding value of Y got fixed at t - x. Here x varies from -infinity to infinity, although in practice the range will depend on the values that your random variables take; I have used the concept of conditional distribution here. Now, because X and Y are independent, P(X + Y <= t | X = x) = P(Y <= t - x | X = x) = P(Y <= t - x) = integral from -infinity to t - x of f_Y(y) dy. So the whole expression becomes F_T(t) = integral from -infinity to infinity of [integral from -infinity to t - x of f_Y(y) dy] f_X(x) dx, and the inner integral I can write as P(Y <= t - x). So the whole integral is then F_T(t) = integral from -infinity to infinity of P(Y <= t - x) f_X(x) dx.
So, I hope there is no doubt about that step: when X = x, your Y is required to be less than or equal to t - x, and that is why I have written this probability as F_Y(t - x), giving F_T(t) = integral from -infinity to infinity of F_Y(t - x) f_X(x) dx. Now differentiate with respect to t. Since the limits are independent of t, the differentiation goes inside the integral, and only F_Y(t - x) is a function of t, so f_T(t) = integral from -infinity to infinity of f_Y(t - x) f_X(x) dx. So this is your convolution; you can write it either in terms of the distribution functions or in terms of the pdfs, with the understanding, of course, that you take only the values of t and x for which these are defined. Now, the basic definition is this, and we will see how to apply it to different cases. There will be some repetition, in the sense that for sums of independent random variables we have in quite a few cases already obtained the density functions by other methods, by the transformation method or by the moment generating function; but here I want to show you the working of this particular method, and therefore we will go through a few examples. Suppose X1 is Poisson(lambda 1) and X2 is Poisson(lambda 2). Then T is the sum of the two Poisson random variables, and you want to find P(T = n). If I fix X1 = x, then x varies from 0 to n and X2 must be n - x, so P(T = n) = sum over x from 0 to n of P(X1 = x) P(X2 = n - x); since they are independent, I have written it as a product of the probabilities.
So, now P(X1 = x) = e^(-lambda 1) (lambda 1)^x / x! and P(X2 = n - x) = e^(-lambda 2) (lambda 2)^(n - x) / (n - x)!, with the summation over x from 0 to n. Now rearrange the terms: take out e^(-(lambda 1 + lambda 2)), and divide and multiply by n!, so you get [e^(-(lambda 1 + lambda 2)) / n!] times the sum over x from 0 to n of [n! / (x! (n - x)!)] (lambda 1)^x (lambda 2)^(n - x). You can see that this sum is the binomial expansion of (lambda 1 + lambda 2)^n; the factor e^(-(lambda 1 + lambda 2)) / n! is independent of x, so I take it outside, and the sum over x from 0 to n gives (lambda 1 + lambda 2)^n. So P(T = n) = e^(-(lambda 1 + lambda 2)) (lambda 1 + lambda 2)^n / n!, and you can immediately conclude that T is Poisson with parameter lambda 1 + lambda 2. As another example, I want to show that X1 + X2 + ... + Xn is negative binomial. Here we are extending this concept to sums of more than 2 independent random variables, where the Xi are independent identically distributed geometric random variables, each with probability of success p.
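The Poisson conclusion can be checked numerically: convolving the two pmfs term by term should reproduce the Poisson(lambda 1 + lambda 2) pmf exactly. The parameter values 1.5 and 2.5 here are illustrative choices of mine, not from the lecture.

```python
from math import exp, factorial

def poisson_pmf(lam, k):
    # P(X = k) for X ~ Poisson(lam)
    return exp(-lam) * lam ** k / factorial(k)

l1, l2 = 1.5, 2.5   # illustrative rates
# convolution: P(T = n) = sum over x of P(X1 = x) P(X2 = n - x)
conv = [sum(poisson_pmf(l1, x) * poisson_pmf(l2, n - x) for x in range(n + 1))
        for n in range(12)]
# closed form derived in the lecture: T ~ Poisson(l1 + l2)
direct = [poisson_pmf(l1 + l2, n) for n in range(12)]
```

Up to floating-point rounding, the two lists agree term by term, which is the binomial-expansion identity from the derivation.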
When you describe a geometric random variable you just need to know the probability of success; the variable is the number of trials required for the first success. So first consider the sum of 2 random variables, X1 + X2. By convolution, P(X1 + X2 = n) = sum over x from 1 to n - 1 of P(X1 = x) P(X2 = n - x). The summation must run from 1 to n - 1: I cannot start at 0, and I cannot let x go up to n, because when x = n the second factor would be P(X2 = 0), which is 0, so that term would contribute nothing to the sum. Now, if I fix X1 = x, that means for the first geometric random variable the first success occurs at trial x, and then for the second one the first success occurs at trial n - x. So P(X1 = x) = (1 - p)^(x - 1) p, since all trials before the x-th are failures and the success occurs at trial x, and similarly P(X2 = n - x) = (1 - p)^(n - x - 1) p. When you rearrange the terms, the x in the exponents cancels: (x - 1) + (n - x - 1) = n - 2, and the two p's give p squared. So the sum equals the sum over x from 1 to n - 1 of (1 - p)^(n - 2) p^2; but x is no longer present in the summand, so you just add this term n - 1 times, giving (n - 1) (1 - p)^(n - 2) p^2. Now this you can recognize as the probability of 2 successes in n trials, with the second success occurring at the last trial: the first success can occur anywhere in the first n - 1 trials, so you can write n - 1 as (n - 1 choose 1), giving (n - 1 choose 1) (1 - p)^(n - 2) p^2.
So, 1 success occurs anywhere in the first n - 1 trials and 1 success occurs at the end, and therefore this is the probability of 2 successes in n trials. So if you let Y = X1 + X2, we have shown that Y is negative binomial with parameters (2, p). In the example I probably did not mention it explicitly, but it is understood that the probability of success is p; I should have stated that. Now we can show it iteratively: if you consider the random variable Y + X3, then by the same argument this is negative binomial (3, p), and so on. Therefore X1 + X2 + ... + Xn is negative binomial when each Xi is a geometric random variable and the Xi are independent. In this example, where we consider the sum of independent geometric random variables all with probability of success p, each Xi is the number of trials required for 1 success. If the corresponding number for the i-th geometric random variable is n_i, then X1 + X2 is the number of trials required for 2 successes, and that number is n_1 + n_2. So when I wrote that the sum X1 + X2 is negative binomial (2, p), the understanding is that you want 2 successes, and the number of trials is n_1 + n_2.
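The two-variable step above can be verified numerically: the convolution of two Geometric(p) pmfs should equal (n - 1 choose 1) (1 - p)^(n - 2) p^2 for every n >= 2. The value p = 0.3 is an illustrative choice of mine.

```python
from math import comb

p = 0.3   # illustrative success probability

def geom_pmf(k):
    # P(X = k): first success on trial k, k = 1, 2, ...
    return (1 - p) ** (k - 1) * p

# convolution: P(X1 + X2 = n) = sum over x from 1 to n - 1
conv = {n: sum(geom_pmf(x) * geom_pmf(n - x) for x in range(1, n))
        for n in range(2, 15)}
# closed form from the lecture: Negative Binomial(2, p)
negbin = {n: comb(n - 1, 1) * (1 - p) ** (n - 2) * p ** 2
          for n in range(2, 15)}
```

The two dictionaries agree for every n, matching the cancellation of the x-exponents in the derivation.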
So, finally, when you go on iterating this procedure of convoluting these random variables, X1 + X2 + ... + Xn gives you the number of trials required for n successes, and therefore it is negative binomial (n, p); the total number of trials required would be n_1 + n_2 + ... + n_n. The next point is that you can reach this conclusion without any calculation; the calculation was just to show you how to apply convolution. By its very description, X1 + X2 + ... + Xn asks for n successes with probability of success p, so it gives the number of trials required for n successes. Just by saying it aloud, since each of X1, X2, ..., Xn is an independent geometric random variable, you can say that the sum is negative binomial (n, p); here you really did not require convolution at all. Now let us apply convolution to the sum of two independent uniform random variables, both on (0, 1). The corresponding pdfs are both 1 as long as the argument is between 0 and 1, and 0 otherwise. So let us define the random variable T = X + Y. Now the thing is, and this is where care is needed, that sometimes you can easily determine the ranges, but sometimes it will not be that easy. Here, for example, in the formula we have f_X(x) f_Y(t - x), and f_Y is nonzero only for its argument between 0 and 1; so t - x also should lie between 0 and 1, and therefore I have to do this computation first for t between 0 and 1, because t - x >= 0 implies x <= t.
See, f_Y(t - x) vanishes for t - x negative, so t - x has to be nonnegative, which requires t >= x; and t - x should also be at most 1, so t <= 1 + x. We first treat the case where t lies between 0 and 1. Then x can vary from 0 to 1, and since f_X(x) = 1 for x between 0 and 1, the integral reduces to simply the integral from 0 to t of f_Y(t - x) dx, because f_Y(t - x) is nonzero only for x <= t. Now, when x varies from 0 to t, the argument t - x stays within the range of values for Y, so f_Y(t - x) = 1 as well, and the integral comes out to be t, for t between 0 and 1. That is the first piece of the curve I have drawn for you, and its maximum value is 1. Now, because both X and Y take values between 0 and 1, the range of T is from 0 to 2, so we also have to consider the values of t lying between 1 and 2. In this case the convolution formula is the same, but t - x <= 1 now implies x >= t - 1; so immediately you see that for this piece t >= 1, and of course t cannot go beyond 2.
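Before completing the second piece, here is a quick numerical sketch of this convolution integral; evaluating it on a grid recovers t on (0, 1) and 2 - t on (1, 2), the triangular density this example arrives at. The grid size and test points are my own illustrative choices.

```python
def f_unif(u):
    # pdf of Uniform(0, 1)
    return 1.0 if 0.0 <= u <= 1.0 else 0.0

def f_sum(t, n=20000):
    # Riemann-sum approximation of integral over x in [0, 1] of f_X(x) f_Y(t - x) dx
    h = 1.0 / n
    return h * sum(f_unif(i * h) * f_unif(t - i * h) for i in range(n))

# one point in each piece of the triangular density
vals = {0.25: f_sum(0.25), 0.75: f_sum(0.75), 1.5: f_sum(1.5)}
```

The indicator inside the integrand automatically enforces the limits 0 to t (for t < 1) and t - 1 to 1 (for t > 1) that the lecture works out by hand.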
So again the same reasoning: the integrand is 1 in the region where it is defined, and here x can vary from t - 1 to 1. At x = t - 1 the argument is t - (t - 1) = 1, and at x = 1 it is t - 1, both within the range of values for Y, so in this range the integrand is 1. The integral is therefore the integral from t - 1 to 1 of 1 dx, which comes out to be 2 - t, and on the graph the function is represented by this straight line. So the pdf of the sum of two independent uniform random variables, both defined on (0, 1), is the triangular distribution. You see, you could not have just done one integration straight away, allowing t to vary from 0 to 2; that would not have given you the valid answer. You had to break up the region into t from 0 to 1 and then 1 to 2, and because we are writing t - x and integrating with respect to x, we needed x >= t - 1 while x cannot go beyond 1, giving the limits t - 1 to 1. Another example is the sum of two independent gamma random variables. Suppose X is gamma(s, lambda), Y is gamma(t, lambda), and X and Y are independent; we have to obtain the pdf of X + Y. Define T = X + Y, and by the convolution formula write f_T(a). Here again we have to apply the same care, because the gamma pdf is defined only for nonnegative arguments. So a - x has to be nonnegative, which implies x <= a, and that means my integration has to be from 0 to a.
For the moment let me just substitute; we will put in the correct range as we go. Since X is gamma(s, lambda), f_X(x) = lambda e^(-lambda x) (lambda x)^(s - 1) / Gamma(s), and the pdf for Y gives f_Y(a - x) = lambda e^(-lambda (a - x)) (lambda (a - x))^(t - 1) / Gamma(t). As we said, x has to be at most a, so the upper limit infinity gets replaced by a. Now, e^(-lambda x) times e^(-lambda (a - x)): the e^(+lambda x) cancels the e^(-lambda x) and you are left with e^(-lambda a). Taking out the powers of lambda, you get lambda^(s - 1) from the first factor, lambda^(t - 1) from the second, and lambda squared from the two leading lambdas, so the whole thing gives lambda^(s + t), because the exponents add as (s - 1) + (t - 1) + 2 = s + t. That is what we have written here: f_T(a) is lambda^(s + t) e^(-lambda a), times the integral from 0 to a of x^(s - 1) (a - x)^(t - 1) dx, divided by Gamma(s) Gamma(t). Now make the substitution x / a = z. This implies dx gets replaced by a dz, and the range goes from 0 to 1 instead of 0 to a, because we have put z = x / a. So with the constant terms outside, you are left with the integral from 0 to 1 of z^(s - 1) (1 - z)^(t - 1) dz. Now, this looks familiar: it is the beta function, and from the definition of the beta distribution we know that this integral must be equal to Gamma(s) Gamma(t) / Gamma(s + t).
And therefore I replace the integral by Gamma(s) Gamma(t) / Gamma(s + t); the Gamma(s) Gamma(t) cancel with the denominators and you are left with Gamma(s + t). Collecting the powers, the substitution contributed a^(s + t - 1), so together with lambda^(s + t) this makes lambda e^(-lambda a) (lambda a)^(s + t - 1) / Gamma(s + t), where one lambda is written outside just to conform to the form of the gamma density. So we have concluded that if you take two independent gamma random variables, X gamma(s, lambda) and Y gamma(t, lambda), then the sum again has a gamma distribution, with parameters s + t and lambda. So if the second parameter is the same, I can go on adding independent gamma random variables and the first parameters get added. The first corollary is that if X1, X2, ..., Xn are gamma(s_i, lambda) and independent, then by repeated use of convolution, as we did earlier, X1 + X2 + ... + Xn is gamma(sum of s_i for i from 1 to n, lambda). Here the conclusion is immediate: the distribution does not change under addition, only the first parameter changes, and the second, common parameter remains the same. Another corollary: if X1, X2, ..., Xn are independent identically distributed exponential(lambda) random variables, then, since you can see from the gamma pdf that an exponential(lambda) is a gamma(1, lambda), the sum X1 + ... + Xn is gamma(n, lambda), because the first parameters add.
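The gamma result can also be checked numerically: integrating the convolution on a grid should match the closed-form gamma(s + t, lambda) density at any point a. The particular values of s, t, lambda and a below are illustrative choices of mine.

```python
from math import exp, gamma

def gamma_pdf(shape, lam, x):
    # gamma(shape, lam) density: lam * e^(-lam x) * (lam x)^(shape-1) / Gamma(shape)
    return lam * exp(-lam * x) * (lam * x) ** (shape - 1) / gamma(shape)

s, t, lam, a = 2.0, 3.0, 1.5, 2.0   # illustrative parameters and evaluation point
n = 50000
h = a / n
# convolution integral over x in (0, a); integrand vanishes at both endpoints
conv = h * sum(gamma_pdf(s, lam, x) * gamma_pdf(t, lam, a - x)
               for x in (i * h for i in range(1, n)))
direct = gamma_pdf(s + t, lam, a)
```

Restricting x to (0, a) reflects the requirement worked out above that both pdf arguments be nonnegative.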
We will see uses of all this when we talk about random processes, stochastic processes such as the Poisson and Markov processes. I have tried to show through various examples that you cannot just blindly apply the convolution; you have to make sure that the values of the variables are such that the pdfs are defined, and so on. Now of course a question arises. We have defined the convolution for independent random variables: the definition says they are independent, and then you take the sum and talk about the convolution. But can you answer the converse: if, for two random variables X and Y, the distribution function of X + Y can be written as the convolution of the distribution functions of X and Y, does it imply that X and Y are independent? I will try to answer this through a counterexample, to say that no, it is not necessary; you may get the distribution of the sum by convolution, yet the variables may not be independent. I have taken this example from Dudewicz and Mishra; as I told you, I have often taken examples from that book and from Sheldon Ross, and I give you the references at the end of the course. The example is originally due to W. T. Hall, and you will see how cleverly it has been constructed. So, as I said, we try to answer the question: if the distribution of X + Y is the convolution of the distributions of X and Y, does it follow that X and Y are independent random variables? We want to answer this because we have defined the convolution for independent random variables. The table here gives the joint probability mass function of X and Y; let me make it at least look nice. So here it is.
So, this table gives the joint probability mass function of X and Y, where theta is a fixed number with absolute value less than 1/9, because otherwise some entries would become negative; we want this to be a valid joint pmf. You can see that when you add up the entries, the marginal probability mass functions are independent of theta, and this is where I say the example has been very well constructed: in each row the +theta and -theta cancel, so each row sums to 1/3, and similarly the column sums give marginals which are all 1/3, independent of theta. Now we want to verify that the distribution function of X + Y is the convolution of the distribution functions of X and Y. Since X takes the values -1, 0, 1 and Y takes the values -1, 0, 1, the possible values of X + Y are -2, -1, 0, 1 and 2, and we will have to verify all of them to make sure that the distribution of X + Y can be obtained as a convolution of the distributions of X and Y. So start with P(X + Y = 0). By convolution, I take X = 1, then Y must be -1; X = -1, then Y = 1; and X = 0, then Y = 0. These are the 3 terms. For each, the marginals give, for example, P(X = 1) P(Y = -1) = (1/3)(1/3) = 1/9.
Similarly, P(X = -1) P(Y = 1) = (1/3)(1/3) = 1/9 and P(X = 0) P(Y = 0) = 1/9, so the convolution gives P(X + Y = 0) = 3/9 = 1/3. Then similarly P(X + Y = 1): again you fix X and take the corresponding value of Y. If I put X = 1, then Y has to be 0; if X = 0, then Y = 1; and X = -1 is not valid, because then Y would have to be 2, while Y can only take the values -1, 0 or 1. So there are only 2 ways to convolute the sum X + Y = 1, and the probability is 2/9. I compute all the values, because if the convolution fails for even 1 single value then I cannot make the claim. For X + Y = 2 there is only 1 possibility, X = 1 and Y = 1; I have taken this example partly because it shows that when we write p_X(x) p_Y(t - x) you have to take only the possible values of t, you cannot just take any. So P(X + Y = 2) = 1/9 by convolution, and similarly P(X + Y = -2), given by X = -1 and Y = -1, is 1/9. Now we compute these probabilities without convolution, from the table. For example, P(X + Y = 0) is P(X = -1, Y = 1) + P(X = 1, Y = -1) + P(X = 0, Y = 0), all the pairs that make X + Y = 0. From the table these are (1/9 - theta) + (1/9 + theta) + 1/9; the theta and -theta cancel, so this adds up to 3/9, which is 1/3.
Similarly, for X + Y = 1: the pairs are (0, 1) and (1, 0), which from the table have probabilities 1/9 + theta and 1/9 - theta; these are the only 2 possible pairs giving the sum 1, and (1/9 + theta) + (1/9 - theta) = 2/9, which matches the convolution value. Likewise X + Y = 2 is simply the pair X = 1, Y = 1, which is 1/9 in the table and also matches, and X + Y = -2 is the pair X = -1, Y = -1, which is 1/9 here as well. So we have checked 0, 1, 2 and -2; only X + Y = -1 is left out, and you can easily verify that one too. So the probability mass function of X + Y matches the probabilities obtained by convolution for all theta with absolute value less than 1/9. Therefore the two things match, but we know that X and Y are not independent: I just have to show that independence fails for one pair of values. P(X = -1, Y = 0) from the table is 1/9 + theta, but P(X = -1) P(Y = 0), from the marginals, is (1/3)(1/3) = 1/9, and the 2 are not equal as long as theta is nonzero, of course still satisfying the condition on its absolute value.
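The whole counterexample can be verified mechanically. The joint table below is my reconstruction from the entries quoted in the lecture (each entry 1/9 plus or minus theta, arranged so rows and columns sum to 1/3); exact fractions are used so the checks are exact, and theta = 1/18 is an illustrative choice with |theta| < 1/9.

```python
from fractions import Fraction

th = Fraction(1, 18)          # any theta with |theta| < 1/9 works
f9 = Fraction(1, 9)
# joint pmf reconstructed from the lecture's table, keyed by (x, y)
joint = {(-1, -1): f9,      (-1, 0): f9 + th, (-1, 1): f9 - th,
         ( 0, -1): f9 - th, ( 0, 0): f9,      ( 0, 1): f9 + th,
         ( 1, -1): f9 + th, ( 1, 0): f9 - th, ( 1, 1): f9}

px = {x: sum(joint[(x, y)] for y in (-1, 0, 1)) for x in (-1, 0, 1)}
py = {y: sum(joint[(x, y)] for x in (-1, 0, 1)) for y in (-1, 0, 1)}

# pmf of X + Y straight from the table
psum = {}
for (x, y), p in joint.items():
    psum[x + y] = psum.get(x + y, Fraction(0)) + p
# pmf of X + Y by convolving the marginals
conv = {}
for x, p1 in px.items():
    for y, p2 in py.items():
        conv[x + y] = conv.get(x + y, Fraction(0)) + p1 * p2

independent = all(joint[(x, y)] == px[x] * py[y] for x in px for y in py)
```

The marginals come out to 1/3 regardless of theta, the two sum-pmfs coincide, and yet `independent` is False for theta nonzero, which is exactly the point of the example.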
So, for any nonzero theta satisfying this condition the 2 will not be equal, and therefore X and Y are independent only when theta is 0. It is an interesting example, and it must have taken a lot of time to construct. If you try to do it when X and Y each take only 2 different values, it will not be possible; you can then think of trying a situation where X and Y take 4 values each, where you would have to write down the probabilities in such a way that the marginals are independent of theta, and then you have a chance of constructing such an example where the pmf of X + Y can be obtained by convolution but X and Y are not independent. It might be a very interesting project, though I am not sure it is possible; in any case, definitely a lot of effort went into showing that convolution does not imply independence all the time. So, according to me, we have now developed enough machinery to show you some more interesting and complex applications of the tools of probability theory that we have built so far. I would like to devote the rest of the course to stochastic processes, and only simple, basic ones, because there is a full, vast subject on stochastic processes and things can get very complex. The first stochastic process I would like to talk about is the Poisson process. We have already talked about the Poisson random variable, and we have also looked at exponential and gamma random variables, and we will see how the different distributions we have developed get used in answering questions.
So, essentially it is a counting process. You have a service, say a post office or a railway booking counter (there are still people who go and book at railway counters; it is not possible for everybody to do it online), and you want an estimate of how many people are coming, so that accordingly you can design the service: how many counters there should be, so that people do not have to wait a long time to be serviced, and so on. These are the kinds of things we will be talking about. So basically we will define N(t) as the number of people who enter, let us say, a post office up to time t: we keep a count of the number of people who enter starting from time 0 up to time t, and then we answer a lot of questions depending on that. But since we are calling it a Poisson process, there will be certain conditions that this counting process has to satisfy, because when you are modeling a situation you have to have some basic conditions that are met by the model. We will develop this structure and try to answer a few questions; the other process we will talk about is the Markov process. Both are interesting and basic, and we will be able to answer a lot of questions through them, because we have developed the machinery to do that. So my next few lectures will be on the Poisson process and then on Markov processes. In my last lecture I told you that we would now be talking about some stochastic processes, and in fact one of them is the Poisson process.
You can describe the Poisson process as a probabilistic model used for describing unpredictable events, because there is a chance element. For example, when an earthquake will happen, or when a certain person walks into a post office, these are all unpredictable events. So, the Poisson process is a probabilistic model, and of course, when you model a situation there are certain rules to be satisfied. The idea here is that you are modeling a situation where the events exhibit a certain amount of statistical regularity, which means that you can approximate the occurrences by a probability distribution. So, these events exhibit, or approximately exhibit, a statistical regularity, and essentially the Poisson process is a counting process. That means these unpredictable events get counted by the model that we will create here, and then a lot of decisions and conclusions can be based on this counting process. This is what we will see in the next couple of lectures. Now, the examples, as I said, are the number of persons entering a post office or a bank up to time t. So, when I say N(t), the counting of events starts at time 0 and runs up to time t, and we count the number of events that take place in that period. For example, when you are counting the number of persons entering a post office or a bank, an event occurs when a person enters the post office. Therefore, N(t) gives you the number of people who have entered the bank or the post office up to time t, counting from starting time 0. So, from 0 to t, the number of people who have entered the bank or the post office. Other examples could be the number of children born in a town or a village up to t days or t months, whichever time period you want to fix.
The time framework is decided and then you start the counting process; in this case an event occurs when a child is born, and you keep counting these events. Next, the number of goals hit by a hockey player up to time t: the match starts, and up to time t, which may be the half time or whatever you choose, you identify a particular hockey player in the team and count the number of goals hit by that player. Here the event occurs when a goal is hit by this particular player, because you are counting the goals of one particular hockey player. Also, for example, you can pick a country or a place where there is a volcano, and you want to find out the number of times the volcano erupts. Here your time span may be 5 years or 10 years, because volcanoes, luckily, do not erupt very often; therefore your time span would be much longer than for the earlier examples. So, through these examples we realize that whatever our counting process is, N(t) must satisfy the following. First, N(t) must be nonnegative, because it counts the number of events that have taken place. Second, N(t) is integer-valued, because we are counting events. Third, for s less than t, N(s) must be less than or equal to N(t): either no event occurs after time s, or some events occur after time s, so in either case N(s) is at most N(t). Fourth, and conditions 3 and 4 can be clubbed together, for s less than t, N(t) minus N(s) equals the number of events that occur in the time interval from s to t. Here N(s) has counted the events up to time s, and after s you count the events that take place up to t.
So, this time interval will be open at s and closed at t, that is, the interval (s, t]. There are other conditions that we want to impose as well, because remember we are modeling a situation where we want to count the number of events that take place, and certain conditions will have to be met.
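The four conditions on the counting process N(t) can be checked on a small simulation. The sketch below is my own illustration, not from the lecture: it assumes (as we will later justify for the Poisson process) that the gaps between arrivals are independent exponential random variables with some rate lam, and then verifies that N(t) is nonnegative, integer-valued, nondecreasing, and that N(t) minus N(s) counts the arrivals in (s, t].

```python
import random

def simulate_arrivals(lam, horizon, seed=0):
    """Generate arrival times in [0, horizon] with i.i.d. exponential gaps."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(lam)  # exponential inter-arrival time
        if t > horizon:
            return arrivals
        arrivals.append(t)

def N(arrivals, t):
    """Counting process: number of arrivals in [0, t]."""
    return sum(1 for a in arrivals if a <= t)

arrivals = simulate_arrivals(lam=2.0, horizon=10.0)
s, t = 3.0, 7.0
assert N(arrivals, 0.0) == 0                         # starts at zero
assert N(arrivals, s) <= N(arrivals, t)              # nondecreasing for s < t
increment = N(arrivals, t) - N(arrivals, s)
assert increment == sum(1 for a in arrivals if s < a <= t)  # events in (s, t]
print(N(arrivals, s), N(arrivals, t), increment)
```

The last assertion is exactly condition 4: the increment N(t) minus N(s) counts the arrivals strictly after s and up to t, the half-open interval (s, t].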