So, I will be talking about some other discrete probability distributions. The next one is the negative binomial random variable, and the name is very suggestive. We considered the binomial distribution, where the random variable represented the number of successes when the number of trials was fixed: you perform n trials, the probability of success is p, and we ask for the probability of r successes. Here it is the reverse. Independent trials are performed, each with probability of success p, 0 < p < 1, and the trials are conducted till r successes occur. So now you go on conducting trials until r successes have occurred, and X is the number of trials required for r successes. In the binomial case the number of trials was fixed and you asked for the probability of r successes; here the number of successes is fixed and you ask how many trials are required for those r successes to take place. So here the random variable is the number of trials, and P(X = n) is very simple to write down, because you stop the experiment the moment you hit the rth success. If r successes are to occur in n trials, then in the first n - 1 trials exactly r - 1 successes must have occurred. The probability of that, using the binomial idea, is (n-1 choose r-1) p^(r-1) (1-p)^(n-r): r - 1 successes and n - r failures. Then the last trial, the nth, must be a success, contributing another factor of p, so the total number of successes adds up to r and P(X = n) = (n-1 choose r-1) p^r (1-p)^(n-r). The r - 1 successes can occur anywhere among the first n - 1 trials, but the nth trial must be a success, and n can vary from r onwards, because you want r successes.
So at least r trials have to be conducted, and therefore n takes the values r, r + 1, and so on; this can go up to infinity. Since the experiment continues till r successes occur, when you add up P(X = n) for n = r to infinity, the sum must equal 1. This is a combinatorial argument that what we have defined is a probability mass function; analytically one can also show that the sum equals 1, but that requires a little more mathematics, so we satisfy ourselves with the combinatorial argument that the trials continue until the rth success occurs. Also, P(X = n) is non-negative for n = r, r + 1, and so on, so this defines a valid PMF. Now take r = 1. That means you are just looking for the first success: X is the number of trials required for the first success. Putting r = 1, the binomial coefficient becomes 1 and the PMF is (1-p)^(n-1) p. That is, the first n - 1 trials must all end in failure, which gives (1-p)^(n-1), and the moment you hit a success you stop, giving the factor p; here n varies from 1 onwards. X is then called the geometric random variable and the corresponding PMF is the geometric distribution. Now, an interesting application of the negative binomial random variable and its distribution is known as the Banach match problem. Banach was a very famous mathematician and a heavy pipe smoker. So that he would not waste time looking for a matchbox when lighting his pipe, he would carry two matchboxes, one in his left-hand pocket and the other in the right, so that whichever pocket he reached into, he would find a matchbox and could light his pipe. That is how dependent he was on smoking his pipe.
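The PMF just derived, and its reduction to the geometric case, can be checked numerically. This is a minimal sketch; the function name, the parameter values r = 3, p = 0.4, and the truncation point 500 for the infinite sum are my own choices for illustration.

```python
from math import comb, isclose

def neg_binomial_pmf(n, r, p):
    """P(X = n): the r-th success occurs exactly on trial n."""
    return comb(n - 1, r - 1) * p**r * (1 - p)**(n - r)

r, p = 3, 0.4
# The PMF should sum to 1 over n = r, r+1, ... (truncated here).
total = sum(neg_binomial_pmf(n, r, p) for n in range(r, 500))
print(round(total, 6))  # ≈ 1.0

# With r = 1 the formula reduces to the geometric PMF (1-p)^(n-1) p.
print(isclose(neg_binomial_pmf(5, 1, p), (1 - p)**4 * p))  # True
```

The truncated sum is numerically indistinguishable from 1 because the tail decays geometrically.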
Each time he needed a match, he would take it equally likely from either of his pockets: the probability of reaching into the right-hand pocket is the same as that for the left-hand pocket. Now, consider the moment when the mathematician first discovers that one of his matchboxes is empty. Assume that both matchboxes initially carried N matches. We are asking: what is the probability that, at that moment, the other matchbox contains exactly k matches? That is, both boxes started with N matchsticks, and when he discovers that one box is empty, the other one contains exactly k matches; this is the event whose probability we have to find. Consider the case when the matchbox in the left-hand pocket has k matches left, so the box in the right-hand pocket is empty. The mathematician discovers the empty box as follows: he has used up the N matchsticks from the right-hand box, and N - k from the left-hand box, since k are left there; then, the (N + 1)th time he takes out the box from the right-hand pocket, he discovers it is empty. So the total number of trials is N + (N - k) + 1. This is exactly the situation of a negative binomial random variable, if we treat taking out a matchstick from the right-hand pocket as a success.
So we will say that putting his hand in the right-hand pocket, and, as long as the box has a matchstick, taking one out, is a success, and putting his hand in the left-hand pocket is a failure. We are looking for the (N + 1)th success, the one on which he discovers that the box is empty; the discovery is a consequence of the box having been emptied. So the random variable X, the number of trials required for N + 1 successes, is exactly a negative binomial random variable, and the event in question happens when there have been N + 1 + N - k trials in all. Therefore, by our earlier formula, at the (2N - k + 1)th trial he discovers that the right-hand box has no sticks left, and the probability is (2N - k choose N) (1/2)^(2N - k + 1), taking n = 2N - k + 1 and r = N + 1 in the negative binomial PMF with p = 1/2. But since either pocket could have been emptied first, the required probability is twice this; when you multiply by 2, the extra factor of 1/2 disappears, and the required probability is (2N - k choose N) (1/2)^(2N - k). Now, the following results on the mean and variance I am just giving you without proof, because deriving them requires a lot of mathematics that we will not do here.
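A quick sanity check on the Banach match problem formula: the probabilities over k = 0, 1, ..., N should account for every possible way the experiment can end, so they must sum to 1. The value N = 50 below is an arbitrary choice of mine for illustration.

```python
from math import comb

def banach_prob(N, k):
    """P(the other box holds exactly k matches when one box is found empty)."""
    return comb(2 * N - k, N) * 0.5 ** (2 * N - k)

N = 50
total = sum(banach_prob(N, k) for k in range(N + 1))
print(round(total, 9))  # ≈ 1.0: the distribution over k is exhaustive
```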
So I will simply state that the expected value of a negative binomial random variable with parameters r and p, that is, r successes required with probability of success p, is r/p, and the variance of a negative binomial (r, p) random variable is r(1 - p)/p^2. For example, suppose X is the number of throws of a die required till the number 1 shows 5 times. Here r is 5, and since we assume a fair die, the probability of each number showing up is 1/6, so the probability of the number 1 showing up is p = 1/6. By these two formulae, the expected value of X is 5/(1/6) = 30; that is, the expected number of throws of the die required for the number 1 to show 5 times is 30. The variance, by the formula, is 5 × (5/6)/(1/36) = 150. So that is about the negative binomial and geometric distributions. Another discrete random variable which is quite useful is the hypergeometric random variable. I will define this variable and the corresponding distribution through an example. Consider an urn containing N balls, of which m are white and N - m are black. A sample of size n is drawn without replacing the balls; that is important, the experiment is conducted without replacement. So I keep taking out balls and put them aside. Now, if X is the random variable which counts the number of white balls selected, we look at the probability that the number of white balls equals k, where k can vary from 0, 1, up to m, because there are m white balls.
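The die example above can be verified directly from the PMF: computing the mean and variance by (truncated) summation should reproduce r/p = 30 and r(1 - p)/p^2 = 150. A minimal sketch, with my own truncation point of 2000 trials:

```python
from math import comb

def neg_binomial_pmf(n, r, p):
    return comb(n - 1, r - 1) * p**r * (1 - p)**(n - r)

# Die example: r = 5 successes (rolling a 1), p = 1/6.
r, p = 5, 1 / 6
support = range(r, 2000)  # truncation of the infinite support
mean = sum(n * neg_binomial_pmf(n, r, p) for n in support)
var = sum(n**2 * neg_binomial_pmf(n, r, p) for n in support) - mean**2
print(round(mean, 3), round(var, 3))  # ≈ 30.0 and ≈ 150.0
```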
The probability is obtained by a counting argument. Out of the m white balls you want to select k, and out of the remaining N - m balls, which are black, you are selecting n - k. Your total sample size is n balls, and the total number of ways of selecting n balls from N balls is (N choose n); this we have already gone through when we were talking about counting procedures. So (N choose n) gives the total number of possible ways in which you can select n balls out of N, (m choose k) gives the number of ways of selecting k white balls from the m white balls, and (N - m choose n - k) gives the number of ways of selecting n - k black balls from the N - m black balls. Thus P(X = k) = (m choose k)(N - m choose n - k)/(N choose n). Of course, this is meaningful when k is such that n - k ≤ N - m, because there is a connection among m, k and n: if m is very large I can select k white balls, but n - k must not exceed the number of black balls, and it must not be negative. By convention we say that (N - m choose n - k) is 0 whenever n - k > N - m or n - k < 0, so the expression always has a meaning. We do not have to worry, because for such values of k the probability is 0 and no mass is attached to those values. Again, when we draw a sample of n balls, each white ball, numbered 1, 2, ..., m, may appear or may not appear; the formula takes care of all possible cases, and therefore this is a valid PMF.
That means the summation of P(X = k) for k varying from 0 to m equals 1, and these probabilities are all non-negative; therefore this is a valid PMF. This exercise we must do every time we define a random variable and its corresponding probability mass function. Now let me show you an example of where we make use of the hypergeometric random variable and its distribution, namely acceptance sampling in quality control. Of course, the numbers I give here are small, but usually the numbers are much bigger. Suppose a lot of 200 items, some instrument, say, is being delivered by a manufacturer, and the manufacturer's claim is that no more than 10 percent are defective. Obviously, people do not have the time, energy and manpower to inspect all 200 items, and usually this number is very big. So the practice, and that is why it is called acceptance sampling, is this: after the shipment is received, a sample of size 10 is taken, again the numbers are just for convenience, without replacement, and if there are at most two defectives, the lot is accepted. You choose at random a sample of size 10 from the whole lot of 200 items, without replacement, and inspect those 10 items. If in that sample of size 10 you find 0, 1 or 2 defectives, you accept the whole lot; if there are more than 2 defectives, you reject the lot. This is what is called acceptance sampling in quality control. Now let me show you a simple computational exercise.
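The validity check described above, that the hypergeometric probabilities sum to 1, is easy to carry out numerically. The guard clause mirrors the convention from the text that impossible selections get probability 0; the parameter values N = 20, m = 7, n = 5 are my own illustrative choices.

```python
from math import comb

def hypergeom_pmf(k, N, m, n):
    """P(X = k) white balls in a sample of n drawn without replacement
    from N balls of which m are white; impossible selections get 0,
    matching the convention in the text."""
    if k < 0 or k > m or n - k < 0 or n - k > N - m:
        return 0.0
    return comb(m, k) * comb(N - m, n - k) / comb(N, n)

N, m, n = 20, 7, 5
total = sum(hypergeom_pmf(k, N, m, n) for k in range(m + 1))
print(round(total, 10))  # ≈ 1.0: a valid PMF
```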
For example, if 5 percent are defective in the entire lot, then it contains 10 defectives and 190 non-defectives, because 5 percent of 200 is 10. The manufacturer's claim was that no more than 10 percent are defective; let me first consider the case when 5 percent are defective. If X is the number of defectives in a sample of size 10, we want to compute P(X ≤ 2), which means P(X = 0) + P(X = 1) + P(X = 2). Let me show you the calculations. The probability of the number of defectives being x, assuming 5 percent are defective in the whole lot, is (10 choose x)(190 choose 10 - x)/(200 choose 10): from the 10 defectives I am choosing x, from the 190 non-defectives I am choosing 10 - x, and the denominator counts the ways of choosing the sample of size 10 from 200. This is the hypergeometric probability that the sample of size 10 contains x defectives. Hence, the probability of accepting a shipment that has 5 percent defectives, that is, of finding at most 2 defectives in the sample of size 10, is P(X = 0) + P(X = 1) + P(X = 2); if you look at the numbers, these three probabilities add up to 0.990935. That means the probability of accepting the whole lot is about 0.99. Now, if there are 10 percent defectives, then there are 20 defectives in the whole lot and 180 non-defectives. In that case the hypergeometric probability of x defectives in your sample of size 10 is (20 choose x)(180 choose 10 - x)/(200 choose 10).
So this is the probability of having x defectives in your sample of size 10, taken without replacement. In that case, the probability of accepting the shipment that is 10 percent defective, again requiring the number of defectives in the sample to be not more than 2, so 0, 1 or 2, comes out to be 0.9347. This is less than the probability we obtained when the shipment had 5 percent defectives: the probability of accepting a 10 percent defective shipment is smaller than the probability of accepting a 5 percent defective shipment. Obviously, the fewer the defectives, the greater the chance of accepting the shipment, because the probability of getting at most 2 defectives in a sample of size 10 is larger when 5 percent are defective than when 10 percent are defective, and this is what the numbers are saying. You can experiment with other values of the number of defective items in the whole lot and compute the probabilities accordingly. Now, relationships among the hypergeometric, binomial and Poisson distributions. I have already shown you how the binomial is approximated by the Poisson when the number of trials n is large and np converges to a moderately small number. Let me now show you the interconnection between all these, in fact four, discrete random variables that we have discussed so far. Again, say the population consists of N objects, of which a are of type A; the others are not of type A. A sample of size n is drawn without replacement, where of course n has to be between 1 and N, and a has to be between 0 and N.
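The two acceptance probabilities quoted above can be reproduced directly from the hypergeometric formula. A minimal sketch; the function name and default arguments are mine:

```python
from math import comb

def accept_prob(defective, N=200, sample=10, max_bad=2):
    """Probability of accepting the lot: at most max_bad defectives
    in a sample drawn without replacement (hypergeometric)."""
    return sum(
        comb(defective, x) * comb(N - defective, sample - x) / comb(N, sample)
        for x in range(max_bad + 1)
    )

print(round(accept_prob(10), 6))  # 5% defective lot: ≈ 0.990935
print(round(accept_prob(20), 4))  # 10% defective lot: ≈ 0.9347
```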
While discussing the hypergeometric distribution, I told you that even if n - x is bigger than N - a, then by convention the corresponding binomial coefficient is 0 and no mass is attached, so we really do not have to worry about such values of x. Now let p = a/N; this is the proportion of items of type A in the original population. I am not using it immediately, but I will make use of it later on. The probability that your sample of size n contains x objects of type A is (a choose x)(N - a choose n - x)/(N choose n): a is the total number of type A objects and your sample should have x of them, so that gives (a choose x); from the remaining N - a objects of the other kind you are choosing n - x, and the denominator is the total number of ways of choosing a sample of size n from N. Now write out these expressions in factorials: (a choose x) is a!/(x!(a - x)!), (N - a choose n - x) is (N - a)!/((n - x)!(N - a - n + x)!), and the denominator flips over to give n!(N - n)!/N! in the numerator. Let us simplify. Grouping n! with x!(n - x)!, you can see that I am heading towards the binomial coefficient (n choose x). Then a!/(a - x)! leaves the product a(a - 1)⋯(a - x + 1), since the terms after a - x + 1 cancel out. Similarly, (N - a)!/(N - a - n + x)! leaves (N - a)(N - a - 1)⋯(N - a - n + x + 1), and N!/(N - n)! leaves N(N - 1)⋯(N - n + 1) in the denominator. So P(X = x) = (n choose x) times a(a - 1)⋯(a - x + 1) times (N - a)⋯(N - a - n + x + 1), divided by N(N - 1)⋯(N - n + 1). Now, if I take out a from each of the x factors a, a - 1, ..., a - x + 1, I get a^x times the product 1(1 - 1/a)⋯(1 - (x - 1)/a). Similarly, taking out N - a from each of the n - x factors gives (N - a)^(n - x) times a product of factors of the form 1 - j/(N - a), and taking out N from each of the n factors in the denominator gives N^n times a product of factors 1 - j/N. Collecting the powers, a^x (N - a)^(n - x)/N^n = (a/N)^x (1 - a/N)^(n - x). So I can write P(X = x) as (n choose x)(a/N)^x (1 - a/N)^(n - x) multiplied by those correction factors, and the leading part is exactly the binomial probability of x successes in n trials with success probability p = a/N. Now suppose x/a is small, that means a is a large number: the number of type A objects in your total population is large.
Then (x - 1)/a is also small, and likewise n/(N - a) and n/N are small, which means your N is very large; in such a situation you can see that all the correction factors go to 1. So the whole correction reduces approximately to 1, and the hypergeometric probability of choosing x objects of a particular type in your sample reduces to the binomial. I should perhaps have pointed out earlier: when we conduct the binomial experiment, the trials occur independently and you keep counting the number of successes. In this situation, the binomial experiment corresponds to replacing the balls, because getting a white ball is a success, and if you replace the ball you have taken out, then each time the probability of getting a white ball remains the same, equal to a/N, which is p. So the probability comes out to be (n choose x) p^x (1 - p)^(n - x), which is what I wrote. The difference between the hypergeometric and the binomial, then, is that for small population sizes the sampling is without replacement, which is the hypergeometric case; but if you make the population size large, with x/a, n/(N - a) and n/N all small as I said, then the hypergeometric is approximated by the binomial probability. And I have already shown you that the binomial can be approximated by the Poisson, where we required np to be moderately small; in this case our p is a/N, so na/N should be moderately small. Then, for N large, the binomial probability goes to the Poisson.
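The convergence of the hypergeometric to the binomial can be seen numerically by growing the population while holding the proportion p = a/N fixed. The parameter choices below (n = 10, x = 3, p = 0.3) are mine:

```python
from math import comb

def hyper(x, N, a, n):
    """Hypergeometric P(X = x): sampling without replacement."""
    return comb(a, x) * comb(N - a, n - x) / comb(N, n)

def binom(x, n, p):
    """Binomial P(X = x): sampling with replacement."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, x, p = 10, 3, 0.3
for N in (50, 500, 5000, 50000):
    a = int(p * N)  # number of type-A objects, keeping a/N fixed at 0.3
    print(N, round(hyper(x, N, a, n), 6), round(binom(x, n, p), 6))
```

As N grows, the two columns agree to more and more decimal places.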
And here I have shown you that the hypergeometric goes to the binomial; by the same kind of argument, if you take lambda to be n·a/N, you can show, by again manipulating the terms, that the hypergeometric will go to the Poisson. In an earlier lecture I have shown you that the Bernoulli is binomial(1, p), and the relationship there is that when you add up n Bernoulli random variables you get the binomial. So this diagram shows you the relationship between the various discrete distributions that we have discussed so far. Let us now look at the other type of random variables, continuous random variables. So far we have looked at discrete random variables and their special cases. Now I want to describe continuous random variables, and then again we will look at their special cases. Essentially, these are random variables whose possible outcomes are uncountably infinite. Here you cannot count: for example, if you take a subset of R^2, the number of points in it is uncountable, and similarly, if you take an interval on the real line and consider all the real numbers in it, that is an uncountable set. An example is the lifetime of a transistor. Of course, you do not know when exactly a transistor will fail; you might say that because you have a finite clock you can only record the failure time to finite precision, but as you make the measurement finer and finer, you can treat the set of possible lifetimes as uncountably infinite. Another example is the arrival time of a train at a station, and so on; one can go on adding to this list, and as we go through this topic we will come across many continuous random variables.
Now, one way to define a continuous random variable is this: suppose there is a non-negative function f(x), defined for all real x on the line (-∞, ∞), having the property that for any set B of real numbers, the probability that X belongs to B is the integral of this non-negative function over B, P(X ∈ B) = ∫_B f(y) dy. By this definition, if B is the whole real line, then since X takes some real value with probability 1, we require ∫ from -∞ to ∞ of f(y) dy to be 1. We put this condition, and f is then known as the probability density function, as opposed to a probability mass function, because now this is the continuous case. So we differentiate between the continuous and the discrete cases: here the function is a probability density function, and in the discrete case we called it a probability mass function. Therefore, if there is such a function, non-negative and satisfying these conditions, we say that f is the probability density function of the continuous random variable X. Let me give you some more idea about continuous random variables. One can also define a continuous random variable X as one whose distribution function is continuous everywhere. Actually, the name has come from here: the distribution function of a continuous random variable is continuous, and that is why we call the random variable continuous. We say a random variable is discrete because its cumulative distribution function has jumps, and we say that a random variable is continuous if its distribution function F_X is continuous everywhere. That is one definition.
Another is this: a random variable X is said to be absolutely continuous if there exists an integrable function f_X from R to R such that f_X is non-negative for all x in R, and its distribution function F_X satisfies F_X(x) = ∫ from -∞ to x of f_X(t) dt for every real x. That means the probability P(X ≤ x) is obtained as this integral. And since the distribution function has the property that F_X(x) tends to 1 as x goes to +∞, the value of the integral ∫ from -∞ to ∞ of f_X(t) dt is also 1, and hence the non-negative function f_X is actually the PDF of X. So either way: you may define the distribution function F first and then say that the f appearing in the integral is the PDF, or you may define the PDF first and then define the distribution function from it. Most of the time in this course I will not keep saying "absolutely continuous"; whatever is absolutely continuous I will simply call continuous, and I will distinguish among discrete, mixed and continuous random variables. What I refer to as a continuous random variable is actually, by definition, absolutely continuous, because the way we define continuous random variables and the PDF in this course follows exactly this definition: F_X(x) = ∫ from -∞ to x of f_X(t) dt for x in R.
Whichever way, I just thought that one needs to say a little more than I had said in the lecture about continuous random variables; through examples we will come to know quite a bit more about the various kinds of continuous random variables that we come across and their properties. Now, if B is an interval (a, b), then P(X ∈ (a, b)) = ∫ from a to b of f(x) dx, and that is what I am trying to show you: if this is the curve of f(x), then the probability is the area under the curve between a and b. If a = b, the integral runs from a to a, and by the definition of the integral this is 0. Therefore the mass at a point for a continuous random variable is 0: the probability that a continuous random variable assumes any fixed value is 0. This is why, although I have been writing the interval sometimes as open and here as closed, it does not matter whether you include the point a or the point b or both, since the masses at the individual fixed points a and b are 0; no probability is attached to fixed values of a continuous random variable. Then, defining the cumulative distribution function, the probability that X ≤ x, by our definition this is F(x) = ∫ from -∞ to x of f(y) dy. This is the cumulative distribution function of X, and from the figure we can see that P(a ≤ X ≤ b) = F(b) - F(a).
That means the area from -∞ to a under the curve f is F(a), and the area up to b is F(b); you are essentially looking for the area inside the strip between a and b, the area under the curve there. So, for X a continuous random variable, the cumulative distribution function has been defined this way, and we also have the concept of the probability over an interval (a, b) as the area under the curve. Let me emphasize two points here. As opposed to a discrete random variable, where it mattered whether the endpoints were included when talking of the probability of the random variable lying in an interval, because every point can carry positive probability, for a continuous random variable it does not matter, because the probability at any single point is 0. So whether I say a ≤ X ≤ b or a < X ≤ b makes no difference, and that is why I write the integral from a to b of f(y) dy. For a continuous random variable there is no concept of probability, or mass, at a point; it is a density, so we measure it over an interval. Secondly, for a continuous random variable we will call F the cumulative distribution function; that is the proper terminology.
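The area-under-the-curve idea can be illustrated with a concrete density. As an assumed example of my own (the lecture does not specify one), take the exponential PDF f(x) = e^(-x) for x ≥ 0; then P(a < X < b) = F(b) - F(a), computed below by numerical integration, and including or excluding the endpoints changes nothing.

```python
from math import exp

# Assumed illustrative density: exponential, f(x) = e^{-x} for x >= 0.
def f(x):
    return exp(-x) if x >= 0 else 0.0

def F(x, steps=100_000):
    """CDF by trapezoidal integration of f from 0 up to x."""
    if x <= 0:
        return 0.0
    h = x / steps
    return sum(f(i * h) + f((i + 1) * h) for i in range(steps)) * h / 2

a, b = 0.5, 1.5
# P(a < X < b) = F(b) - F(a), the area under f over the strip (a, b).
print(round(F(b) - F(a), 4))  # ≈ e^{-0.5} - e^{-1.5} ≈ 0.3834
```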
And here again, in some places it may just happen that without realizing it I may have used the word density, but the proper term is cumulative distribution function, and this notation holds whether your random variable X is continuous or discrete. We saw that for a discrete random variable it was a summation, and here it will be in the form of an integral, because you are computing the probability of a continuous random variable over an interval. Now, we want to check that F(x) has the properties of a CDF. The first thing to check is that the limit of F(x) as x goes to plus infinity is 1, and this is immediate, because that limit is the integral from minus infinity to infinity of f(y) dy, and since f is a probability density function this integral equals 1 by definition. Next, we must show that the limit of F(x) as x goes to minus infinity is 0. For this, the argument may look repetitive because we have already used it elsewhere, but let me just repeat it. Let x_n be a decreasing sequence such that x_n goes to minus infinity, and define the events E_n as the set of all points in the sample space for which X is less than or equal to x_n. Then E_n, for n going from 1 to infinity, is a decreasing sequence of events, and the limit of E_n as n goes to infinity is empty, because in the limit you would be talking of the event where X is less than or equal to minus infinity.
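These two limit conditions can be watched numerically along a sequence of arguments. As an assumed example (not one from the lecture), take the standard normal CDF, which Python can express through the error function:

```python
import math

def F(x):
    # Standard normal CDF via the error function (an assumed example)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Along a decreasing sequence x_n -> -infinity, F(x_n) decreases toward 0,
# mirroring the decreasing-events argument; at large positive x, F is near 1.
down = [F(x) for x in (-1.0, -2.0, -4.0, -8.0)]
print(down)        # strictly decreasing, the last value is essentially 0
print(F(8.0))      # essentially 1
```

The decreasing values of F(x_n) are exactly the probabilities P(E_n) of the shrinking events E_n = {X ≤ x_n}; continuity of the probability function is what lets us pass to the limit P(∅) = 0.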
So, there can be no real number which is less than minus infinity, and therefore the limit of E_n as n goes to infinity is phi, the empty set. By continuity of the probability function, the probability of the limit of E_n as n goes to infinity is the same as the limit of the probabilities P(E_n), and this is 0 because the probability of the empty set is 0. So, this property is also satisfied, and we will verify the other properties as well. To show that F is monotonic, take a < b; we want to show that F(a) ≤ F(b). Since F(b) can be written as the integral from minus infinity to b of f(y) dy, and b is bigger than a, I can break up this integral into the integral from minus infinity to a of f(y) dy plus the integral from a to b of f(y) dy. Now, the second term is non-negative, because f is a non-negative function and [a, b] is a finite interval. Therefore F(b) is at least F(a), so the function F is monotonic, and hence it satisfies all the conditions for a cumulative distribution function; we have a proper definition. Now, suppose you just define a function like this, say F(x) equal to e^{x/2} for x < 0 and 1 − e^{x/2} for x ≥ 0, and you ask whether it is the CDF of some random variable X. What do we need to verify? Note that I am checking whether this is a CDF, not whether it is a pdf. First, the limit of e^{x/2} as x goes to minus infinity should be 0, and that is fine, because as x goes to minus infinity, e^{x/2} goes to 0. Now, look at the limit of 1 − e^{x/2} as x goes to plus infinity.
So, what is happening here is that e^{x/2} goes to infinity as x goes to plus infinity, so 1 − e^{x/2} goes to minus infinity and not to 1. Therefore, this function does not define a CDF. So, one can continue in this way: if any one of the conditions fails to be satisfied, we conclude that the function we are looking at is not a CDF; and similarly, just as we did in the discrete case, when we define a continuous random variable we verify whether the proposed f is a valid probability density function or not. Later on I will also give you examples where the random variable is of the mixed kind; that means on some portion of the real line it may behave like a discrete random variable, and on some other portion it may behave like a continuous random variable. Now, another thing that I want to point out is that since you are defining your CDF as an integral, by the theory of integral calculus it turns out that it has to be a continuous function. That gives another way of distinguishing the two kinds: if on a certain part of the real line the CDF is continuous, then on that part the random variable behaves like a continuous random variable, and we saw that when it is a discrete random variable, the graph of the CDF has jumps, and the jump at a point is equal to the probability of the random variable at that particular point. So, you can now have a CDF which is in part a step function and in part a continuous function; I will try to give you examples of such cases, and in that case we will say that the random variable is of the mixed kind. So, all kinds of random variables exist: discrete, continuous, and the mixed kind.
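Both points above can be sketched numerically. The first function G below is the failed example from the lecture; F_mixed is my own assumed illustration of a mixed-type CDF (a jump of height 0.5 at 0 followed by a continuous exponential part), not one given in the lecture:

```python
import math

def G(x):
    # The lecture's example: G(x) = e^{x/2} for x < 0, 1 - e^{x/2} for x >= 0
    return math.exp(x / 2.0) if x < 0 else 1.0 - math.exp(x / 2.0)

# The condition at minus infinity holds, but at plus infinity G diverges
# to minus infinity instead of tending to 1, so G is not a CDF.
print(abs(G(-50.0)) < 1e-9)   # the limit at minus infinity is fine
print(G(10.0))                # a large negative number, nowhere near 1

def F_mixed(x):
    # A sketch of a mixed-type CDF (an assumed example): a jump of height
    # 0.5 at x = 0, then a continuous exponential part for x > 0.
    if x < 0:
        return 0.0
    return 0.5 + 0.5 * (1.0 - math.exp(-x))

print(F_mixed(-1e-9), F_mixed(0.0))      # 0.0 then 0.5: the jump is P(X = 0)
print(abs(F_mixed(50.0) - 1.0) < 1e-9)   # and it still tends to 1
```

The step in F_mixed at 0 is exactly the discrete part (a point mass of 0.5), while the smooth part for x > 0 carries the remaining mass continuously, which is the behaviour described above for random variables of the mixed kind.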