I will begin this lecture by discussing Exercise 4 with you. This exercise is on random variables: their pdf, cdf, expectation, and so on. So, let us look at Question 1. Consider the function f(x) given by c(3x - x^2) for x between 0 and 4, and 0 otherwise. The question asked is: could f(x) be a probability density function for any value of c? Without proceeding further, you can see that the function can be written as cx(3 - x), so 3x - x^2 is negative for x greater than 3, and your interval is 0 to 4. You might say we can make c negative, but then at x = 2, for example, 3x - x^2 is positive, so c(3x - x^2) becomes negative there instead. So, in fact, for no value of c is the given function a pdf, because it is not non-negative for all x in the interval [0, 4]. Now suppose we require the function to be a probability density function for x between 0 and 3. That makes sense, because on the interval [0, 3], 3x - x^2 is non-negative. Then c will be chosen as some number which is non-negative, in fact positive, and the way you obtain c is to integrate the given function from 0 to 3 and set the integral equal to 1. So you can answer this question in that way. In Question 2, the probability density function of X, the lifetime of a certain type of electronic device measured in hours, is given by f(x) = 15/x^2 for x >= 15, and 0 for x < 15. That means the device is guaranteed to run for at least 15 hours. Now find P(X > 30). Again, you can do it by finding the cdf, or by integrating the density from 30 to infinity, since f(x) = 15/x^2 for x >= 15.
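A quick numeric sketch of Questions 1 and 2, just to illustrate the two computations described above (the exact values follow from the antiderivatives):

```python
# A quick numeric check of Questions 1 and 2 (a sketch, not part of the exercise).

def integrate(f, a, b, n=100_000):
    """Midpoint-rule numeric integration, purely for illustration."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Question 1: on [0, 3], the integral of 3x - x^2 is 27/2 - 9 = 9/2, so c = 2/9.
area = integrate(lambda x: 3 * x - x**2, 0, 3)
c = 1 / area
print(round(c, 4))        # 0.2222, i.e. c = 2/9

# Question 2: the integral of 15/x^2 from b to infinity is 15/b, so P(X > 30) = 15/30.
p = 15 / 30
print(p)                  # 0.5
```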
Next, what is the cumulative distribution function of X, and what is the probability that, of 6 such devices, at least 3 will function for at least 15 hours? What assumption are you making? Here, of course, you will assume that the devices function independently of each other. Then you find the probability p of one device functioning for at least 15 hours from the density, and "at least 3 out of 6" becomes a binomial probability with that p. So you can understand what all you have to do; I leave the rest to you. Question 3: for a random variable Y, show that E[Y] = integral from 0 to infinity of P(Y > y) dy minus the integral from 0 to infinity of P(Y < -y) dy. Actually, Question 5 should precede Question 3: in Question 5 I am asking you to show the same thing for a non-negative random variable Y, in which case the second integral is not there. So first show that for a non-negative random variable Y, E[Y] = integral from 0 to infinity of P(Y > t) dt; once you show this, you can go back to Question 3 and do the rest for the case where Y can take both positive and negative values. Question 4: X is a random variable that takes values between 0 and c, that is, P(0 <= X <= c) = 1. Show that Var(X) <= c^2/4. So I am just asking you to get an upper bound for the variance, and you see the only information you are given is that X lies between 0 and c; all the mass of this random variable is between 0 and c.
Now, there is a hint: one approach is to first argue that E[X^2] <= c E[X]. It should have been stated that X is a non-negative random variable, because 0 <= X <= c; this also implies that c is non-negative. The inequality E[X^2] <= c E[X] follows because x^2 <= cx for all x in [0, c], since x is non-negative and c is a non-negative number; taking expectations on both sides of x^2 <= cx gives it. Then you show the rest. I will not discuss the second hint; you should think about it and get the answer. Question 5, second part: obtain E[X^n] and show that it equals the integral from 0 to infinity of n x^(n-1) P(X > x) dx. The hint here is to start with E[X^n] as the integral from 0 to infinity of P(X^n > y) dy, then substitute y = x^n, so that dy = n x^(n-1) dx, and that is how you get this form. This you should be able to do. Question 6: the number of minutes of playing time of a certain high school basketball player in a randomly chosen game is a random variable whose probability density function is given in the following figure. So this is the graph of the pdf for the number of minutes the player gets to play. Find the probability that the player plays over 15 minutes.
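The hint for Question 4 can be completed into a short derivation (writing E[X] = m for brevity):

```latex
\mathrm{Var}(X) = E[X^2] - (E[X])^2 \le cE[X] - (E[X])^2 = m(c - m)
\le \max_{0 \le m \le c} m(c - m) = \frac{c^2}{4},
\qquad \text{since } x^2 \le cx \text{ for all } x \in [0, c].
```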
So here, as I have told you, if you are saying that the player plays over 15 minutes, then you are asking for P(X >= 15). The point 15 lies somewhere between 10 and 20, and you know the height of the graph at different places, so the area to the right of 15 on the minutes axis is the probability that the player spends more than 15 minutes on the court. Similarly for "between 20 and 35 minutes": the area between 20 and 35 can be found immediately just by looking at the graph; you do not have to do any integration, and in any case the functional form of the pdf is not given to you. For "less than 30 minutes", the area to the left of 30 is the answer, and for "more than 36 minutes", the area to the right of 36. This was just to illustrate how, when it is convenient, you can simply look at the graph of the pdf and read off the required probabilities. Question 7: suppose that the travel time from your home to your office is normally distributed with mean 40 minutes and standard deviation 7 minutes. If you want to be 95 percent certain that you will not be late for an office appointment at 1 p.m., what is the latest time that you should leave home?
You have to read this problem two or three times. What we are asking is: you want a travel-time budget that will cover the journey 95 percent of the time. So if X is the random variable denoting the time in minutes that you take from home to office, then Z = (X - 40)/7 is a standard normal variable. You want to find the number t such that P(Z <= t) = 0.95; from the tables you get the value of t. The corresponding value of X is 40 + 7t minutes, so the latest starting time is 40 + 7t minutes before 1 p.m. Then you will be in the office by 1 p.m. 95 percent of the time. So just read this problem carefully. Now Question 8, the median of a continuous random variable: I have to explain that the median is the point at which half the probability mass lies on one side and half on the other. I think I have already shown you that for a normal distribution x = mu is the median. Now find the median for X uniformly distributed over (a, b) and for X exponential with mean lambda; this I have discussed in class. Next: if X has hazard rate function lambda_X(t), compute the hazard rate function of aX, where a is a positive constant. You can apply the definition of the hazard rate function and get the answer. Then: the lung cancer hazard rate of a t-year-old male smoker is given by a certain function. I have discussed in the lecture that if you are given the hazard rate function, then you can compute the cdf of the random variable.
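Question 7 can be sketched with the standard library's normal-distribution helper; the only assumption beyond the problem statement is that we read t from the inverse cdf rather than a printed table:

```python
from statistics import NormalDist

# Question 7 (a sketch): travel time X ~ Normal(mean=40, sd=7) minutes.
# Find t with P(Z <= t) = 0.95, then the budget is 40 + 7t minutes.
t = NormalDist().inv_cdf(0.95)      # ~ 1.645, the 95th percentile of Z
budget = 40 + 7 * t                 # ~ 51.5 minutes
print(round(t, 3), round(budget, 1))
# So leave roughly 51.5 minutes before 1 p.m., i.e. by about 12:08 p.m.
```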
And so, assuming that a 40-year-old male smoker survives all other hazards, that is, we are considering only death due to smoking, what is the probability that he survives to age 50, and to age 60, without contracting lung cancer? I have discussed part of this problem with you in the lecture, so you should be able to do it. Next, X is uniformly distributed over (-1, 1). While discussing functions of random variables in the last lecture, I showed how to handle P(|X| > 1/2), that is, X less than -1/2 or X greater than 1/2, and how to find the density function of the random variable |X|. So you should be able to do it. Question 12: the number of years a radio functions is exponentially distributed with parameter lambda = 1/8, so the mean is 8 years. If Jones buys a used radio, what is the probability that it will be working after an additional 8 years? Remember, the exponential distribution has the memoryless property, so you can answer Question 12 as well. With all these hints, I hope you will enjoy doing this exercise. So, let me now continue. In the last lecture I discussed functions of a random variable: how you find their cdfs, pdfs, and pmfs. Now let us talk about the expectation of a function of a random variable. If X is a discrete random variable with p(x) as its pmf, and g is a real-valued function (I should have said that g is a real-valued function of x), then g(X) is itself a random variable, since X is a random variable, and we define E[g(X)] = sum over i of g(x_i) p(x_i). Let us see how we arrive at this form. Start with this summation; what I do is group together all the x_i for which the value g(x_i) equals some y_j, because g need not be one-to-one.
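The memoryless property behind Question 12 can be sketched directly; the age s of the used radio below is a hypothetical value, chosen only to show that it does not matter:

```python
import math

# Memoryless property sketch (Question 12): X ~ Exponential(rate = 1/8),
# so P(X > t) = exp(-t/8). For a used radio that has already lasted s years,
# P(X > s + 8 | X > s) equals P(X > 8), whatever s is.
lam = 1 / 8

def survival(t):                       # P(X > t)
    return math.exp(-lam * t)

s = 5.0                                # hypothetical current age of the radio
cond = survival(s + 8) / survival(s)   # P(X > s + 8 | X > s)
print(round(cond, 4), round(survival(8), 4))   # both 0.3679, i.e. e^{-1}
```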
So I collect all the x_i which give me the same value g(x_i) = y_j, and I rewrite the sum with an outer sum over j and an inner sum over those i with g(x_i) = y_j. For each such group, g(x_i) is the common value y_j, so I can pull y_j out, and what remains is the sum of p(x_i) over all i such that g(x_i) = y_j. Since the x_i are distinct values of a discrete random variable, these probabilities add up, and their total is exactly P(g(X) = y_j), the probability of the event that g(X) takes the value y_j. So the whole thing becomes sum over j of y_j P(g(X) = y_j), which by definition is E[g(X)]. So this is the way we define it for a discrete random variable; the simple formula is E[g(X)] = sum over i of g(x_i) p(x_i), and I have tried to validate it for you by manipulating the summation terms. Then, in Question 5 of Exercise 4, which I just discussed with you, you were asked to show that E[X] = integral from 0 to infinity of P(X > y) dy when X is a non-negative continuous random variable. Now, when I want to talk about E[g(X)], I will obtain this formula for the case where g(X) is non-negative, and then you should be able to take care of the rest, because remember Question 3 is the general version where the variable can take negative and positive values both. In that case g(X) can also be a general function taking negative and positive values, but once you understand the non-negative case, you will be able to do the general case also.
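The grouping argument above can be checked on a small made-up pmf, with g(x) = x^2 deliberately chosen to be not one-to-one:

```python
from collections import defaultdict

# Sketch: check E[g(X)] = sum_i g(x_i) p(x_i) = sum_j y_j P(g(X) = y_j)
# for a small illustrative pmf and g(x) = x^2 (not one-to-one).
pmf = {-2: 0.2, -1: 0.3, 1: 0.1, 2: 0.4}
g = lambda x: x * x

# Direct formula: sum over the values of X.
lhs = sum(g(x) * p for x, p in pmf.items())

# Grouped formula: collect probabilities by the value y_j = g(x_i).
grouped = defaultdict(float)
for x, p in pmf.items():
    grouped[g(x)] += p
rhs = sum(y * p for y, p in grouped.items())

print(round(lhs, 6), round(rhs, 6))   # 2.8 2.8
```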
So, I am doing it for g(X) non-negative. Using Question 5 of Exercise 4, since g(X) is non-negative, E[g(X)] = integral from 0 to infinity of P(g(X) > y) dy. Now I write out the inner probability as an integral: for each y, P(g(X) > y) is the integral of f(x) dx over the set of x where g(x) > y. So the whole expression is a double integral, with y running from 0 to infinity and, for each y, x ranging over the region where g(x) > y. Then I change the order of integration: for a fixed x with g(x) > 0, y runs from 0 to g(x), and then x ranges over all points where g(x) > 0. Whatever the shape of g, whether it starts at zero or not, this covers the same region: where before I was integrating along horizontal slices with respect to x, I am now integrating along vertical lines, each line for a fixed x running from y = 0 up to y = g(x). The inner integral of dy from 0 to g(x) is simply g(x), and so we are left with the integral of g(x) f(x) dx over the set where g(x) > 0.
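Written out, the interchange of the order of integration is:

```latex
E[g(X)] = \int_0^\infty P(g(X) > y)\,dy
        = \int_0^\infty \Big( \int_{\{x \,:\, g(x) > y\}} f(x)\,dx \Big)\,dy
        = \int_{\{x \,:\, g(x) > 0\}} \Big( \int_0^{g(x)} dy \Big) f(x)\,dx
        = \int_{\{x \,:\, g(x) > 0\}} g(x)\,f(x)\,dx .
```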
The last integral is taken over all x such that g(x) > 0, and this equals E[g(X)]. Now it will be good if you can sit down and do the negative part yourself and then add the two pieces up. In other words, I am also telling you how to do Question 3: first do Question 5, then Question 3, and then you can apply the same idea to get this result for a general function g(X). Now you can immediately write down that if you take aX + b as a function of the random variable X, then E[aX + b] = a E[X] + b; you can verify this by writing out the actual expression. Similarly for the variance: the expression inside is (aX + b - a E[X] - b)^2, and the b's cancel, so you are left with a^2 (X - E[X])^2; since a^2 is a constant it comes out of the expectation, giving a^2 E[(X - E[X])^2], which is a^2 Var(X). I handled this case explicitly because we have in fact already used these formulas, so I thought we should formalize them once we are talking about expectations of functions of random variables. Another interesting expectation of a function of a random variable is the moment generating function, and this is very important, in the sense that it sometimes gives you all the information about the moments more easily, once you know the moment generating function. The definition is M_X(t) = E[e^(tX)], and we call M_X(t) the moment generating function of X, for all values of t for which this expectation exists.
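The linearity and variance-scaling facts just stated can be checked empirically; the sample distribution and the constants a, b below are arbitrary choices for illustration:

```python
import random
import statistics

# Sketch: check E[aX + b] = a E[X] + b and Var(aX + b) = a^2 Var(X)
# on a sample (the identities hold exactly for sample mean and variance too).
random.seed(0)
xs = [random.uniform(0, 1) for _ in range(200_000)]
a, b = 3.0, -2.0
ys = [a * x + b for x in xs]

mean_gap = abs(statistics.fmean(ys) - (a * statistics.fmean(xs) + b))
var_ratio = statistics.pvariance(ys) / statistics.pvariance(xs)
print(round(mean_gap, 6), round(var_ratio, 2))   # 0.0 9.0  (9.0 = a^2)
```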
Some time ago I showed you that this expectation need not exist for all t, so whenever it exists for some values of t, we will say that this is the moment generating function of X for those values. Here t is a real number; sometimes the expectation exists for all t, and sometimes it does not. For the discrete case, when X is a discrete random variable with p as its pmf, M_X(t) = E[e^(tX)] = sum over x of e^(tx) p(x), because we just wrote down that for any function of a random variable, E[g(X)] = sum over i of g(x_i) p(x_i); I am applying that formula. The summation is over all x for which p(x) is positive, because otherwise the corresponding contribution is 0. If X is a continuous random variable with f(x) as its pdf, then M_X(t) is the integral from minus infinity to infinity of e^(tx) f(x) dx, for all values of t for which the integral exists; again, the moment generating function is defined only for those values of t for which the corresponding expectation exists. Now, if I differentiate the expression for M_X(t), this is d/dt of E[e^(tX)], and I want to take the differentiation sign inside the expectation. In the discrete case this is easy to explain: the sum defining the moment generating function is a convergent series for the values of t we are considering, so the differentiation can be passed through the summation sign. If X is a continuous random variable, justifying the interchange requires higher-level mathematics, so I am not doing it here.
The result is this: if the moment generating function of X exists for all t in an interval (-a, a) for some real a > 0, that is, an interval around t = 0, then it can be shown that you can exchange the differentiation and integration signs. So if your random variable X has the property that the moment generating function exists in an interval around the origin, then differentiating inside gives M'_X(t) = E[X e^(tX)], because differentiating e^(tX) with respect to t brings down a factor of X. At t = 0 you can see that M'_X(0) = E[X], the first moment: the first derivative of the moment generating function evaluated at 0 is the mean, the expected value of X. Similarly, if you differentiate again, you get E[X^2 e^(tX)], so M''_X(0), the second derivative evaluated at t = 0, gives E[X^2], the second moment. In general, the n-th derivative of the moment generating function evaluated at t = 0 gives E[X^n]. So once you compute the moment generating function, you can get the information about all the moments through this formula. Let me now start applying the definition of the moment generating function to the special random variables that we have gone through so far. For a binomial random variable, E[e^(tX)] = sum over r from 0 to n of e^(tr) C(n, r) p^r (1 - p)^(n - r), because X takes the value r with that probability.
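The moments-from-derivatives rule can be checked numerically on a small discrete random variable; finite differences stand in for the analytic derivatives at t = 0:

```python
import math

# Sketch: for a small discrete rv, check numerically that derivatives of
# M(t) = E[e^{tX}] at t = 0 give the moments E[X] and E[X^2].
pmf = {0: 0.5, 1: 0.3, 2: 0.2}
M = lambda t: sum(p * math.exp(t * x) for x, p in pmf.items())

h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)               # central difference ~ M'(0)
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2       # second difference ~ M''(0)

EX = sum(x * p for x, p in pmf.items())         # 0.7
EX2 = sum(x * x * p for x, p in pmf.items())    # 1.1
print(round(m1, 4), round(m2, 4))               # 0.7 1.1
```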
Combining e^(tr) with p^r, each term becomes C(n, r) (p e^t)^r (1 - p)^(n - r), and you can see this sum is again a binomial expansion, of the expression (p e^t + 1 - p)^n. This exists for all values of t, because the expansion is valid no matter what t is. So M_X(t) = (p e^t + 1 - p)^n. Now, what we are trying to say is that if you are given an expression like (0.3 e^t + 0.7)^n as an mgf, then you can immediately say, just by looking at the form of the moment generating function, that it is the mgf of a binomial random variable with p = 0.3 and the given n; the two parameters can be read off directly. If you now differentiate this expression once, you get M'_X(t) = n p e^t (p e^t + 1 - p)^(n - 1), and at t = 0 this reduces to np, which is E[X]. If you differentiate again, by the product rule, M''_X(t) = n(n - 1)(p e^t)^2 (p e^t + 1 - p)^(n - 2) + n p e^t (p e^t + 1 - p)^(n - 1). In any case, when you evaluate this at t = 0, e^t becomes 1.
Then p e^t + 1 - p is also 1, and 1 raised to n - 2 is 1, so at t = 0 the first term gives n(n - 1)p^2 and the second term gives another np. So the second moment is E[X^2] = n(n - 1)p^2 + np. Now it makes sense: the variance is E[X^2] - (E[X])^2 = n^2 p^2 - n p^2 + np - n^2 p^2; the n^2 p^2 terms cancel, leaving -n p^2 + np = np(1 - p) = npq, which is the formula for the variance. So please be careful when you are differentiating these expressions. Similarly, we can now obtain the moment generating function of a Poisson random variable: E[e^(tX)] = sum over n from 0 to infinity of e^(tn) lambda^n e^(-lambda) / n!, where n is the value the random variable is taking, so we are at X = n. Here again I couple e^(tn) with lambda^n, so each term becomes (lambda e^t)^n e^(-lambda) / n!. Taking e^(-lambda) outside, the sum of (lambda e^t)^n / n! is the expansion of e^(lambda e^t). The expressions look a little complex, but handling them is not much of a problem. So in this case the series is convergent for all values of t, the mgf is defined for all t, and the whole thing can be rewritten as M_X(t) = e^(lambda(e^t - 1)). So, if you differentiate.
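The binomial calculation above can be sanity-checked numerically; the values n = 6, p = 0.3 are arbitrary illustrative choices:

```python
import math

# Sketch: verify the binomial mgf M(t) = (p e^t + 1 - p)^n against the
# direct sum, and check M'(0) = np and M''(0) = n(n-1)p^2 + np.
n, p = 6, 0.3

def M_direct(t):
    return sum(math.comb(n, r) * p**r * (1 - p)**(n - r) * math.exp(t * r)
               for r in range(n + 1))

M = lambda t: (p * math.exp(t) + 1 - p)**n
print(round(abs(M(0.7) - M_direct(0.7)), 10))   # 0.0: the two forms agree

h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2
print(round(m1, 3), round(m2, 3))   # 1.8 = np,  4.5 = n(n-1)p^2 + np
```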
So here again, if for example I am given e^(3(e^t - 1)) as the mgf, then immediately by looking at this function I can say that the corresponding random variable is Poisson with mean 3; the mgf of a random variable characterizes its probability distribution. Now, when you differentiate this, let me go through the calculations carefully, because there might be an error otherwise. The first derivative of M with respect to t is lambda e^t times e^(lambda(e^t - 1)), and evaluated at t = 0 that gives lambda, which is E[X]. For the second-order derivative there are now two factors involving t, so by the product rule: the first term is lambda e^t times e^(lambda(e^t - 1)), plus the derivative of the exponential gives lambda e^t times lambda e^t times the same exponential factor. Evaluated at t = 0, you get lambda from the first term and lambda^2 from the second, so E[X^2] = lambda + lambda^2. The variance is then lambda + lambda^2 minus lambda^2, the square of the expectation, so the variance is again lambda; this verifies the result by an alternate way of computing the same quantities. Next, the exponential random variable with lambda as its parameter: M_X(t) is the integral from 0 to infinity of e^(tx) lambda e^(-lambda x) dx. Here again I couple the powers of e, so M_X(t) is the integral from 0 to infinity of lambda e^(-(lambda - t)x) dx, which I can rewrite as lambda/(lambda - t) times the integral from 0 to infinity of (lambda - t) e^(-(lambda - t)x) dx. And you can see that this integral is defined only for t less than lambda, because the quantity lambda - t in the exponent must be positive.
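The Poisson mgf and its first two moments can be checked the same way; lambda = 3 matches the example read off above:

```python
import math

# Sketch: verify the Poisson mgf M(t) = exp(lam (e^t - 1)) against the
# defining series, and recover mean and variance by finite differences.
lam = 3.0
M = lambda t: math.exp(lam * (math.exp(t) - 1))

def M_series(t, terms=100):
    return sum(math.exp(t * n) * lam**n * math.exp(-lam) / math.factorial(n)
               for n in range(terms))

print(round(abs(M(0.5) - M_series(0.5)), 8))   # 0.0: closed form matches series

h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)                  # ~ lam = 3
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2          # ~ lam + lam^2 = 12
print(round(m1, 3), round(m2, 2), round(m2 - m1**2, 2))   # 3.0 12.0 3.0
```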
So that -(lambda - t)x is negative, the integrand goes to 0 at infinity, and therefore the integral is defined only for t < lambda; for t >= lambda the integral does not exist. This is important to note, and it is what I was saying earlier: the moment generating function need not exist for all values of t, and we have to specify the values of t for which the integral exists. Now, to integrate, make the substitution y = (lambda - t)x; this gives dy = (lambda - t) dx, and the integral transforms to lambda/(lambda - t) times the integral from 0 to infinity of e^(-y) dy. (This is why I wrote lambda as lambda/(lambda - t) times (lambda - t), to get it into the proper form: (lambda - t) dx becomes dy, and the exponential becomes e^(-y).) This is a simple integral: -e^(-y) evaluated from 0 to infinity is 1, so M_X(t) = lambda/(lambda - t), and the mgf exists for t < lambda, not t <= lambda. Conversely, seeing this form you can say the corresponding random variable is exponential with parameter lambda. If you do the simple verification, the derivative is M'_X(t) = lambda/(lambda - t)^2; there is a minus sign from the chain rule, but since the factor is in the denominator there is another minus sign, and the two multiply to give a plus. Therefore M'_X(0) = lambda/lambda^2 = 1/lambda, the mean. Similarly you can compute the second-order moment and so on. That is why the name is so suggestive: the moment generating function generates the different moments of the probability density function. The normal distribution again may look very cumbersome, but it is actually a simple manipulation of the terms and you get the answer. So, for a normal distribution, the moment generating function would be as follows.
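The exponential mgf can be spot-checked by numerically integrating E[e^(tX)] for one admissible t < lambda; the values lambda = 2, t = 0.5 are illustrative:

```python
import math

# Sketch: check the exponential mgf M(t) = lam/(lam - t), valid for t < lam,
# by numerically integrating E[e^{tX}], and check M'(0) = 1/lam.
lam, t = 2.0, 0.5

def E_etx(t, upper=50.0, n=200_000):
    # Midpoint rule; the tail beyond `upper` is negligible for these values.
    h = upper / n
    return sum(math.exp(t * (i + 0.5) * h) * lam * math.exp(-lam * (i + 0.5) * h) * h
               for i in range(n))

print(round(E_etx(t), 4), round(lam / (lam - t), 4))   # both 1.3333

step = 1e-6
M = lambda t: lam / (lam - t)
print(round((M(step) - M(-step)) / (2 * step), 4))     # 0.5 = 1/lam, the mean
```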
M_X(t) = (1/(sqrt(2 pi) sigma)) times the integral from minus infinity to infinity of e^(-(x - mu)^2/(2 sigma^2) + tx) dx, since we are computing E[e^(tX)]. Put everything over the common denominator 2 sigma^2: the exponent becomes -[(x - mu)^2 - 2 sigma^2 t x]/(2 sigma^2). Now collect the x terms: the coefficient of x inside the bracket is -2(mu + sigma^2 t), so I want to complete the square as (x - (mu + sigma^2 t))^2, and the reason is obvious: so that part of the integrand will integrate to 1. To make the perfect square I must add (mu + sigma^2 t)^2, and therefore I must subtract it; the mu^2 from the original square is the remnant. If you simplify, the mu^2 cancels out and the subtracted part leaves -2 mu sigma^2 t - sigma^4 t^2; this term has no x in it, it is a constant, so it comes out of the integral. What remains inside is exactly the pdf of a normal random variable whose mean is mu + sigma^2 t, not mu, but that does not matter; the other things remain the same, the variance is still sigma^2 (the normalizing factor is still sqrt(2 pi sigma^2) and the denominator in the exponent is still 2 sigma^2). Only the mean has shifted from mu to mu + sigma^2 t, so the integral is 1. You are left with e^((2 sigma^2 mu t + sigma^4 t^2)/(2 sigma^2)), and cancelling the sigma^2 gives M_X(t) = e^(mu t + sigma^2 t^2/2). So a simple form here again, and you can now differentiate this.
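The closed form just derived can be spot-checked against a direct numerical integration of E[e^(tX)]; the parameter values below are arbitrary illustrative choices:

```python
import math

# Sketch: spot-check M(t) = exp(mu t + sigma^2 t^2 / 2) for the normal
# distribution by numerically integrating E[e^{tX}].
mu, sigma, t = 1.0, 2.0, 0.3

def E_etx(t, lo=-20.0, hi=25.0, n=400_000):
    # Midpoint rule; tails beyond [lo, hi] are negligible here.
    h = (hi - lo) / n
    pdf = lambda x: math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
    return sum(math.exp(t * (lo + (i + 0.5) * h)) * pdf(lo + (i + 0.5) * h) * h
               for i in range(n))

closed = math.exp(mu * t + sigma**2 * t**2 / 2)
print(round(E_etx(t), 4), round(closed, 4))   # both 1.6161
```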
So let us just take the first derivative. What would it be? M'_X(t) = e^(mu t + sigma^2 t^2/2) times the derivative of the exponent, which is mu + sigma^2 t. At t = 0 the exponential factor is 1 and the other factor is mu, so the first derivative at 0 is the first moment, the mean. Similarly, differentiate again and find the second-order moment and the variance. I think this illustrates quite well the concept of the moment generating function and how you make use of it, and of course that it characterizes the pdf: by looking at the form of the mgf you can say what the distribution is and what the corresponding parameters are. There are still more interesting applications of the mgf; these come when I talk of jointly distributed random variables and use the concept of independence, and then all these things get connected. I will try to show you the further properties of the mgf then. So let me now look at the moment generating function of the gamma random variable. E[e^(tX)] is the integral from 0 to infinity of lambda e^(-lambda x) (lambda x)^(alpha - 1)/Gamma(alpha) times e^(tx) dx. Again, combine the exponentials; here you need t < lambda, because otherwise the integral becomes improper and does not exist. So now you have this expression. Again the trick is to manipulate the integrand: add, subtract, divide, or multiply, so that I get a familiar pdf times a constant. Here you see the parameter shifts from lambda to lambda - t, and that is what I will do. It is not difficult, because this lambda I can write as lambda/(lambda - t) times (lambda - t).
So, lambda - t becomes my parameter here: this is e^{-(lambda - t) x}. Do the same thing with the lambda inside: replace (lambda x)^{alpha - 1} by ((lambda - t) x)^{alpha - 1} and compensate by multiplying by (lambda / (lambda - t))^{alpha - 1}, because this power is alpha - 1 and the lambda^{alpha - 1} comes out. You see this part is now independent of x, and the remaining portion is exactly the pdf of a gamma distribution with parameters alpha and lambda - t. So instead of the parameter being lambda it is now lambda - t for any fixed value of t; the parameter will change as you change t, but for a given t this is the integral of a gamma pdf with parameters alpha and lambda - t, and therefore the whole integral equals 1. I am left with the constant (lambda / (lambda - t))^alpha, and so this is your mgf.

Now, if you recall, for an exponential distribution with parameter lambda the mgf is lambda / (lambda - t). In our notation alpha is the first parameter, so exponential(lambda) is gamma(1, lambda): if you are looking at gamma(alpha, lambda), then for alpha equal to 1 it becomes exponential(lambda), and the two mgfs agree. So, when I talk about jointly distributed random variables, sums of independent identically distributed random variables and so on, there will be a lot of interconnections that I would like to show. Essentially what we will be driving at here is, first of all, the result that if you have a sum of two independent random variables, then the mgf of the sum is the product of the corresponding mgfs.
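As with the normal case, the gamma mgf (lambda / (lambda - t))^alpha can be sanity-checked numerically. This Python sketch is my own (the parameter values alpha = 3, lambda = 2 and the integration settings are illustrative choices):

```python
import math

def gamma_mgf_closed(t, alpha, lam):
    # Closed form derived in the lecture, valid only for t < lam
    assert t < lam
    return (lam / (lam - t)) ** alpha

def gamma_mgf_numeric(t, alpha, lam, n=400_000, upper=200.0):
    # Midpoint Riemann sum of e^{tx} * gamma pdf over (0, upper)
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        pdf = lam * math.exp(-lam * x) * (lam * x) ** (alpha - 1) / math.gamma(alpha)
        total += math.exp(t * x) * pdf * h
    return total

alpha, lam = 3.0, 2.0
for t in (0.0, 0.5, 1.0):
    assert abs(gamma_mgf_numeric(t, alpha, lam) - gamma_mgf_closed(t, alpha, lam)) < 1e-4
```

Note that the check is only run for t strictly below lambda, matching the convergence condition in the derivation.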
So, applying that result iteratively, it turns out that gamma(alpha, lambda), in case alpha is an integer, is actually the sum of alpha independent exponentially distributed random variables, each with parameter lambda. So this is what the result will be, and that is what I am saying: you will be able to show these kinds of things with the help of the mgf, because the gamma mgf is (lambda / (lambda - t))^alpha, which is the product of alpha copies of lambda / (lambda - t), the exponential mgf. When the variables are independent you multiply the corresponding mgfs to get the mgf of the sum, so the sum of alpha independent exponential(lambda) random variables has exactly the gamma(alpha, lambda) mgf. We will develop this theory as soon as I talk of jointly distributed random variables.

Now, just two questions before I finish this topic. First: X is a continuous random variable with pdf f(x); show that the expectation of |X - a| is minimized when a is equal to the median. We have defined the median for you, and what we are saying is that this expectation will actually come out to be a function of a, and you want to minimize it: differentiate with respect to a the expression that you obtain for E|X - a|, find the critical value, and show that the minimum occurs at the point at which the area under the curve is one half. Now, I will just give you a hint. E|X - a| is the integral of |x - a| f(x) dx, which you can split according to whether x is less than a or greater than a; for x less than a you integrate from minus infinity to a.
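Taking the product rule for mgfs of independent variables as given (it is proved later in the course), here is a Python sketch of my own (parameter values alpha = 4, lambda = 2, t = 0.7 are illustrative) that checks both the algebraic identity and, by simulation, that the sum of alpha exponentials behaves like gamma(alpha, lambda):

```python
import math
import random

random.seed(42)

def exp_mgf(t, lam):
    # mgf of Exponential(lam), valid for t < lam
    return lam / (lam - t)

alpha, lam, t = 4, 2.0, 0.7

# The mgf of a sum of independent variables is the product of the mgfs,
# so alpha i.i.d. exponentials give exactly the Gamma(alpha, lam) mgf.
product = 1.0
for _ in range(alpha):
    product *= exp_mgf(t, lam)
assert abs(product - (lam / (lam - t)) ** alpha) < 1e-12

# Monte Carlo sanity check: the sum of alpha exponentials has mean alpha/lam,
# which is the Gamma(alpha, lam) mean.
n = 200_000
sums = [sum(random.expovariate(lam) for _ in range(alpha)) for _ in range(n)]
mean = sum(sums) / n
assert abs(mean - alpha / lam) < 0.02
```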
So, that piece will be the integral from minus infinity to a of (a - x) f(x) dx, because the integrand has to be non-negative, and |x - a| equals a - x for x less than a; and then the integral from a to infinity of (x - a) f(x) dx. The idea is that you now differentiate with respect to a. All of you have done this much calculus: this is differentiation under the integral sign, where the limit of integration is itself a function of a, because the whole expression is now a function of a. So, you can do this.

Second question: let X be N(mu, sigma^2), so X is normally distributed with parameters mu and sigma^2, and has mgf M(t). Define psi(t) as log M(t); then show that the second order derivative of psi at 0 is the variance of X. There are interesting functions that you can define through M(t); we will also be talking of the characteristic function. So, for example, M(t) for a normal is e^{mu t + sigma^2 t^2 / 2}, so its logarithm, which we are calling psi(t), equals mu t + sigma^2 t^2 / 2. Then psi'(t) = mu + sigma^2 t, and psi''(t) is simply sigma^2, a constant; since the second order derivative of psi is constant, psi''(0) is also sigma^2. So this is the answer, and one can go on doing a lot of interesting things with this; I will be developing some more results in the next lecture.
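The median question can be explored numerically before doing the calculus. This Python sketch is my own: the question is stated for a general pdf, and I pick Exponential(1), whose median is ln 2, purely as a concrete test case:

```python
import math

# Concrete test case (my choice): X ~ Exponential(1), whose median is ln 2,
# since F(a) = 1 - e^{-a} = 1/2 exactly at a = ln 2.
lam = 1.0
median = math.log(2) / lam

def mean_abs_dev(a, n=20_000, upper=30.0):
    # E|X - a| for X ~ Exponential(lam), via a midpoint Riemann sum
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += abs(x - a) * lam * math.exp(-lam * x) * h
    return total

# Scan a grid of candidate values of a; the minimizer should be the
# grid point closest to the median.
grid = [i * 0.05 for i in range(1, 40)]
best = min(grid, key=mean_abs_dev)
assert abs(best - median) < 0.05
```

The grid search lands on the point nearest ln 2 (about 0.693), in line with the result you are asked to prove by differentiating under the integral sign.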