So, let me now talk about the conditional distribution when the random variables are continuous. In that case it will be the conditional probability density function of X given Y; the notation is f_{X|Y}(x|y), and you write it as the joint density f(x, y) at the point (x, y) divided by the marginal of Y at y. The way to explain this is that, since in the continuous case the probability at a fixed point is 0, you should look at it as follows: multiply both sides by dx, and multiply and divide by dy on the right-hand side. Then the left-hand side represents the conditional probability that X lies between x and x + dx given that Y = y. On the right-hand side you can interpret f(x, y) dx dy as the probability that x ≤ X ≤ x + dx and y ≤ Y ≤ y + dy, with dx and dy small, and the denominator f_Y(y) dy as the probability that Y lies between y and y + dy. So you can say that this ratio represents the conditional probability of X lying between x and x + dx when you are given that Y lies between y and y + dy; this is what I have expressed here, that X belongs to (x, x + dx) given that Y belongs to (y, y + dy). Now let us look at an example. Suppose this is the joint density function, with x between 0 and 1 and y between 0 and 1. You can verify that it is a joint pdf, that is, the double integral of the expression over the unit square equals 1. To find the conditional pdf of X given Y, I need to compute the marginal of Y (I hope the arithmetic is right). When you integrate 3x(3 − x − 2y) with respect to x between 0 and 1, you get 9/2 − 1 − 3y, that is, 7/2 − 3y.
So, remember that because Y is fixed at y, this is the marginal of Y, and obviously it will be a function of y only (I had something else in mind, which I will tell you right now). Now, if you want to find the conditional pdf of X given Y, by definition it is the ratio f(x, y)/f_Y(y), and since I have computed f_Y(y) for you, the ratio comes out to be this expression. That means for a fixed y this is the conditional pdf of X given Y = y, and here x varies between 0 and 1. Now, if you have to find the probability P(X > 1/2 | Y = y), this will be the integral from 1/2 to 1 of this conditional pdf, where y is treated as a constant. So you integrate with respect to x; the arithmetic here gives 3x²/2 − x³/3 − x²y (the 2 cancels), evaluated from 1/2 to 1, and the denominator stays as it is. I do the computations here; please verify that the arithmetic adds up. The final expression is this, which is a function of y, because x has been integrated over all values greater than or equal to 1/2. So this conditional probability that X is greater than 1/2 (greater than or greater than or equal to does not matter in the continuous case), given Y = y, turns out to be this expression. Once we have this: for the discrete case we also wrote down the conditional cumulative distribution function and the conditional probability mass function, and you can write down the corresponding conditional quantities here in exactly the same way. There is nothing new, even though I did not actually write down the expression. So, let me now begin the topic of joint probability distributions of functions of random variables.
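Before moving on, the mechanics of the conditional-density computation above can be checked symbolically. Since the exact density from the slides is not reproduced in the transcript, the sketch below uses a simple hypothetical joint pdf, f(x, y) = x + y on the unit square, purely to illustrate the recipe f_{X|Y}(x|y) = f(x, y)/f_Y(y) and a conditional probability of the same shape as P(X > 1/2 | Y = y):

```python
# Symbolic check of the recipe f_{X|Y}(x|y) = f(x, y) / f_Y(y).
# NOTE: f(x, y) = x + y on the unit square is a HYPOTHETICAL density chosen
# for illustration; it is not the density used on the lecture slides.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f_xy = x + y                                        # hypothetical joint pdf, 0<x<1, 0<y<1

total = sp.integrate(f_xy, (x, 0, 1), (y, 0, 1))    # must equal 1 for a valid joint pdf
f_y = sp.integrate(f_xy, (x, 0, 1))                 # marginal of Y: y + 1/2
f_x_given_y = sp.simplify(f_xy / f_y)               # conditional pdf of X given Y = y

# the conditional pdf must integrate to 1 over x for every fixed y
norm = sp.simplify(sp.integrate(f_x_given_y, (x, 0, 1)))

# analogue of the lecture's P(X > 1/2 | Y = y): a function of y alone
p_half = sp.simplify(sp.integrate(f_x_given_y, (x, sp.Rational(1, 2), 1)))
```

The same three steps (check the joint integrates to 1, compute the marginal, divide) apply verbatim to the density on the slides.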
So, just before this I talked about joint distributions of random variables. Now let us take up the joint probability distributions of functions of random variables, because we often need these to compute certain probabilities. So, X1 and X2 are jointly distributed continuous random variables with f(x1, x2) as their joint pdf. Suppose Y1 = g1(X1, X2) and Y2 = g2(X1, X2); that is, g1 is the function of X1 and X2 that gives Y1, and g2 is the function that gives Y2. Here g1 and g2 have to satisfy certain conditions. The first is that if you look at the two equations y1 = g1(x1, x2) and y2 = g2(x1, x2), there must be a unique solution; in other words, for given values of y1 and y2 you should be able to determine the values of x1 and x2 deterministically, with no ambiguity: x1 = h1(y1, y2) and x2 = h2(y1, y2). So I can solve this set of equations to get the values of x1 and x2 for given values of y1 and y2. The second condition concerns what we call the Jacobian: the first-order partial derivatives of g1 and g2 must exist and be continuous (I have not written "first-order" here; maybe that also has to be mentioned). Then we define the determinant

J = det [ ∂g1/∂x1   ∂g1/∂x2 ; ∂g2/∂x1   ∂g2/∂x2 ].

Some people also use the notation ∂(y1, y2)/∂(x1, x2) for this, because of what appears in the numerator and the denominator.
So, there are other notations for the Jacobian as well. The requirement is that J should not be 0 for any (x1, x2) in the valid region; the matrix of partial derivatives should be nonsingular, so its determinant is nonzero. This quantity we call the Jacobian determinant, and then, when you want the pdf of (Y1, Y2), it can be obtained from the pdf of (X1, X2) as

f_{Y1,Y2}(y1, y2) = f_{X1,X2}(x1, x2) |J|^{-1},

where of course you substitute for x1 and x2 in terms of y1 and y2 (you can, because you are able to solve for them), and the two vertical lines indicate the absolute value: you compute the determinant, take its absolute value, and then the inverse. Now let me try to give you a feeling for this. First, you can see why J must be nonzero: if it were 0, then f_{Y1,Y2}(y1, y2) would not be defined, since division by 0 is not permissible, so there is no point in talking of transformations where the Jacobian is 0. There are many ways to interpret the Jacobian, but I will just show you one aspect: the absolute value of the Jacobian determinant at a point p gives us the factor by which the transformation expands or shrinks area (volume, if you are talking in three dimensions) near the point p. It will shrink area near p if the absolute value of the Jacobian determinant is less than 1, and expand it if the value is greater than 1. So if the coordinates we are considering are (x1, x2), and (y1, y2) in the transformed plane, then an element of area around p in the x1-x2 plane gets transformed to an element of area
around p in the y1-y2 plane, scaled by the absolute value of the Jacobian determinant. Let us consider this example. X1 and X2 are jointly distributed random variables with f(x1, x2) as their pdf, and let Y1 = X1 + X2 and Y2 = X1 − X2; so we define two new random variables as functions of X1 and X2. This implies that x1 = (y1 + y2)/2 and x2 = (y1 − y2)/2. Now compute the Jacobian of the transformation: differentiating y1 = x1 + x2 with respect to x1 and x2 gives 1 and 1, and differentiating y2 = x1 − x2 gives 1 and −1. The value of this determinant is −2; if you take the absolute value it is 2, and the inverse of the absolute value of the Jacobian determinant is 1/2. So, according to the formula, the pdf of (Y1, Y2) would be equal to half the pdf of (X1, X2) with x1 and x2 substituted in terms of y1 and y2, that is, (1/2) f_{X1,X2}((y1 + y2)/2, (y1 − y2)/2). And what I am trying to say, which I just said a few minutes ago, is that you can take a small element of area dy1 dy2 in the y1-y2 plane and treat f_{Y1,Y2} as the probability density over that element, while on the right-hand side f_{X1,X2} is the probability density over the corresponding element of area dx1 dx2. Since we are taking a very small element of area, I can say that the one density is half the other; that is the relationship between the two densities. I am just trying to give you a feeling about this.
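As a quick check of the Jacobian bookkeeping for this linear transformation, one can let a computer algebra system do the differentiation and the solving (a sympy sketch; the variable names are mine):

```python
# Jacobian check for y1 = x1 + x2, y2 = x1 - x2.
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

g1 = x1 + x2          # y1
g2 = x1 - x2          # y2

# forward Jacobian d(y1, y2)/d(x1, x2)
J = sp.Matrix([[sp.diff(g1, x1), sp.diff(g1, x2)],
               [sp.diff(g2, x1), sp.diff(g2, x2)]])
detJ = J.det()                      # -2, so |J| = 2 and the pdf picks up a factor 1/2

# unique inverse solution, as the theory requires
sol = sp.solve([sp.Eq(y1, g1), sp.Eq(y2, g2)], [x1, x2])

scale = 1 / sp.Abs(detJ)            # |J|^{-1} = 1/2
```

The constant determinant is what makes this example especially simple: the same factor 1/2 applies at every point.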
Anyway, now take the particular case that X1 and X2 are both uniform random variables over (0, 1), and I am treating them as independent. With the transformation Y1 = X1 + X2 and Y2 = X1 − X2, the pdf of (Y1, Y2) by that formula would be (1/2) × 1 × 1: since the variables are independent, f_{X1,X2} is simply the product f_{X1} f_{X2}, and both being uniform this product is 1 × 1. So this is the formula you get, and for the ranges, y1 = x1 + x2 varies from 0 to 2, and you can get the individual ranges, which I have drawn here. When you consider the (x1, x2) variables, the valid region is the unit square, and its area equals 1. This gets transformed to the region in the y1-y2 plane bounded by the lines y1 + y2 = 0 and y1 + y2 = 2 (these two lines), and y1 − y2 = 0 and y1 − y2 = 2 (this line and this one); it is this region that you get. So region A gets transformed to region B, and you see the area here is two units: one unit goes to two units. This is what I want to explain: since the Jacobian is a constant, the probability density is the same over the whole region, and that is the feeling I want to give you. The relationship between the two areas follows because the Jacobian is constant: the region A of area one unit goes over to a region of area 2 in the y1-y2 plane. In other words, because the density is (1/2) × 1 × 1, when you integrate it over the whole of B the result must be 1, so (1/2) × area(B) = 1, which gives area(B) = 2. And you can also see it by equating the integrals of the two pdfs.
So, see: the integral of the density over region A is 1, and the integral over region B also has to be 1. But the two densities are related by that factor of 1/2, so the density on B must be half the density on A, and hence the area of B is twice the area of A; this is why the density for (Y1, Y2) becomes half the density for (X1, X2). This is just my own interpretation, which I am trying to pass on to you. Now consider the case when X1 and X2 are independent exponential random variables with respective parameters λ1 and λ2; again I am treating them as independent. With the same transformation, Y1 = X1 + X2 and Y2 = X1 − X2, the pdf of (Y1, Y2) by that formula would be (1/2) λ1 λ2 e^{−λ1(y1+y2)/2 − λ2(y1−y2)/2}: the inverse of the absolute Jacobian is again 1/2, and you take the product of the two exponential pdfs, λ1 e^{−λ1 x1} and λ2 e^{−λ2 x2}, with x1 replaced by (y1 + y2)/2 and x2 replaced by (y1 − y2)/2. The limits: since x1 goes from 0 to infinity and x2 goes from 0 to infinity (they are both non-negative), this determines the region, and you can compute the individual limits for y1 and y2. Now, finally, if X1 and X2 are independent standard normal random variables, then the joint pdf, because they are independent, is simply the product of the two individual pdfs: (1/√(2π)) e^{−x1²/2} for the first and (1/√(2π)) e^{−x2²/2} for the second (the mean is 0 and σ² is 1). So you multiply them.
So, that becomes 1/(4π): the Jacobian factor is 1/2, the product √(2π) × √(2π) is 2π, and 1/(2π × 2) gives 1/(4π). Then the exponential is e^{−(1/2)(y1 + y2)²/4} times e^{−(1/2)(y1 − y2)²/4}, because x1 = (y1 + y2)/2, so x1² = (y1 + y2)²/4, and similarly x2 = (y1 − y2)/2 gives x2² = (y1 − y2)²/4. (At this point I also wrote that the variance of Y1 = X1 + X2 is 2 and the variance of Y2 = X1 − X2 is 2; we will come to the correct justification of this in a moment, but let me continue.) So now look at the expression (y1 + y2)²/4 + (y1 − y2)²/4, leaving out the coefficient 1/2 for the moment. When you expand, the cross terms are +2y1y2 and −2y1y2, divided by 4, so they cancel, and you get (2y1² + 2y2²)/4, that is, (y1² + y2²)/2. So the whole thing reduces to e^{−(1/2)(y1² + y2²)/2}, and I can again write 1/(4π) as (1/√(4π)) × (1/√(4π)). So now I am saying that the joint pdf is (1/√(4π)) e^{−(1/2) y1²/2} times (1/√(4π)) e^{−(1/2) y2²/2}. You see the pdf here separates into two single-variable pdfs, and so I will conclude that Y1 and Y2 are independent. Now, about the variances: I said "the variance of Y1 plus Y2 is 2", and if you look at what I was actually computing, that is a wrong statement; let me say it correctly.
The correct statement is this: because X1 and X2 are independent, each with variance 1, the variance of X1 + X2 is 2, and similarly the variance of X1 − X2 is also 2. Now recall where Y1 was defined: Y1 = X1 + X2 and Y2 = X1 − X2, and we have also seen that the sum of independent normal random variables is again normal. So Y1 is normal with mean 0 and variance 2, and so is Y2. This is in accordance with the factorization above: if Y1 is N(0, 2), then the exponent is y1²/(2σ²) = y1²/4, and the normalizing constant is 1/(√(2π) × √2), because the standard deviation is √2, which is 1/√(4π). So it is all consistent, and therefore you can say that Y1 and Y2 are also independent normal random variables. And notice what happened; that is why I took three examples for the same transformation. First I took X1 and X2 to be independent uniform, and we wrote down the pdf of (Y1, Y2): it came out to be simply a constant over the transformed region, from which it will probably follow that (Y1, Y2) is uniform over that region (think about it). When you took the distributions of X1 and X2 to be exponential, you did not get any separation, so there you cannot conclude that Y1 and Y2 are independent. But when you took X1 and X2 to be independent standard normal random variables, it turns out that X1 + X2 and X1 − X2 are also independent and normally distributed.
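A small simulation can make the normal case concrete: sampling X1, X2 as independent standard normals, Y1 = X1 + X2 and Y2 = X1 − X2 should each show variance close to 2 and covariance close to 0, consistent with the factorized pdf (standard library only; the sample size is an arbitrary choice):

```python
# Monte Carlo check: for X1, X2 iid N(0, 1), Y1 = X1 + X2 and Y2 = X1 - X2
# should be normal with variance 2 each and covariance 0.
import random

random.seed(0)
n = 200_000
y1s, y2s = [], []
for _ in range(n):
    x1 = random.gauss(0.0, 1.0)
    x2 = random.gauss(0.0, 1.0)
    y1s.append(x1 + x2)
    y2s.append(x1 - x2)

m1 = sum(y1s) / n
m2 = sum(y2s) / n
var1 = sum((v - m1) ** 2 for v in y1s) / n
var2 = sum((v - m2) ** 2 for v in y2s) / n
cov = sum((a - m1) * (b - m2) for a, b in zip(y1s, y2s)) / n
```

Zero covariance alone would not prove independence in general, but here it agrees with the exact factorization derived above.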
So this happens for the normal distribution: when X1 and X2 are independent normally distributed random variables, these particular functions, X1 + X2 and X1 − X2, will also be normally distributed and independent. I am not claiming that this happens for arbitrary functions of normal random variables, but when the functions are X1 + X2 and X1 − X2 they turn out to be independent; that they are normal we of course already know from the properties of normal distributions. So that is the case. Let me continue with another example of functions of random variables. Here X1 and X2 are independent random variables, each exponentially distributed with parameter λ. The question asked is: are the random variables U = X1 + X2 and V = X1/X2 independent? So we will find the joint density function of (U, V) and see if it can be separated into a function of u times a function of v. Write the Jacobian: for u = x1 + x2 the partial derivatives are 1 and 1, and for v = x1/x2 the partial derivatives are 1/x2 and −x1/x2². Therefore the determinant equals −(x1 + x2)/x2². Now compute the inverse functions. From the second equation you see that x1 = v x2. Substitute this into the first equation: x1 + x2 = v x2 + x2 = x2(1 + v) = u, so x2 = u/(1 + v), and from here you get x1 = uv/(1 + v). And if you write out x1 + x2 it comes out to be u; well, that is already given to us, so it is simply a verification. Fine.
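The inverse functions and the Jacobian for this transformation can likewise be verified symbolically (a sympy sketch under the assumption x1, x2 > 0, which holds for exponential variables):

```python
# Symbolic check of the inverse functions and Jacobian for
# u = x1 + x2, v = x1/x2 with x1, x2 > 0.
import sympy as sp

x1, x2, u, v = sp.symbols('x1 x2 u v', positive=True)

# solve the defining equations for x1, x2 in terms of u, v
sol = sp.solve([sp.Eq(u, x1 + x2), sp.Eq(v, x1 / x2)], [x1, x2], dict=True)[0]

# forward Jacobian d(u, v)/d(x1, x2)
J = sp.Matrix([[1, 1],
               [1 / x2, -x1 / x2 ** 2]])
detJ = sp.simplify(J.det())          # -(x1 + x2)/x2**2, negative for x1, x2 > 0

# |J|^{-1} rewritten in terms of (u, v); should come out as u/(1 + v)**2
inv_abs_J = sp.simplify((1 / -detJ).subs(sol))
```

Unlike the linear example, the Jacobian here varies from point to point, which is why the density picks up the u/(1 + v)² factor rather than a constant.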
So then the formula gives you the joint pdf f_{U,V}(u, v), which is the joint pdf of (X1, X2) times the inverse of the absolute Jacobian, x2²/(x1 + x2). Because X1 and X2 are independent, the joint pdf is a product: each pdf is λ e^{−λx}, so the product is λ² e^{−λ(x1 + x2)}, and x1 + x2 we are given to be u. Of course, you substitute for x1 and x2 in terms of u and v: you can immediately see that x2² = u²/(1 + v)² and x1 + x2 = u, so x2²/(x1 + x2) = u/(1 + v)². Therefore the final function is f_{U,V}(u, v) = λ² u e^{−λu}/(1 + v)². Now, I was trying to draw the picture, but the region for (x1, x2) is the whole of the first quadrant, and it looks the same for (u, v), because u and v are also both non-negative and both extend to infinity. So it appears that, the regions being infinite, I cannot show you any shrinking here; in any case the Jacobian is not a constant now, it depends on the coordinate values (x1, x2), so I cannot do much pictorially. But you see that the limits for u are from 0 to infinity and for v from 0 to infinity, and I can now write this density as a product of two functions: λ² u e^{−λu} is one, and 1/(1 + v)² is the other. And remember, I gave you this proposition in an earlier lecture: if a joint pdf can be written as a product of functions of the single variables, then each factor must itself be the pdf of the corresponding random variable. So now I want you to verify that the two functions are pdfs; that means, show that the integral of λ² u e^{−λu} from 0 to infinity is 1.
Similarly, show that the integral of 1/(1 + v)² from 0 to infinity is 1, which you can do by a direct integration; and both functions are non-negative. So we conclude (which I did not write here) that U and V are independent. I will now talk about Exercises 5, which is a collection of problems from whatever we have been discussing in the last 3 to 4 lectures. As usual I will try to give you small hints, and then you should be able to work out the problems. In Question 1, three balls are chosen without replacement from an urn consisting of 3 white and 8 red balls, and X_i = 1 if the i-th ball selected is white, 0 otherwise; so you have the first, second and third balls chosen without replacement. Give the joint probability mass function of (X1, X2): again you will make the chart we have shown you, with rows for X1 and columns for X2, and write out the probabilities for the different values. They also want you to write the joint probability mass function of (X1, X2, X3); in this case you will have to handle three values, all of X1, X2 and X3, so it is three-dimensional, whereas I have been discussing the two-dimensional case so far. I thought I would include this; let us see how you try out this problem. In Question 2, the joint probability density function of X and Y is given by f(x, y) = e^{−(x + y)} for x and y between 0 and infinity; find P(X < Y). Now, this is the event, which means a region. You draw the line y = x; the whole of the first quadrant is the valid region, and the event X < Y corresponds to the part above that line, so it is this region.
So, fix your limits accordingly; you can immediately write them down from here. Since x has to be less than y, x cannot vary beyond y, so x goes from 0 to y, and then y of course varies from 0 to infinity. Then, for P(X < a) in part (b), you will have to find the marginal of X first and then compute this probability. Problem 3 says you are given the joint density function of X and Y, f(x, y) = 2 if 0 < x < y and 0 < y < 1, and 0 otherwise; are X and Y independent? As I told you, since the limits of x depend on y, my immediate reaction would be that they are not independent, but you will have to find the marginals and then show that the joint is not the product of the marginals. If instead X and Y were given by the new function f(x, y) = x e^{−(x + y)}, where now the limits for x and y are independent of each other, then you can break up the joint pdf into x e^{−x} times e^{−y}, so they should turn out to be independent. Question 4 says that two dice are rolled. Let X and Y denote respectively the largest and the smallest values obtained; compute the conditional mass function of Y given X = i, for i varying from 1 to 6. So you fix a value i of X and then proceed; you also have to answer whether X and Y are independent. You are quite familiar by now with the rolling of two dice and how to write down the probabilities, so you should be able to answer Question 4. In Question 5, the joint probability mass function of X and Y is given, so this is now a discrete pair of random variables with the probabilities listed; compute the conditional mass function of X given Y = i, for i = 1, 2.
So there will be two conditional mass functions, one for Y = 1 and the other for Y = 2. Are X and Y independent? Apply the condition for independence. Then compute P(XY ≤ 3), P(X + Y > 2) and P(X/Y > 1). Here you see the values of Y are never 0: Y takes the values 1 and 2, and X takes the values 1 and 2, so all the questions are valid and you should be able to answer them. In Question 6 you are again given a joint density function of X and Y, and here y varies between −x and x; draw the region, then find the conditional distribution of Y given X = x. I included all these problems because they are different from each other, and you get an idea when you solve them. In Question 7, X and Y have the joint density function 1/(x²y²) for x and y greater than or equal to 1; compute the joint density function of U = XY and V = X/Y, and find the marginal densities. You will compute the joint density function by the Jacobian method, and then try to draw the regions: in the original variables the region is simply x ≥ 1 and y ≥ 1 (this corner past 1 and 1). As a hint, since u = xy and v = x/y, you will have to write x and y in terms of u and v and then see how the region is transformed. Do it, because I have given you the idea already, and then compute the marginal densities of U and V. Question 8 (the "n" in "x n" should have been a subscript, but it does not matter): let X1, ..., Xn be independent exponential random variables having a common parameter λ; that is, all of them are observed values from an exponential distribution with parameter λ. Determine the distribution of min(X1, X2, ..., Xn).
So, this is what I have already discussed with you: finding the pdf of X_(1), that is, the smallest of n sample values. Question 9: if X and Y are independent binomial random variables with ("w i t h" on the slide; just make the correction to "with") identical parameters n and p, so that X and Y have the same binomial distribution, show analytically that the conditional distribution of X given that X + Y = m is the hypergeometric distribution (and there should have been a full stop after "distribution"). Also give a second argument that yields the result without any computations. So in the ninth problem you are given two independent binomial random variables with identical parameters n and p; you have to find the conditional distribution of X given that X + Y = m, and show that this is a hypergeometric distribution. I added this problem because I have already solved a similar problem in the lecture. For the second argument, without any computations, the hint is given here: you can argue that, given a total of m heads, the number of heads in the first n flips has the same distribution as the number of white balls selected in the corresponding urn model. You figure out the hint and then see if it is useful. Now, the tenth problem: random variables X and Y are said to have a bivariate normal distribution if the joint density is given by this expression. I have not discussed this in the lectures, but I thought you should be able to work on it. A bivariate normal density is of this form: you have the squared term in x, the squared term in y, and then the product term, together with the parameter ρ, which is the correlation coefficient; by the time you get to this problem I think I will have discussed it. So this is the expression.
So, now you have to show that the conditional density of X given Y = y is a normal density, and identify its parameters. You see, the moment you fix y, you can rearrange the terms and write them in the normal form, so that you can read off the mean of the conditional density of X given Y = y, and the variance becomes σ_x²(1 − ρ²). It is just a question of manipulating the terms, and since you already know what you have to show, this will not be difficult. Then show that X and Y are both normal random variables, with respective parameters (μ_x, σ_x²) and (μ_y, σ_y²). Here, the joint density function of the two normal variates is given above, and you can show that when you integrate f(x, y) with respect to y from minus infinity to infinity, you get a normal distribution with mean μ_x and variance σ_x²; similarly, when you integrate with respect to x you get the marginal of Y, which comes out to be normal with mean μ_y and variance σ_y². Part (c) says: show that X and Y are independent when ρ = 0. See, if you look at the expression for f(x, y) and put ρ = 0, the coefficient √(1 − ρ²) becomes 1, the factor 1/(1 − ρ²) becomes 1, and the product term in the exponential becomes 0. So the joint density function becomes the product of the marginals of X and Y. You can see it immediately, because I can write 2π as √(2π) × √(2π), giving (1/(√(2π) σ_x)) e^{−(x − μ_x)²/(2σ_x²)} times (1/(√(2π) σ_y)) e^{−(y − μ_y)²/(2σ_y²)}. So it becomes the product of two marginals, and therefore, by our theorem, X and Y are independent.
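The ρ = 0 factorization in part (c) can be checked symbolically. The sketch below builds the bivariate normal density from the formula above, sets ρ = 0, and verifies that the ratio against the product of the two normal marginals simplifies to 1:

```python
# Bivariate normal: setting rho = 0 collapses the joint density into the
# product of the two normal marginals (part (c) of problem 10).
import sympy as sp

x, y, rho = sp.symbols('x y rho', real=True)
mx, my = sp.symbols('mu_x mu_y', real=True)
sx, sy = sp.symbols('sigma_x sigma_y', positive=True)

# the quadratic form in the exponent of the bivariate normal density
Q = ((x - mx) ** 2 / sx ** 2
     - 2 * rho * (x - mx) * (y - my) / (sx * sy)
     + (y - my) ** 2 / sy ** 2) / (1 - rho ** 2)
f_xy = sp.exp(-Q / 2) / (2 * sp.pi * sx * sy * sp.sqrt(1 - rho ** 2))

# the two normal marginals
f_x = sp.exp(-(x - mx) ** 2 / (2 * sx ** 2)) / (sp.sqrt(2 * sp.pi) * sx)
f_y = sp.exp(-(y - my) ** 2 / (2 * sy ** 2)) / (sp.sqrt(2 * sp.pi) * sy)

# at rho = 0 the joint density should equal the product of the marginals
ratio = sp.simplify(f_xy.subs(rho, 0) / (f_x * f_y))
```

Remember that this implication (zero correlation gives independence) is special to the bivariate normal; it fails for general joint distributions.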
And what about the converse? Let me be careful about the directions here. The theorem I have talked about is: if X and Y are independent, then ρ must be 0; that holds for any pair of random variables. What part (c) of Question 10 asks is the other direction for the bivariate normal: if (X, Y) has a bivariate normal distribution and ρ is 0, then X and Y are independent, which is what you have just shown. We will discuss this again in lecture 17, and later on I will show you that for a bivariate normal pair (X, Y) the correlation coefficient is exactly the parameter ρ; this we will discuss much later, in lecture 23. Now, the eleventh problem: the joint density function of X and Y is given like this, and here x varies between 0 and 1 and y varies between 0 and 2. The first question is: are X and Y independent? Yes, you can answer, because the limits are separate and the joint pdf can be separated into a function of x times a function of y. But I would like you to work it out: find the density function of X and the density function of Y, then find the joint distribution function (the cumulative distribution function), then find E[Y], and find P(X + Y < 1). Again, I have included this as an exercise so that you get more familiar with how you work out these different integrals. There is another problem, number 12: the pdf of a random variable X is shown below (this is a single random variable); find the density function of 1/X.
So, I have included this because I thought we had not discussed many functions of a single random variable. Therefore, find the density functions of 1/X, e^X, ln X (ln is again the log with base e) and AX + B, again simply as an exercise to get familiar with how you find the limits and the ranges for the different functions. Finally, if X1 and X2 are independent random variables with the same probability distribution function as X, find the probability distribution functions of X1/X2 and X1 X2. You may feel that in some places things are repeated; it does not matter. Do as much practice as you can, to get a good feeling for how you handle these.
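As one worked illustration of these exercises, here is a Monte Carlo sanity check for Question 8: since P(min > t) = P(X1 > t) ⋯ P(Xn > t) = e^{−nλt}, the minimum of n iid Exp(λ) variables is again exponential, with rate nλ. The values n = 5 and λ = 2 below are illustrative choices of mine, not from the problem statement:

```python
# Monte Carlo sanity check for exercise 8: min of n iid Exp(lam) variables
# is Exp(n*lam), since P(min > t) = e^{-n*lam*t}.
import math
import random

random.seed(1)
n, lam = 5, 2.0
trials = 100_000

mins = [min(random.expovariate(lam) for _ in range(n)) for _ in range(trials)]

sample_mean = sum(mins) / trials                 # theory: 1/(n*lam) = 0.1
t = 0.2
tail = sum(1 for m in mins if m > t) / trials    # theory: exp(-n*lam*t) = exp(-2)
```

The analytic derivation asked for in the exercise is exactly the survival-function argument in the comment: multiply the n individual tail probabilities and recognize the exponential form.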