Dear students, I would like to present to you the concept of the conditional mean of y given x that is linear in x. So, let us see exactly what I am talking about. Suppose that we have a bivariate density function f of x, y, the joint pdf of two continuous random variables x and y, and we denote the marginal pdf of x by f1 of x, so that the conditional pdf of y given x equal to small x, which I write as f2 given 1 of y given x, is equal to f of x, y divided by f1 of x at those points where f1 of x is greater than 0. Now, given all this, let us consider the conditional mean of y given x equal to small x. The expected value of y given small x will be equal to the integral from minus infinity to infinity of y multiplied by f2 given 1 of y given x, that is, simply f of y given x, with respect to y. So, what do we have then? As you can see on the screen, that whole expression becomes the integral from minus infinity to infinity of y into f of x, y over f1 of x with respect to y. Now, note that f1 of x appears in the denominator; that is the marginal distribution of x, so it acts as a constant because I am integrating with respect to y. Therefore, it comes out of the integral and we can write 1 over f1 of x into the integral of y into f of x, y dy. All right. After this, note that the conditional mean, the expected value of y given x, is obviously a function of x, students; we can call that function u of x.
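The derivation just described can be summarized in symbols; this is only a restatement of what was said on the screen, not new material:

```latex
% Conditional pdf of Y given X = x, wherever f_1(x) > 0:
f_{2|1}(y \mid x) \;=\; \frac{f(x,y)}{f_1(x)}

% Conditional mean of Y given X = x:
E[Y \mid X = x] \;=\; \int_{-\infty}^{\infty} y \, f_{2|1}(y \mid x)\, dy
\;=\; \frac{1}{f_1(x)} \int_{-\infty}^{\infty} y \, f(x,y)\, dy
\;=\; u(x)
```

Since y is integrated out and only x remains, u(x) is a function of x alone, as stated in the lecture.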
Now, if you are getting a little confused about how this happened, look carefully: in the denominator we have f1 of x, which is obviously a function of x, and the integral of y into f of x, y with respect to y is not an indefinite integral; it has limits. Generally we write minus infinity to infinity, but if your random variable runs from 0 to 1, then you will write 0 to 1 there. So, when you integrate with respect to y and then put in the limits, y disappears; the limits come in instead, and whatever remains is some function of x, which we can call u of x. Now, after understanding this, let us come to the next point. Suppose that u of x is a linear function of x. What exactly do we mean by a linear function? You surely know the equation of a straight line: generally we write y equal to a plus bx, where y itself is a function of x. So, if this function is linear, then we can write that u of x is equal to a plus bx. But what actually is u of x? It is the conditional mean, the conditional mean of y given x, as we were saying. Then, my dear students, we can say that the conditional mean of y given x is linear in x, or, to put it more briefly, that E of y given x is a linear conditional mean. After all this, I would like to present to you some theoretical results pertaining to the expected value of y given x. In line with everything I have just said, we have a theorem, which I am now going to state for you. The theorem goes as follows: suppose that the random vector x comma y has a joint distribution for which the variances of x and y are finite and positive. Let us denote the mean and the variance of x by mu 1 and sigma 1 square, the mean and variance of y by mu 2 and sigma 2 square, and also let us denote the correlation
between x and y by rho. With all the notation we have just adopted, the theorem, which has its own proof that I am not going to go into, gives the following result: if the expected value of y given x is linear in x, then E of y given x is equal to mu 2 plus rho into sigma 2 over sigma 1, this whole thing multiplied by x minus mu 1. Look at this again and note that it still conforms to linearity. It may look a bit complicated, but in reality it has exactly the form we can write as a plus bx, because if you look carefully, b here is rho sigma 2 over sigma 1, and a is mu 2 minus mu 1 into rho sigma 2 over sigma 1; if you open up the bracket, you will see this quickly. So, this is quite an important result, a well-known result; as I said, it has already been proved, and I will state it for you one more time: if E of y given x is linear in x, then it is given by mu 2 plus rho into sigma 2 over sigma 1 multiplied by x minus mu 1. That is one result, and there is another result alongside it, which is even more interesting. As you can now see on the screen, we have another result, namely that the expected value of the variance of y given x is equal to sigma 2 square multiplied by 1 minus rho square. Now, the right-hand side of this equation is very simple, sigma 2 square into 1 minus rho square, but the left-hand side must be confusing you: what is this, the expected value of the variance of y given x?
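Both results of the theorem can be checked numerically. The sketch below uses a bivariate normal distribution, for which E of Y given X is known to be linear in x, so the theorem applies. All the parameter values (mu1 = 2, mu2 = 5, sigma1 = 1.5, sigma2 = 2, rho = 0.6, and the slice point x0 = 3) are made-up numbers chosen only for illustration:

```python
import numpy as np

# Made-up parameters for a bivariate normal (X, Y); for this family the
# conditional mean E[Y | X = x] is linear in x, so the theorem applies.
mu1, mu2 = 2.0, 5.0        # means of X and Y
sigma1, sigma2 = 1.5, 2.0  # standard deviations of X and Y
rho = 0.6                  # correlation between X and Y

# First result: E[Y | X = x] = mu2 + rho * (sigma2 / sigma1) * (x - mu1).
def conditional_mean(x):
    return mu2 + rho * (sigma2 / sigma1) * (x - mu1)

# Second result: E[ Var(Y | X) ] = sigma2^2 * (1 - rho^2).
# For the bivariate normal, Var(Y | X = x) does not depend on x at all.
expected_cond_var = sigma2**2 * (1 - rho**2)

# Monte Carlo check: simulate (X, Y) and look at the slice of points
# whose X value is very close to some x0.
rng = np.random.default_rng(0)
cov = [[sigma1**2, rho * sigma1 * sigma2],
       [rho * sigma1 * sigma2, sigma2**2]]
x, y = rng.multivariate_normal([mu1, mu2], cov, size=1_000_000).T

x0 = 3.0
near_x0 = np.abs(x - x0) < 0.05      # points with X approximately x0
empirical_mean = y[near_x0].mean()   # estimates E[Y | X = x0]
empirical_var = y[near_x0].var()     # estimates Var(Y | X = x0)

print(conditional_mean(x0))   # theoretical conditional mean at x0
print(empirical_mean)         # simulated value, should be close
print(expected_cond_var)      # theoretical sigma2^2 * (1 - rho^2)
print(empirical_var)          # simulated value, should be close
```

The simulated slice mean and slice variance should agree with the two formulas up to Monte Carlo error; with these parameters the theoretical values are 5.8 and 2.56.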
So, let us try to make some sense of this one. The first thing you have to understand here is that the variance of y given x can itself be interpreted as a random variable, since it depends on the value of x; as the value of x changes, the variance of y given x will change too. It can be interpreted as a random variable, and in fact the first result I placed before you, the expected value of y given x, E of y given x, can of course also be considered a random variable. And if something is a random variable, then you can find its expected value, because the expected value of any random variable can be found. Let us consider an example. Suppose that we are interested in the mean heights of the people of various races, and also in the variances of the heights of the people of various races. What is a race? Of course, it is a socially meaningful category of people who share biologically transmitted traits that are quite obvious and are considered important. For example, the people of the African continent have certain biological traits that are different from those of other continents, and so on and so forth. So, suppose our problem, or our interest, is to find both the mean heights of the people of the various races and the variances within their heights. Then, if we denote height by capital Y and race by X, the expected value of Y given X, that is, the expected value of height given race, is a variable which assigns to each person in the population the mean height for that person's race, and the variance of Y given X, that is, the variance of height given race, is a variable that assigns to each person in the population the variance of the heights for that person's race.
So, I have given you an example of the way in which you can understand that E of Y given X is itself a random variable, and so is the variance of Y given X. If this point is clear to you, then please note that we can find the expected value of E of Y given X and the expected value of the variance of Y given X. The expected value of height given race is the expected value of the variable E of Y given X, that is, the expected value of the mean heights of the people of the various races. Similarly, the expected value of the variance of height given race is the expected value of the variable variance of Y given X, that is, the expected value of the variances of the heights of the people of the various races. In this way, this whole story, I am sure you will agree, is really quite interesting.
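The heights example can be made concrete with a tiny toy population. The group labels and all the heights below are made-up numbers, used only to show how E of Y given X and the variance of Y given X become variables defined person by person, and how we can then take their expected values:

```python
import numpy as np

# Toy population: X = group label, Y = height in cm (all values made up).
population = {
    "A": [170, 175, 180, 165],
    "B": [160, 162, 158, 164, 156],
}

# For each person, E[Y | X] is the mean height of that person's group and
# Var(Y | X) is the variance of heights within that person's group.
heights = []
cond_means = []   # values of the random variable E[Y | X], person by person
cond_vars = []    # values of the random variable Var(Y | X), person by person
for group, hs in population.items():
    m = np.mean(hs)
    v = np.var(hs)          # population variance within the group
    for h in hs:
        heights.append(h)
        cond_means.append(m)
        cond_vars.append(v)

# Since E[Y | X] and Var(Y | X) are random variables, we can take their
# expected values over the whole population.
print(np.mean(cond_means))  # E[ E[Y | X] ], which equals E[Y]
print(np.mean(heights))     # E[Y], the overall mean height
print(np.mean(cond_vars))   # E[ Var(Y | X) ]
```

Note that averaging the group means person by person reproduces the overall mean height exactly; that is the law of total expectation, a fact consistent with, though not stated in, the lecture.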