Dear students, I am going to present to you the concept of the expected value of a real valued function of a random vector, and I would like you to concentrate on this one because there are a number of concepts involved. First of all, it is a straightforward extension of the concept of the expected value of a function of a random variable, that is, an extension of the univariate case to the bivariate or the multivariate case. So let us consider the bivariate case: let (X1, X2) be a random vector and let us define Y = g(X1, X2) as a real valued function of X1 and X2. Here g(X1, X2) can be any function. It can be very simple, for example Y = X1 + X2, Y = X1 - X2, or Y = X1^2 - e^(X2); any combination of X1 and X2 that can be regarded as a function of them. Note that Y is a random variable: if X1 and X2 are themselves random variables, then obviously any such combination of them is also a random variable. And if it is a random variable, then we can determine its expectation by considering its distribution. But students, first and foremost, let us determine the conditions under which the expectation of Y will exist. Sometimes you might try to compute it and it may not exist, so let us commence this discussion by first looking at the conditions required in the case of a single random variable. A random variable X is said to have a finite or an infinite expectation according as E(X) is or is not a finite number. If it is finite, then of course we say that the expectation, or the expected value, exists. However, if it is not finite, we say that the expected value of X does not exist, and then naturally we would like to determine the conditions under which it will be finite.
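To make concrete the point that Y = g(X1, X2) is itself a random variable with its own distribution, here is a small Python sketch. The joint pmf below is my own hypothetical example, not one from the lecture; it simply pushes the joint probabilities through g to obtain the distribution of Y.

```python
from collections import defaultdict

# Hypothetical joint pmf of the discrete random vector (X1, X2),
# written as a dictionary {(x1, x2): p(x1, x2)}; probabilities sum to 1.
joint_pmf = {(1, 1): 0.25, (1, 2): 0.25, (2, 1): 0.25, (2, 2): 0.25}

def distribution_of(g, pmf):
    """Push the joint pmf through g to get the pmf of Y = g(X1, X2)."""
    y_pmf = defaultdict(float)
    for (x1, x2), p in pmf.items():
        y_pmf[g(x1, x2)] += p   # values of (X1, X2) mapping to the same y pool their probability
    return dict(y_pmf)

# Y = X1 + X2 is itself a random variable with its own distribution:
print(distribution_of(lambda a, b: a + b, joint_pmf))
# {2: 0.25, 3: 0.5, 4: 0.25}
```

Once the distribution of Y is in hand, its expectation can be computed the usual univariate way, which is exactly the bridge the lecture builds next.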
So what is the definition of the expected value of X? We all know that in the case of a discrete random variable, E(X) is the summation of x_i times p(x_i), and in the continuous case it is the integral of x times f(x). So students, this E(X) will be finite if our summation or our integral converges absolutely. If there is absolute convergence, then E(X) is finite and we say that it exists. Then naturally the next question that comes to mind is: what is meant by absolute convergence? Well, the concept of absolute convergence is not so difficult. It is as follows: the term absolutely convergent describes a series that converges when all of its terms are replaced by their absolute values. Alright, let me say it in a different way: the term absolutely convergent describes a series for which the sum of all its terms remains finite when all the terms have been replaced by their absolute values. What I am now saying can obviously also be represented in algebraic form, and students, it is not so difficult. So if we let X be a random variable of the discrete type with probability mass function p(x_k) = P(X = x_k), where k = 1, 2, 3 and so on, then if the summation of |x_k| times p(x_k) is less than infinity, in other words, if this summation is finite, we say that the expected value of X exists and we write mu = E(X) = the summation of x_k times p(x_k).
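To see why the absolute values matter, here is a Python sketch of a classical counterexample (my own illustration, not one from the lecture). Take p_k = 2^(-k) and x_k = (-1)^(k+1) 2^k / k. The signed series of x_k p_k converges (to ln 2), but the series of |x_k| p_k is the harmonic series, which diverges, so by the absolute convergence criterion E(X) does not exist.

```python
# Hypothetical discrete distribution: P(X = x_k) = 2**-k, with
# x_k = (-1)**(k+1) * 2**k / k for k = 1, 2, 3, ...
# Then x_k * p_k = (-1)**(k+1) / k, whose signed sum converges to
# ln 2 (about 0.693), while |x_k| * p_k = 1/k, whose sum diverges.

def partial_sums(n_terms):
    """Return the n-term partial sums of x_k p_k and |x_k| p_k."""
    signed, absolute = 0.0, 0.0
    for k in range(1, n_terms + 1):
        p_k = 2.0 ** -k
        x_k = (-1) ** (k + 1) * 2.0 ** k / k
        signed += x_k * p_k
        absolute += abs(x_k) * p_k
    return signed, absolute

for n in (10, 100, 1000):
    signed, absolute = partial_sums(n)
    print(f"n={n:5d}  signed={signed:.6f}  absolute={absolute:.3f}")
# The signed sums settle near ln 2, while the absolute sums keep growing.
```

The naive summation gives a perfectly finite-looking answer, which is exactly why the existence condition is stated in terms of the absolute series and not the signed one.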
Similarly, if it is a continuous variable, then if the integral from minus infinity to infinity of |x| times f(x), the absolute value of x, not just x, multiplied by f(x), if this integral is less than infinity, in other words, if it is finite, then we say that the expected value of X exists and we write mu = E(X) = the integral from minus infinity to infinity of x times f(x) with respect to x. So it is the same formula that we are aware of. Now, whatever I just said for a single variable X, similar logic applies for a function of the random variable X. So if g(X) represents a function of X, then we say that the expected value of g(X) exists and equals the integral of g(x) times f(x), if it is a continuous variable, provided that the integral of |g(x)| multiplied by f(x) is less than infinity. So you have seen exactly that extension of what came before, and now I am in a position to talk about the expected value of a function of two random variables X1 and X2. So if both X1 and X2 are continuous random variables and they have a joint pdf f(x1, x2), then we say that the expected value of g(X1, X2), a function of X1 and X2, exists and equals the double integral of g(x1, x2) multiplied by the joint pdf f(x1, x2), provided that the double integral of |g(x1, x2)| multiplied by f(x1, x2) is less than infinity. Again, a further extension of what I said before. And last but not least, let us also consider the case when X1 and X2 are not continuous but discrete.
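The bivariate continuous formula can be sketched numerically. The following Python illustration is my own hypothetical example, not from the lecture: it assumes X1 and X2 are uniform on the unit square, so the joint pdf is f(x1, x2) = 1 there, and approximates both double integrals with a simple midpoint rule. On a bounded support with bounded g, the absolute-integrability condition holds automatically.

```python
# Hypothetical example: (X1, X2) uniform on [0,1] x [0,1], g(x1,x2) = x1 + x2.
# E[g(X1, X2)] = double integral of g * f, here approximated by a midpoint rule.

def double_integral(h, n=400):
    """Midpoint-rule approximation of the integral of h over the unit square."""
    step = 1.0 / n
    total = 0.0
    for i in range(n):
        x1 = (i + 0.5) * step
        for j in range(n):
            x2 = (j + 0.5) * step
            total += h(x1, x2) * step * step
    return total

f = lambda x1, x2: 1.0        # joint pdf on the unit square
g = lambda x1, x2: x1 + x2    # the function of the random vector

abs_check = double_integral(lambda a, b: abs(g(a, b)) * f(a, b))
expectation = double_integral(lambda a, b: g(a, b) * f(a, b))
print(abs_check)      # finite, so E[g(X1, X2)] exists
print(expectation)    # close to the exact value E[X1] + E[X2] = 0.5 + 0.5 = 1.0
```

For an unbounded support, say a Cauchy-type density, the first integral would blow up as the integration region grows, and the expectation would be declared nonexistent by this same criterion.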
So in a very similar way, we can say that if Y = g(X1, X2) is a function of two discrete random variables X1 and X2, then the expected value of Y, in other words the expected value of g(X1, X2), exists if the double summation of |g(x1, x2)| multiplied by p(x1, x2) is less than infinity. You have seen that everything falls into place; I have tried to give you the various concepts step by step, first some general definitions and then their application in our situation. And this is the concept of the expected value of a real valued function of a random vector.
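The discrete bivariate case can be sketched in a few lines of Python. The joint pmf below is my own hypothetical example, not one from the lecture; the helper carries out the double summation, checking the absolute sum first, which for a pmf with finite support is automatically finite.

```python
# Hypothetical joint pmf of two discrete random variables X1 and X2,
# written as {(x1, x2): p(x1, x2)}; probabilities sum to 1.
joint_pmf = {
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

def expected_value(g, pmf):
    """E[g(X1, X2)] by the double summation of g(x1, x2) * p(x1, x2)."""
    # Existence condition: the double sum of |g| * p must be finite.
    # With finitely many support points this always holds.
    abs_sum = sum(abs(g(x1, x2)) * p for (x1, x2), p in pmf.items())
    assert abs_sum < float("inf")
    return sum(g(x1, x2) * p for (x1, x2), p in pmf.items())

print(expected_value(lambda a, b: a + b, joint_pmf))  # E[X1 + X2] = 0.7 + 0.6 = 1.3
print(expected_value(lambda a, b: a * b, joint_pmf))  # E[X1 * X2] = 1 * 1 * 0.4 = 0.4
```

The same helper works for any real valued g, which is precisely the generality the lecture emphasizes: the function can be a sum, a difference, a product, or any other combination of X1 and X2.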