Dear students, I'm going to discuss with you the discrete random vector and the joint probability mass function. So what is the definition of a random vector? It is the ordered pair (X1, X2), where X1 and X2 are themselves random variables. When do we say that this random vector is a discrete random vector? Precisely in that situation when the individual variables X1 and X2 are themselves discrete random variables.

Now, if we have a discrete random vector consisting of two discrete random variables X1 and X2, then what is the joint probability mass function of this random vector? My dear students, you can write it as p(x1, x2) = P(X1 = x1 and X2 = x2), the probability that X1 takes the value small x1 and, at the same time, X2 takes the value small x2. And all of this is defined over the space we denote by capital D, the set in which these ordered pairs (x1, x2) fall.

There are two very important and fundamental properties of the joint probability mass function that I have just defined for you. Property number one: the probability p(x1, x2) always lies between 0 and 1. This has to be so, because you know very well that a probability can never be greater than 1, nor can it be less than 0. Have you ever heard of a probability being negative? It is not possible. Property number two: the sum of all the joint probabilities over the whole space D is always equal to 1. You can see and understand this very well; it is a simple extension of the properties that we have in the univariate situation. And these properties are essential: without them, a function cannot be called a probability mass function at all.

In the discussion I am doing with you right now, I am assuming that we have only two random variables, X1 and X2. But of course, this discussion can be extended further. You may have three or four random variables, and you will simply extend these results to that situation.

Now, if we are interested in a particular event, that is, in finding the probability that the random vector (X1, X2) belongs to a set capital B, which is a subset of the big space capital D, then how will we compute the probability of this event? Well, it is very simple, students: you will simply add up the probabilities p(x1, x2) of all those points (x1, x2) that belong to B. So again, it is just a simple extension of what you have in the univariate situation.
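To make this concrete, here is a minimal sketch in Python. The particular probabilities and the event B below are hypothetical values chosen only for illustration; the idea is simply a finite table of joint probabilities, checked against the two properties, and an event probability obtained by adding up the relevant points.

```python
# The joint pmf of the discrete random vector (X1, X2), stored as a
# dictionary mapping each point (x1, x2) of the space D to the
# probability P(X1 = x1, X2 = x2). These values are hypothetical.
pmf = {
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

# Property 1: every joint probability lies between 0 and 1.
assert all(0 <= p <= 1 for p in pmf.values())

# Property 2: the joint probabilities over the whole space D add up to 1
# (checked with a tolerance because of floating-point rounding).
assert abs(sum(pmf.values()) - 1.0) < 1e-12

# Probability of an event B (a subset of D): add up the probabilities
# of all the points (x1, x2) that belong to B.
B = {(0, 1), (1, 0)}   # e.g. the event "X1 + X2 = 1"
prob_B = sum(pmf[point] for point in B)
print(prob_B)          # 0.5
```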
One more thing I would like to share with you: generally, we do not keep writing the space D explicitly. We can simplify matters by defining the probability mass function p(x1, x2) by an expression over a convenient set and writing "0 elsewhere". You must have seen many examples of this in the univariate situation, where p(x) is given by some expression together with the x values over which that expression is valid, and just below it, it is written "0 elsewhere". This is a very convenient way of expressing it. And when we write it like this, then rather than writing capital D under the summation sign, we simply write a summation over the variable x1 and then a summation over the variable x2 of p(x1, x2). So this is the concept of the joint probability mass function in the case of two random variables taken together.
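As a small illustration of this convention, here is a sketch in Python. The formula p(x1, x2) = (x1 + x2)/21 and its convenient set are a hypothetical example, not taken from the lecture; it only shows how "0 elsewhere" lets us sum over x1 and then over x2 without carrying the space D along.

```python
# A hypothetical pmf given by a formula over a convenient set, together
# with the usual convention "0 elsewhere":
#   p(x1, x2) = (x1 + x2) / 21  for x1 in {1, 2, 3} and x2 in {1, 2},
#   p(x1, x2) = 0               elsewhere.
def p(x1, x2):
    if x1 in (1, 2, 3) and x2 in (1, 2):
        return (x1 + x2) / 21
    return 0.0  # 0 elsewhere

# With the "0 elsewhere" convention we sum over x1 and then over x2,
# instead of writing the space D under the summation sign.
total = sum(p(x1, x2) for x1 in (1, 2, 3) for x2 in (1, 2))
print(total)      # approximately 1.0 (up to floating-point rounding)
print(p(10, 10))  # 0.0, a point outside the convenient set
```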