I will now introduce the concept of random variables. Let me begin with an example. Suppose two fair dice are thrown, and let X denote the sum of the two numbers that show up. Whatever the two numbers are — 3 and 4, or 1 and 2, and so on — I add them up and denote the sum by X. You can see that the values of X range from 2 to 12. Since the outcome of throwing the two dice is not certain — any face can show up — this is a random phenomenon, and the value of X depends on the outcome of the throw. This is what we mean by a random variable. Before I formally call it a random variable, let me explain a little more. When X = 2, this corresponds to the point (1, 1) of the sample space. Earlier, when I introduced the concept of sample spaces, I showed you that the sample space Ω for this experiment contains the 36 pairs (i, j), where i varies from 1 to 6 and j varies from 1 to 6. So X = 2 corresponds to the outcome (1, 1): both faces must show 1, and the sum is 2. Similarly, if X = 3, the outcome is either (1, 2) or (2, 1); both add up to 3, and so on. So one can now give a formal definition: X is a real-valued function that maps the sample space Ω into the real line, and this X that we have described through examples is called a random variable. The values of X correspond to subsets of Ω.
The event X = 2 corresponds to a singleton, X = 3 to a subset containing 2 points of Ω, and X = 12 again to the singleton (6, 6). So such a real-valued function, which maps the subsets of the sample space of an experiment to real numbers, is what we call a random variable. Now take another example: consider the experiment of tossing a coin until 2 consecutive heads appear. I toss a coin, and unless I get 2 heads consecutively, I do not stop. If it happens in 2 tosses, the outcome is HH and I stop there; but if the first toss does not show a head, the 2 consecutive heads may appear on the second and third tosses, so the experiment takes 3 tosses, and so on. This may not be a finite process, because you may keep getting tails. Now let X be the number of tosses required for 2 consecutive heads to appear. Here, for example, X = 2 corresponds to the outcome HH; X = 3 corresponds to the outcome THH — the first toss gives a tail and the next 2 tosses give heads; X = 4 corresponds to the outcomes of length 4; and you can go on writing, for different values of X, the corresponding subsets of the sample space Ω. Let me reiterate the same point: since the values of a random variable are determined by the outcome of an experiment, we can assign probabilities to the values it takes, because probabilities are associated with the outcomes of the sample space, and the random variable maps these events to real numbers.
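The enumeration just described can be sketched in a few lines of Python (the variable and function names here are mine, for illustration only): we list the 36 points of Ω for the two-dice experiment, define X as a function on outcomes, and recover the subset of Ω corresponding to an event like {X = 3}.

```python
import itertools

# Sample space for two fair dice: all 36 ordered pairs (i, j).
omega = list(itertools.product(range(1, 7), repeat=2))
assert len(omega) == 36

def X(outcome):
    """The random variable: maps an outcome (i, j) to the sum i + j."""
    i, j = outcome
    return i + j

# The event {X = 3} as a subset of the sample space.
event_x3 = [w for w in omega if X(w) == 3]
print(event_x3)  # [(1, 2), (2, 1)]
```

This makes concrete the idea that a random variable is a function on Ω, and that each of its values picks out a subset (an event) of the sample space.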
So, for example, in the throw of 2 fair dice, P(X = 2) is the probability of the singleton {(1, 1)}, which is 1/36, because each of the 36 pairs is equally likely, the 2 dice being fair. Similarly, P(X = 3) is the probability of the subset containing the pairs (1, 2) and (2, 1), so it is 2/36. You could complete the table yourself: for each value that X can take, from 2 to 12, write down the corresponding subset and the associated probability. Since X must take one of the numbers from 2 to 12 — you are throwing 2 dice, so 2 numbers will show up and their sum is one of the numbers from 2 to 12 — these are all the possible events, and so the sum of P(X = i), for i varying from 2 to 12, must be 1. This is the probability mass function, a concept I am going to define formally now. Random variables can be of different types; let me first consider the case when X is a discrete random variable. As the name suggests, this means it takes a countable number of possible values; say the random variable takes the values a_i, i varying from 1 to infinity. Of course, "countable" can mean countably infinite or finite — it can be either case — but the point is that you can enumerate the values it takes. We then say that P(X = a_i) is positive, because these are the values it takes, and so there is a positive probability associated with each of these numbers; for all other values of x the probability is 0.
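The table mentioned above can be generated by counting equally likely outcomes. A minimal sketch (using exact fractions so that the probabilities add up to 1 exactly):

```python
import itertools
from collections import Counter
from fractions import Fraction

# Count how many of the 36 equally likely pairs give each sum.
counts = Counter(i + j for i, j in itertools.product(range(1, 7), repeat=2))

# PMF of X = sum of two fair dice.
pmf = {x: Fraction(c, 36) for x, c in counts.items()}

print(pmf[2])             # 1/36
print(pmf[3])             # 1/18  (i.e. 2/36)
print(sum(pmf.values()))  # 1
```

Every value from 2 to 12 appears with positive probability, and the probabilities sum to 1, exactly as the definition of a probability mass function requires.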
That means, when x is not equal to any of the values a_1, a_2, ..., the probability at such points is 0. Now, since X must take one of the values a_i, by axiom 3 the probabilities P(X = a_i), summed over i from 1 to infinity, must add up to 1. This function, which assigns probabilities to the different values of a random variable, is — in the case of a discrete random variable — called the probability mass function, and as we have already seen, any probability function must satisfy the 3 axioms. In short we also write "pmf" for probability mass function — one cannot keep writing the full phrase all the time — so normally I will refer to it as the pmf. It helps to plot p(x) on the xy-plane. Consider the probability mass function p(1) = 2/3, p(2) = 1/6, p(3) = 1/6: the random variable here takes the values 1, 2, 3 with these associated probabilities. You can draw a bar chart; the idea is to erect rectangles: the bar centred at the value 1 has height 2/3, the bar at 2 has height 1/6, and the same at 3. I have also already given you another random variable, the sum of the numbers that show up when you throw 2 fair dice; you can try to draw the bar chart for that random variable too — it will be a big one, because the values it takes run from 2 to 12. Now, having defined the random variable for the discrete case, let me associate some other functions with it. The probability mass function we have already defined; next comes the cumulative distribution function, which I will again refer to in short form as the CDF.
So, for every real number x, the CDF of a random variable X is given by F_X(x) = P(X ≤ x). That is, the right-hand side is the probability that the random variable X takes a value less than or equal to the real number x. It immediately follows that if you want to compute P(a < X ≤ b), then this can be written as P(X ≤ b) − P(X ≤ a), which in terms of the cumulative distribution function is F_X(b) − F_X(a). The reason the inequality is X > a is that you are subtracting the probability of all sample points for which X ≤ a. One remark on terminology: the proper name is cumulative distribution function, and if somewhere by mistake I call it the cumulative density function, that should be ignored. If you feel you still need some convincing, work it out for yourself: the event {a < X ≤ b} is exactly {X ≤ b} with {X ≤ a} removed, because a < b; from X ≤ b you subtract all values of X less than or equal to a, and you are left with X lying between a and b.
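The identity P(a < X ≤ b) = F_X(b) − F_X(a) is easy to check numerically. A small sketch for the two-dice sum (the closed-form pmf expression below is just the triangular shape of that distribution):

```python
from fractions import Fraction

# PMF of X = sum of two fair dice: p(x) = (6 - |x - 7|) / 36 for x = 2..12.
pmf = {x: Fraction(6 - abs(x - 7), 36) for x in range(2, 13)}

def cdf(t):
    """F_X(t) = P(X <= t)."""
    return sum(p for x, p in pmf.items() if x <= t)

# P(4 < X <= 7) via the CDF equals the direct sum p(5) + p(6) + p(7).
assert cdf(7) - cdf(4) == pmf[5] + pmf[6] + pmf[7]
print(cdf(7) - cdf(4))  # 5/12
```

Note that the left endpoint is excluded and the right endpoint included, matching the ≤ in the definition of the CDF.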
Now, suppose X takes the value x_0 (X is, after all, a discrete random variable). Then P(X = x_0) = F_X(x_0) − F_X(x_0−), where F_X(x_0−) denotes the limiting value of F_X(y) as y approaches x_0 from the left, that is, through values less than x_0. This notation will become clear in a minute. Let me spell out the values. Essentially, suppose x_1, x_2, ..., x_n are the finitely many values that the random variable X takes, with x_1 < x_2 < ... < x_n. Then the distribution function F_X is a step function. Consider P(X < x_1): this is 0, because x_1 is the smallest value, and X takes no value less than x_1. But if you now ask for P(X ≤ x_1), this equals P(X = x_1), because the only value X takes in this region is x_1 itself. So, for all arguments strictly less than x_1, the CDF is 0, and at x_1 something happens. If you want to draw the graph — which I am showing you here — you see that in this example the random variable takes the value 1 first, so before 1 the value of the CDF is 0.
So, if I draw the CDF for that random variable, it is 0 until, at the point 1, it takes a jump: when x_1 = 1, F_X(1) = P(X = 1) = 2/3. The function therefore jumps from 0 to 2/3. Then, for arguments between 1 and 2, F_X continues to equal P(X = 1): as long as x is strictly less than 2, there is no other probability to add, because the only value X takes in this region is 1. So as long as I am somewhere before 2, not yet at the value 2, the value of the cumulative distribution function remains constant — it behaves like a step function, staying at P(X = 1). The moment I ask for P(X ≤ 2), this becomes P(X = 1) + P(X = 2), because for X ≤ 2 there are now two possibilities, X = 1 or X = 2, and the two probabilities get added. So the value becomes 2/3 + 1/6 = 5/6: from 2/3 the function takes a jump, and at 2 the value becomes 5/6. And you can see that the size of this jump is exactly the probability of the discrete random variable at that point, namely P(X = 2): up to just before 2 the CDF was P(X = 1), and the moment I reached P(X ≤ 2) it became P(X = 1) + P(X = 2).
So the value of the cumulative distribution function jumps by P(X = 2). Finally, for arguments less than 3 the CDF continues at 5/6, because there is no other value of X in between, and the moment you reach "less than or equal to 3" it takes one more jump (the figure is not drawn very accurately): P(X = 3) gets added, and the jump is again equal to that probability. So the CDF is a step function — a jump function, whatever you want to call it: its points of discontinuity, the points where it jumps, are exactly the values of the random variable, and the size of each jump equals the probability of the random variable taking that particular value. To state the general case: when X takes the values x_i, i varying from 1 to n, with x_1 < x_2 < ... < x_n, the value of F remains constant on the interval [x_{i−1}, x_i), because after taking the value x_{i−1} the random variable takes no other value until x_i. The bracket notation says the interval is closed at x_{i−1} and open at x_i: it contains x_{i−1} and all values less than x_i. Since the cumulative distribution function is constant on this interval, it is right continuous at x_{i−1}: if I approach x_{i−1} from the right, through values larger than x_{i−1}, the value of the function stays the same, and it attains that same value at x_{i−1}.
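The step behaviour just described can be verified directly for the example pmf p(1) = 2/3, p(2) = 1/6, p(3) = 1/6 — constant on each interval [x_{i−1}, x_i), with a jump of size p(x_i) at x_i:

```python
from fractions import Fraction

pmf = {1: Fraction(2, 3), 2: Fraction(1, 6), 3: Fraction(1, 6)}

def cdf(t):
    """F_X(t) = P(X <= t) for this discrete random variable."""
    return sum(p for x, p in pmf.items() if x <= t)

assert cdf(0.999) == 0                 # zero before the smallest value
assert cdf(1) == Fraction(2, 3)        # jump of size p(1) = 2/3 at x = 1
assert cdf(1.5) == cdf(1)              # constant on the interval [1, 2)
assert cdf(2) - cdf(1.5) == pmf[2]     # jump of size p(2) = 1/6 at x = 2
assert cdf(3) == 1                     # all probability accumulated
```

Each assertion mirrors one sentence of the discussion above: flat pieces between the values, jumps of size p(x_i) at the values themselves.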
So the function is right continuous, and what we just saw is that at each x_i it takes a step, or jump, equal to P(X = x_i). We have also seen that it is an increasing function. And when you compute F_X(x_n) = P(X ≤ x_n), the event X ≤ x_n accounts for all the values x_1, x_2, ..., x_n, so all the probabilities get added up: this is nothing but the sum of p(x_i), i varying from 1 to n, and therefore it must equal 1. Now let me formalize the properties of the cumulative distribution function. First, the function is increasing. What do we mean by increasing? If a < b, then F_X(a) ≤ F_X(b). I have already illustrated this through examples, but the reason is that the event {X ≤ a} is a subset of the event {X ≤ b}, because b is bigger than a: every point of the sample space that produces the first event also produces the second. Therefore, by the proposition we derived from the axioms of probability, the probability of the first event must be less than or equal to the probability of the second, which is exactly the stated inequality. So the function is increasing — more precisely, non-decreasing. Second, the limit of F_X(x) as x goes to infinity is 1.
To see this, take an increasing sequence of values x_n going up to infinity — the values keep on increasing. Because of the subset property again, the event {X ≤ x_n} is contained in {X ≤ x_{n+1}}, since x_{n+1} is bigger than x_n. As n goes to infinity, these events merge into the event {X < ∞}, in which all possible values of X are covered, so the limit of P(X ≤ x_n) equals P(X < ∞), and this must be 1, because all values of X are covered by that event. Third, the limit of F_X(x) as x goes to −∞ is 0. The argument is the same, except that where we took an increasing sequence before, here we take a decreasing sequence: x_1 > x_2 > ... > x_n, going down to −∞. Then the events {X ≤ x_n} are decreasing: {X ≤ x_n} contains {X ≤ x_{n+1}}, so P(X ≤ x_n) ≥ P(X ≤ x_{n+1}). Just reverse the earlier argument: as n goes to infinity there is eventually nothing common to these events, because x_n is going to −∞.
So in the limit there is nothing common: the events decrease to the empty set, and the limiting probability is P(∅) = 0. Therefore the limiting value of F_X(x) as x goes to −∞ must be 0, because no value of X is possible once x goes to −∞. Fourth, F is right continuous: take any b and any decreasing sequence b_n going down to b — the same idea, you approach b from the right, the sequence b_n coming in from the right-hand side — and then the limit of F(b_n) as n goes to infinity is F(b). Here the events {X ≤ b_n} are decreasing: {X ≤ b_n} contains {X ≤ b_{n+1}}, and all of them contain the event {X ≤ b}; since the sequence b_n converges to b, the events converge to {X ≤ b}. Accordingly, P(X ≤ b_n) ≥ P(X ≤ b_{n+1}) ≥ ... ≥ P(X ≤ b). Now, by the continuity property of the probability function P, taking the limit as n goes to infinity gives lim F(b_n) = F(b), and this proves the right continuity of F.
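All four properties can be spot-checked numerically for the small example CDF used earlier (this is only a finite-precision check of the limiting statements, not a proof):

```python
from fractions import Fraction

pmf = {1: Fraction(2, 3), 2: Fraction(1, 6), 3: Fraction(1, 6)}

def F(t):
    return sum(p for x, p in pmf.items() if x <= t)

# 1. Non-decreasing.
assert F(0) <= F(1) <= F(2) <= F(3)
# 2. F(x) -> 1 as x -> +infinity.
assert F(10**9) == 1
# 3. F(x) -> 0 as x -> -infinity.
assert F(-10**9) == 0
# 4. Right continuity: F(b + eps) approaches F(b) as eps shrinks to 0.
for b in pmf:
    assert all(F(b + Fraction(1, 10**k)) == F(b) for k in range(1, 8))
```

Left continuity fails at the jump points, which is exactly why P(X = x_0) = F_X(x_0) − F_X(x_0−) picks up the jump size.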
And so now we have shown all 4 properties of the cumulative distribution function, and any cumulative distribution function must satisfy all 4 criteria. In fact, these 4 conditions are necessary and sufficient for a function to be the cumulative distribution function of some random variable X. This is important: whenever you want to claim that a given function is the CDF of a random variable X, you have to make sure these 4 conditions are satisfied before you can take it to be a cumulative distribution function. Now let me define a very important quantity that we associate with a random variable: the expected value. If X is a discrete random variable having probability mass function p(x), the expectation, or expected value, E[X] is defined by E[X] = Σ x p(x), the sum taken over all x such that p(x) > 0. That means you multiply each value x by its corresponding probability and add up all these products; the sum is restricted to x with p(x) > 0 because the values with p(x) = 0 contribute nothing. This sum is what we define as the expected value of the random variable X. So now consider again the example in which 2 fair dice were rolled and X denoted the sum of the 2 numbers that show up; for that case I had given you the table of probabilities for the different values of X.
If you refer to that table, you can compute the sum, and it adds up to 252/36, which is exactly 7. So this is the expected value of the random variable. In other words, if you keep throwing the 2 dice and adding the numbers — say you throw the 2 dice 100 times, add up the sums that show, and divide by 100 — that average will be close to the expected value. Now look at the expression again: since the probabilities p(x), summed over all x with p(x) > 0, add up to 1, you can say that E[X] is the weighted average of the values that X takes, the weights being the corresponding probabilities. There are some more interesting interpretations of the expected value, which I will show you right now. E[X] can also be interpreted as the centre of gravity of the masses p(x_i), i varying from 1 to n, located at the points x_i. That means you can imagine the p(x_i) as masses placed at the corresponding points x_i, and the centre of gravity of this distribution of masses is the same as the expectation of X. Picture a weightless rod on which a mass p(x_1) is placed at the point x_1, a mass p(x_2) at x_2, and so on; I have drawn the points as x_1, x_2, ..., x_n in one arrangement, but they could be distributed anywhere — the picture is the same. The point at which the rod balances itself is known as its centre of gravity.
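The computation from the table can be written out directly — the probability-weighted average of the values 2 through 12:

```python
from fractions import Fraction

# PMF of X = sum of two fair dice (triangular shape).
pmf = {x: Fraction(6 - abs(x - 7), 36) for x in range(2, 13)}

# Expected value: sum of x * p(x) over all x with p(x) > 0.
ex = sum(x * p for x, p in pmf.items())
print(ex)  # 7
```

The weights sum to 1, so this is literally a weighted average, and by the symmetry of the distribution about 7 the answer is exactly 7 (i.e. 252/36).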
So E[X] can also be referred to as the centre of gravity of these different masses — the probabilities — located at the corresponding points x_i, and that is exactly what the expression for E[X] gives. Note that in defining E[X] here I took X to take finitely many values; hence the sum is a finite quantity, and therefore the expectation is defined — it exists. The other cases I will discuss as we arrive at them. We also refer to E[X] as the first moment of X. When X is a discrete random variable taking finitely many values, I can likewise define E[X²] = Σ_{i=1}^{n} x_i² p(x_i), where obviously the summation is over those i for which p(x_i) > 0. Even if the random variable X takes countably infinitely many values, I can take a function of X and write down the expectation of that function of the random variable in the same way; we will formally define E[g(X)] for a general function g of a random variable X later on, and of course we will also define expectation for a continuous random variable in a formal way. For now I will just say that, because the definition is a summation, these expectations are defined as long as the summation is a finite number. So here, for example, E[X²] is the second moment. Also, expectation is linear, because it is a summation: if I take cX + dY for two random variables X and Y, then E[cX + dY] = c E[X] + d E[Y].
What I am saying is this: right now, wherever I use the linearity of expectation, I am doing so in the scenario where X takes finitely many or countably infinitely many values and the relevant summations are finite numbers, so that I can treat E as a linear function, and I can define the expectation of any function of the random variable in this way. Here I would also like to work through an example. We have seen the second moment, E[X²] = Σ_{i=1}^{n} x_i² p(x_i); now comes a very important quantity associated with a random variable. The first is E[X], and the second is the variance, Var(X) = E[(X − E[X])²] — the expectation of the squared deviation of X from its expected value. You can say this is the second-order moment of X about its expected value. For comparison: E[X] is the first-order moment about the origin, that is, about 0; E[X²] is the second-order moment about 0; and the variance is the second-order moment about the expected value. If I open up the bracket, I get E[X² − 2X E[X] + (E[X])²], and when I take E inside — as we have seen, expectation distributes over the sum — this becomes E[X²] − E[2X E[X]] + E[(E[X])²]. Since E[X] is a finite constant, the factor 2E[X] comes out of the middle expectation. In fact, what I am using is that if c is a constant, then E[cX] = c E[X], which follows immediately from the definition: E[X] = Σ x_i p(x_i), and the random variable cX takes the values c x_i.
So when I compute E[cX], the constant c is present in every term of the sum, and it comes out, giving simply c E[X]. Therefore, taking E inside, the middle term becomes 2 E[X] · E[X]. The last term, (E[X])², is again a constant, so its expectation is simply (E[X])² — no further expectation is needed. Here I am using the property that if a random variable is just equal to a constant c, then its expectation is c, because it takes the value c with probability 1. So the expansion reduces to E[X²] − 2(E[X])² + (E[X])² = E[X²] − (E[X])². Therefore you can also say: the variance of a random variable X is the second-order moment about the origin minus the square of the expectation. We will go on computing this as we introduce the special random variables. The first, and simplest, random variable we talk about is the Bernoulli random variable. It is named after the Swiss mathematician James Bernoulli, who defined it sometime around 1700, and you will see that it is a very basic random variable which we use to build other special kinds of random variables. It describes the situation in which the outcome of an experiment is either a success or a failure — very simple: you perform an experiment, and the outcome is either a success or a failure. For example, if you toss a coin, you can say that the appearance of a head is a success and the appearance of a tail is a failure.
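The identity Var(X) = E[X²] − (E[X])² derived above can be checked against the definition for the two-dice sum:

```python
from fractions import Fraction

pmf = {x: Fraction(6 - abs(x - 7), 36) for x in range(2, 13)}

ex = sum(x * p for x, p in pmf.items())        # first moment E[X]
ex2 = sum(x * x * p for x, p in pmf.items())   # second moment E[X^2]

# Variance two ways: by definition, and by the shortcut formula.
var_def = sum((x - ex) ** 2 * p for x, p in pmf.items())
var_short = ex2 - ex ** 2
assert var_def == var_short
print(var_short)  # 35/6
```

Both routes give 35/6 for this random variable, confirming term by term that expanding the square and using linearity loses nothing.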
For the values that X takes, we associate X = 1 with a success and X = 0 with a failure. Then say P(X = 1) = p, where of course p is a number belonging to the open interval (0, 1) — I exclude both endpoints so that we are describing a meaningful random variable, a meaningful experiment in which the outcome is either a success or a failure. So P(X = 1) = p and P(X = 0) = 1 − p. Now, if you compute the expectation, the first moment of this random variable, it is simply 1·p + 0·(1 − p), because the random variable takes only the values 1 and 0; therefore E[X] = p, the probability of a success. For the variance, use the formula above: E[X²] = 1²·p + 0²·(1 − p) = p again, so the second-order moment is also p, and Var(X) = p − p², the second moment minus the square of the first moment. These are very simple quantities that you can compute right away; further on we will use other special discrete random variables, and of course one will also talk about continuous random variables. Now let me illustrate an interesting aspect of the expected value. I have taken this example from Sheldon Ross. A class of 120 students is driven in 3 buses to a musical concert: 36 are seated in the first bus, 40 in the second, and 44 in the third. When the buses arrive, one of the 120 students is chosen at random from the whole group that gets down from the buses. Let X denote the number of students on the bus of the randomly chosen student. Understand this event carefully: X is the number of students on the bus carrying the randomly chosen student.
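The Bernoulli moments just computed fall out of the general definitions with no special-casing. A small sketch (the helper function name is mine, not standard terminology):

```python
from fractions import Fraction

def bernoulli_moments(p):
    """E[X] and Var(X) for a Bernoulli(p) variable, straight from the pmf."""
    pmf = {1: p, 0: 1 - p}
    ex = sum(x * q for x, q in pmf.items())          # = p
    ex2 = sum(x * x * q for x, q in pmf.items())     # = p, since 1^2 = 1
    return ex, ex2 - ex ** 2                          # (p, p - p^2)

p = Fraction(1, 3)
ex, var = bernoulli_moments(p)
assert ex == p
assert var == p * (1 - p)
```

Note that p − p² = p(1 − p), the form in which the Bernoulli variance is usually quoted.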
So we pick one student at random — again a random process, choosing one student out of 120 — and X is the random variable denoting the number of students on the bus on which this randomly chosen student was sitting. We have to find E[X]. First, since the student is chosen at random, every student out of the group of 120 is equally likely. Therefore the probability that the chosen student comes from the first bus, P(X = 36), is 36/120. Then P(X = 40) = 40/120, because that many students are travelling in the second bus, and finally P(X = 44) = 44/120. So the expected value of this random variable is 36 · (36/120) + 40 · (40/120) + 44 · (44/120). When you add up these numbers, this comes out to 1208/30, which is approximately 40.2667. On the other hand, if you consider the buses themselves, each bus is equally likely to be chosen, with probability 1/3, so the average number of students on a bus is (1/3)(36 + 40 + 44) = 120/3 = 40. Now 40 is less than 40.2667, and this is what I want to point out: the expected number of students on the bus of the randomly chosen student is more than the average number of students on a bus.
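The two averages compared above can be computed side by side — the size-biased expectation over students versus the plain average over buses:

```python
from fractions import Fraction

sizes = [36, 40, 44]
total = sum(sizes)  # 120 students in all

# X = size of the chosen student's bus; P(X = s) = s / 120 for each bus size s.
ex = sum(Fraction(s, total) * s for s in sizes)

# Average bus size when each *bus* is chosen with probability 1/3.
avg = Fraction(sum(sizes), len(sizes))

print(float(ex))  # approximately 40.2667
print(avg)        # 40
assert ex > avg   # the size-biased expectation exceeds the plain average
```

The inequality is not an accident of these numbers: E[X] = Σ s²/120 weights large buses more heavily than the uniform 1/3 weighting, so it always comes out at least as large, with equality only when all buses are the same size.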
Think about why this is happening: the more students there are on a bus, the greater the chance that the randomly chosen student comes from that bus, because a larger share of the 120 students are travelling on it. So the larger buses are over-represented when you average over students rather than over buses. Do think about this; I thought I would end this lecture with this interesting example. As we go on, we will see various implications and uses of these measures — the expected value, the variance, and so on — and we will introduce some more.