Let us get started. In the last lectures we discussed discrete random variables: Bernoulli, binomial, geometric and Poisson. Does anybody have any questions on what we have covered so far? Okay, today we will talk about continuous random variables. As I said, a continuous random variable is one whose range, the set of values it can take, is uncountable. The first example is the uniform distribution. It comes with two parameters and is denoted Uniform(A, B), where A and B are real numbers. So we have a random variable X which is uniformly distributed, and the parameters A and B say that X takes values in the interval A to B. Since this is a continuous random variable, it has an associated probability density function, which is defined as 1/(B - A) on the interval [A, B] and 0 outside. Notice that this does not depend on x, so the function is constant on [A, B]. Pictorially, if this is my A and this is my B, the density is flat between them at height 1/(B - A). There are many situations where you might want such a random variable. For example, you may model somebody's height or weight as uniformly distributed between some values A and B, or a temperature that stays between A and B. This is often used when you have no prior information and you assume everything is almost equally likely; that is why the density is constant on A to B, meaning the rate of change of probability is the same at every point. There is an analogue of this in the discrete case too, which we did not actually discuss.
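To make the flat density concrete, here is a small numerical sketch (not from the lecture; the helper name `uniform_pdf` is my own) that evaluates the Uniform(A, B) density pointwise:

```python
import numpy as np

# Sketch (assumed helper, not from the lecture): the Uniform(a, b)
# density is the constant 1/(b - a) on [a, b] and 0 outside.
def uniform_pdf(x, a, b):
    x = np.asarray(x, dtype=float)
    return np.where((x >= a) & (x <= b), 1.0 / (b - a), 0.0)

# For Uniform(2, 6) the density is 1/(6 - 2) = 0.25 inside the
# interval and 0 outside it.
vals = uniform_pdf([1.0, 3.0, 5.0, 7.0], 2.0, 6.0)
```

Note that the value 0.25 does not change between x = 3 and x = 5; that is exactly the "constant on [A, B]" property described above.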
So what could be the analogue in the discrete case? Suppose instead of a continuous random variable I now take a discrete one, which takes values x1, x2, x3, up to xn. I want to say that this discrete random variable is uniform. What could that mean? Yes: all these values have the same probability, so the probability that X takes any value xi is 1/n, for i = 1, 2, ..., n. Since the variable is discrete, it has a probability mass function, defined on x1, x2, ..., xn, and if you draw it, all the bars have the same height, exactly 1/n. So you can have a discrete uniform, where all the probability masses are equal, and a continuous uniform, where the PDF is constant on the interval [A, B]. Similarly, we can have other random variables where different values occur with different probabilities. One such random variable we are going to study is the exponential. It is denoted Exp(λ) and comes with a parameter λ which is strictly positive. This random variable X is positive-valued: it takes values in the entire range 0 to infinity, and its PDF is λe^(-λx) for x > 0 and 0 otherwise.
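The discrete uniform PMF above can be checked in one line; this is a small numerical sketch, not part of the lecture:

```python
import numpy as np

# Sketch (not from the lecture): a discrete uniform on n outcomes
# gives every value the same mass 1/n, so the PMF sums to 1.
n = 5
pmf = np.full(n, 1.0 / n)   # P(X = x_i) = 1/n for i = 1, ..., n
total = pmf.sum()
```

Here every bar of the PMF has the same height 1/5 = 0.2, and the heights add up to 1, as any valid PMF must.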
Since this is a PDF, we know that integrating it from minus infinity to plus infinity must give 1; you can verify that the area under this function is indeed 1, which is straightforward. Now, how does this function look? For x ≤ 0 it stays at 0 over the whole negative part of the real line; at x = 0 it starts at the value λ, and as x increases it decays exponentially at rate λ. Say this curve corresponds to some particular λ1, Exp(λ1), and now I take λ2 greater than λ1. Will that curve lie above or below this one? The Exp(λ2) curve starts higher, at λ2, but falls faster. That is why the factor λ is called the rate of the exponential: a larger λ means a higher rate, and the density falls off faster. This distribution is often used to model lifetimes. For example, how long will a bulb work before it breaks down? It may break after x seconds or y seconds, and those times are continuous values; we count even fractions of a second, so all positive values are possible. The probability that the bulb lasts longer and longer becomes smaller and smaller, because as the bulb keeps burning its remaining life reduces and it may break at any moment; that is where you want to use such an exponential distribution.
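Both claims above, that the area under the density is 1 and that a larger rate starts higher but falls faster, can be checked numerically. This is a sketch with my own helper name `exp_pdf`, not code from the lecture:

```python
import numpy as np

# Sketch (assumed helper): the Exp(lam) density lam * exp(-lam * x)
# for x > 0, and 0 otherwise.
def exp_pdf(x, lam):
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, lam * np.exp(-lam * x), 0.0)

# A Riemann sum over a long enough range approximates the integral;
# the total area under the density should come out close to 1.
dx = 1e-4
x = np.arange(dx, 50.0, dx)
area = np.sum(exp_pdf(x, 0.5)) * dx

# A larger rate starts higher near x = 0 but decays faster, so the
# two curves cross: Exp(2) is above Exp(1) near zero, below it far out.
near_zero = (exp_pdf(0.01, 2.0), exp_pdf(0.01, 1.0))
far_out = (exp_pdf(5.0, 2.0), exp_pdf(5.0, 1.0))
```

The crossing of the two curves is the picture drawn on the board: the λ2 curve dominates near the origin and the λ1 curve dominates in the tail.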
Next is the Gaussian, one of the distributions we will use most commonly because of its nice properties. It is denoted by two parameters, μ and σ², and it takes values over the entire real line from minus infinity to plus infinity. Notice that the exponential only took values between 0 and infinity, but sometimes we want both positives and negatives, so we look for such a possibility: the Gaussian takes values from minus infinity to plus infinity, and its PDF is (1/√(2πσ²)) e^(-(x-μ)²/(2σ²)). How does it look? It is symmetric around μ, takes its maximum value at μ, and then decays on both sides. So μ decides where the peak happens, but there are two parameters; what does σ decide? σ decides the width of the bell. If μ is larger, the bell shifts to the right: say this curve is for some μ1 and you have another μ2 which is larger than μ1; then the bell shifts right and is again symmetric around the point μ2. And σ² defines how wide the bell is: if σ² is larger, the spread is larger and the curve becomes bulgier, like the upper one here, which is for a larger σ² with the same μ1.
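The three properties just described, symmetry around μ, maximum at μ, and a wider spread for larger σ², show up directly if you evaluate the density. A small numerical sketch (the helper name `gaussian_pdf` is my own):

```python
import numpy as np

# Sketch (assumed helper): the N(mu, sigma^2) density
# (1 / sqrt(2 pi sigma^2)) * exp(-(x - mu)^2 / (2 sigma^2)).
def gaussian_pdf(x, mu, sigma2):
    x = np.asarray(x, dtype=float)
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

mu, sigma2 = 3.0, 1.0
left = gaussian_pdf(mu - 1.0, mu, sigma2)   # one unit left of the peak
right = gaussian_pdf(mu + 1.0, mu, sigma2)  # one unit right of the peak
peak = gaussian_pdf(mu, mu, sigma2)         # maximum value, at x = mu
wide_peak = gaussian_pdf(mu, mu, 4.0)       # larger sigma^2: lower, wider
```

Points at equal distance on either side of μ get equal density (symmetry), the density at μ exceeds both (peak at μ), and with the larger σ² the peak value drops because the same total area of 1 is spread over a wider bell.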
Now, this is often used whenever you have to handle both positive and negative quantities. One common case is errors, which can be positive or negative, and noise, which can also be positive or negative. So whenever we want to model such noise, we use a Gaussian random variable. One example: suppose you have a target and you want to hit it. Whenever you shoot, you may fall short of the target or go beyond it; going beyond is taken as a positive error and falling short as a negative one. You want to land exactly on the target, but sometimes you exceed it and sometimes you fall short, and the outcome will not be the same every time; it is a random quantity, depending on many factors such as wind velocity, humidity and temperature, all of which matter when you are firing something and all of which are random. In such cases you may want to model the error with a Gaussian distribution. The other distribution is the Rayleigh. The Rayleigh is actually derived from the Gaussian; it comes with a parameter σ and is denoted Rayleigh(σ²), where σ is a positive quantity. The Rayleigh takes values between 0 and infinity, that is, positive real numbers, and its PDF is (x/σ²) e^(-x²/(2σ²)) for x ≥ 0 and 0 otherwise. Notice that it is somewhat similar to the Gaussian but not exactly the same.
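The Rayleigh density just written down can also be checked numerically: it is 0 at the origin, peaks at x = σ, and still integrates to 1. A sketch with my own helper name `rayleigh_pdf`, not from the lecture:

```python
import numpy as np

# Sketch (assumed helper): the Rayleigh(sigma^2) density
# (x / sigma^2) * exp(-x^2 / (2 sigma^2)) for x >= 0.
def rayleigh_pdf(x, sigma):
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2)), 0.0)

# Unlike the Gaussian, the factor x outside the exponential forces the
# density to 0 at x = 0; it then rises to a peak at x = sigma and decays.
sigma = 2.0
dx = 1e-3
x = np.arange(0.0, 30.0, dx)
area = np.sum(rayleigh_pdf(x, sigma)) * dx
peak_val = rayleigh_pdf(sigma, sigma)
```

The extra factor x in front of the exponential is exactly what distinguishes this density from a Gaussian shape restricted to the positive axis.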
So the x appears not only inside the exponential; it also comes outside the exponential as a multiplying factor. Now, one property of the Rayleigh distribution is that it can be thought of as the envelope of two Gaussian random variables. For example, take two Gaussian random variables X1 and X2; if you take the sum of their squares, X1² + X2², and then take the square root, the new random variable X = √(X1² + X2²) follows a Rayleigh distribution. So this can be thought of as the envelope of noise: whenever you have two signals and you want to find the amplitude, you take their squares, add them and take the square root. Say this is the X1 axis and this is the X2 axis, and you have a particular point (X1, X2). Its distance from the origin is naturally √(X1² + X2²). If the coordinates X1 and X2 are both Gaussian distributed, then the distance of the point from the origin is Rayleigh distributed.
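The envelope property can be demonstrated by simulation. This is a sketch, not from the lecture; it uses the known fact that a Rayleigh variable built from two independent N(0, σ²) coordinates has mean σ√(π/2):

```python
import numpy as np

# Simulation sketch (not from the lecture): the distance from the
# origin of a point whose two coordinates are independent zero-mean
# Gaussians follows a Rayleigh distribution.
rng = np.random.default_rng(0)
sigma = 2.0
x1 = rng.normal(0.0, sigma, 200_000)
x2 = rng.normal(0.0, sigma, 200_000)
r = np.sqrt(x1**2 + x2**2)              # envelope of the two Gaussians

# Known Rayleigh(sigma^2) mean: sigma * sqrt(pi / 2); the sample mean
# of the simulated distances should be close to it.
expected_mean = sigma * np.sqrt(np.pi / 2.0)
sample_mean = r.mean()
```

Every simulated distance is nonnegative, matching the Rayleigh's support on [0, ∞), even though the underlying Gaussian coordinates take both signs.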