So, we continue with the delta function: we examine some applications of the concept to the kind of problems we are going to deal with, and we will recount two applications.

One application is transforming discrete functions into continuous functions. Suppose we have a situation where some lattice is defined, not necessarily equally spaced, located at the space points x_0, x_1, x_2, ..., x_n, and suppose some probabilities are defined only on these points, say P_0, P_1, P_2, ..., P_n and so on. These are of course very well known objects in probability theory; they are called probability mass functions, because they are pure probabilities defined on discrete points. So the probability of finding a particle at the lattice point x_1 is P_1, the probability of finding a particle at the lattice point x_n is P_n, and so on. We can do a lot of work with these discrete distributions without any problem. However, suppose we have to work together with continuous distributions and we want to adopt a single formalism to handle both. It is then sometimes useful to convert these discrete distributions into continuous distributions, and the delta function affords us a way to do that. We can illustrate the point as follows: if I define P(x) as my probability density function, a continuous representation of P_n with the index n now replaced by a continuous variable x, it is possible to construct it as

P(x) = Σ_n P_n δ(x − x_n),

that is, you multiply each P_n by a delta function with the shifted argument x − x_n and sum over all lattice points. That defines a probability density function which is now continuous. It does not carry any information beyond what the discrete distribution carried, but it affords us a continuous treatment: for example, for the discrete function I could not have carried out a Fourier transform or a Laplace transform, because these are continuous transforms, whereas for P(x) I can carry out a Fourier transform, or for that matter a Laplace transform (a small numerical sketch of this follows at the end of this passage). So it is of some advantage in some situations.

Another practical application of a similar concept: if I have charges located at various points in space, each of unit magnitude, then I assign to the charge at x_n a value q_n which can take the values +1 or −1, and again I can write a continuous charge density function in the form

ρ(x) = Σ_n q_n δ(x − x_n),

where, depending on the local charge, q_n is either +1 or −1. This is very useful in several problems.

Another important application of the delta function is in transforming distributions. Suppose I have a probability distribution function in two variables x and y, with x and y independent; it can be, for example, the two-dimensional velocity distribution of a molecule, the x component and the y component, with each component running from minus infinity to infinity. Now, for some reason, I want to transform this distribution in terms of a new quantity z: it could be the product xy, or the sum x + y, or in general some other function z = g(x, y), where g is a known, given function.
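Before moving on, here is a quick numerical illustration of the first application. A minimal sketch, assuming some made-up lattice points and masses and using numpy (none of which appear in the lecture): by the sifting property, the Fourier transform of P(x) = Σ_n P_n δ(x − x_n) collapses to a plain sum, so the continuous formalism is available even though all the weight sits on isolated points.

```python
import numpy as np

# Hypothetical lattice sites and probability masses (illustrative only).
x_n = np.array([0.0, 1.0, 2.5, 4.0])   # lattice points x_0 ... x_3
P_n = np.array([0.1, 0.4, 0.3, 0.2])   # masses P_0 ... P_3, summing to 1

def char_function(k):
    """Fourier transform of P(x) = sum_n P_n * delta(x - x_n).

    The sifting property of each delta function reduces the integral
    of e^{ikx} P(x) over all x to the discrete sum below.
    """
    return np.sum(P_n * np.exp(1j * k * x_n))

print(char_function(0.0))   # (1+0j): the masses are normalized
print(char_function(0.5))   # a complex number of modulus <= 1
```

With that aside done, let us return to transforming distributions.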
So, in terms of the x, y variables we are given z = g(x, y), and the matter of interest is: what will be the function φ(z), given the fact that x and y obey the distribution function denoted by f(x, y)? To carry out this task the delta function becomes very handy. We define

φ(z) = ∫∫ f(x, y) δ(z − g(x, y)) dx dy,

the integration running over the entire space of x and y: the function f(x, y) subject to the constraint that z = g(x, y). Basically, this process asserts that z = g(x, y) is a definite result, represented by the delta function. In fact, one interpretation of the delta function in terms of distribution theory is as the representation of a deterministic process via a probability distribution: when you say a probability distribution is δ(x), it basically means the variable has a definite value, its only value, at x = 0. With that understanding, the problem is now reduced to merely evaluating this integral. So it is a formally very useful process.

To carry out a specific evaluation, let us consider the case z = x + y: I have a function f(x, y) and I need to evaluate the distribution of a quantity z which is the sum of these two variables. Take the specific case of a distribution function f(x, y) which is Gaussian; since it has two independent variables, the normalized Gaussian is

f(x, y) = (1 / 2πσ²) e^{−(x² + y²)/2σ²},

supposing my joint distribution of the two independent variables x and y is given by a Gaussian. This is of course a fairly simple situation, but it illustrates the point. I am now required to project this distribution onto the variable z defined by z = x + y, that is, to obtain the function φ(z). Following the equation written above, with g(x, y) = x + y here, the factor 1/2πσ² stands outside and the space of integration is minus infinity to infinity in both variables:

φ(z) = (1 / 2πσ²) ∫∫ e^{−(x² + y²)/2σ²} δ(z − x − y) dx dy.

Now, we know the properties of the delta function: it is basically a value selector at the point where its argument becomes 0. Let us first carry out the integral with respect to the y variable; we have the option of doing either y first or x first, but we decide to do y first. The integration with respect to y is simplified by the delta function, because it will select the value of the integrand at y = z − x, which is where the argument of the delta function becomes 0. At that point δ(z − x − y) is something like δ(0); of course δ(0) by itself does not have a value, but inside an integral it selects the value of the function corresponding to y = z − x. The y integral is then over, and only the x integration is left:

φ(z) = (1 / 2πσ²) ∫ e^{−(x² + (z − x)²)/2σ²} dx.

So we now have a one-dimensional Gaussian integral; the 2D problem is reduced to a 1D problem. We can actually work it out by expanding (z − x)²: a 2x² term appears, and we get

φ(z) = (1 / 2πσ²) ∫ e^{−(2x² − 2zx + z²)/2σ²} dx.
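For completeness, the remaining algebra, including the completion of the square that the narration below skips, can be written out compactly:

```latex
-\frac{2x^2 - 2zx + z^2}{2\sigma^2}
  = -\frac{z^2}{2\sigma^2} - \frac{x^2 - zx}{\sigma^2}
  = -\frac{z^2}{4\sigma^2} - \frac{(x - z/2)^2}{\sigma^2},
\qquad\text{so}\qquad
\phi(z) = \frac{e^{-z^2/4\sigma^2}}{2\pi\sigma^2}
          \int_{-\infty}^{\infty} e^{-(x - z/2)^2/\sigma^2}\,dx
        = \frac{e^{-z^2/4\sigma^2}}{2\pi\sigma^2}\,\sigma\sqrt{\pi}
        = \frac{1}{2\sqrt{\pi}\,\sigma}\,e^{-z^2/4\sigma^2}.
```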
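The end result is a Gaussian in z with variance 2σ², as the discussion below emphasizes, and that is easy to check by sampling. A minimal sketch, assuming σ = 1 and a fixed numpy seed (both arbitrary choices, not part of the lecture):

```python
import numpy as np

sigma = 1.0                        # arbitrary width for the check
rng = np.random.default_rng(0)
N = 1_000_000

# x and y independent N(0, sigma^2); the derivation says z = x + y
# should follow phi(z) = exp(-z^2 / 4 sigma^2) / (2 sqrt(pi) sigma),
# i.e. a Gaussian with variance 2 * sigma^2.
x = rng.normal(0.0, sigma, N)
y = rng.normal(0.0, sigma, N)
z = x + y

print(z.var())   # ~ 2.0   = 2 * sigma**2
print(z.std())   # ~ 1.414 = sqrt(2) * sigma

# Empirical density near z = 0 versus phi(0) = 1 / (2 sqrt(pi) sigma).
hist, edges = np.histogram(z, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(hist[np.argmin(np.abs(centers))])       # ~ 0.282
print(1.0 / (2.0 * np.sqrt(np.pi) * sigma))   # = 0.2821...
```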
We can now take e^{−z²/2σ²} out of the integral, because we are not integrating over z, we are integrating over x. So the whole thing becomes

φ(z) = (1 / 2πσ²) e^{−z²/2σ²} ∫ e^{−(x² − zx)/σ²} dx.

The exponent can be made into a perfect square by adding z²/4 and then subtracting z²/4; we will skip that process here, since what remains is just a Gaussian integral (the algebra is written out compactly in the block above). The final value of this integral then turns out to be

φ(z) = (1 / (2√π σ)) e^{−z²/4σ²}.

So this is also a Gaussian, but its variance is now 2σ². Earlier the individual variances entering the distribution were σ² each; now the overall variance has doubled, so √2 σ is the standard deviation, as the sampling sketch above also confirms. We have been able to contract a two-variable distribution to a single-variable distribution under the condition that z is the sum of the two independent variables. This is one very useful demonstration of the application; in fact, in most cases this becomes a very useful method for handling such problems. We will now proceed to additional concepts required in the study of stochastic phenomena; any other property of delta functions that we might need as we go along, we will derive then and there.

Let us now examine the concept of generating functions. While dealing with Fourier transforms we mentioned that one can define a characteristic function, which is actually the Fourier transform of a function f(x); if f(x) is basically a probability distribution, then the characteristic function χ(k) has several advantages: one can obtain the various moment properties of the distribution from just the knowledge of χ(k). There are also other ways of producing the equivalent of the characteristic function, without having to do Fourier transforms and without having to use quantities with imaginary arguments, and these are called generating functions. We define a generating function applicable to a continuous distribution, and separately for discrete distributions.

Suppose my function f(x) is continuous and defined over some space of x, say [a, b]; if the domain is the entire real line then it can be (−∞, ∞), and if the function is valued only for positive values then it will be [0, ∞). Regardless of how it is defined, one can define a generating function G(s) of a parameter s; we must always note that if x is the random variable, s is defined as its conjugate parameter, via

G(s) ≡ ∫ e^{sx} f(x) dx,

the integral taken over the space of x: an exponential function multiplied by the given function f(x) and integrated over all x. That is how we define the generating function; wherever possible we will mark a definition by three parallel lines (≡, "identically equal"). Whether s should be positive or negative we will not specify right now. The domain of s will depend upon the region in which this integral exists; it may not exist for all values of s, but as long as it exists on some support space for s, the generating function is useful. This is basically the definition.

Once we have a generating function, like the characteristic function it lets us obtain the moments of the distribution function by differentiating G(s). For example, differentiating with respect to s under the integral brings a factor of x out:

G′(s) = ∫ x e^{sx} f(x) dx.
If we now set s = 0, this is simply ∫ x f(x) dx, and if f is a probability density function then ∫ x f(x) dx is going to be the average value, denoted sometimes as x-bar, or using angular brackets, or in statistics quite often by the notation μ. In general, if you define the moments m_k = ∫ x^k f(x) dx, then the mean, x-bar or μ, is m_1. Similarly the mean square, x²-bar, will be m_2, by definition the second moment, and in general by differentiating G(s) we can show that, for example, G″(0) is simply ∫ x² f(x) dx: you differentiate e^{sx} twice, you get x², then put s = 0 and you get this integral, which is m_2. So the first moment is the first derivative of the generating function, the second moment is the second derivative of the generating function, and so on we can go: the k-th moment m_k is the k-th derivative of the generating function evaluated at s = 0. As a shorthand notation we have set the value 0 in the argument of G itself, although it has to be understood that first we differentiate and then set the value.

Let us work out an example to understand the meaning of the generating function. Consider a distribution function having the form f(x) = λ e^{−λx}. This distribution function is often called the interval distribution and is quite useful; λ then has the meaning of a mean rate of occurrence in space, x being a space variable. Suppose we have a problem where you have dots along a line. One can then ask the question: given a dot at some point, say at 0, what is the probability that there is no dot up to a distance x and a dot appears exactly at x, between x and x + dx? That probability is given by f(x) = λ e^{−λx}.

If you want to develop the generating function for this interval distribution, the domain of x is 0 to infinity (the lowest interval possible is 0), and by definition we have to multiply the function λ e^{−λt} by e^{st}; since x is a dummy variable, we can as well replace it with t. This becomes

G(s) = λ ∫₀^∞ e^{−(λ − s)t} dt,

with one factor of λ outside. When I integrate this function, the integrand has to tend to 0 as t goes to infinity, and that places a demand on the value of λ − s: it must be positive. If that is met, the integral can be written as 1/(λ − s), so

G(s) = λ / (λ − s), valid for all s < λ.

So it is valid for negative s definitely, but it is even valid for positive s up to the value λ. Basically this follows from the fact that ∫₀^∞ e^{−Ax} dx = 1/A if A > 0, and is otherwise not defined; we have used that property in the above integration.
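As a closing check on this example, both the closed form G(s) = λ/(λ − s) and the moment-by-differentiation rule can be verified numerically. A minimal sketch, assuming an arbitrary rate λ = 2 and using scipy's quad (neither appears in the lecture):

```python
import numpy as np
from scipy.integrate import quad

lam = 2.0   # arbitrary rate for the check; any lam > 0 works

def G(s):
    """Generating function of f(t) = lam * exp(-lam * t) on [0, inf).

    The integral converges only for s < lam, matching the text.
    """
    val, _ = quad(lambda t: np.exp(s * t) * lam * np.exp(-lam * t), 0, np.inf)
    return val

print(G(0.5), lam / (lam - 0.5))   # both ~ 1.3333, i.e. lam / (lam - s)

# First moment via a central difference of G at s = 0;
# the exact mean of the interval distribution is m_1 = G'(0) = 1 / lam.
h = 1e-5
print((G(h) - G(-h)) / (2 * h), 1.0 / lam)   # both ~ 0.5
```

Thank you.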