Last time we considered Gaussian quadrature with two points. We found interpolation points which are the zeros of the Legendre polynomial of degree 2. When we interpolate at these two points by a linear function and integrate the linear function, we get a formula for approximate quadrature, and that formula is exact for cubic polynomials. We first found the points on the interval [-1, 1]: the Gauss points in the interval [-1, 1] are -1/√3 and +1/√3. Next we looked at a one-to-one, onto affine map from the interval [-1, 1] to a general interval [a, b], and using this map we defined the Gauss formula with two points for the interval [a, b]. We obtained an error formula for this numerical quadrature, and then we looked at composite Gaussian quadrature with two points. Our interval [a, b] was subdivided into small intervals of length h = (b - a)/n. On each of these intervals we applied our basic two-point Gauss formula, and we obtained a composite Gaussian quadrature whose error is of order h^4. So it is of the same order as the composite Simpson's rule, under the assumption that our function is 4 times differentiable. Today we are going to define a general Gauss formula. So far we had considered only two points; now we will first define what we mean by n Gauss points, or n + 1 Gauss points. The two points -1/√3 and +1/√3 were obtained by looking at the three functions 1, x, x^2 and orthonormalizing. We will use the same idea: we will look at 1, x, x^2, x^3 and so on, orthonormalize these functions to get orthonormal polynomials, and the zeros of these orthonormal polynomials are going to be our Gauss points.
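To make the recap concrete, here is a minimal Python sketch of the two-point rule and its composite version (the function names gauss2 and composite_gauss2 are my own):

```python
import numpy as np

def gauss2(f, a, b):
    """Two-point Gauss rule on [a, b]: map the Gauss points
    -1/sqrt(3) and +1/sqrt(3) from [-1, 1] onto [a, b]."""
    mid, half = (a + b) / 2.0, (b - a) / 2.0
    t = 1.0 / np.sqrt(3.0)
    # both weights on [-1, 1] equal 1, so each becomes (b - a)/2
    return half * (f(mid - half * t) + f(mid + half * t))

def composite_gauss2(f, a, b, n):
    """Composite rule: apply gauss2 on n subintervals of length (b - a)/n."""
    edges = np.linspace(a, b, n + 1)
    return sum(gauss2(f, edges[i], edges[i + 1]) for i in range(n))
```

Since the basic rule is exact for cubics, the composite error behaves like h^4 when the integrand is 4 times differentiable.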
They will have a similar property: if you look at n + 1 Gauss points, fit a polynomial of degree less than or equal to n and integrate, then we get a formula for numerical quadrature of the type summation of w_i f(x_i), i going from 0 to n. Now, this formula we would expect to be exact for polynomials of degree less than or equal to n, but we will see that it is going to be exact for polynomials of degree less than or equal to 2n + 1. So let us first define the Legendre polynomials and then define the Gaussian quadrature. Our setting is X = C[a, b]. Our inner product is: the inner product of f and g is the integral from a to b of f(x) g(x) dx. Look at the functions f_0(x) = 1, f_1(x) = x, ..., f_n(x) = x^n, and so on. The norm of f is the induced norm: we denote by ||f||_2 the positive square root of the inner product of f with itself. Then the Gram-Schmidt orthonormalization is: g_0(x) = f_0(x)/||f_0||. Then for n = 1, 2, and so on, our function R_n is f_n minus the summation, j going from 0 to n - 1, of the inner product of f_n with g_j multiplied by g_j. So we have come up to the stage n - 1: we have calculated g_0, g_1, ..., g_{n-1}, and we subtract this term from f_n. Now, by the very definition, if I look at the inner product of R_n with g_k, where k varies from 0 to n - 1, that inner product is going to be 0. And then there is the normalization: g_n is R_n divided by ||R_n||_2. The polynomials which we obtain, g_0, g_1, g_2, and so on, have this property: the span of f_0, f_1, ..., f_n, that is, the set of all linear combinations of f_0, f_1, ..., f_n, which are the polynomials a_0 + a_1 x + ... + a_n x^n, is the same as the span of g_0, g_1, ..., g_n. Now, look at our function R_n.
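The Gram-Schmidt construction just described can be carried out symbolically. The following SymPy sketch (the helper name legendre_orthonormal is mine) orthonormalizes 1, x, ..., x^n on [-1, 1] and recovers the two Gauss points as the zeros of g_2:

```python
import sympy as sp

x = sp.Symbol('x')

def legendre_orthonormal(n, a=-1, b=1):
    """Orthonormalize 1, x, ..., x**n on [a, b] by Gram-Schmidt,
    with inner product <f, g> = integral_a^b f(x) g(x) dx."""
    inner = lambda f, g: sp.integrate(f * g, (x, a, b))
    gs = []
    for k in range(n + 1):
        r = x**k - sum(inner(x**k, g) * g for g in gs)  # subtract projections
        gs.append(sp.expand(r / sp.sqrt(inner(r, r))))  # normalize
    return gs

g = legendre_orthonormal(2)
# the zeros of g_2 are the two Gauss points +-1/sqrt(3)
roots = sp.solve(g[2], x)
```

The orthonormality ⟨g_i, g_j⟩ = δ_ij can be verified directly by integrating the returned polynomials.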
In R_n, we have the function f_n, with f_n(x) = x^n, and then we are subtracting something. Each g_j, for j going from 0 to n - 1, is a polynomial of degree less than or equal to n - 1. So f_n is x^n, we are subtracting a polynomial of degree less than or equal to n - 1, so R_n is going to be a polynomial of degree n, and we are dividing by a constant. So g_n is going to be a polynomial of degree n. These g_0, g_1, g_2, ..., g_n are known as the Legendre polynomials. Now, g_n is a polynomial of degree n, so it is going to have n roots, but what is important is that those n roots are distinct. I am not going to prove that part, but it is a property of the Legendre polynomials. So g_n has n roots, those n roots are distinct, and they are known as our Gauss points. We will look at the n + 1 Gauss points, fit a polynomial of degree less than or equal to n, integrate it, and then we will get the formula for Gaussian integration. The orthonormality property of our Legendre polynomials g_j tells us that the inner product of g_i with g_j is equal to 1 if i = j and 0 if i ≠ j. So, in particular, if you look at g_{n+1}, this g_{n+1} will be perpendicular to g_0, g_1, ..., g_n. It will also be perpendicular to g_{n+2}, but that part we do not need. Now, our g_{n+1} is perpendicular to g_0, g_1, ..., g_n; the span of g_0, g_1, g_2, ..., g_n is the space of polynomials of degree less than or equal to n, and hence our g_{n+1} is going to be perpendicular to any polynomial of degree less than or equal to n.
So we have the inner product of g_{n+1} with g_j equal to 0 for j = 0, 1, ..., n; the span of g_0, g_1, ..., g_n is equal to the span of 1, x, ..., x^n; and hence the inner product of g_{n+1} with a polynomial a_0 + a_1 x + ... + a_n x^n is equal to 0 for any values of a_0, a_1, ..., a_n. The coefficients a_0, a_1, ..., a_n are real numbers. Now, g_{n+1} has n + 1 distinct zeros; let me denote those zeros by x_0, x_1, ..., x_n. Since g_{n+1} is a polynomial of degree n + 1, let us factorize it using these zeros: we will have the factors (x - x_0), (x - x_1), ..., (x - x_n). We have in all n + 1 brackets, so the product contributes an x^{n+1} term and then the lower order terms. Because g_{n+1} is a polynomial of degree n + 1, the remaining coefficient is going to be a constant; it cannot be a function of x, because then g_{n+1} would be a polynomial of degree bigger than n + 1, but g_{n+1} is a polynomial of exact degree n + 1. It is perpendicular to a_0 + a_1 x + ... + a_n x^n for any values of a_0, a_1, ..., a_n, and hence we can conclude that (x - x_0)(x - x_1)...(x - x_n) is going to be perpendicular to x^j for j = 0, 1, ..., n. You substitute: first a_0 = 1 and the remaining coefficients 0, then a_1 = 1 and the remaining coefficients 0, and so on; the leading coefficient α_{n+1} is a constant, so it comes out of the integration sign. This product (x - x_0)(x - x_1)...(x - x_n) we denote by w(x). This is the crucial property of our Gauss points. So, g_{n+1} is the Legendre polynomial of degree n + 1, obtained by orthonormalizing the functions 1, x, x^2, ..., x^{n+1}. This g_{n+1} has n + 1 zeros, and those zeros are distinct.
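This orthogonality of w(x) to all powers x^j, j = 0, ..., n, can be verified numerically. The sketch below builds w(x) from the Gauss points returned by NumPy's leggauss and integrates w(x) x^j exactly via polynomial antiderivatives:

```python
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial.legendre import leggauss

n = 3                        # use n + 1 = 4 Gauss points on [-1, 1]
nodes, _ = leggauss(n + 1)   # zeros of the Legendre polynomial of degree n + 1
w = Polynomial.fromroots(nodes)   # w(x) = (x - x0)(x - x1)...(x - xn)

# crucial property: integral of w(x) * x^j over [-1, 1] vanishes for j = 0..n
moments = []
for j in range(n + 1):
    antideriv = (w * Polynomial([0.0] * j + [1.0])).integ()
    moments.append(antideriv(1.0) - antideriv(-1.0))
```

All entries of moments come out as zero up to rounding error.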
So we denote them by x_0, x_1, ..., x_n, and then if you look at w(x) = (x - x_0)(x - x_1)...(x - x_n), its inner product with x^j is going to be 0 for j = 0, 1, ..., n. Using this property, we will show that our Gaussian quadrature is going to be exact for polynomials of degree higher than n. Now let us look at the interpolating polynomial, in its Lagrange form: p_n(x) is given by the summation of f(x_i) l_i(x), i going from 0 to n. We have now fixed our interpolation points x_0, x_1, ..., x_n. We fit a polynomial, we integrate, and then we get a formula of the type summation of w_i f(x_i), i going from 0 to n, and then we get the error part. The error part is an integral from a to b of a product of two functions: one function is the divided difference of f based on x_0, x_1, ..., x_n and x, and it is multiplied by w(x), and then we take that integral. If, instead of the divided difference based on the point x, we had a divided difference based on some fixed points, then I could have taken it out of the integration sign and used the fact that the integral from a to b of w(x) dx is equal to 0. Now, the integral from a to b of w(x) dx is equal to 0 because we have the integral of w(x) x^j equal to 0 for j = 0, 1, ..., n; the particular case j = 0 means precisely that the integral from a to b of w(x) dx is equal to 0. We can use this property by replacing our divided difference based on x_0, x_1, ..., x_n, x by a divided difference based on, say, x_0 repeated twice, x_1, x_2, ..., x_n, plus one more term, which is obtained by using the recurrence formula for divided differences. We have used this method earlier, and we are going to use it now for this Gaussian integration. So we have f(x) - p_n(x) equal to the error f[x_0, x_1, ..., x_n, x] w(x), where w(x) is the product (x - x_0)...(x - x_n), and the integral from a to b of w(x) x^j dx is equal to 0. Now integrate both sides.
So the integral from a to b of f(x) dx minus the integral from a to b of p_n(x) dx is equal to this error term consisting of two parts: one the divided difference, the other the function w(x). Now, look at the divided difference based on x_0, x_1, ..., x_n, x. Using the recurrence relation repeatedly, we can write this to be equal to the divided difference of f based on x_0 repeated twice, x_1, x_2, ..., x_n, plus the next term, which is the divided difference based on x_0 repeated twice, x_1 repeated twice, and x_2, x_3, ..., x_n appearing only once, multiplied by (x - x_0). In the next term x_2 also will be repeated twice and we will have (x - x_0)(x - x_1), and one continues, and what one gets at the end is the divided difference based on x_0 repeated twice, x_1 repeated twice, ..., x_n repeated twice, so all the interpolation points repeated twice, and then x, multiplied by (x - x_0)(x - x_1)...(x - x_n), which is nothing but our w(x); and the property of w(x) is that the integral from a to b of w(x) x^j dx is 0 for j = 0, 1, ..., n. So now you integrate this multiplied by w(x); that integral is our error in the numerical quadrature. When I do that, the first term is a constant, so it comes out of the integration sign, the integral of w(x) dx is 0, and there is no contribution from this term. In the next term, again the divided difference is a constant not depending on x, and we have w(x) multiplied by (x - x_0) under the integral; w(x) is perpendicular to the constant function 1 and to the function x, so there will be no contribution from this term either, and likewise for all the terms except the last one. So our integral from a to b of f[x_0, x_1, ..., x_n, x] w(x) dx becomes equal to the integral from a to b of f[x_0, x_0, x_1, x_1, ..., x_n, x_n, x] w(x)^2 dx: one factor w(x) comes from the last term of the expansion and one w(x) from the error formula, so we get w(x) squared. So now our error is the integral of a product of two functions, and one of these functions is continuous, since we assume f to be sufficiently differentiable.
So we have the divided difference based on x_0 repeated twice, x_1 repeated twice, ..., x_n repeated twice, which makes 2n + 2 points in total, and then the point x, multiplied by w(x)^2; w(x)^2 is always bigger than or equal to 0, and hence the mean value theorem for integration is applicable. Using this mean value theorem, we can take the divided difference out of the integration as f[x_0, x_0, x_1, x_1, ..., x_n, x_n, c] for some point c, multiplied by the integral from a to b of w(x)^2 dx. As I said, we have x_0 repeated twice, ..., x_n repeated twice, so those are 2n + 2 points, and the point x gets replaced by the fixed point c when we take the divided difference out of the integration. That divided difference is equal to the (2n + 2)-nd derivative of f evaluated at some point c, divided by (2n + 2) factorial, and it multiplies the integral from a to b of (x - x_0)^2 ... (x - x_n)^2 dx. So we have a formula: the integral from a to b of f(x) dx equals the summation of w_i f(x_i), i going from 0 to n, based on n + 1 points, and the error contains the (2n + 2)-nd derivative of our function, which means that if our function f is a polynomial of degree less than or equal to 2n + 1, then the error is going to be equal to 0. We considered the case n = 1, with points x_0 and x_1; in that case there is no error provided f is a polynomial of degree less than or equal to 2n + 1, and for n = 1 that means a cubic polynomial. So this result generalizes: the Gauss points are a way to choose our interpolation points such that you interpolate the given function at n + 1 points, which means you are fitting a polynomial of degree n, but the error is 0 for polynomials of degree less than or equal to 2n + 1. Now, this is integration at the Gauss points.
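A quick numerical check of the exactness claim, using NumPy's leggauss to get the n + 1 Gauss points and weights (the helper name gauss_quad is mine):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def gauss_quad(f, n):
    """(n + 1)-point Gauss quadrature of f over [-1, 1]."""
    nodes, weights = leggauss(n + 1)
    return np.dot(weights, f(nodes))

# n + 1 points integrate x^(2n+1), and every lower power, exactly:
n = 3
approx = gauss_quad(lambda x: x**(2 * n + 1) + x**(2 * n), n)
exact = 0.0 + 2.0 / (2 * n + 1)   # odd power gives 0, even gives 2/(2n+1)
```

With only 4 interpolation points the rule reproduces the integral of a degree-7 polynomial to machine precision.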
So the modulus of the error comes out to be less than or equal to the infinity norm of the (2n + 2)-nd derivative of f times the integral of (x - x_0)^2 ... (x - x_n)^2 and some constant. The integration will definitely produce a term (b - a)^{2n+3}: each of the squared factors I can dominate by (b - a), which gives (b - a)^{2n+2}, and then the integral from a to b contributes one more factor of (b - a); that is how you get (b - a)^{2n+3} and some constant. One can find a more precise bound by actually integrating rather than dominating each factor by (b - a), which is what we have been doing; but in any case the error for the Gaussian integration is going to be less than or equal to this. So now we have defined Gaussian quadrature for the general case of n + 1 points. The question is whether this is going to converge for every continuous function. That means you look at our set of interpolation points, which are always going to be Gauss points. We have looked already at 2 Gauss points; those were our 2 points in the interval [a, b]. Now you look at 3 Gauss points; they will be something entirely different. Like that, if you choose your Gauss points as interpolation points, fit a polynomial, and obtain an approximate formula for integration, will it converge to the integral from a to b of f(x) dx as n tends to infinity? Please note that we are not looking at a composite rule; we are increasing the degree of the polynomial. We already know that for our interpolating polynomial p_n, no matter how you choose your nodes, there always exists a continuous function for which the interpolating polynomial does not converge to f in the maximum norm. Now, the convergence of p_n to f is only a sufficient condition for convergence of the numerical quadrature formula. It can happen that even though the polynomial does not converge to f for all continuous functions, our numerical integral can still converge to the integral from a to b of f(x) dx.
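This convergence question can be explored numerically. The sketch below applies a single, non-composite Gauss rule of increasing size to the merely continuous (not differentiable) function f(x) = |x| on [-1, 1], whose exact integral is 1:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# f(x) = |x| is continuous but not smooth, yet raising the number of
# Gauss points (no composite rule) still drives the error to zero.
f = np.abs
errors = []
for npts in (2, 4, 8, 16, 32):
    nodes, weights = leggauss(npts)
    errors.append(abs(np.dot(weights, f(nodes)) - 1.0))
```

The errors shrink steadily as the number of points grows, illustrating the convergence result proved below for all continuous functions.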
Now, this is what happens for the Gaussian quadrature rule, and it does not happen for the Newton-Cotes formulas, where you take your set of points to be the equidistant points in the interval [a, b]. In the interval [a, b] we want to look at n + 1 points: in the case of the Newton-Cotes formulas we take those points to be equidistant, and in the case of Gaussian quadrature we take them to be the Gauss points, the zeros of the Legendre polynomial. Now, to prove the convergence of the numerical quadrature to the integral from a to b of f(x) dx, two facts are going to be crucial. The first is that the weights in the Gaussian integration are always bigger than 0. The second is the Weierstrass theorem: any continuous function can be approximated by polynomials in the maximum norm. Using these two results, we are going to show that the Gaussian quadrature converges to the integral from a to b of f(x) dx as n tends to infinity, where n + 1 is the number of interpolation points. So let us show that the weights in the Gaussian integration are always bigger than 0. Now, so far, when we were writing a numerical quadrature formula, we were writing the summation of w_i f(x_i), i going from 0 to n. As such, our w_i and x_i depend on n. Look at equidistant points: in the case of equidistant points, the first two points were the two end points a and b; the next case was a, b, and (a + b)/2; but in the case after that, when we want to consider four points, our points will be a, b, and two interior points at a distance (b - a)/3 apart. So if we wanted to be specific, we should have written our x_i's as depending on n, and our weights also as depending on n. So far what we have been doing is fixing the degree of the polynomial.
That is why, in order not to make the notation cumbersome, we wrote w_i and x_i with the dependence on n understood or implicit. Now we are going to change n, so let us be more precise with our notation and write these as w_i^(n) and x_i^(n), depending on n. Now the weights w_i are the integrals from a to b of l_i(x) dx, where l_i is the Lagrange polynomial. Here I still have not written the dependence on n, but afterwards, when we look at the convergence, we will write it explicitly; at present I am writing w_i with the understanding that it depends on n. How do we obtain the w_i's? We look at the interpolating polynomial p_n in its Lagrange form, the summation of f(x_i) l_i(x), i going from 0 to n, and integrate it; the f(x_i)'s are constants, so they come out of the integration sign, and the integral from a to b of l_i(x) dx is our w_i. These Lagrange polynomials have the property that the summation of l_j(x), j going from 0 to n, is equal to 1. This was one of our tutorial problems: when you add up the Lagrange polynomials, they are equal to 1. Hence I write w_i as the integral from a to b of l_i(x) multiplied by 1, and that 1 I write as the summation over j from 0 to n of l_j(x). Let us split this sum into the term with j = i and the remaining terms with j not equal to i. So w_i is the integral from a to b of l_i(x)^2 dx plus the summation over j from 0 to n, j not equal to i, of the integral from a to b of l_i(x) l_j(x) dx. What we are going to show is that this second term is equal to 0. If we can show that this term is equal to 0, then w_i will be strictly bigger than 0, because it will be the integral from a to b of l_i(x)^2 dx. That is the idea, and in order to show that the integral from a to b of l_i(x) l_j(x) dx is equal to 0, we will use the fact that our interpolation points are not arbitrary points in the interval [a, b], but special points.
They have the property that if you look at w(x) = (x - x_0)(x - x_1)...(x - x_n), this w(x) is perpendicular to the functions x^j, j going from 0, 1, ..., n. Using this property, let us show that the integral from a to b of l_i(x) l_j(x) dx is equal to 0 if i is not equal to j. So we look at the case i not equal to j and consider l_i(x) l_j(x). The definition of l_i(x) is the product over k from 0 to n, k not equal to i, of (x - x_k)/(x_i - x_k). Similarly, l_j(x) will be the product over l from 0 to n, l not equal to j, of (x - x_l)/(x_j - x_l). (The index need not be called l; it can be any letter, it is just a dummy index.) So consider the product of l_i(x) and l_j(x). The first product contains all the factors (x - x_k) except the one with k = i. The second contains all the factors (x - x_l) except the one with l = j; because we are assuming i not equal to j, the factor (x - x_i) will be present there. So I take the factor (x - x_i)/(x_j - x_i) from the second product, corresponding to l = i, and join it with the first product. What I will then have is (x - x_0)...(x - x_n), now including the factor (x - x_i), divided by the product over k from 0 to n, k not equal to i, of (x_i - x_k), and divided by (x_j - x_i), because we are putting l = i. Having taken the term with l = i out of the second product, that product becomes the product over l from 0 to n, l not equal to j, l not equal to i, of (x - x_l)/(x_j - x_l). The numerator (x - x_0)...(x - x_n) is our function w(x); the denominator is a constant. Now look at the remaining product: it has n - 1 brackets, because in total there are n + 1 brackets and 2 of them are missing. So it is going to be a polynomial of degree n - 1.
So we have l_i(x) l_j(x) equal to w(x), divided by some constant, multiplied by a polynomial of degree n - 1. We are interested in showing that the integral from a to b of l_i(x) l_j(x) dx is 0 for i not equal to j. So the integral from a to b of l_i(x) l_j(x) dx equals the integral from a to b of w(x)/c multiplied by q_{n-1}(x), for some constant c and a polynomial q_{n-1} of degree n - 1, and we use the fact that w is perpendicular to q_{n-1}: since x_0, x_1, ..., x_n are Gauss points, the integral of w(x) q_{n-1}(x) dx is going to be 0. And l_i(x) is a polynomial of degree n, so it cannot be identically 0; hence the integral from a to b of l_i(x)^2 dx is bigger than 0, and our w_i's are going to be bigger than 0. So it is a very important property of Gaussian integration that the weights are always bigger than 0, and using this property we are now going to show the convergence of Gaussian integration when we consider the interpolating polynomial based on these Gauss points. Our proof is going to be based on three facts: the weights are bigger than 0; the Weierstrass approximation theorem; and the property that in the Gaussian integration there is no error provided your function is a polynomial of degree less than or equal to 2n + 1. So let the function f be a continuous function. Let us introduce the notation I_n(f) for the summation over i from 0 to n of w_i^(n) f(x_i^(n)); now I am denoting the dependence on n. There is no error, that is, the integral from a to b of f(x) dx is the same as I_n(f), provided f is a polynomial of degree less than or equal to 2n + 1. As a special case, if I take f(x) = 1, then the integral from a to b of 1 dx is equal to b - a, and I_n for that function is the summation over i from 0 to n of w_i^(n), so the sum of the weights is equal to b - a. Now, our claim is that I_n(f) converges to the integral from a to b of f(x) dx as n tends to infinity. The first ingredient is the Weierstrass theorem.
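Both facts about the Gauss weights, positivity and the sum being b - a, can be checked numerically with NumPy's leggauss (on [-1, 1], so b - a = 2):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Every Gauss weight is strictly positive, and the weights sum to
# b - a = 2, which is I_n applied to the constant function f(x) = 1.
all_positive = True
max_sum_error = 0.0
for npts in range(1, 21):
    _, weights = leggauss(npts)
    all_positive = all_positive and bool(np.all(weights > 0))
    max_sum_error = max(max_sum_error, abs(weights.sum() - 2.0))
```

Both properties hold for every rule size checked.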
What we want to show is that the modulus of the integral from a to b of f(x) dx minus I_n(f) is less than epsilon, or a constant times epsilon, if n is big enough. So I fix an epsilon greater than 0, and then by the Weierstrass approximation theorem there exists a polynomial, say q_m, of degree less than or equal to m, such that the infinity norm of f - q_m is less than epsilon. Then the integral from a to b of q_m(x) dx is going to be equal to I_n(q_m), where I_n(q_m) is the approximate quadrature, provided n is bigger than or equal to (m - 1)/2. When you look at q_m, a polynomial of degree less than or equal to m, and at I_n with n + 1 points, the formula is exact for polynomials of degree less than or equal to 2n + 1, so we need m less than or equal to 2n + 1; that is how I get that if n is bigger than or equal to (m - 1)/2, then the integral from a to b of q_m(x) dx is equal to I_n(q_m). This is our first step. In the next step we want to show that the modulus of the integral from a to b of f(x) dx minus I_n(f) is less than epsilon or a constant times epsilon. We have fixed epsilon bigger than 0 and found q_m such that the infinity norm of f - q_m is less than epsilon; if n is bigger than or equal to (m - 1)/2, then the integral from a to b of q_m(x) dx is the same as I_n(q_m). So I add and subtract that, and then I get this. Now, this will be less than or equal to the integral from a to b of the modulus of f(x) - q_m(x) dx, plus, from I_n(q_m) minus I_n(f), the summation over j from 0 to n of w_j^(n) times (q_m(x_j^(n)) - f(x_j^(n))). Now, since our w_j^(n) are bigger than 0, I do not have to write a modulus on the weights; otherwise we would have to write the modulus. Then |f(x) - q_m(x)| is less than or equal to the infinity norm of f - q_m, and the integral from a to b of dx is b - a; and |q_m(x_j^(n)) - f(x_j^(n))| is also less than or equal to the infinity norm of f - q_m, because the infinity norm means the maximum of |f(x) - q_m(x)| for x belonging to [a, b].
So what is left is the summation over j from 0 to n of w_j^(n), and that is equal to b - a. So you get this to be less than 2 epsilon times (b - a). So for a fixed epsilon we have found that, if n is big enough, I_n(f) is within a constant times epsilon of the integral from a to b of f(x) dx; that is, I_n(f) converges to the integral. So this is convergence of the Gaussian quadrature. Now, what goes wrong with the Newton-Cotes formula? Why can I not use the same argument? I start with f belonging to C[a, b], and in the case of the Newton-Cotes formula I am going to have n + 1 interpolation points, and with n + 1 interpolation points the rule is going to be exact for polynomials of degree less than or equal to n. This is the difference: for the Gaussian quadrature we had exactness for polynomials of degree less than or equal to 2n + 1. But that by itself should not matter, because anyway I want the modulus of the integral from a to b of f(x) dx minus I_n(f) to be less than epsilon when n is big enough; maybe in the Newton-Cotes formula I will have to choose n bigger than in the Gaussian quadrature, and for convergence that does not matter. What we want is: given epsilon, for n large enough the modulus of the integral from a to b of f(x) dx minus I_n(f) should be less than epsilon. So let us see where our proof breaks down. We have equidistant points, and our quadrature rule is going to be exact for polynomials of degree less than or equal to n. So fix our function f and let us find a q_m: by the Weierstrass theorem I will have q_m such that the infinity norm of f - q_m is less than epsilon. Then the integral from a to b of q_m(x) dx will be equal to I_n(q_m) provided n is bigger than or equal to m; in the case of Gaussian quadrature we had n bigger than or equal to (m - 1)/2, but here I have it only for n bigger than or equal to m. I look at the modulus of the integral from a to b of f(x) dx minus I_n(f), and I add and subtract the integral from a to b of q_m(x) dx.
So I will get this. The modulus of the integral from a to b of f(x) dx minus the integral from a to b of q_m(x) dx will be less than or equal to the integral from a to b of |f(x) - q_m(x)| dx; this can be dominated by the infinity norm of f - q_m times (b - a). So we have this term less than epsilon times (b - a). Then look at the term I_n(f) - I_n(q_m): this will be the summation over j from 0 to n of w_j^(n) f(x_j^(n)) minus the summation over j from 0 to n of w_j^(n) q_m(x_j^(n)). By the triangle inequality this is going to be less than or equal to the summation over j from 0 to n of |w_j^(n)| times |f(x_j^(n)) - q_m(x_j^(n))|; the second factor is less than or equal to epsilon, and you are left with the summation over j from 0 to n of |w_j^(n)|. Here is the crucial difference: for the Gaussian quadrature, |w_j^(n)| was the same as w_j^(n), so we had the summation over j from 0 to n of w_j^(n), and that is equal to b - a. In the Newton-Cotes formula the summation of the w_j^(n) themselves is also equal to b - a; that fact still remains. But in our error estimate what comes into the picture is the summation of |w_j^(n)|, and in the case of the Newton-Cotes formula the weights are of mixed sign, that is, they can be both positive and negative, so the sum of their moduli can grow with n and the bound breaks down; that is why there is no convergence if you choose your points to be equidistant. Hence, in the case of equidistant points, that is, the Newton-Cotes formulae, we went over to composite numerical quadrature: we had the special cases of the trapezoidal rule, then Simpson's rule, and one can write higher degree rules. Whereas for the Gaussian quadrature we have convergence, and we have a choice: we can increase the degree of the polynomial. So instead of considering the composite rules, we can look at higher degree polynomials and then get a numerical quadrature formula.
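This difference between the two families of rules is easy to see numerically. The sketch below computes the interpolatory weights w_i = integral of l_i(x) dx directly, by integrating each Lagrange basis polynomial (the helper quad_weights is my own); for 9 equidistant points some weights come out negative, while the 9 Gauss weights are all positive:

```python
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial.legendre import leggauss

def quad_weights(nodes, a=-1.0, b=1.0):
    """Weights w_i = integral_a^b l_i(x) dx of the interpolatory rule
    with the given nodes, found by integrating each Lagrange basis
    polynomial exactly via its antiderivative."""
    ws = []
    for i, xi in enumerate(nodes):
        others = np.delete(nodes, i)
        li = Polynomial.fromroots(others) / np.prod(xi - others)
        L = li.integ()
        ws.append(L(b) - L(a))
    return np.array(ws)

# 9 equidistant points (degree-8 Newton-Cotes): weights of mixed sign
nc = quad_weights(np.linspace(-1.0, 1.0, 9))
# 9 Gauss points: all weights strictly positive
gauss_w = quad_weights(leggauss(9)[0])
```

Both weight sets still sum to b - a = 2, but only the Gauss weights stay positive, which is exactly the property the convergence proof needs.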
So if your function f is sufficiently smooth, then it is worthwhile to apply Gaussian integration of higher order rather than, say, the composite trapezoidal or composite Simpson's rule, because the speed of convergence is going to be very high for Gaussian integration. Now let us look at the disadvantages of Gaussian integration. The two Gauss points in the interval [-1, 1] which we obtained were -1/√3 and 1/√3; similarly, the higher order Gauss points are going to be irrational. That seems to be a stumbling block, so that one might prefer the simple Simpson's rule. But when you are writing a program that should not be a deterrent, because tables of the Gauss points and Gauss weights for higher degrees, or for the general case, are available. So initially, while writing the program, it is a bit more trouble, but afterwards it pays off. Another drawback is this: suppose I have the two Gauss points -1/√3 and 1/√3 in the interval [-1, 1] and I calculate, and I find that the accuracy is not good enough, so I go to three Gauss points. When I look at three Gauss points, whatever work we have done for the two Gauss points is lost, because the new points are entirely different; that is one of the disadvantages of the Gauss points. But as I said, if your function is sufficiently differentiable, then with Gaussian quadrature we are going to get very fast convergence. In our next lecture we will consider Romberg integration and then we will solve some problems. Thank you.