Today's topic is Gaussian integration. So far, when we considered numerical integration rules, what we did was we fixed the interpolation points. So, we started with interpolation points x 0, x 1, ..., x n, we fitted an interpolating polynomial, and then the integral of that interpolating polynomial was our approximation to integral a to b f x d x. Now, today we will not fix the interpolation points beforehand. We will try to write an integration formula of the form summation w i f x i, where the w i's are the weights and the x i's are the interpolation points, with i going from 0 to n. So, in total we have got 2 n plus 2 constants to be determined. These we will try to determine so that there is no error for polynomials of degree less than or equal to 2 n plus 1. So, that is the idea of Gauss integration. So, when we fix points x 0, x 1, ..., x n in the interval a b and look at p n x equal to summation f x i l i x, i going from 0 to n, where l i is the Lagrange polynomial, that is, the product over j from 0 to n, j not equal to i, of x minus x j divided by x i minus x j. Then integral a to b f x d x is approximately equal to integral a to b p n x d x, which is equal to summation i from 0 to n of f x i times integral a to b l i x d x. Now, what you have to note is that l i x, the Lagrange polynomial, depends only on the interpolation points x 0, x 1, ..., x n. There is no function f coming into the picture there. It is a polynomial of degree n, so you can integrate it, and you are going to get the weights w i. So, we have this equal to summation i from 0 to n of f x i w i, where the w i's are real numbers independent of our function f. Now, this is going to be our starting point. So, let us start with the formula: integral a to b f x d x is approximately equal to summation w i f x i, i going from 0 to n.
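The weights obtained by integrating the Lagrange polynomials can be computed mechanically. Here is a rough Python sketch of my own (not from the lecture): it builds the coefficients of each l i x, integrates term by term exactly, and returns the weights w i.

```python
def lagrange_weights(nodes, a, b):
    """Weights w_i = integral_a^b l_i(x) dx for the given interpolation nodes."""
    weights = []
    for i, xi in enumerate(nodes):
        # Coefficients (lowest degree first) of prod_{j != i} (x - x_j).
        coeffs = [1.0]
        denom = 1.0
        for j, xj in enumerate(nodes):
            if j == i:
                continue
            new = [0.0] * (len(coeffs) + 1)
            for k, c in enumerate(coeffs):
                new[k + 1] += c        # multiply by x
                new[k] -= xj * c       # multiply by -x_j
            coeffs = new
            denom *= xi - xj
        # Integrate exactly: integral_a^b x^k dx = (b^{k+1} - a^{k+1}) / (k + 1).
        integral = sum(c * (b ** (k + 1) - a ** (k + 1)) / (k + 1)
                       for k, c in enumerate(coeffs))
        weights.append(integral / denom)
    return weights
```

For the single node (a plus b) by 2 this returns b minus a, recovering the midpoint rule, and for the nodes 0 and 1 on the interval 0 to 1 it returns 1/2 and 1/2, the trapezoid weights.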
So, our starting point is integral a to b f x d x approximately equal to summation i from 0 to n w i f x i, and we want to determine the x i's and w i's such that integral a to b f x d x is exactly equal to summation w i f x i, i from 0 to n, for polynomials of degree less than or equal to 2 n plus 1. So, we want that there should not be any error if our function f is a polynomial of degree 2 n plus 1. This can be achieved provided there is no error for the functions 1, x, x square, up to x raise to 2 n plus 1, because any polynomial of degree 2 n plus 1 is going to be a combination of these functions. So, now put f x equal to 1 and then equate integral a to b 1 d x with summation w i f x i, i from 0 to n. Since f x is 1, that will give us one equation, and like that we will get 2 n plus 2 equations, because there will be f x equal to 1, then x, x square, up to x raise to 2 n plus 1, and our unknowns are also 2 n plus 2 in number. The unknowns are going to be the weights w 0, w 1, ..., w n and the points x 0, x 1, ..., x n. So, we have got 2 n plus 2 equations in 2 n plus 2 unknowns. So, let us look at some special cases. Suppose we take n equal to 0. That means we have got one point x 0 and one weight w 0. We want integral a to b f x d x equal to w 0 f x 0 if f is a polynomial of degree 1, which means that if we first take the function f x equal to 1 and then take f x equal to x, we will get 2 equations, and from those we will try to determine w 0 and x 0. So, let us do that. We have integral a to b f x d x approximately equal to w 0 f x 0, where w 0 and x 0 are the unknowns, and f x equal to 1, the constant function.
We want that there is no error; that means integral a to b d x should be exactly equal to w 0 f x 0. Since f of x is 1, the right hand side is equal to w 0. So, this imposes the condition that w 0 should be equal to b minus a, which means that if I choose w 0 equal to b minus a and x 0 to be any point in the interval a b, then the formula w 0 f x 0 is going to be exact for constant polynomials. So, now we want our formula to be exact for linear polynomials, so we will impose one more condition, for f x equal to x. We have already determined w 0; the condition for f x equal to x will determine our point x 0. So, for f x equal to x, to have no error we need integral a to b x d x equal to w 0 x 0, which gives b square minus a square by 2, which is integral a to b x d x, equal to w 0, that is b minus a, multiplied by x 0. So, this means x 0 should be equal to the midpoint a plus b by 2. Thus, if you approximate integral a to b f x d x by b minus a into f of a plus b by 2, this is going to be exact for linear polynomials, and this is nothing but the midpoint rule. So, thus we have solved the problem for the case n equal to 0. Now, let us look at the case n equal to 1. So, when you put n equal to 1, you are trying to approximate integral a to b f x d x by a formula of the type w 0 f x 0 plus w 1 f x 1, with unknowns w 0, w 1, x 0, x 1. We would like to determine these in such a manner that our rule is exact for polynomials of degree less than or equal to 3. This will be achieved provided there is no error for the 4 functions 1, x, x square, x cube. Any cubic polynomial is going to be of the form a 0 plus a 1 x plus a 2 x square plus a 3 x cube. So, if there is no error for the 4 functions 1, x, x square, x cube, there will not be any error for a general cubic polynomial. So, let us equate and get the 4 equations.
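As a quick check of this derivation, here is a small Python sketch of the midpoint rule; the example function is my own choice, not from the lecture.

```python
def midpoint_rule(f, a, b):
    """Approximate integral_a^b f(x) dx by (b - a) * f((a + b) / 2)."""
    return (b - a) * f((a + b) / 2)

# Exact for linear polynomials: f(x) = 3x + 2 on [0, 2].
approx = midpoint_rule(lambda x: 3 * x + 2, 0.0, 2.0)
exact = 3 * 2.0 ** 2 / 2 + 2 * 2.0   # both equal 10.0
```

For a quadratic such as x square the rule is no longer exact, which is consistent with the formula being exact only up to degree 1.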
So, we have integral a to b f x d x approximately equal to w 0 f x 0 plus w 1 f x 1. First put f x equal to 1; then we want b minus a to be equal to w 0 plus w 1. So, that is the first condition. Then f x equal to x: the left hand side is b square minus a square by 2, which is integral a to b x d x, and on the right hand side it is going to be w 0 x 0 plus w 1 x 1. Next, look at the function f x equal to x square; its integral is x cube by 3, so this is going to be b cube minus a cube by 3, and on the right hand side it will be w 0 x 0 square plus w 1 x 1 square. And for f x equal to x cube, that will be b raise to 4 minus a raise to 4 divided by 4 equal to w 0 x 0 cube plus w 1 x 1 cube. So, let me call these 4 equations 1, 2, 3, 4. So, we have got 4 equations in 4 unknowns, which are w 0, w 1, x 0, x 1, but these are non-linear equations. If they were linear equations then solving them would be easy, but we have got 4 non-linear equations. Now, in this particular case one can do some manipulation and try to find w 0, w 1, x 0, x 1. But what we want is to look at the general case: when you are considering a formula of the type summation i from 0 to n w i f x i, how should I choose the weights w i and the interpolation points x i in such a manner that I have no error for as high degree polynomials as possible? So, that is why we will not do the manipulation for this case, but look at a general method. Now, the method which we are going to use for the general case I am going to explain first for this particular case n equal to 1, and then we will see how to generalize. So, the first thing we are going to do is obtain the conditions on x 0 and x 1 which will guarantee that if you choose x 0, x 1 in a certain manner, fit a linear polynomial, and integrate, then the formula which you get will be exact for cubic polynomials.
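One way to see what these four equations demand is to write down their residuals. The sketch below is my own illustration, not part of the lecture: given a candidate w 0, w 1, x 0, x 1, it returns the residuals of equations 1 to 4 on a general interval a b. The Gauss choice derived later in the lecture zeroes all four on the interval minus 1 to 1, while the trapezoid choice of endpoints fails equation 3.

```python
import math

def moment_residuals(w0, w1, x0, x1, a, b):
    """Residuals of the four exactness equations for f = 1, x, x^2, x^3."""
    res = []
    for k in range(4):
        exact = (b ** (k + 1) - a ** (k + 1)) / (k + 1)   # integral of x^k
        res.append(w0 * x0 ** k + w1 * x1 ** k - exact)
    return res

r = 1 / math.sqrt(3)
gauss = moment_residuals(1.0, 1.0, -r, r, -1.0, 1.0)      # all four (near) zero
trap = moment_residuals(1.0, 1.0, -1.0, 1.0, -1.0, 1.0)   # equation 3 fails
```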
So, let us derive the conditions which the interpolation points x 0 and x 1 should satisfy. Once we find these conditions, and there are going to be two of them, we will see how to choose x 0 and x 1 so that they are satisfied. So, first we find the conditions which the interpolation points x 0 and x 1 should satisfy, and then find the points x 0, x 1 which satisfy these conditions. So, let us look at two points x 0 and x 1 and fit a linear polynomial. No matter how you choose your two points x 0 and x 1, if the function is a linear polynomial then the interpolating polynomial is the same as the function. So, there will not be any error in our integration formula for linear polynomials, but we want something more: we want the error to be equal to 0 for cubic polynomials. So, we have f x equal to f x 0 plus the divided difference based on x 0, x 1 into x minus x 0, and then you have got the error term, f of x 0, x 1, x multiplied by x minus x 0 into x minus x 1. The first part is our polynomial p 1 x and the second is our error, and hence integral a to b f x d x will be equal to integral a to b p 1 x d x plus integral a to b f of x 0, x 1, x into x minus x 0 into x minus x 1 d x. So, this last integral is going to be the error in the integration. Now, we have used this technique before: if your points x 0 and x 1 are such that integral a to b x minus x 0 into x minus x 1 d x is equal to 0, then we can make use of this relation in order to manage our error; we had done this for the midpoint rule. In the case of the midpoint rule we had integral a to b x minus a plus b by 2 d x equal to 0. So, our error formula has got 2 parts: it has got a divided difference, and then you are multiplying by the function w x. So, this divided difference in the error formula depends on x.
So, we cannot take it out of the integration sign. But if I replace this divided difference, which is based on x, by a divided difference which is independent of x plus an extra term, then I can take the divided difference out. So, let us look at the error. The divided difference f of x 0, x 1, x, let me write it as f of y 0, x 0, x 1 plus f of y 0, x 0, x 1, x multiplied by x minus y 0. This is the recurrence relation: you take f of y 0, x 0, x 1 to the other side and divide by x minus y 0, and that is the recurrence formula for f of y 0, x 0, x 1, x. So, this I am going to substitute in the error. Now, here y 0 is a fixed point, so that term will come out of the integration sign, and our error will be equal to f of y 0, x 0, x 1 integral a to b w x d x plus integral a to b f of y 0, x 0, x 1, x multiplied by x minus y 0 into w x d x, where I am calling x minus x 0 into x minus x 1 as w x. So, if integral a to b w x d x is equal to 0, then the error is equal to integral a to b the divided difference based on y 0, x 0, x 1, x into w x into x minus y 0 d x. So, now look at the error: it has a divided difference based on 4 points, that is f of y 0, x 0, x 1, x. If your function f is a quadratic polynomial, then this divided difference will be 0, and that means the error will be 0. So, if our points x 0 and x 1 are such that integral a to b w x d x is equal to 0, then we get the formula to be exact for quadratic polynomials. So, earlier we had exactness only for linear polynomials; now we get it for quadratic polynomials with the one condition integral a to b w x d x equal to 0. Now, we will use the same technique again and obtain another condition which will give us the error to be 0 for cubic polynomials. So, now let us write down the divided difference f of y 0, x 0, x 1, x as f of y 1, y 0, x 0, x 1 plus f of y 1, y 0, x 0, x 1, x multiplied by x minus y 1. So, this is the recurrence relation.
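The recurrence used here can be checked numerically. Below is a short sketch of my own (not from the lecture) of divided differences at distinct points, verifying that f of x 0, x 1, x equals f of y 0, x 0, x 1 plus f of y 0, x 0, x 1, x times x minus y 0, and that a divided difference on 4 points of a quadratic vanishes.

```python
def divided_difference(f, pts):
    """Divided difference f[pts[0], ..., pts[-1]] at distinct points."""
    if len(pts) == 1:
        return f(pts[0])
    return (divided_difference(f, pts[1:]) - divided_difference(f, pts[:-1])) \
        / (pts[-1] - pts[0])

f = lambda t: t ** 3 + 2 * t
x0, x1, y0, x = 0.2, 0.9, 0.5, 1.7
# Recurrence: f[x0, x1, x] = f[y0, x0, x1] + f[y0, x0, x1, x] * (x - y0).
lhs = divided_difference(f, [x0, x1, x])
rhs = divided_difference(f, [y0, x0, x1]) \
    + divided_difference(f, [y0, x0, x1, x]) * (x - y0)
```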
This we substitute in the error. So, we will get the error to be equal to f of y 1, y 0, x 0, x 1 integral a to b w x into x minus y 0 d x plus integral a to b f of y 1, y 0, x 0, x 1, x multiplied by x minus y 1 into x minus y 0 into w x d x. Now, if integral a to b w x d x is 0 and integral a to b w x multiplied by x minus y 0 d x is equal to 0, if both the conditions are satisfied, our error is integral a to b of a divided difference based on 5 points. The points on which the divided difference is based are y 1, y 0, x 0, x 1 and x. If f is a polynomial of degree 3, then this divided difference will be equal to 0. So, our error is integral a to b of a divided difference multiplied by w x into x minus y 0 into x minus y 1 d x, and if the divided difference term is 0, the error is going to be equal to 0. So, thus we obtain two conditions which guarantee that there is no error for cubic polynomials, and those two conditions are integral a to b w x d x equal to 0 and integral a to b w x into x minus y 0 d x equal to 0, where w x is x minus x 0 into x minus x 1. If integral a to b w x d x is equal to 0 and integral a to b w x into x minus y 0 d x is equal to 0, then the error is integral a to b f of y 1, y 0, x 0, x 1, x multiplied by w x into x minus y 0 into x minus y 1 d x, and this term is equal to 0 if f is a cubic polynomial. So, this means the error is equal to 0 if f is a polynomial of degree less than or equal to 3.
Now, look at the condition integral a to b w x into x minus y 0 d x. Here y 0 is a fixed point, and we already have integral a to b w x d x equal to 0. So, that means this condition will reduce to integral a to b w x into x d x equal to 0. So, the conditions which we get are: integral a to b w x d x should be equal to 0 and integral a to b w x multiplied by x d x should be equal to 0. So, we have to choose x 0 and x 1 such that these two conditions are satisfied, and then in the error our y 0 and y 1 can be any points in the interval a b. So, we can choose y 0 equal to x 0 and y 1 equal to x 1, and we will get the error as integral a to b f of x 0 repeated twice, x 1 repeated twice, x, multiplied by x minus x 0 square into x minus x 1 square d x. Now, the reason for writing it in this manner is that we have got a divided difference which is going to be continuous provided your function f is sufficiently differentiable, and you are multiplying by the function x minus x 0 square into x minus x 1 square, which is always bigger than or equal to 0. So, we can apply the mean value theorem for integrals, and then we can take the divided difference out as a divided difference based on x 0 repeated twice, x 1 repeated twice and some point c, multiplied by integral a to b x minus x 0 square into x minus x 1 square d x. We can integrate this and obtain a more precise bound as compared to dominating x minus x 0 and x minus x 1 by b minus a. So, now we have reduced our problem to finding x 0 and x 1 such that the integral of x minus x 0 into x minus x 1 is 0, and if you multiply x minus x 0 into x minus x 1 by x, then that integral also should be 0. Now, as I said, we want to have a general method; here we have taken only n equal to 1.
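On the interval minus 1 to 1 these two conditions can be worked out in closed form. Writing s for x 0 plus x 1 and p for x 0 x 1, so that w x is x square minus s x plus p, the first condition gives 2 by 3 plus 2 p equal to 0 and the second gives minus 2 by 3 times s equal to 0. A small sketch of my own solving this and checking the result numerically:

```python
import math

# On [-1, 1], with w(x) = x^2 - s*x + p, s = x0 + x1, p = x0*x1:
#   integral w dx   = 2/3 + 2p = 0   ->  p = -1/3
#   integral x*w dx = -(2/3)*s = 0   ->  s = 0
s, p = 0.0, -1.0 / 3.0
# x0 and x1 are the roots of t^2 - s*t + p = 0.
disc = math.sqrt(s * s - 4 * p)
x0, x1 = (s - disc) / 2, (s + disc) / 2   # -1/sqrt(3) and +1/sqrt(3)

def integrate(g, n=20_000):
    """Crude midpoint-sum approximation of integral_{-1}^{1} g(x) dx."""
    h = 2.0 / n
    return sum(g(-1.0 + (i + 0.5) * h) for i in range(n)) * h

w = lambda x: (x - x0) * (x - x1)
```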
So, we have got two interpolation points x 0 and x 1, but whatever we do, we want to do it so that you should be able to extend it to the more general case: when you have got n plus 1 points, how should I choose my n plus 1 points so that we have got exactness for polynomials of degree less than or equal to 2 n plus 1? So, in order to do that, we are going to recall what an inner product is. Our space is C a b, the vector space of continuous real valued functions defined on the interval a b. On this we will define an inner product as integral a to b f x into g x d x. Once we have an inner product, we can talk of two functions being perpendicular: if the inner product is equal to 0, we say that f is perpendicular to g. Look at the condition which we want: integral a to b x minus x 0 into x minus x 1 d x should be equal to 0. That means we want our function w x to be perpendicular to the constant function 1, and the second condition tells us that our w x should be perpendicular to the function f 1 x equal to x. So, we will start with three functions, the constant function 1, then x, and then x square; to these we will apply the Gram-Schmidt orthonormalization process and get a quadratic polynomial which is perpendicular to the constant function 1 and to the function x. Look at the roots of this quadratic polynomial: if the roots are x 0 and x 1, those are going to be our desired points. And what we will do is define the inner product on C a b, but in order to find the points x 0 and x 1 we will first consider the interval minus 1 to 1, and then we will look at a map from minus 1 to 1 to the interval a b which is one-to-one, onto and affine. And using this map we will transfer the results to the interval a b. So, let us look at the inner product on C a b: the vector space is C a b, f and g are functions in C a b, and the inner product f comma g is integral a to b f x into g x d x.
The properties of this inner product are as follows. The inner product of f with itself is going to be bigger than or equal to 0, because we will be looking at integral a to b f square x d x and f is a real valued function. Then, f comma f is equal to 0 if and only if the function f is identically 0. If the function is identically 0, then integral a to b f square x d x will be 0. On the other hand, if integral a to b f square x d x is equal to 0, then since f square is continuous and f square x is bigger than or equal to 0, it follows that f x has to be identically 0. So, this is the first property. The second property is that the inner product of f with g is the same as the inner product of g with f, and that is because multiplication of real numbers is commutative. And the third property, which is known as linearity: f 1 plus f 2, its inner product with g, will be the inner product of f 1 with g plus the inner product of f 2 with g, and alpha times f, its inner product with g, will be alpha times the inner product of f with g, where alpha is a real number and f 1, f 2 are continuous functions. So, these are the properties of the inner product. So, we have defined the inner product, and now we are going to look at the three functions f 0 x equal to 1, f 1 x equal to x, f 2 x equal to x square. Look at g 0 x: we define it as f 0 x divided by the inner product of f 0 with itself raised to half. We have got an induced norm: if you define norm of f to be the square root of the inner product of f with itself, the inner product of f with itself is going to be bigger than or equal to 0, so we can take its positive square root, and this is generally denoted by norm f 2.
So, this has the properties that norm f 2 is bigger than or equal to 0, and it is equal to 0 if and only if f x is identically 0; second, norm of alpha f will be equal to mod alpha times norm f, where alpha belongs to R; and the third one is norm of f plus g is less than or equal to norm f plus norm g, which is known as the triangle inequality. The triangle inequality is proved using the Cauchy-Schwarz inequality, which says that the modulus of the inner product of f with g is less than or equal to norm f into norm g. So, we look at the three functions 1, x, x square. From these we want to construct three other functions, which I will denote by g 0, g 1, g 2, which will have the property that the norm of each function is going to be equal to 1, and if you consider any two distinct functions, like g 0 and g 1, their inner product will be 0; the inner product of g 0 with g 2 will be 0 and the inner product of g 1 with g 2 will be 0. The way we are going to do that is known as the Gram-Schmidt orthonormalization process. We will do it for the three functions, but one can define it for n functions. So, we have our f 0 x equal to 1, f 1 x equal to x, f 2 x equal to x square. Define g 0 x to be equal to f 0 divided by norm f 0, which will imply that norm of g 0 is going to be equal to 1. Next, define r 1 x to be equal to f 1 x minus the inner product of f 1 with g 0 multiplied by g 0. So, this is my definition. If I look at the inner product of r 1 with g 0, it is going to be the inner product of f 1 minus the inner product of f 1 with g 0 times g 0, taken with g 0. Now, we use linearity of the inner product to write this as the inner product of f 1 with g 0 minus the coefficient, which is a scalar and so comes out of the inner product, that is f 1 comma g 0, times the inner product of g 0 with g 0. Now, our norm g 0 2 is the positive square root of the inner product of g 0 with itself.
So, the inner product of g 0 with itself is going to be equal to 1, and then the two terms will cancel. So, you will get r 1 comma g 0 to be equal to 0. Now, when you look at r 1, we have got f 1 x equal to x, and the subtracted term is some scalar times g 0 x, which is a multiple of 1. So, r 1 is going to be a linear polynomial. So, by its very construction, our r 1 is perpendicular to g 0, and now, if I want it to have norm 1, define g 1 to be equal to r 1 upon norm r 1. So, we will have norm g 0 equal to 1, norm g 1 equal to 1 and the inner product of g 0 with g 1 equal to 0. So, now we will look at the function x square. We have got g 0 a constant polynomial and g 1 a polynomial of degree 1, which are perpendicular to each other. f 2 x is our function x square. So, from this function, we will subtract the component of f 2 in the direction of g 0 and in the direction of g 1. So, we will construct r 2 in such a manner that r 2 is perpendicular to both g 0 and g 1; our r 2 is going to be a quadratic polynomial. Define r 2 x to be equal to f 2 x minus the inner product of f 2 with g 0 into g 0 x minus the inner product of f 2 with g 1 into g 1 x. When you look at the inner product of r 2 with g 0, that will be the inner product of f 2 with g 0 minus the inner product of f 2 with g 0 times the inner product of g 0 with itself minus the inner product of f 2 with g 1 times the inner product of g 1 with g 0, using the linearity of the inner product. The inner product of g 0 with itself is 1, and the inner product of g 1 with g 0 is 0. So, the first two terms cancel and you get r 2 comma g 0 equal to 0. Then you look at the inner product of r 2 with g 1. That will be the inner product of f 2 with g 1 minus the inner product of f 2 with g 0, the coefficient, times the inner product of g 0 with g 1, minus the inner product of f 2 with g 1 times the inner product of g 1 with itself. The inner product of g 1 with itself is 1 and the inner product of g 0 with g 1 is 0. So, these two terms will cancel and you get r 2 comma g 1 equal to 0. Next, define g 2 to be equal to r 2 upon norm r 2.
So, this will imply that norm of g 2 is going to be equal to 1. So, this is the procedure for constructing g 0, g 1, g 2, three orthonormal functions which we have obtained from the functions 1, x, x square. This was the general procedure. Now, let us take the interval to be minus 1 to 1 and find explicit expressions for g 0, g 1, g 2. So, on the interval minus 1 to 1, our f 0 x is equal to 1, x belonging to minus 1 to 1; then norm f 0 will be integral minus 1 to 1 d x raised to half. That is going to be equal to root 2, and hence our g 0 x is the function 1 by root 2. It is f 0 upon norm f 0. Next, r 1 x is equal to f 1 x, which is x, minus f 1 comma g 0 times g 0. Let us calculate the inner product f 1 comma g 0: that is integral minus 1 to 1 x into 1 by root 2 d x, which is equal to 0. So, we have got our r 1 x equal to x. That will give us g 1 x equal to x divided by norm r 1. Now, what will be norm r 1? Norm r 1 will be integral minus 1 to 1 x square d x raised to half. So, this is going to be square root of 2 by 3, and that will give us g 1 x equal to root 3 by 2 x. Now, the third one: r 2 x is going to be equal to x square minus the inner product of f 2 with g 0 into g 0 minus the inner product of f 2 with g 1 into g 1, where our f 2 x is x square, g 0 is 1 by root 2 and g 1 x is equal to root 3 by 2 x. So, the inner product of f 2 with g 0 will be integral minus 1 to 1 x square by root 2 d x; the integral of x square is 2 by 3, so this is going to be 2 by 3 root 2, that is root 2 by 3. When we look at the inner product of f 2 with g 1, this is going to be equal to integral minus 1 to 1 x square into root 3 by 2 x d x, which is the integral of an odd function over a symmetric interval, so this is going to be equal to 0. So, our r 2 x becomes x square minus root 2 by 3, which is the inner product of f 2 with g 0, times 1 by root 2, which is g 0, since the inner product f 2 comma g 1 is 0.
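The whole computation above can be automated. The following sketch is my own, not part of the lecture: it represents polynomials by coefficient lists, computes the inner product on the interval minus 1 to 1 exactly term by term, and runs Gram-Schmidt on 1, x, x square; the resulting g 2 comes out as a multiple of x square minus 1 by 3.

```python
import math

A, B = -1.0, 1.0   # the interval for the inner product

def p_mul(p, q):
    """Product of two polynomials given as coefficient lists (low degree first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def inner(p, q):
    """Inner product <p, q> = integral_A^B p(x) q(x) dx, computed exactly."""
    prod = p_mul(p, q)
    return sum(c * (B ** (k + 1) - A ** (k + 1)) / (k + 1)
               for k, c in enumerate(prod))

def gram_schmidt(fs):
    """Orthonormalize the polynomials fs with respect to the inner product."""
    gs = []
    for f in fs:
        r = list(f)
        for g in gs:
            c = inner(f, g)   # component of f in the direction of g
            r = [rk - c * gk for rk, gk in zip(r, g + [0.0] * (len(r) - len(g)))]
        norm = math.sqrt(inner(r, r))
        gs.append([rk / norm for rk in r])
    return gs

g0, g1, g2 = gram_schmidt([[1.0], [0.0, 1.0], [0.0, 0.0, 1.0]])  # 1, x, x^2
```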
So, that means r 2 x is equal to x square minus 1 by 3, and then our g 2 x will be r 2 upon norm r 2. So, it will be equal to some alpha times x square minus 1 by 3. So, we have obtained a quadratic polynomial, alpha times x square minus 1 by 3, which is perpendicular to the constant function 1 by root 2 and also perpendicular to the function g 1 x, which is root 3 by 2 into x. Now, this function alpha times x square minus 1 by 3 has got distinct roots plus or minus 1 by root 3. So, x plus 1 by root 3 into x minus 1 by root 3 is going to be perpendicular to 1 and perpendicular to x. So, that means we have got x 0 and x 1 in the interval minus 1 to 1 which have the property that integral minus 1 to 1 x minus x 0 into x minus x 1 d x is equal to 0 and integral minus 1 to 1 x minus x 0 into x minus x 1 into x d x is equal to 0, and that is the property we were looking for: if you have x 0 and x 1 satisfying this property, then you fit a linear polynomial, integrate, get a formula for approximate integration, and that formula is going to be exact for cubic polynomials. Because we are fitting a polynomial of degree less than or equal to 1, it is natural that the error will be 0 for polynomials of degree less than or equal to 1, but you are getting something more. And what happens is, when such a thing happens, in the error you are going to have a term containing the fourth derivative multiplied by a power of the length of the interval; it will be b minus a raise to 5, and there will be some constant. Now, you know that when you go to composite rules, this b minus a becomes important, because in the composite rules b minus a will be replaced by h. So, the higher the power of h you have, the faster the convergence. So, this is the basic quadrature rule, the basic Gauss two point quadrature rule.
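Putting the pieces together: mapping the points plus or minus 1 by root 3 to a general interval a b by the affine map mentioned earlier gives the Gauss two point rule. A sketch of my own, with a check that it is exact for a cubic of my choosing:

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss rule on [a, b]: the nodes +-1/sqrt(3) mapped affinely."""
    mid, half = (a + b) / 2, (b - a) / 2
    r = 1 / math.sqrt(3)
    return half * (f(mid - half * r) + f(mid + half * r))

# Exact for cubics: f(x) = x^3 - 2x^2 + 5 on [0, 2].
f = lambda x: x ** 3 - 2 * x ** 2 + 5
exact = 2.0 ** 4 / 4 - 2 * 2.0 ** 3 / 3 + 5 * 2.0
approx = gauss2(f, 0.0, 2.0)
```

For x raise to 4 the rule is no longer exact, consistent with exactness holding only up to degree 2 n plus 1 equal to 3.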
So, these points which we have obtained are known as Gauss points, and the orthogonal polynomials, or rather orthonormal polynomials, which we have obtained are known as the Legendre polynomials. So, for the basic Gauss two point rule, our error is going to be of the form: the fourth derivative of our function evaluated at some point, times some constant, times b minus a raise to 5. When we go to the composite rule, on each subinterval we will apply this basic rule. Then, if the subinterval length is denoted by h, that h is going to be b minus a by n, and the b minus a in the error for the basic quadrature rule will be replaced by h. We will be summing over n intervals, so one power of h will be lost, and then you are going to get the error to be less than or equal to a constant times h raise to 4. We had obtained the error to be less than or equal to a constant times h raise to 4 also for Simpson's rule. In the case of Simpson's rule, for the composite rule we had to evaluate the function a total of 2 n plus 1 times. Now, for this particular formula, what is going to happen is that on each interval you are going to have two points and there are n intervals, so there are going to be 2 n points. So, Simpson's rule and the Gauss two point rule are going to be on par, but higher Gaussian rules are going to be better than the corresponding Newton-Cotes formulas. So, in our next lecture we will first derive the error formula for the Gauss two point rule; we have obtained x square minus one third, so the Gauss points in the interval minus 1 to 1 are minus 1 by root 3 and plus 1 by root 3. So, we will look at the general interval a b, and then we are going to consider general Gaussian quadrature and the advantages and disadvantages of Gaussian rules. Thank you.
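The h raise to 4 behaviour of the composite rule can be observed numerically. Below is a sketch of my own of the composite Gauss two point rule; for a smooth integrand, halving h should shrink the error by roughly 2 raise to 4, that is 16. The test integrand exp on the interval 0 to 1 is my choice, not from the lecture.

```python
import math

def composite_gauss2(f, a, b, n):
    """Composite two-point Gauss rule with n subintervals of length (b - a)/n."""
    h = (b - a) / n
    r = 1 / math.sqrt(3)
    total = 0.0
    for i in range(n):
        mid = a + (i + 0.5) * h          # midpoint of the i-th subinterval
        total += f(mid - h / 2 * r) + f(mid + h / 2 * r)
    return total * h / 2

# Error for f = exp on [0, 1], exact value e - 1.
exact = math.e - 1
e4 = abs(composite_gauss2(math.exp, 0.0, 1.0, 4) - exact)
e8 = abs(composite_gauss2(math.exp, 0.0, 1.0, 8) - exact)
# e4 / e8 should be close to 16, the h^4 rate.
```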