Last time we saw that a problem arises when we consider numerical differentiation, that is, when we try to approximate f'(a) by the derivative of an interpolating polynomial. As we go on reducing the value of h, there comes a stage when, instead of decreasing, the error starts increasing. This phenomenon occurs because the computations which we do are done in finite precision: a number is represented on a computer with some round-off error. Now, this problem does not come into the picture when we consider numerical integration, so I want to quickly explain the difference between numerical differentiation and numerical integration using a composite rule. I am going to explain it for the first derivative f'(a), using the central difference formula. That means we approximate f'(a) by (f(a + h) - f(a - h)) / (2h), and we have f'(a) = (f(a + h) - f(a - h)) / (2h) - (h^2/6) f'''(psi) for some point psi. This formula tells us that if we do exact computations, then as h tends to 0 the error tends to 0, and the quotient (f(a + h) - f(a - h)) / (2h) tends to f'(a). In practice, what happens is that f(a + h) is not represented exactly; instead we have f(a + h) + e1, with some error e1, and similarly f(a - h) + e2 in place of f(a - h). So, instead of the exact quotient, what we compute is f'_computed = (f(a + h) + e1 - (f(a - h) + e2)) / (2h). If we substitute (f(a + h) - f(a - h)) / (2h) = f'_computed + (e2 - e1) / (2h) in the formula above, what we get is f'(a) = f'_computed + (e2 - e1) / (2h) - (h^2/6) f'''(psi). So, thus our error consists of two parts.
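This trade-off is easy to see numerically. The following sketch (my own illustration, not from the lecture) applies the central difference formula to f(x) = sin(x) at a = 1 for decreasing h; in double precision the error first shrinks roughly like h^2 and then grows again once round-off dominates.

```python
import math

def central_diff(f, a, h):
    # Central difference approximation of f'(a): (f(a + h) - f(a - h)) / (2h)
    return (f(a + h) - f(a - h)) / (2.0 * h)

a = 1.0
exact = math.cos(a)  # f(x) = sin(x), so f'(a) = cos(a)
errors = []
for k in range(1, 13):
    h = 10.0 ** (-k)
    errors.append(abs(central_diff(math.sin, a, h) - exact))

# The error first shrinks (discretization term ~ (h^2/6)|f'''|) and then,
# for very small h, grows again as the round-off term ~ eps/(2h) takes over.
for k, e in enumerate(errors, start=1):
    print(f"h = 1e-{k:02d}   error = {e:.3e}")
```

Running this shows the characteristic V-shape of the error: it decreases down to some optimal h and then climbs back up.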
One part is the discretization error, which is -(h^2/6) f'''(psi), and the other error is (e2 - e1) / (2h), which is due to the round-off error. Now, there is no reason to expect that the round-off errors e1 and e2 will cancel, so e2 - e1 need not be equal to 0. Then consider (e2 - e1) / (2h) as we let h tend to 0: when we decrease h, there comes a stage when the factor (e2 - e1) / (2h) starts dominating the other factor -(h^2/6) f'''(psi), and that is why, when we are doing numerical differentiation, there is going to be a limit to the accuracy which we can attain. Theoretically, yes, as h tends to 0 the error should tend to 0, but in practice it does not happen: after a certain stage, as we go on reducing the value of h, the round-off errors start dominating the total error, and they go on increasing because we are dividing by a small number. Now, in the case of numerical integration we also let h tend to 0, because there h was the length of our subinterval when we look at a composite numerical quadrature rule. We take our interval [a, b], subdivide it into smaller intervals, and on each subinterval we apply some basic rule; this is how we construct the composite rules. Then, as the number of subintervals n tends to infinity, or equivalently as the subinterval length h = (b - a)/n tends to 0, our numerical quadrature formula gives us better and better approximations of the integral of f over [a, b]. So, look at the composite trapezoidal rule: the integral of f from a to b is approximately (h/2)(f(a) + f(b)) plus h times the sum of f(t_i) over the interior partition points, i = 1 to n - 1.
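The composite trapezoidal rule just stated can be sketched as follows (my own code, with the integrand and interval chosen purely for illustration):

```python
import math

def composite_trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n subintervals of length h = (b - a)/n:
    # integral of f from a to b ~ (h/2)(f(a) + f(b)) + h * sum of f(t_i)
    # over the interior partition points t_i = a + i*h, i = 1, ..., n - 1.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# Example: the integral of sin from 0 to pi is exactly 2.
approx = composite_trapezoid(math.sin, 0.0, math.pi, 1000)
```

Note that each function value is multiplied by h, never divided by it, which is why round-off does not blow up here the way it does in numerical differentiation.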
Now, each of the values f(t_i), f(a), and f(b) will carry some round-off error, but here you are multiplying by h, you are not dividing by h. So, T_n computed is going to be T_n, the trapezoidal rule value, plus round-off terms e_0 and e_n, and the e_i's, all multiplied by the number h. This makes the difference: you are going to get an approximate value of the integral of f from a to b which is acceptable. There will be a difference between the two, partly because of the round-off errors, but that part will not dominate the total error. So, we started with interpolating polynomials, we wrote down the divided difference form, which allowed us to get an error formula, and we integrated the interpolating polynomial in order to get numerical quadrature rules. Differentiating this polynomial gave us a way to calculate approximate values of derivatives. Now, we are going to start a new topic, and that topic is the solution of a system of linear equations. We will have n equations in n unknowns; we are going to consider a square system. For these n equations in n unknowns we will assume that the coefficient matrix is invertible, which means our system is going to have a unique solution. So, that is our starting point. Now, we want to solve this system over the real numbers, so all the quantities involved are going to be real numbers. There is a classical way of finding the solution, known as Cramer's rule. Cramer's rule is theoretically important, and it is an elegant formula for calculating the solution, but it is going to be very expensive, so in practice one does not use Cramer's rule for finding the solution of a system of linear equations. Our starting point is going to be the Gauss elimination method, a simple method to describe. We will first describe this Gauss elimination method, and then we will show the equivalence of the Gauss elimination method with the LU decomposition of the matrix A.
What we will be interested in is the number of computations needed, so we will calculate the number of operations needed in the Gauss elimination method for solving a system of linear equations. From the LU decomposition, for positive definite systems, we will derive what is known as the Cholesky decomposition. Then we are also going to consider the conditioning of the matrix; for that we will need to define the norm of a matrix, and so on. So, let us start with the description of the Gauss elimination method. We have our system: a11 x1 + a12 x2 + ... + a1n xn = b1; the second equation is a21 x1 + a22 x2 + ... + a2n xn = b2; and so on up to an1 x1 + an2 x2 + ... + ann xn = bn. The right hand side b1, b2, ..., bn is given to us, and the aij's are also given. What is unknown is the vector x: we want to find x1, x2, ..., xn which satisfy this system. If you look at the powers of the xi's, they are all equal to 1, and that is why it is known as a system of linear equations. In compact form we write this system as A x = b, where A is the coefficient matrix with entries aij; i denotes the row index and j denotes the column index, so A is going to be an n by n coefficient matrix. We denote our vectors as column vectors; that is why b = (b1, b2, ..., bn)^T, that is, the transpose of the row vector, is the right hand side, and x = (x1, x2, ..., xn)^T is the unknown vector. So, the system is written as A x = b, where A is the coefficient matrix, and if to the matrix A we append one more column consisting of b1, b2, ..., bn, then we get a matrix of size n rows and n + 1 columns; that is known as the augmented matrix. Now, we will assume that A is an invertible matrix. If A is invertible, then whatever the right hand side, A x = b is going to have a unique solution, and that unique solution is given by x = A^(-1) b.
Now, let me first describe Cramer's rule. In Cramer's rule we are assuming that A is an invertible matrix, that is, the determinant of A is not equal to 0. If you want to calculate xj, then take the matrix A and replace its j-th column, which is a1j, a2j, ..., anj, by the right hand side b1, b2, ..., bn; the quotient of the determinant of this modified matrix by the determinant of A is equal to xj. So, it is a very nice formula for calculating the unknown vector x1, x2, ..., xn: xj is the determinant of the matrix with the j-th column replaced by b1, b2, ..., bn, divided by the determinant of A, for j = 1, 2, ..., n. The problem with this formula is that it is too expensive. If you calculate the determinant of an n by n matrix A by cofactor expansion, you will need about n! multiplications plus about n! additions/subtractions, and n! grows very fast with n. So, no matter how big your computer is, if you try to calculate the determinant of A in this traditional way, the computation soon becomes infeasible. Hence Cramer's rule is going to be too expensive to use in practice. So, now suppose your system is upper triangular: that means in the first equation all unknowns x1, x2, ..., xn appear, in the second equation there is no x1, in the (n-1)-st equation only x(n-1) and xn come into the picture, and in the last equation you have got only xn. The coefficient matrix being invertible will mean that the determinant of U, which is the product of the diagonal entries u11 u22 ... unn, is not equal to 0, so all the uii's are nonzero. What one can do is start with xn: xn is going to be equal to yn / unn. Then, having determined xn, go to the previous equation; now xn is known, so take it to the other side and calculate x(n-1) = (y(n-1) - u(n-1)n xn) / u(n-1)(n-1), and so on.
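To make the formula concrete, here is a sketch of Cramer's rule (my own illustration, not part of the lecture), with the determinant computed by cofactor expansion; the recursion makes the factorial cost visible, which is exactly why the rule is impractical for large n.

```python
def det(M):
    # Determinant by cofactor expansion along the first row: O(n!) work.
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += ((-1) ** j) * M[0][j] * det(minor)
    return total

def cramer(A, b):
    # Cramer's rule: x_j = det(A with j-th column replaced by b) / det(A).
    d = det(A)
    x = []
    for j in range(len(A)):
        Aj = [row[:j] + [b[i]] + row[j+1:] for i, row in enumerate(A)]
        x.append(det(Aj) / d)
    return x

# Small example: 2x + y = 5, x + 3y = 10 has solution x = 1, y = 3.
A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
x = cramer(A, b)
```

For a 2 by 2 system this is perfectly fine; for n in the hundreds, the n! determinants make it hopeless.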
So, this is known as back substitution: if your coefficient matrix is upper triangular, then you can determine the vector x in the order xn, x(n-1), x(n-2), and so on. Our aim will be to replace our system A x = b by an equivalent system U x = y. What I mean by equivalent is that both systems should have the same solution. We are interested in the solution of A x = b, so if you replace it by another system, then the new system should have the same solution, because we are interested in the solution of the original system, not some other system. So, we want to reduce the coefficient matrix A to upper triangular form. Now, whatever you do to A, you will have to do on the right hand side as well, because we do not want to change our system. We are going to use elementary row transformations: that means we can multiply an equation by a nonzero number alpha and subtract it from another equation. If you do this, you get an equivalent system; this equivalence we will show later on. So, first we are going to describe the Gauss elimination method and calculate the number of operations in it, and then we will come back to equivalence. The aim is to replace A x = b by an equivalent upper triangular system U x = y, using elementary row transformations of the type Ri becomes Ri - alpha Rj; that means you multiply the j-th row by a constant alpha and subtract it from Ri, so the original i-th equation is replaced by the new equation. Now, we are going to make an additional assumption. A is invertible, but we will look at the leading principal submatrices: Ak is formed by the first k rows and first k columns. Our assumption is that the determinant of Ak is not equal to 0 for k = 1, 2, ..., n. When k = n, this says that the determinant of A is not 0, which we have already assumed.
Now, we are asking for something more: a11 should not be 0, the determinant of the 2 by 2 block with rows (a11, a12) and (a21, a22) should not be 0, and so on. This is an additional assumption. For example, look at the 2 by 2 matrix with rows (0, 1) and (1, 0). This matrix is such that the determinant of A is -1, which is not 0, but when I look at the determinant of A1, that is equal to 0. So, this assumption that the determinant of Ak is nonzero for k = 1, 2, ..., n is genuinely additional. Later we are going to show that there is a class of matrices for which this condition is always satisfied, and those are known as positive definite matrices. As I said, we are going to calculate the number of computations, and for that we count flops, where flop stands for floating point operation: one flop consists of one addition/subtraction plus one multiplication/division. Addition and subtraction are considered on par, and multiplication and division are also considered on par. Generally, the total number of multiplications/divisions is about the same as the total number of additions/subtractions, so one flop is going to consist of one addition/subtraction plus one multiplication/division. In the first step of the Gauss elimination method, what we want to do is look at the entries in the first column below the diagonal entry and make all of them 0. The entry a21 can be made 0 by multiplying the first equation by a21/a11 and then subtracting it, and so on. When you do that, the first row remains as it is, all the entries below a11 in the first column become 0, and the entries in the remaining submatrix all get modified. So, that is going to be the first step. In the second step we will work with this submatrix; we will not touch the first row or the first column.
So, first we have to notice that a11 is not equal to 0; that is because we are assuming that the determinant of A1 is nonzero. Next we define the multipliers mi1 = ai1 / a11 for i = 2, ..., n. Since a11 is not 0, the mi1's are well defined, and from the i-th row we will subtract the first row multiplied by mi1, for i = 2, ..., n. We want to economize the computations, so the 0's in the first column we will write down directly, without performing the subtraction, because we have chosen the multipliers precisely so that this operation produces 0's in the first column below the diagonal. The modified entries are given by aij(1) = aij - mi1 a1j; the first suffix denotes the row, so this is the (i, j)-th entry minus mi1 times the corresponding entry in the first row. You have to do the same for the right hand side also: bi(1) = bi - mi1 b1, with i going from 2 to n and j going from 2 to n. So, let us now calculate the number of operations. For each modified entry you have one multiplication and one subtraction, and you do this for the whole (n-1) by (n-1) square, but you are also doing it for the right hand side column, so it becomes (n-1) n multiplications plus that many subtractions, that is, (n-1) n subtractions. In addition, in order to calculate the mi1's you need to do n - 1 divisions. So, these many computations are needed for the first step. In the second step we are going to work with a matrix which has n - 1 rows and n columns, with exactly the same kind of computations, so the number of operations is obtained by replacing n by n - 1 in this count. Now, we have to confirm that our procedure can actually be continued, that is, that it works for the submatrix as well.
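The elimination step just described, repeated for each column, can be sketched as follows (my own code, assuming, as in the lecture, that no pivot becomes zero):

```python
def forward_eliminate(A, b):
    # Reduce [A | b] to upper triangular [U | y] using multipliers
    # m_ik = a_ik / a_kk and the row operation R_i <- R_i - m_ik * R_k.
    n = len(A)
    U = [row[:] for row in A]   # work on copies, keep A and b intact
    y = b[:]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]   # multiplier; assumes pivot U[k][k] != 0
            U[i][k] = 0.0           # write the zero directly, no subtraction
            for j in range(k + 1, n):
                U[i][j] -= m * U[k][j]
            y[i] -= m * y[k]        # same operation on the right hand side
    return U, y

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
b = [1.0, 2.0, 5.0]
U, y = forward_eliminate(A, b)
```

For this example the reduced matrix U has rows (2, 1, 1), (0, 1, 1), (0, 0, 2), with y = (1, 0, 1).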
To start with, in our matrix A, a11 is not 0, because we are assuming that the determinant of Ak is nonzero for k = 1, 2, ..., n. You do some operations and get a new matrix, in which the working part consists of the second to n-th rows and the second to (n+1)-st columns. Now, the entry which we are going to divide by next is a22(1), the modified entry, so we have to make sure that a22(1) is not equal to 0. What we have done is apply an elementary row transformation: we have multiplied a row by a fixed real number and subtracted it from another row. Now, this operation does not change the determinant of the matrix; the determinant remains invariant. So, look at the 2 by 2 matrix formed by the first 2 rows and first 2 columns. That is our A2, and by assumption the determinant of A2 is not 0. After the first stage of our Gauss elimination method, this 2 by 2 block is transformed to: first row a11, a12 as it is; second row 0 below the diagonal in the first column, and then a22(1). So, the determinant of A2 will be a11 times a22(1). Since a11 is not 0 and the determinant of A2 is not 0, a22(1) also cannot be 0. That means we can perform the second step of the Gauss elimination method, and then one uses the same argument to show that the process can be continued until the coefficient matrix A is reduced to upper triangular form. So, our starting matrix was the augmented matrix [A | b]; we transformed it to [A(1) | b(1)], then using similar operations to [A(2) | b(2)], and so on, and in order to do each transformation we need the fact that the current pivot is not 0. So, now for the number of computations: we have seen that the first step of the Gauss elimination method needs (n-1) n multiplications plus (n-1) n subtractions plus n - 1 divisions.
In the second step you are performing a similar operation on the submatrix with n - 1 rows and n columns, so everywhere you replace n by n - 1: it will be (n-2)(n-1) multiplications plus (n-2)(n-1) subtractions plus n - 2 divisions. So, the total number of operations needed to reduce the system [A | b] to a system [U | y], where U is an upper triangular matrix, is obtained as follows. Write each count (n-k)(n-k+1) with n - k + 1 as (n - k) + 1; then the multiplications add up to (n-1)^2 + (n-2)^2 + ... + 1 plus (n-1) + (n-2) + ... + 1, and I am writing the result in flops: the number of multiplications and the number of subtractions are the same, and one multiplication plus one subtraction is one flop, so that takes care of these. What remains is the divisions, of which there are (n-1) + (n-2) + ... + 1. Now, for the sum of squares of natural numbers we have a formula: it is (n-1) n (2n-1) / 6, and the second sum is (n-1) n / 2. So, we need (n-1) n (2n-1) / 6 + (n-1) n / 2 flops, plus (n-1) n / 2 divisions. Now, this Gauss elimination method which I have described is for solving big systems of linear equations using a computer. If you are trying to solve a 3 by 3 system by hand, it will not matter which method you use; you can use Cramer's rule. But in practice one comes across big systems of equations, and then one has to worry about the number of operations. Now, we have calculated the number of operations: it is some constant times n^3 plus another constant times n^2 plus a constant times n plus a constant. When n is big, what matters is the coefficient of n^3: between n^3 and n^2, for n big enough you can ignore the n^2 term; for example, if your n is 10000, then n^3 is going to be much bigger than n^2.
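The operation count can be checked by brute force. The sketch below (my own, not from the lecture) counts the operations step by step, exactly as in the loops of the elimination, and compares with the closed-form totals (n-1)n(2n-1)/6 + (n-1)n/2 flops and (n-1)n/2 divisions.

```python
def elimination_counts(n):
    # Count flops (one multiplication + one subtraction) and divisions
    # in forward elimination, including the right hand side column.
    flops = divisions = 0
    for k in range(1, n):        # elimination steps 1 .. n-1
        rows = n - k             # rows below the pivot at step k
        cols = n - k + 1         # modified columns, including the RHS
        divisions += rows        # one division per multiplier
        flops += rows * cols     # one mult + one sub per modified entry
    return flops, divisions

def closed_form(n):
    flops = (n - 1) * n * (2 * n - 1) // 6 + (n - 1) * n // 2
    divisions = (n - 1) * n // 2
    return flops, divisions
```

The two agree for every n, confirming that the leading term of the flop count is n^3 / 3.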
So, the dominating term is n^3, but you should retain the coefficient of n^3, because there is a difference between n^3 and 2 n^3: 2 n^3 is double the number of operations. In this Gauss elimination method the coefficient of n^3 is 1/3, so one says that you can reduce your system to an upper triangular system in a number of operations of order n^3 / 3. So, now we have reduced our system to an upper triangular system, but it still remains to find the solution; we do all this because we want to solve A x = b. So, now let us calculate the number of operations in the back substitution. A x = b is our original system. This we have reduced to the upper triangular system U x = y by performing a number of operations of the order n^3 / 3. Now, let us see what effort is needed to get the vector x1, x2, ..., xn from U x = y; that we are going to do by back substitution. A is an invertible matrix, so the determinant of A is not 0. All the operations which we have used are elementary row operations of the type Ri - alpha Rk, and these leave the determinant invariant, so the determinant of U will be the same as the determinant of A. U is an upper triangular matrix, and for an upper triangular matrix the determinant is the product of the diagonal entries, so the determinant of U is u11 u22 ... unn. The determinant of U being nonzero means that all the diagonal entries uii are nonzero. And we have already talked about back substitution: start with the last equation, determine xn, then go to the (n-1)-st equation, determine x(n-1), and so on. Here, U being an upper triangular matrix, uij = 0 if i > j, and hence the i-th equation is of the form uii xi + ui(i+1) x(i+1) + ... + uin xn = yi.
So, now xi will be equal to (yi - sum over j = i + 1 to n of uij xj) / uii, for i = n, n - 1, ..., 1, with the convention that for i = n the summation is absent. Now, how many operations? Here you have j going from i + 1 to n, so you are doing n - i multiplications uij xj. You have got n - i terms; when you add them up, you need n - i - 1 additions, because adding two numbers takes one addition. But then there is one subtraction from yi, so in order to calculate the numerator you need n - i flops, and there is going to be one division. This you will be doing for i = n, n - 1, ..., 1. So, the total number of operations is given by the sum over i from 1 to n - 1 of n - i flops, plus n divisions, and this sum is nothing but (n-1) n / 2 flops plus n divisions. That means it is going to be of the order of n^2 / 2. So, thus the major chunk of our computations is needed to replace our matrix A by an upper triangular matrix; once you have done that, back substitution is of the order of n^2, or n^2 / 2 if we want to be really precise. Now, sometimes one wants to solve the same system of equations for different right hand sides: the coefficient matrix A is the same, but the right hand sides b1, b2, ..., bn are different. In that case, you should keep track of the operations: reduce A to upper triangular form once and for all, perform those same operations on each right hand side, and then solve the resulting triangular systems.
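Back substitution as just described can be sketched as follows (my own code; it assumes U is upper triangular with nonzero diagonal entries):

```python
def back_substitute(U, y):
    # Solve U x = y for upper triangular U:
    # x_i = (y_i - sum_{j > i} u_ij x_j) / u_ii, for i = n, n-1, ..., 1.
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = y[i]
        for j in range(i + 1, n):
            s -= U[i][j] * x[j]     # n - i multiplications and subtractions
        x[i] = s / U[i][i]          # one division per unknown
    return x

# The triangular system obtained from A = [[2,1,1],[4,3,3],[8,7,9]], b = (1,2,5):
U = [[2.0, 1.0, 1.0],
     [0.0, 1.0, 1.0],
     [0.0, 0.0, 2.0]]
y = [1.0, 0.0, 1.0]
x = back_substitute(U, y)
```

Here x comes out as (0.5, -0.5, 0.5), and substituting back into the original equations confirms the solution.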
So, this is the Gauss elimination method; it is also known as Gauss elimination without partial pivoting. We will see later what pivoting means, but at present this is the simplest method. This method is applicable provided that the determinant of Ak is not 0 for k = 1, 2, ..., n, because at each stage we need to divide by a certain number to form our multipliers; if that number becomes 0, then our method fails and we cannot proceed. So, there is a big class of matrices for which this condition, the determinant of Ak not equal to 0 for k = 1, 2, ..., n, is satisfied, and that class is known as the class of positive definite matrices. Our definition of a positive definite matrix is going to be: A is a real matrix, it should be symmetric (A transpose equal to A), and in addition, for every nonzero vector x, x transpose A x should be bigger than 0. A is an n by n matrix and x is an n by 1 vector, so A x is going to be an n by 1 vector and x transpose is a 1 by n row vector. So, when you consider x transpose A x, what you get is a 1 by 1 matrix, that is, a real number; if that number is bigger than 0 for all nonzero vectors, then our matrix is going to be positive definite. Now, for a positive definite matrix, let us look at its eigenvalues. We are going to study eigenvalues in more detail a little later, but at present: lambda is an eigenvalue of A provided there exists a nonzero vector u such that A u = lambda u; that is the definition of an eigenvalue. Then lambda is equal to (u transpose A u) / (u transpose u). Now, when A is positive definite, u transpose A u will be bigger than 0, and u transpose u in any case is bigger than 0, and hence the eigenvalue is going to be bigger than 0. So, for positive definite matrices the eigenvalues of A are all bigger than 0, and the determinant of A is the product of the eigenvalues, so the determinant of A is bigger than 0.
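The eigenvalue argument can be checked on a small example (a sketch of my own, with the matrix chosen purely for illustration):

```python
import math

# A symmetric matrix; for positive definiteness we need x^T A x > 0
# for every nonzero x, and then every eigenvalue is positive.
A = [[4.0, 1.0],
     [1.0, 3.0]]

def quadratic_form(A, x):
    # Computes x^T A x for a small dense matrix
    n = len(A)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

# Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]] are the roots of
# lambda^2 - (a + c) lambda + (ac - b^2) = 0.
a, b, c = A[0][0], A[0][1], A[1][1]
disc = math.sqrt((a + c) ** 2 - 4 * (a * c - b ** 2))
lam1 = ((a + c) + disc) / 2
lam2 = ((a + c) - disc) / 2
```

Both eigenvalues come out positive, their product equals the determinant (11 here), and the quadratic form is positive for any nonzero test vector, consistent with the lecture's claims.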
So, that means for a positive definite matrix the determinant of A is not just nonzero, it is strictly bigger than 0. Now, what we want is the additional condition that the determinant of Ak is nonzero for each k. In order to show that positive definite matrices satisfy this property, what we are going to show is that Ak, the submatrix formed by the first k rows and first k columns of the matrix, is itself positive definite. In order to show this, we will have to go by the definition: for positive definiteness, first Ak should be symmetric, and checking the symmetry of Ak is no problem; but showing that y transpose Ak y is bigger than 0 for y not equal to 0 will need some proof. So, we will show next time that if A is positive definite, then all its leading principal submatrices are also positive definite, which will mean that the determinant of Ak is bigger than 0 for k = 1, 2, ..., n, and hence the Gauss elimination method which we described just now can be applied to this class of matrices. And then we will also see a decomposition of positive definite matrices which is known as the Cholesky decomposition. So, we will continue next time. Thank you.