We are considering the solution of a linear system of equations: n equations in n unknowns. We described the method known as the Gauss elimination method. We write the system of linear equations as A x = b, and then, using elementary row transformations, we reduce the system A x = b to an upper triangular system U x = y. The coefficient matrix U being upper triangular means that the last equation has only one unknown, x_n; the last-but-one equation has only two unknowns, x_n and x_{n-1}; and so on. So the solution of an upper triangular system can be obtained by back substitution. For the Gauss elimination method we needed an assumption in addition to the assumption that A is invertible: look at the matrix A, an n by n matrix, and consider the submatrix formed by the first k rows and first k columns of A. We denote that matrix by A_k; it is a k by k matrix, known as the principal leading submatrix. We then assume that det A_k ≠ 0 for k = 1, 2, ..., n. Under this assumption we showed that the Gauss elimination method is equivalent to writing the coefficient matrix A as a product of two matrices L and U. L is a unit lower triangular matrix, meaning it is lower triangular with all diagonal entries equal to 1; U is an upper triangular matrix. We proved that such a decomposition is unique, and then the system A x = b decomposes into two systems: A x = b becomes L (U x) = b, so we set U x = y and L y = b. In the system L y = b, b is given to us, so we calculate y by forward substitution; the y which we have obtained becomes the right hand side, and then we have U x = y, to which we apply back substitution. Thus the solution of A x = b can be obtained as the solution of two systems, one lower triangular and one upper triangular.
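The forward and back substitution steps described above can be sketched in a few lines; this is a minimal NumPy illustration (the function names are my own, not from the lecture):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L y = b for unit lower triangular L, top row downwards."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        # y_i = b_i - sum_{k < i} L_ik y_k   (L_ii = 1, so no division)
        y[i] = b[i] - np.dot(L[i, :i], y[:i])
    return y

def back_substitution(U, y):
    """Solve U x = y for upper triangular U with nonzero diagonal."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # x_i = (y_i - sum_{k > i} U_ik x_k) / U_ii
        x[i] = (y[i] - np.dot(U[i, i+1:], x[i+1:])) / U[i, i]
    return x
```

Given A = L U, first solving L y = b and then U x = y recovers the solution of A x = b.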
Next, yesterday, from the LU decomposition we deduced what is known as the LDV decomposition. In the LU decomposition L was unit lower triangular and U was upper triangular, so the diagonal entries of U need not be equal to 1. Now, this U we decompose, or write, as U = D V, where D is a diagonal matrix and V is unit upper triangular; using the uniqueness of the LU decomposition one can show that this LDV decomposition is also unique. Today we are going to look at the case when A is positive definite, and for a positive definite matrix we are going to prove the Cholesky decomposition, where we write A as G G^T with G a lower triangular matrix. So our starting point is going to be the LDV decomposition, which is available for matrices satisfying the condition det A_k ≠ 0 for k = 1, 2, ..., n. The notation: A = [a_ij] is an n by n matrix; the assumption is det A_k ≠ 0 for k = 1, 2, ..., n; and A = L U, with L unit lower triangular and U upper triangular. Now det A = det L det U; det L = 1, and det U is the product of the diagonal entries, so u_11 u_22 ... u_nn ≠ 0, which implies u_ii ≠ 0 for i = 1, 2, ..., n. Now write D = diag(u_11, u_22, ..., u_nn) and define V = D^{-1} U. Then this V becomes unit upper triangular, and when you substitute into L U you get A = L D V. This decomposition is unique, with L unit lower triangular, D diagonal, and V unit upper triangular. Now let us take the transpose: from A = L D V we get A^T = V^T D^T L^T. Our L is unit lower triangular, V is unit upper triangular, and D is diagonal; since D is diagonal, D^T = D, so A^T = V^T D L^T.
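The step U = D V can be sketched directly; a small illustration, assuming an LU factorization is already in hand:

```python
import numpy as np

def ldv_from_lu(L, U):
    """Split U = D V, with D = diag(u_11, ..., u_nn) and V unit upper
    triangular, so that A = L U becomes A = L D V.  Minimal sketch;
    assumes all u_ii are nonzero."""
    d = np.diag(U).copy()        # the diagonal entries u_11, ..., u_nn
    V = U / d[:, None]           # divide row i of U by u_ii
    return L, np.diag(d), V
```

Dividing row i of U by u_ii is exactly the multiplication by D^{-1}, and it leaves 1s on the diagonal of V.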
If A is a symmetric matrix then we have A^T = A. A^T is V^T D L^T and A is L D V. Now V^T is unit lower triangular and L^T is unit upper triangular, and these two expressions are equal; so by the uniqueness of the LDV decomposition what we get is that L must equal V^T, and thus for a symmetric matrix A = L D L^T. So we have this provided A is symmetric. Now we will consider a positive definite matrix. A positive definite matrix has two properties: it is a symmetric matrix, and for every non-zero vector x, x^T A x > 0. So for a positive definite matrix we have A = L D L^T, where L is a unit lower triangular matrix. For a positive definite matrix we will show that the diagonal entries of D are all strictly bigger than 0. Once we prove that, we define a new diagonal matrix, which we denote by D^{1/2}, with diagonal entries sqrt(d_11), sqrt(d_22), ..., sqrt(d_nn). So we have A = L D L^T; this D we write as D^{1/2} D^{1/2}, associate one D^{1/2} with L and the other with L^T, and that gives us the Cholesky decomposition G G^T. So let us do that. We have A positive definite, which means A^T = A and x ≠ 0 implies x^T A x > 0; this is the definition of a positive definite matrix. Now A is written as L D L^T, where L is unit lower triangular and D is diagonal. Our claim is that if I write D = diag(d_11, ..., d_nn), then each d_ii > 0. In order to prove the claim, consider the matrix D = L^{-1} A (L^T)^{-1}; L is a unit lower triangular matrix.
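For a symmetric matrix with nonsingular leading principal submatrices, the factorization A = L D L^T can also be computed directly; a minimal sketch (my own illustration, not an algorithm given in the lecture):

```python
import numpy as np

def ldl_decomposition(A):
    """A = L diag(d) L^T for symmetric A whose leading principal
    submatrices are nonsingular.  L is unit lower triangular; d holds
    the diagonal of D.  Minimal sketch without pivoting."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        # d_j = a_jj - sum_{k < j} L_jk^2 d_k
        d[j] = A[j, j] - np.dot(L[j, :j] ** 2, d[:j])
        for i in range(j + 1, n):
            # L_ij = (a_ij - sum_{k < j} L_ik L_jk d_k) / d_j
            L[i, j] = (A[i, j] - np.dot(L[i, :j] * L[j, :j], d[:j])) / d[j]
    return L, d
```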
So, that means det L = 1, hence L is invertible; and if L is invertible, L^T is also invertible. In fact, for an invertible matrix, (A^T)^{-1} is nothing but (A^{-1})^T. So we have written D = L^{-1} A (L^T)^{-1}. We will show that the entries d_ii are bigger than 0; in fact, we do not need to show that D is positive definite, only that each d_ii > 0. These entries d_ii are given by e_i^T D e_i, where e_i is the canonical vector with 1 at the i-th place and 0 elsewhere, and using the positive definiteness of our matrix A we will show that all these entries are bigger than 0. So now, with D = L^{-1} A (L^T)^{-1}, you can verify that d_ii = e_i^T D e_i = e_i^T L^{-1} A (L^T)^{-1} e_i, substituting for D. Let y = (L^T)^{-1} e_i, with i fixed; in that case y^T = ((L^T)^{-1} e_i)^T = e_i^T ((L^T)^{-1})^T. Now, transpose and inverse commute, so (L^T)^{-1} = (L^{-1})^T, and transpose of transpose gives us the original matrix; so y^T is nothing but e_i^T L^{-1}, and hence d_ii = y^T A y. Our vector e_i is not the zero vector, because it has one entry equal to 1. The vector y = (L^T)^{-1} e_i is also non-zero, because if y = 0 then I can multiply through by L^T and get e_i = L^T y = 0, a contradiction. So this y is a non-zero vector, and now recall the definition of a positive definite matrix.
For a positive definite matrix, whenever the vector x is non-zero, x^T A x > 0; hence y ≠ 0 gives y^T A y > 0. So let us recapitulate what we have done. We are assuming our matrix A to be positive definite; using symmetry we know that A can be written as L D L^T, where L is a unit lower triangular matrix and D is a diagonal matrix; and then, using the other property of positive definiteness of A, we proved that the diagonal entries of D are all bigger than 0. So now we are ready to prove the Cholesky decomposition. We have A = L D L^T, D being a diagonal matrix with all d_ii > 0. Define D^{1/2} (it is just a notation) to be the diagonal matrix with diagonal entries sqrt(d_11), ..., sqrt(d_nn). This matrix has the property that the square of D^{1/2} is our matrix D, and hence A = L D^{1/2} D^{1/2} L^T. Now let me call L D^{1/2} the matrix G. If G = L D^{1/2}, then G^T = (D^{1/2})^T L^T; but D^{1/2} is a diagonal matrix, so G^T = D^{1/2} L^T. That means A = G G^T. So positive definiteness of A implies that A can be written as G G^T, where G is a lower triangular matrix. Now, the diagonal entries of G need not be equal to 1. Compare with the LU decomposition: there you had A = L U, with L unit lower triangular and U upper triangular; if the matrix is positive definite, A = G G^T, where G is lower triangular, and G lower triangular means G^T is upper triangular. So what have we achieved?
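Passing from A = L D L^T to A = G G^T is just a column scaling; a small sketch, assuming L and the positive diagonal d are already known:

```python
import numpy as np

def cholesky_from_ldl(L, d):
    """Given A = L diag(d) L^T with all d_i > 0, form G = L D^{1/2},
    i.e. scale column i of L by sqrt(d_i).  Then A = G G^T."""
    return L * np.sqrt(d)[None, :]   # broadcasting scales each column
```

For example, L = [[1, 0], [2, 1]] and d = [4, 9] give G = [[2, 0], [4, 3]].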
When we want to determine the LU decomposition of our matrix A, we need to determine L and we need to determine U. Now we have A = G G^T, so once I determine G, then G^T is determined; that means our work is reduced by half. Now, what about uniqueness: if A = G G^T, is such a decomposition unique? If you do not impose any condition, if you just say that G should be a lower triangular matrix, then such a decomposition will not be unique, because remember how we obtained G G^T: we had our diagonal matrix D, we proved that its entries are bigger than 0, and then we took the positive square roots. Now, if for some entries I take the negative square root, then my D^{1/2} will still have the property that its square equals D. So if you want uniqueness, then you have to put some extra condition. For example, you can say that G should be a lower triangular matrix with diagonal entries bigger than 0. The normalization could instead be that the diagonal entries are less than 0, or something else; but for the sake of definiteness we will consider the matrix G to be lower triangular with all diagonal entries strictly bigger than 0. So this is the Cholesky decomposition: A positive definite implies that A can be written as G G^T, where G is a lower triangular matrix, and for uniqueness we impose the condition that the diagonal entries of G are strictly bigger than 0. Now, this has a converse: if you are given that A = M M^T, where M is an invertible matrix, then A is going to be positive definite. This part we will prove, but before that let us look at how to determine G and G^T, as we had done in the case of the LU decomposition. One way of finding the LU decomposition is to reduce the coefficient matrix A to an upper triangular form
using the Gauss elimination method: the final upper triangular form which you get is our U, and when you do the Gauss elimination method you have those multipliers; using those multipliers you construct your lower triangular matrix L. The other way is to start with A = L U: the entries of L and the entries of U are determined by taking the matrix product L U and equating it with the corresponding entries of A. In order to find the Cholesky decomposition, that is what we are going to do. We will not go through the route of starting with the Gauss elimination method to get the LU decomposition, then getting the LDV decomposition, and then writing D^{1/2}. Instead we will just start with A = G G^T: the entries of G are unknown, and they will be obtained by taking the matrix product of G and G^T and equating with the corresponding entries of our matrix A. So here an n by n positive definite matrix is given, and we want to write it as G G^T, where G is a lower triangular matrix: all the entries of G above the diagonal are 0, and, when you look at G^T, all the entries below the diagonal are 0. In order to find the decomposition, our first step is to determine the first row of G^T, and that will also determine the first column of G. So consider the first row of G and multiply it by the first column of G^T; that gives the entry a_11; then first row into second column, first row into third column, and so on up to first row into n-th column. This is as we had done in the case of the LU decomposition, with a difference: in the LU decomposition you determine the first row of U and you then still need to determine the first column of L, whereas here, once you determine this row, the column is automatically determined. That is how the work gets reduced by half. So now let us look at the first row of G times the various columns of G^T.
So, the first row of G times the j-th column of G^T gives the entry a_1j: a_1j = sum over k = 1 to n of g_1k (G^T)_kj. The entries of the first row of G are g_1k, and the entries of the j-th column of G^T are (G^T)_kj = g_jk; the notation is that the first index denotes the row and the second the column, so 1 is fixed because we take the first row, and j is fixed because we take the j-th column. Now remember that our G is a lower triangular matrix, so in the first row only one entry is non-zero, namely g_11. So the summation reduces to g_11 (G^T)_1j, that is, a_1j = g_11 g_j1 for j = 1, ..., n. Put j = 1: that gives g_11^2 = a_11, and for g_11 we take the positive square root, so g_11 = sqrt(a_11). For the other values, a_j1 is the same as a_1j because our matrix is symmetric, and a_j1 = g_11 g_j1 for j = 2, ..., n; g_11 is determined, so g_j1 = a_j1 / g_11 for j = 2, ..., n. So we have determined the first row of G^T, which is the same as the first column of G. Now take the second row of G times the j-th column of G^T. In the second row of G there are possibly two non-zero entries (in the first row there was only one). So the equations become g_21^2 + g_22^2 = a_22, and g_21 g_j1 + g_22 g_j2 = a_j2 for j = 3, ..., n. So you determine g_22 = sqrt(a_22 - g_21^2), and then g_j2 = (a_j2 - g_21 g_j1) / g_22.
So g_j2 = (a_j2 - g_21 g_j1) / g_22 for j = 3, ..., n. We continue like this: in general, g_jj = sqrt(a_jj - sum over k < j of g_jk^2), and g_ij = (a_ij - sum over k < j of g_ik g_jk) / g_jj for i > j, and so all of G, and hence G^T, is determined. The number of operations is of the order of n^3 / 6. In the case of the LU decomposition it was of the order of n^3 / 3; now, because of the symmetry in A = G G^T, it is of the order of n^3 / 6, half the number. But more important than reducing the number of operations by half is that this decomposition is a stable way of solving the system; what stability means we are going to discuss later on. So if the matrix A is given to be positive definite, then you can write it as G G^T. Now let us prove the converse: if I have a matrix A equal to M M^T, where M need not even be lower triangular but is an invertible matrix, then A = M M^T is going to be positive definite. So consider A = M M^T with M invertible, and look at A^T: A^T = (M M^T)^T, and when you take (A B)^T you get B^T A^T, so A^T = (M^T)^T M^T; but (M^T)^T gives us M, so A^T = M M^T = A. So A is symmetric. Now let x be a non-zero vector. M invertible means M^T is also invertible. Consider M^T x; let me call it y. This is not the zero vector, because if M^T x = 0, then (M^T)^{-1} M^T x = (M^T)^{-1} 0, which implies x = 0; that is a contradiction, because we started with x ≠ 0. So M^T x = y is not the zero vector.
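The formulas above assemble into the standard Cholesky algorithm; a minimal sketch (my own NumPy rendering of the procedure, not code from the lecture):

```python
import numpy as np

def cholesky(A):
    """Cholesky factorization A = G G^T with G lower triangular and
    positive diagonal.  Raises ValueError when A is not positive
    definite (the square-root or division step breaks down)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    G = np.zeros_like(A)
    for i in range(n):
        # diagonal entry: g_ii = sqrt(a_ii - sum_{k < i} g_ik^2)
        s = A[i, i] - np.dot(G[i, :i], G[i, :i])
        if s <= 0:
            raise ValueError("matrix is not positive definite")
        G[i, i] = np.sqrt(s)
        # below the diagonal: g_ji = (a_ji - sum_{k < i} g_jk g_ik) / g_ii
        for j in range(i + 1, n):
            G[j, i] = (A[j, i] - np.dot(G[j, :i], G[i, :i])) / G[i, i]
    return G
```

Only the lower triangle of A is ever read, which is where the n^3 / 6 operation count comes from.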
So, look at x^T A x: I am starting with x a non-zero vector, and, substituting for A, x^T A x = x^T M M^T x; with M^T x = y, this becomes y^T y. If y is the column vector with entries y_1, y_2, ..., y_n, then y^T is the row vector with the same entries, and y^T y = y_1^2 + y_2^2 + ... + y_n^2. Now y ≠ 0 implies y_i ≠ 0 for some i; at least one y_i has to be non-zero. So y_1^2 + y_2^2 + ... + y_n^2 is strictly bigger than 0, because each term is bigger than or equal to 0 and one term is strictly bigger than 0. So we have proved that if A = M M^T with M an invertible matrix, then A is a symmetric matrix and x^T A x > 0 for every non-zero x; so it is a positive definite matrix. So now we have: if A is a positive definite matrix, then you can write A as G G^T; conversely, if A can be written as G G^T with G invertible, then A is positive definite. This gives us a practical way to determine whether A is positive definite or not. Look at the definition of positive definiteness: we say that A should be symmetric; fine, I can verify that it is symmetric. The other part is that for every non-zero vector x we want x^T A x > 0. Now, how am I going to verify this? We have infinitely many vectors, and for all non-zero vectors we want x^T A x > 0. But we have an if and only if condition: A is positive definite if and only if A can be written as G G^T with G an invertible matrix. In the Cholesky decomposition, for the normalization, we had said that all the diagonal entries should be bigger than 0.
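The identity x^T (M M^T) x = y^T y at the heart of this converse can be checked numerically; a small illustration with an invertible M of my own choosing:

```python
import numpy as np

M = np.array([[2.0, 0.0],
              [1.0, 3.0]])     # invertible: det M = 6
A = M @ M.T                    # A = M M^T is automatically symmetric
x = np.array([1.0, -1.0])      # any non-zero vector
y = M.T @ x                    # y = M^T x, non-zero since M^T is invertible

# x^T A x equals y^T y = y_1^2 + ... + y_n^2, which is strictly positive
print(x @ A @ x, y @ y)
```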
So G, being a lower triangular matrix, has determinant equal to the product of its diagonal entries, which is bigger than 0; so G is invertible. So, given a matrix A, you first verify whether A^T = A; you can write a computer program for that. Next you try to write A as G G^T. In the first stage you are trying to determine the first row of G^T: for g_11 you take the positive square root of a_11, so if that number is less than 0, your matrix is not positive definite; and if that number is equal to 0, you cannot divide by g_11, so again your matrix is not positive definite. So: a matrix A is given to you, you have verified that it is symmetric, and now you try to write A as G G^T; for this also you can write a program. What if at some stage your algorithm breaks down? How can the algorithm break down? You are taking square roots, so if at any stage the number whose square root you need turns out to be less than 0, your matrix is not positive definite; if a diagonal entry becomes 0, again your procedure breaks down, and your matrix is not positive definite. So here is a practical test for determining whether a matrix A is positive definite or not, and such tests are rare; it is one of the occasions where you can practically determine whether a matrix is positive definite. So far we have considered systems where the coefficient matrix A was invertible and, in addition, det A_k ≠ 0 for k = 1, 2, ..., n, where A_k is the principal leading submatrix; and we have a class of matrices, the positive definite matrices, where this is applicable. But if you are given only that A is an invertible matrix, you are still going to come across such systems: A x = b with A an invertible matrix.
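This breakdown test is exactly how positive definiteness is checked in practice; a sketch using NumPy's built-in Cholesky routine, which raises an error precisely when the factorization breaks down:

```python
import numpy as np

def is_positive_definite(A):
    """Practical test: A is positive definite iff it is symmetric and
    the Cholesky factorization A = G G^T succeeds."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T):
        return False
    try:
        np.linalg.cholesky(A)          # raises LinAlgError on breakdown
        return True
    except np.linalg.LinAlgError:
        return False
```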
So, I know that A x = b has a unique solution; now, how to find that solution? That is where a modification of Gauss elimination comes into the picture, known as Gauss elimination with partial pivoting. So let us discuss Gauss elimination with partial pivoting. Our assumption now is only that A is invertible, that is, det A ≠ 0, and we look at the system. In Gauss elimination, what we were doing was introducing 0s in the first column below the diagonal: a_11 was our pivot, and when we wanted to introduce a 0 in the second row we multiplied the first row by the multiplier a_21 / a_11 and subtracted it from the second row, and so on. Now, if A is invertible it can very well happen that a_11 = 0. We had already considered such an example: a 2 by 2 matrix which is invertible because its determinant equals -1, while the pivot, the first entry a_11, is equal to 0. And even if a_11 is not equal to 0, you should avoid dividing by a small number. When you are doing exact computations, or when we do the mathematics, we say that you cannot divide by 0; when your computations are done with the help of a computer, you should also not divide by a small number. What happens when you divide by a small number is a discussion we will take up later on. So what I can do is this: I am given that the matrix A is invertible, and I look at the first column. It can very well happen that the first entry a_11 is 0, but at least one entry in the first column has to be non-zero, because if all the entries were 0, I would have a zero column, which means det A = 0, so A would not be invertible. So I look at the entries in the first column and pick the entry with maximum modulus. Suppose that entry is in the k-th row: then I interchange the k-th row and the first row.
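The lecture's 2 by 2 example is invertible with determinant -1 yet has a_11 = 0; one concrete matrix of that kind (my own choice, since the lecture does not spell it out) shows why a row interchange is needed:

```python
import numpy as np

# An invertible 2x2 matrix with a_11 = 0; its determinant is -1.
A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
print(np.linalg.det(A))            # -1 (up to rounding): A is invertible

# Naive Gauss elimination would need the multiplier a_21 / a_11 = 1 / 0,
# which is impossible.  Partial pivoting first interchanges the rows, so
# the pivot becomes the entry of largest modulus in the first column:
A[[0, 1]] = A[[1, 0]]
print(A)                           # pivot a_11 is now 1; eliminate as usual
```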
By this interchange I have done my best: I am going to divide by the entry which now occupies the (1, 1) position in the matrix. So, in the first column, among the entries, I take the entry with the maximum modulus; after interchanging these two rows I proceed as before, and this takes care of the first column; then one does a similar thing for the second column, and so on. This is known as Gauss elimination with partial pivoting. So suppose |a_k1| = max of |a_i1| over 1 ≤ i ≤ n: I am considering the entries in the first column, looking at the entry with maximum modulus, and then I interchange the first and the k-th rows. What was the k-th row before becomes the first row, and what was the first row becomes the k-th row. Now, notice that by interchanging two rows I am interchanging the order of my equations; I do not change the system, because we want to find the solution of A x = b, and whatever changes we make should give us an equivalent system. If some equations are given to you and you just change their order, you do not change the system. So we have interchanged, and now our multipliers are m_i1 = ã_i1 / ã_11, where ã_11 = a_k1 is the modified pivot entry, and the R̃_i are my new rows: R̃_2, R̃_3, ..., R̃_n. All the rows are the same as before except for the interchanged ones, and then I introduce 0s below the pivot. So, as I said, it is exactly the same as the Gauss elimination method which we had discussed earlier. Now, I would like you to notice one thing: if I do this, then |m_i1| ≤ 1, because the pivot has the maximum modulus in its column. This factor becomes important when we discuss stability. When I do this, my matrix gets modified; because we are interchanging rows, I denote the new entries by ã_11, ã_21, ..., ã_n1.
I have introduced all 0s in the first column below the pivot, and this is the modified submatrix. Now I want to do a similar thing for the second column. In the second column, look at the entries from the diagonal onwards and pick the one with maximum modulus. This maximum entry again cannot be 0, because if all those entries were 0, then the first two columns would be linearly dependent: I could subtract a multiple of the first column from the second column and get a zero column, contradicting invertibility; think about it. So among these entries at least one has to be non-zero. We interchange the second and the k-th equations and continue as before. So the difference between the Gauss elimination which we discussed before and Gauss elimination with partial pivoting is that at each stage we are interchanging rows. That is Gauss elimination with partial pivoting, and tomorrow we will show that Gauss elimination with partial pivoting is equivalent to writing P A = L U, where P is a permutation matrix, that is, a matrix obtained from the identity matrix by interchanges of rows. This again will be useful when we want to consider the backward error analysis. So we will continue tomorrow. Thank you.
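Putting the whole procedure together, Gauss elimination with partial pivoting can be sketched as follows; a minimal illustration returning P, L, U with P A = L U (my own rendering, assuming A is square and invertible):

```python
import numpy as np

def lu_partial_pivoting(A):
    """Gauss elimination with partial pivoting: returns P, L, U with
    P A = L U, L unit lower triangular, U upper triangular."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    U = A.copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        # pick the entry of largest modulus in column k, rows k..n-1
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:                      # interchange rows k and p
            U[[k, p], k:] = U[[p, k], k:]
            L[[k, p], :k] = L[[p, k], :k]
            P[[k, p], :] = P[[p, k], :]
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]       # multiplier; |m| <= 1 by pivoting
            L[i, k] = m
            U[i, k:] -= m * U[k, k:]
    return P, L, U
```

The bound |m| ≤ 1 on every multiplier is the point of the pivoting, and it is what makes the method behave well in the backward error analysis mentioned above.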