We are considering Gauss elimination with partial pivoting, so our assumption is that the coefficient matrix A is invertible. Earlier we had considered Gaussian elimination without partial pivoting, in which case we were assuming the stronger condition that the determinant of A_k is not equal to 0, where A_k is the principal leading submatrix consisting of the first k rows and first k columns. In that case we proved that Gauss elimination is equivalent to an LU decomposition of our matrix A. Now, in the case of Gauss elimination with partial pivoting, we assume only that the determinant of A is not equal to 0. In this case, we are going to show that Gauss elimination with partial pivoting is equivalent to writing an LU decomposition not of A, but of P times A, where P is a permutation matrix. A permutation matrix is obtained from the identity matrix by a finite number of row interchanges. In the Gauss elimination method with partial pivoting, it may be necessary to interchange rows, and this interchange of rows is accounted for by the permutation matrix P. So, our setting is: A is an invertible matrix. We have the system A x = b, where the determinant of A is not equal to 0, and we have n equations in n unknowns. The right hand side b_1, b_2, ..., b_n is given to us; x_1, x_2, ..., x_n is the unknown vector. Since the determinant of A is not equal to 0, this equation has a unique solution. So, what we do is we first look at the first column and consider the element which has maximum modulus. If it is in the k-th row, then we interchange the first and k-th rows. Since A is invertible, this a_k1 is not equal to 0, because if a_k1 were 0, then every entry in the first column would be 0; you would have a zero column, which would mean that the determinant of A is 0. So, you interchange the first and the k-th row. After the interchange, our matrix looks like this. We are not changing the system. 
We are just changing the order of our equations: what was the first equation has now become the k-th equation, and what was the k-th equation has become the first equation. Now the element in the first row and first column has maximum modulus in the first column. Next we look at the multipliers and do Gauss elimination; that is, we want to introduce 0s below the diagonal in the first column. So our multipliers m_i1 are equal to a_i1 divided by the pivot a_11 (which was the old a_k1), for i = 2 up to n. Since the pivot has maximum modulus, the modulus of m_i1 will be less than or equal to 1. Let r_i denote the rows; the modified row r_i tilde becomes r_i minus m_i1 r_1. So we introduce 0s in the first column and obtain the new system in this manner. Now, whatever we have done for the n by n matrix, we repeat on the submatrix of order n minus 1. Our aim is to introduce 0s in the second column below the diagonal. So you look for the maximum modulus among a_22 tilde up to a_n2 tilde; suppose it is the modulus of a_k2 tilde, the maximum over 2 <= i <= n. We do not want to disturb the first row, so we are working only on this (n-1) by (n-1) submatrix. Since A is invertible, again this pivot will not be equal to 0. Now interchange the second and k-th equations, take multipliers m_i2 = a_i2 tilde divided by a_k2 tilde, and continue. So in Gauss elimination with partial pivoting, at every step there may be an interchange of rows, and we are going to show that this interchange can be achieved by pre-multiplying our matrix by a permutation matrix. Some notation: A is our n by n matrix, and e_j is the canonical vector with 1 at the j-th place and 0 elsewhere. When you look at A times e_j, that gives us the j-th column. 
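The first step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the lecture's own code; the function name and the example system are mine, chosen so that the pivot ends up being the entry 4 in the second row.

```python
import numpy as np

def pivot_step(A, b):
    """One step of partial pivoting: pick the largest-modulus entry in
    column 0, swap that row to the top, then eliminate below the pivot.
    By the choice of pivot, every multiplier satisfies |m| <= 1."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    k = np.argmax(np.abs(A[:, 0]))   # row index of the pivot
    A[[0, k]] = A[[k, 0]]            # interchange first and k-th row
    b[[0, k]] = b[[k, 0]]
    for i in range(1, len(b)):
        m = A[i, 0] / A[0, 0]        # multiplier m_i1, |m| <= 1
        A[i] -= m * A[0]             # r_i <- r_i - m_i1 r_1
        b[i] -= m * b[0]
    return A, b

A = np.array([[1.0, 2.0, 1.0],
              [4.0, 3.0, 2.0],
              [2.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
A1, b1 = pivot_step(A, b)
# the first column of A1 below the diagonal is now zero
```

The same routine would then be applied to the trailing (n-1) by (n-1) submatrix, exactly as the lecture describes.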
So you have A e_j: A is this n by n matrix multiplied by the vector (0, ..., 0, 1, 0, ..., 0), where the 1 is at the j-th place. In the matrix-vector multiplication, only the entry corresponding to that 1 contributes, so what you get is (a_1j, a_2j, ..., a_nj); that is the j-th column. In a similar manner, one can show that e_i transpose times A gives the i-th row, (a_i1, a_i2, ..., a_in). Now look at the matrix A: its n columns are given by A e_1, A e_2, ..., A e_n. Look at another matrix B: its columns are given by B e_1, B e_2, ..., B e_n, where e_1, ..., e_n are the canonical vectors; call these columns c_1, c_2, ..., c_n. When you consider the matrix product A B, its columns are given by A B e_1, A B e_2, ..., A B e_n, but B e_1 is nothing but c_1. So the product A B is obtained by multiplying A with the first column of B, and that gives the first column of A B; A c_2 is the second column of A B, and A c_n is the n-th column of A B. Now, we have seen before that when we wanted to do r_i minus m_i1 times r_1, that is, subtract a multiple of the first row from the i-th row, this operation can be performed by pre-multiplying our matrix A by a matrix which we had called E_1. The matrix E_1 has 1s along the diagonal, and in the first column, from the second entry onwards, the entries -m_21, -m_31, ..., -m_n1. When you consider E_1 times A, you get the modified matrix, which is the first step of the Gauss elimination method. In fact, this matrix E_1 is obtained from the identity matrix by doing the same transformation. Let me repeat: we have our matrix A, and we are modifying the i-th row by r_i minus m_i1 r_1; we multiply the first row by m_i1 and subtract it from the i-th row. 
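The two facts about canonical vectors can be checked directly; this is a small sketch of my own, using a 3 by 3 example matrix.

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)   # [[1,2,3],[4,5,6],[7,8,9]]
e2 = np.zeros(3)
e2[1] = 1.0                              # canonical vector with 1 at the 2nd place

col = A @ e2    # A e_j picks out the j-th column of A
row = e2 @ A    # e_j^T A picks out the j-th row of A
```

Here `col` is the second column (2, 5, 8) and `row` is the second row (4, 5, 6), matching the statement in the lecture.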
Suppose you perform this operation on the identity matrix. Your starting point is the identity, with 1s on the diagonal. When you do r_2 minus m_21 r_1, that is, multiply the first row by m_21 and subtract it from the second row, you get -m_21 in the (2,1) position, with 1 remaining on the diagonal. Similarly, for r_n minus m_n1 r_1, multiply the first row by m_n1 and subtract it from the last row, so you get -m_n1 in the (n,1) position and 1 on the diagonal. So this is how the matrix E_1 is obtained: whatever row transformations you want to perform, you perform them on the identity matrix, you get a matrix E_1, and then E_1 times A gives the modified matrix. Now, what we want to show is that in Gauss elimination with partial pivoting, the first step interchanges the first and k-th rows, and this step can be achieved by pre-multiplying by a matrix, say P_1. This matrix P_1 is obtained from the identity matrix by interchanging the first and k-th rows. So it is exactly the same principle: whatever operation you want to do on your matrix A, you do it on the identity matrix, you get an n by n matrix, you pre-multiply your matrix A by that matrix, and you have the desired effect. So look at the identity matrix and interchange the first and k-th rows. The 1 that was at position (1,1) now occupies position (k,1), and the 1 that was at the diagonal position (k,k) now occupies position (1,k). This is our matrix P_1. Now it is clear that P_1 squared is the identity: you take the identity matrix, interchange the first and k-th rows to get the matrix P_1, and if you interchange the first and k-th rows again, you get back your earlier matrix. 
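Both constructions, E_1 from row operations on the identity and P_1 from a row interchange of the identity, can be written out concretely. This is a sketch under my own choice of n, k and multipliers; none of these numbers come from the lecture.

```python
import numpy as np

n, k = 4, 2                        # swap first and k-th row (0-based index 2)
m = np.array([0.5, -0.25, 0.75])   # hypothetical multipliers m_21, m_31, m_41

# E1: perform R_i <- R_i - m_i1 R_1 on the identity matrix
E1 = np.eye(n)
E1[1:, 0] = -m

# P1: interchange the first and k-th rows of the identity matrix
P1 = np.eye(n)
P1[[0, k]] = P1[[k, 0]]

A = np.random.rand(n, n)
# E1 @ A subtracts multiples of the first row from the other rows;
# P1 @ A swaps the first and k-th rows of A
```

Multiplying by E1 or P1 on the left reproduces exactly the row operations performed on the identity, which is the point of the construction.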
So that is why P_1 squared is the identity, and the matrix P_1 is a symmetric matrix, so you have P_1 transpose equal to P_1. Now, the columns of the identity matrix are nothing but e_1, e_2, ..., e_n, the canonical vectors. In the matrix P_1, the first column becomes e_k and the k-th column becomes e_1; all the remaining columns stay the same. So P_1 = P_1 transpose has columns e_k, then e_2, e_3, ..., e_{k-1} as before, then e_1 in the k-th place, and then e_{k+1} up to e_n. Now look at A P_1. We have seen that to take the product A B, you look at the columns of B, call them c_1, c_2, ..., c_n, and then A B has first column A c_1, second column A c_2, and so on up to A c_n. So consider A P_1: its first column is A e_k, then A e_2, ..., A e_{k-1}, A e_1, A e_{k+1}, ..., A e_n. But what is A e_k? It is the k-th column of A. So the columns are c_k, c_2, ..., c_{k-1}, c_1, c_{k+1}, ..., c_n. This means that when you form A P_1, you are interchanging the first and the k-th columns of A. So we have obtained a permutation matrix P_1, which was obtained from the identity matrix by interchanging the k-th and first rows. If you post-multiply your matrix A by P_1, then what it does is interchange the k-th and first columns of A. So post-multiplication by a permutation matrix means interchange of columns. Now let us show that if instead of A P_1 you look at P_1 A, that is, pre-multiplying, then that amounts to interchanging the corresponding rows of A. We have seen that e_i transpose times A is the i-th row of A: A is an n by n matrix, e_i is an n by 1 column vector, so its transpose is a 1 by n row vector. 
A 1 by n row vector multiplied by an n by n matrix gives a row vector, and that is our vector r_i, the i-th row of A. Now we look at P_1 times A. The first row of P_1 is nothing but e_k transpose, the row vector with 1 at the k-th place and 0 elsewhere; the second row is e_2 transpose; the (k-1)-th row is e_{k-1} transpose; the k-th row is e_1 transpose; and so on. So when you form P_1 times A, row by row, you get e_k transpose A, e_2 transpose A, and so on. Since e_i transpose A = r_i, the i-th row of A, the first row of P_1 A is the k-th row of our matrix A, and the k-th row becomes the first row. So the interchange of the first and k-th rows of our matrix A is done by pre-multiplying by P_1. Let me summarize. You take the identity matrix and interchange the first and k-th rows. (For the identity matrix, interchanging the first and k-th columns instead would give the same result.) You obtain a matrix P_1. If I look at A P_1, the effect is an interchange of the k-th and first columns of A; if I look at P_1 A, the effect is an interchange of the k-th and first rows. So, in Gauss elimination with partial pivoting, at each step you may be interchanging rows, and this interchange of rows can be achieved by pre-multiplying by such a matrix: a permutation matrix obtained from the identity matrix by interchanging just two rows. 
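The summary, pre-multiplication swaps rows while post-multiplication swaps columns, together with the properties P_1^2 = I and P_1^T = P_1, can be verified numerically. A small sketch with my own choice of n and k:

```python
import numpy as np

n, k = 4, 2
P1 = np.eye(n)
P1[[0, k]] = P1[[k, 0]]   # identity with the first and k-th rows interchanged

A = np.arange(16.0).reshape(4, 4)

rows_swapped = P1 @ A     # pre-multiplication: rows 1 and k interchanged
cols_swapped = A @ P1     # post-multiplication: columns 1 and k interchanged
```

The assertions P_1 P_1 = I and P_1 = P_1^T hold because a single interchange undoes itself and the swapped identity is symmetric.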
So the reduction of A to an upper triangular matrix U using partial pivoting can be represented by pre-multiplying your matrix A by E_{n-1} P_{n-1} E_{n-2} P_{n-2} ... E_1 P_1, where the P_i are permutation matrices which account for the interchanges of rows, and E_1, E_2, ..., E_{n-1} account for subtracting a multiple of an appropriate row from another row. So what we have got is Gauss elimination with partial pivoting: its effect on the coefficient matrix A can be written as E_{n-1} P_{n-1} E_{n-2} P_{n-2} ... E_1 P_1 A = U. Earlier, we had only E_{n-1} E_{n-2} ... E_1 A = U. What we did then was this: E_{n-1}, E_{n-2}, ..., E_1 are all lower triangular matrices, so their product is also lower triangular, and each is invertible; you take the inverse and get A equal to the inverse of a lower triangular matrix times U. U is upper triangular, and that inverse gave us L. Now what is happening is that we have the permutation matrices in between. So we want to show that this equation can be written in the form E'_{n-1} E'_{n-2} ... E'_1 times all the permutation matrices together, times A, equal to U. We want to show that all these E' matrices are lower triangular and invertible, and that P_{n-1} P_{n-2} ... P_1 together give us a permutation matrix P. In P_1, P_2, ..., P_{n-1} we were interchanging only one pair of rows at a time; in P we will be interchanging finitely many rows. I am going to show this for 3 by 3 matrices, to give you an idea; a similar argument works in general. What we want to do is this: we have lower triangular matrices and permutation matrices alternating, in that order, and I want to collect all the lower triangular matrices together and all the permutation matrices together. 
That is what we want to do, and we are going to illustrate it for a 3 by 3 matrix. So consider E_2 P_2 E_1 P_1 A = U. Here n = 3, A is a 3 by 3 matrix, U is an upper triangular matrix, P_1 and P_2 are obtained from the identity matrix by interchanging one pair of rows, and E_1 and E_2 are lower triangular matrices. We know that P_2 squared is the identity, so after E_1 I can insert P_2 squared: you get U = E_2 P_2 E_1 (P_2 P_2) P_1 A. Now the product P_2 E_1 P_2 I denote by E'_1, so I have U = E_2 E'_1 P_2 P_1 A. Now I have managed to bring my permutation matrices together, but I need to show that the E'_1 which I obtain is a lower triangular matrix, because I start with E_1 lower triangular; in fact, it is unit lower triangular. So for E'_1 = P_2 E_1 P_2, I need to show that it retains the lower triangular structure. Let us look at E_1: it is the matrix with first column (1, -m_21, -m_31), second column (0, 1, 0) and third column (0, 0, 1). That is, it is the matrix which accounts for r_2 minus m_21 r_1 and r_3 minus m_31 r_1. P_2 is the matrix in which you interchange the second and third rows. Then P_2 E_1 is the interchange of the second and third rows of E_1: we know that pre-multiplying by a permutation matrix means interchange of rows, and post-multiplying by a permutation matrix corresponds to interchange of columns. So what we are doing is this: you apply E_1, the first step of Gauss elimination, and get a modified matrix. In that modified matrix you now want to introduce a 0 in the second column below the diagonal, and you may have to interchange rows; since I am taking a 3 by 3 matrix, the only possible row interchange is between the second and third rows. 
So that is what I am doing, and now I am trying to show that E_1 is lower triangular and that P_2 E_1 P_2 is also lower triangular. P_2 E_1 is the interchange of the second and third rows of E_1: its first row is (1, 0, 0), its second row is (-m_31, 0, 1), and its third row is (-m_21, 1, 0). What was the second row becomes the third row, and what was the third row becomes the second row. If you look at this matrix, it is not lower triangular: you have a 1 above the diagonal. But now consider P_2 E_1 multiplied on the right by P_2: it interchanges the second and third columns. When you interchange the second and third columns, the matrix you get has first column (1, -m_31, -m_21) and 1s on the diagonal; what was the second column becomes the third column and vice versa. This is a unit lower triangular matrix. So P_2 E_1 P_2, which was our E'_1, is lower triangular; E_2 is also a lower triangular matrix. So we have U = E_2 (P_2 E_1 P_2) P_2 P_1 A = E_2 E'_1 P_2 P_1 A. We have proved that E'_1 is unit lower triangular, and E_2 is also unit lower triangular. Because they are unit lower triangular, the diagonal entries are equal to 1, so they are invertible. So we get P_2 P_1 A equal to (E_2 E'_1) inverse times U, and this inverse is unit lower triangular. So that is our L times U, and P_2 P_1 together is P: you have P A = L U. So Gauss elimination with partial pivoting is equivalent to P A = L U, where P is a permutation matrix. When we considered the Gauss elimination method without row interchanges, that is, without partial pivoting, we said that we have two equivalent ways. 
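The conclusion P A = L U can be demonstrated end to end with a small factorization routine. This is a sketch of my own of the standard algorithm the lecture describes, applied to an example matrix I chose; note how a row interchange at step j also swaps the multipliers already stored in L, which is exactly the role the E'_j matrices play in the argument above.

```python
import numpy as np

def lu_partial_pivoting(A):
    """Return P, L, U with P @ A = L @ U.  L is unit lower triangular with
    entries bounded by 1 in modulus; P is a permutation matrix.  Assumes
    A is invertible (every pivot is nonzero)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for j in range(n - 1):
        k = j + np.argmax(np.abs(U[j:, j]))   # pivot row for column j
        if k != j:
            U[[j, k], j:] = U[[k, j], j:]     # interchange rows of U
            L[[j, k], :j] = L[[k, j], :j]     # swap the stored multipliers too
            P[[j, k]] = P[[k, j]]
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]       # multiplier, |L[i,j]| <= 1
            U[i, j:] -= L[i, j] * U[j, j:]
    return P, L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
P, L, U = lu_partial_pivoting(A)
```

After the call, P @ A equals L @ U, with L unit lower triangular and U upper triangular, which is the statement proved in the lecture.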
Either you proceed with the Gauss elimination method, storing your multipliers at the appropriate places, and then construct your matrices L and U, where U is the final matrix obtained in the Gauss elimination method; or, the other way, you start with matrix A, try to write it as L times U, treat the elements of L and U as unknowns, take the multiplication, equate it to the corresponding entries of A, and determine L and U that way. In both cases the number of computations is of the same order. So we had equivalent ways of doing it, and from the LU decomposition we went on to the Cholesky decomposition and so on. Now, here we have written P A = L U, so one may ask whether we can do it directly. The problem is that we do not know beforehand which row interchanges we are going to need. When you do Gauss elimination with partial pivoting, it becomes clear: look at the first column, find the entry with maximum modulus, and interchange the corresponding row with the first row. But we do not know this beforehand, and an interchange may be necessary at every stage. So this P A = L U is going to be useful for the backward error analysis; but when you want to do Gauss elimination with partial pivoting, you have to proceed step by step, and what we have proved is the existence of such an LU decomposition not for the matrix A, but for the matrix P A. Now, the permutation matrix P is obtained by starting with the identity matrix and doing a finite number of row interchanges; since we have an n by n matrix, you do the row interchange a finite number of times. Look at the determinant of P: each row interchange changes the sign of the determinant, and the determinant of the identity matrix is 1. 
Now, it will depend on whether you are doing an even number of interchanges or an odd number of interchanges: depending on that, the determinant of P is going to be +1 or -1. Then look at P A = L U. L is unit lower triangular, so the determinant of L is equal to +1, and the determinant of U equals the determinant of P times the determinant of A. So the determinant of U is plus or minus the determinant of A, and it is not equal to 0, because we are assuming A to be an invertible matrix. So we know how to solve a system A x = b. If you want to find the inverse of a matrix, we have the classical formula in terms of the adjoint, which involves determinants; that is very expensive. So if you need to calculate the inverse of a matrix, you can do it as follows. Suppose your A is an invertible matrix and the aim is to find A inverse. The first column of A inverse will be A inverse times e_1, the second will be A inverse times e_2, and the last one will be A inverse times e_n. Look at A inverse e_1; call it u_1. This means A u_1 = e_1. So you solve this equation: your system of linear equations has the invertible coefficient matrix A, and you take the right hand side to be the vector (1, 0, ..., 0). Solving it, you get a vector u_1; write this as the first column of your A inverse. Then you look at the system A u_2 = e_2, where e_2 is the canonical vector (0, 1, 0, ..., 0); you solve it, and that gives the second column of A inverse, and so on. So you will be solving n systems of equations in which the coefficient matrix A is the same; only the right hand side differs. So whatever work you do, you can use Gauss elimination with partial pivoting. 
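The column-by-column idea can be sketched as follows. This is my own minimal illustration: it calls `np.linalg.solve` for each right-hand side, which refactors A every time; as the lecture goes on to explain, in practice you would compute the LU decomposition once and reuse it for all n right-hand sides.

```python
import numpy as np

def inverse_by_columns(A):
    """Solve A x_j = e_j for j = 1..n; the solution vectors are the
    columns of A^{-1}."""
    n = A.shape[0]
    cols = []
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0                          # canonical vector e_j
        cols.append(np.linalg.solve(A, e))  # j-th column of the inverse
    return np.column_stack(cols)

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
Ainv = inverse_by_columns(A)
```

The result agrees with the inverse computed directly, since each solved system pins down one column of A inverse.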
You keep track of the operations which you do and of the upper triangular matrix which you obtain. The classical formula is A inverse = (1 / determinant of A) times the adjoint of A. Instead, you look at A X = I, that is, A x_i = e_i for i going from 1 up to n; what I was calling u_1, u_2, ..., u_n here I am writing as x_1, x_2, ..., x_n. You do the LU decomposition only once: that is about n cubed over 3 flops. Then you do forward and backward substitution n times; each forward and backward substitution is of the order of n squared, and you do this n times, so that is n cubed. That means the inverse of A can be obtained in n cubed over 3 plus n cubed, which is 4 n cubed over 3 flops, and it is a much more efficient way than using the classical formula for calculating the inverse. Now, I have been saying that the calculation of the determinant is expensive if you use the formula: it takes more than n factorial operations. So why not use the LU decomposition? If your matrix A can be written as L U, then the determinant of U is equal to the determinant of A; or, if you are using Gaussian elimination with partial pivoting, then P A = L U, and the determinant of U will be plus or minus the determinant of A. In fact, one does such things when calculating a determinant by hand: one tries to simplify, to introduce zeros, because if there are many zeros the calculation of the determinant becomes easier. So here, using a computer, you can reduce your matrix A to the upper triangular form U, and the determinant of U will equal the determinant of A if there are no row interchanges, or plus or minus the determinant of A depending on the number of row interchanges. So the LU decomposition can be useful even for calculating the determinant of a matrix, when you need to calculate one. 
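The determinant computation via elimination can be sketched directly: reduce to upper triangular form with partial pivoting, multiply the diagonal entries, and flip the sign once per row interchange. This is my own sketch of the standard procedure; it assumes A is invertible so that every pivot is nonzero.

```python
import numpy as np

def det_via_elimination(A):
    """Reduce A to upper triangular form with partial pivoting.
    det(A) = (-1)^(number of row interchanges) * product of the pivots."""
    U = A.astype(float).copy()
    n = U.shape[0]
    sign = 1.0
    for j in range(n - 1):
        k = j + np.argmax(np.abs(U[j:, j]))
        if k != j:
            U[[j, k]] = U[[k, j]]
            sign = -sign                  # each interchange flips the sign
        for i in range(j + 1, n):
            U[i, j:] -= (U[i, j] / U[j, j]) * U[j, j:]
    return sign * np.prod(np.diag(U))

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
```

This costs about n cubed over 3 flops, versus the more-than-n-factorial cost of the cofactor expansion mentioned above.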
Now, for Gauss elimination with partial pivoting, there is one case in which you will not need any row interchange, and that is when your matrix is diagonally dominant by columns. That means precisely that the modulus of each diagonal entry is greater than or equal to the sum of the moduli of all the remaining entries in its column. So that is the definition of a matrix diagonally dominant by columns: the summation over i from 1 to n, i not equal to j, of the modulus of a_ij, that is, over the entries of the j-th column except the diagonal entry, is less than or equal to the modulus of a_jj, for j = 1 up to n. In particular, looking at the first column, you have the summation over i from 2 to n of the modulus of a_i1 less than or equal to the modulus of a_11. Now, what we were doing was to look at the first column and find the entry of maximum modulus. If your first column is such that the modulus of a_11 is not only bigger than each entry in the column, but greater than or equal to the sum of the moduli of the remaining entries, then obviously a_11 is the entry with maximum modulus in that column, so there is no need of a row interchange. This is for the first step; what about the second one? Now I do not need a row interchange, so I subtract appropriate multiples of the first row from the remaining rows and get a modified matrix. What is the guarantee that at the second stage also I do not need a row interchange, since now I have a different matrix? That we are going to do as a tutorial problem: if your matrix A is diagonally dominant by columns, then after the first step it gets transformed to a matrix A_1. 
The matrix A_1 has the first row as it is, a_11, a_12, ..., a_1n; the rest of the first column is 0; and in the remaining block you have some matrix, let me call it B. The matrix B is an (n-1) by (n-1) matrix, and we now work on this matrix B. In the tutorial problem we will show that B is also diagonally dominant by columns. So that is going to be our tutorial problem. Now, when we started with Gauss elimination with partial pivoting, I had said that you should not divide by a small number, because that introduces error and you lose accuracy. So we said that in the column you look for the element of maximum modulus. Maybe a better way of doing it is to look at the whole matrix: among all n squared entries, you look for the element which has maximum modulus, and you bring that entry to the position in the first row and first column. To do this you will have to interchange rows and also interchange columns. When you interchange the rows you have the same system; you do not change your system. When you interchange the columns, you are renaming your variables: suppose I have a system with unknowns x_1, x_2, x_3, ..., x_n; if I interchange the first and third columns, then what was my variable x_1 becomes x_3, and what was my variable x_3 becomes x_1. So maybe that is the better way of doing it, and it is known as Gauss elimination with complete pivoting. For A x = b with A an invertible matrix, in the first step you look for the modulus of a_kl, the entry in the k-th row and l-th column, which is maximum among all the elements; that means you will have to do n squared comparisons. Then you interchange the first and k-th rows and the first and l-th columns; to do that, you consider P_1 A Q_1, where P_1 and Q_1 are permutation matrices. 
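The first step of complete pivoting can be sketched as follows. This is my own illustration, with an example matrix chosen so that the largest-modulus entry (9) sits away from the (1,1) position; the function name is hypothetical.

```python
import numpy as np

def complete_pivot_step(A):
    """First step of complete pivoting: among all n^2 entries find the one
    of maximum modulus, bring it to position (1,1) by a row interchange
    (P1 @ A) and a column interchange (A @ Q1), then eliminate below it.
    The column interchange amounts to renaming the unknowns."""
    A = A.astype(float)
    n = A.shape[0]
    k, l = np.unravel_index(np.argmax(np.abs(A)), A.shape)
    P1 = np.eye(n)
    P1[[0, k]] = P1[[k, 0]]          # interchange rows 1 and k
    Q1 = np.eye(n)
    Q1[:, [0, l]] = Q1[:, [l, 0]]    # interchange columns 1 and l
    B = P1 @ A @ Q1
    for i in range(1, n):
        B[i] -= (B[i, 0] / B[0, 0]) * B[0]
    return B, P1, Q1

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 9.0, 6.0],
              [7.0, 8.0, 5.0]])
B, P1, Q1 = complete_pivot_step(A)
# B[0, 0] is the largest-modulus entry of A
```

Finding this pivot already costs n squared comparisons at the first step alone, which is the cost objection raised next.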
Pre-multiplication by a permutation matrix means interchange of rows; post-multiplication by a permutation matrix means interchange of columns. So you do that, and complete pivoting should be better in terms of stability than partial pivoting, and that is true; but one has to keep the cost in mind. In practice, partial pivoting works well, and that is why one does not really do complete pivoting. Complete pivoting is equivalent to doing the LU decomposition not of the matrix A, but of P A Q, where P and Q are permutation matrices, that is, they are obtained from the identity matrix by finitely many row interchanges or, equivalently, column interchanges. So: Gauss elimination without partial pivoting is A = L U; with partial pivoting it is P A = L U; and complete pivoting means P A Q = L U, and you will need of the order of n cubed comparisons. That is too expensive, and that is why complete pivoting is not done. In our next lecture we are going to consider vector norms and the induced matrix norm. Thank you.