Today, we are going to solve some problems. First, we are going to show that the product of two lower triangular matrices is lower triangular, and that if the diagonal entries of both lower triangular matrices are equal to 1, then the product also has diagonal entries equal to 1. Then we will show that the inverse of a lower triangular matrix is again lower triangular, and then we will use these results to show uniqueness of the LU decomposition. So, I will recall that result. Here is the first tutorial problem: you have got two matrices L and M, say of size n by n, and they are unit lower triangular, that means the diagonal entries are equal to 1. We want to show that L into M is also a unit lower triangular matrix. The proof is going to be straightforward; we will just look at the multiplication of these two matrices. The structure of our matrix L is of this form: along the diagonal we have got ones, the part which lies below the diagonal can have non-zero or zero entries, but the elements which are above the diagonal are definitely going to be zero. So, that means L i j is equal to 0 if i is less than j, where i denotes the row index and j denotes the column index, and L i i is equal to 1 for i equal to 1 up to n. The matrix M has a similar form, and now we look at the product L into M. So, we have L i j equal to 0 and M i j equal to 0 if i is less than j, and L i i equal to M i i equal to 1 for i going from 1 up to n. The (i, j)-th element of the matrix L M we denote by (L M)(i, j); by the definition of matrix multiplication this is given by the summation of L i k M k j for k going from 1 to n. So, it is the i-th row of L multiplied by the j-th column of M. Now, using the fact that L i k is equal to 0 when k is bigger than i, the summation runs only from k equal to 1 up to i, because from k equal to i plus 1 onwards L i k will be 0. So, this is going to be the summation for k going from 1 to i of L i k M k j.
We are interested in the elements of L M when i is less than j. For the case i less than j, M k j will be 0 for k going from 1 up to i, because for k equal to 1 up to i, j will be bigger than k, and M has the property that M i j is 0 if i is less than j; hence in this summation the terms M k j are going to be 0. So, you get (L M)(i, j) equal to 0. So, as in the case of L and M, when you consider the elements (L M)(i, j) for i less than j, they are equal to 0. Now it remains to show that the diagonal entries are equal to 1. So, we look at a diagonal entry of L M, say (L M)(i, i); that is given by the i-th row of L multiplied by the i-th column of M, and we obtain (L M)(i, i) to be the summation of L i k M k i for k going from 1 to n. As before, this summation reduces to the summation for k going from 1 to i of L i k M k i, and the only non-zero term in the summation is when k is equal to i. So, you get this to be equal to L i i M i i, and this is equal to 1. Thus, the product of two lower triangular matrices is going to be a lower triangular matrix, and if the diagonal entries of both matrices are equal to 1, then the product also has the same property. Now, the next example which we want to consider is: if L is a lower triangular matrix, then its inverse is again lower triangular. So, you start with an invertible lower triangular matrix and we want to show that its inverse is lower triangular. When we had discussed the Gauss elimination method, we had said that finding an inverse using the classical formula is not advisable because it is very expensive. What one should do, if one wants to find the inverse of a matrix, is to solve the system A x equal to b for right hand sides b equal to E 1, E 2, ..., E n, which are the canonical vectors; then, when you consider A x equal to E j, whatever result you get, you should write it as the j-th column.
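As an aside (not part of the lecture), the statement can be checked numerically. Here is a minimal Python/NumPy sketch; the matrices L and M below are arbitrary illustrative choices of unit lower triangular matrices:

```python
import numpy as np

# Two arbitrary unit lower triangular matrices (illustrative choices).
L = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, 0.0],
              [4.0, 3.0, 1.0]])
M = np.array([[1.0, 0.0, 0.0],
              [5.0, 1.0, 0.0],
              [6.0, 7.0, 1.0]])
P = L @ M

# The product is again unit lower triangular:
# zeros above the diagonal, ones on the diagonal.
assert np.allclose(np.triu(P, k=1), 0.0)
assert np.allclose(np.diag(P), 1.0)
```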
So, that gives you the inverse of A; it is this result we are going to use to show that the inverse of a lower triangular matrix is lower triangular. For the computation of A inverse: A inverse is given by the columns A inverse E 1, A inverse E 2, ..., A inverse E j, ..., A inverse E n. So, the j-th column of A inverse is C j, which is equal to A inverse E j; equivalently, you get A C j equal to E j for j going from 1 up to n. Thus, the j-th column of A inverse is obtained by solving the system of linear equations A C j equal to E j, where E j is the canonical vector with 1 at the j-th place and 0 elsewhere. Now take L to be a unit lower triangular matrix; because it is unit lower triangular, the determinant of L will be equal to 1, the product of the diagonal entries, and hence it will be invertible. We want to show that L inverse is also unit lower triangular. So, you write L inverse in terms of its columns L inverse E 1, L inverse E 2, ..., L inverse E n; the j-th column C j of L inverse, with entries b 1 j, b 2 j, ..., b n j, is equal to L inverse E j. That gives you L C j equal to E j, a system of equations to solve. Now, our matrix L has got 1 along the diagonal, 0's above the diagonal, and arbitrary entries below the diagonal; this is multiplied by the j-th column, which I am denoting by b 1 j, b 2 j, ..., b n j, and equated to the right hand side E j, with 1 at the j-th place and 0 elsewhere. Since the matrix L is lower triangular, we can use forward substitution. The first equation has only one unknown, that is b 1 j, and equating you get b 1 j to be 0. The second equation will be L 2 1 b 1 j plus b 2 j equal to 0; now b 1 j is already 0, so you get b 2 j equal to 0, and continuing in this fashion you get b (j minus 1) j equal to 0. When you look at the j-th equation, it will be L j 1 b 1 j plus L j 2 b 2 j plus ... plus L j j b j j equal to 1, where L j j is going to be 1, and since b 1 j, b 2 j, ..., b (j minus 1) j are equal to 0, you obtain b j j to be equal to 1.
If you look at the (j plus 1)-th equation, it reduces to L (j plus 1) j b j j plus b (j plus 1) j equal to 0, because the earlier b's in that row are 0. Now, b j j is equal to 1, so you are going to get b (j plus 1) j equal to minus L (j plus 1) j. That is how you can calculate the j-th column, and in the j-th column the important thing is that the first j minus 1 entries are 0, the j-th entry is equal to 1, and the remaining entries may be non-zero. They may be non-zero or they may be zero, it depends on your matrix L, but definitely in the j-th column the first j minus 1 entries are going to be equal to 0 and the j-th entry is going to be equal to 1. That is what makes our inverse unit lower triangular. So, when you look at L inverse, it is going to have ones along the diagonal, a star below the diagonal (star means either zero or non-zero), and all zeros above. So, we get L inverse also to be unit lower triangular. Now, when you look at upper triangular matrices: is the product of two upper triangular matrices upper triangular, and is the inverse of an upper triangular matrix upper triangular? The result is true, and we can deduce it for upper triangular matrices by taking transposes, just as we have proved that the product of two lower triangular matrices is again lower triangular. If you are looking at two upper triangular matrices, say U 1 and U 2, then when you take the transpose, the transpose of an upper triangular matrix becomes a lower triangular matrix. So, take the transpose, use the fact that the product of two lower triangular matrices is again a lower triangular matrix, and then again take the transpose. The same idea works for showing that the inverse of an upper triangular matrix is upper triangular; there we use the fact that the inverse and transpose operations commute, in the sense that A inverse transpose is going to be the same as A transpose inverse.
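The column-by-column forward substitution described above can be sketched in code. This is an illustrative Python implementation (the function name `unit_lower_inverse` and the test matrix are my own choices, not from the lecture):

```python
import numpy as np

def unit_lower_inverse(L):
    """Invert a unit lower triangular matrix column by column by solving
    L c_j = e_j with forward substitution, as in the argument above."""
    n = L.shape[0]
    inv = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        c = np.zeros(n)
        for i in range(n):
            # the diagonal entry L[i, i] is 1, so no division is needed
            c[i] = e[i] - L[i, :i] @ c[:i]
        inv[:, j] = c
    return inv

L = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, 0.0],
              [3.0, 2.0, 1.0]])
Linv = unit_lower_inverse(L)
# first j-1 entries of column j are 0, the j-th entry is 1:
assert np.allclose(np.triu(Linv, k=1), 0.0)
assert np.allclose(np.diag(Linv), 1.0)
assert np.allclose(L @ Linv, np.eye(3))
```

Notice that, exactly as in the lecture, the entry just below the diagonal comes out as minus the multiplier: here Linv[1, 0] is -2 and Linv[2, 1] is -2.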
Whether I take the inverse first and then the transpose, or the transpose first and then the inverse, the result is the same. So, let us show this now for upper triangular matrices. U and V are unit upper triangular; take the transpose, then they become unit lower triangular. By what we have just proved, V transpose U transpose is a unit lower triangular matrix; but V transpose U transpose is nothing but the transpose of U into V. So, if (U V) transpose is unit lower triangular, U V is going to be unit upper triangular. So, U and V unit upper triangular implies their product is also unit upper triangular. For the inverse: U is unit upper triangular, take its transpose, it becomes unit lower triangular, take its inverse, that is going to be unit lower triangular, which we proved just now; because the operations of transpose and inverse commute, this is (U inverse) transpose. Now, if (U inverse) transpose is unit lower triangular, then U inverse has to be unit upper triangular. Using these two results, let us recall the result on uniqueness of the LU decomposition. If we have A equal to L 1 U 1 equal to L 2 U 2, where L 1 and L 2 are unit lower triangular matrices and U 1, U 2 are upper triangular matrices and invertible, then from this relation I can deduce that L 2 inverse L 1 is equal to U 2 U 1 inverse: I pre-multiply by L 2 inverse and post-multiply by U 1 inverse. Now the left hand side is a unit lower triangular matrix and the right hand side is an upper triangular matrix; if both are equal, then both of them should be a diagonal matrix, and because the diagonal entries of L 2 inverse L 1 are going to be 1, the diagonal matrix is nothing but the identity matrix.
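A quick numerical sketch of the transpose argument (not from the lecture; U and V are arbitrary unit upper triangular choices):

```python
import numpy as np

U = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [0.0, 0.0, 1.0]])
V = np.array([[1.0, 5.0, 6.0],
              [0.0, 1.0, 7.0],
              [0.0, 0.0, 1.0]])

# V^T U^T is a product of unit lower triangular matrices, hence unit lower
# triangular; its transpose is U V, which is therefore unit upper triangular.
assert np.allclose((V.T @ U.T).T, U @ V)
assert np.allclose(np.tril(U @ V, k=-1), 0.0)
assert np.allclose(np.diag(U @ V), 1.0)

# Inverse and transpose commute: (U^{-1})^T equals (U^T)^{-1}.
assert np.allclose(np.linalg.inv(U).T, np.linalg.inv(U.T))
```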
So, this is the uniqueness of the LU decomposition: if A is invertible, then A can be written uniquely as L into U. When we proved the LU decomposition of the matrix A, we had made an additional assumption: look at the leading principal sub matrix A k, that is, the sub matrix formed by the first k rows and first k columns of our matrix. So, you take a square matrix; if the determinant of A k is not equal to 0 for k equal to 1 up to n, then we proved that under this assumption A can be written uniquely as L into U. Now we want to prove the converse: suppose A is an invertible matrix and it is given to us that it can be written as L into U, where L is a unit lower triangular matrix and U is upper triangular; then we want to show that the determinant of A k is not equal to 0. Thus, what we are going to show is: if A is invertible, you can write A as L into U if and only if the determinant of A k is not equal to 0 for k equal to 1 up to n. In order to prove this result, partitioning of the matrix helps. Let us do one thing: we have got our n by n matrix and we are already considering the leading principal sub matrix, formed by the first k rows and first k columns. So, look at the remaining sub matrices: A is partitioned into blocks A 1 1, A 1 2, A 2 1, A 2 2, where A 1 1 is a k by k matrix. This A 1 1 is what we have been calling A k. A 1 2 is formed by the first k rows and the last n minus k columns, so the size of A 1 2 is k by n minus k. A 2 1 is formed by the last n minus k rows and the first k columns, so it will be n minus k by k, and A 2 2 will be a matrix of size n minus k by n minus k. Next, we look at the multiplication of two n by n matrices A and B. So, C equal to A B; if you partition the matrices A, B and C in a similar manner, then we are going to have C 1 1 equal to A 1 1 B 1 1 plus A 1 2 B 2 1. A similar result holds for C 1 2: C 1 2 will be given by the first block row into the second block column.
So, C 1 2 will be A 1 1 B 1 2 plus A 1 2 B 2 2, and so on; similarly for C 2 1 and C 2 2. So, it is as if we are treating this as a 2 by 2 matrix, but for our result we need only the formula for C 1 1. So, let us prove this result. Before we proceed, notice that C 1 1 is going to be a k by k matrix: A 1 1 is k by k and B 1 1 is k by k, so A 1 1 B 1 1 is k by k; A 1 2 is k by n minus k and B 2 1 is n minus k by k, so A 1 2 B 2 1 is also a k by k matrix. In order to prove this result we use the usual formula for matrix multiplication: C i j is given by the i-th row of A into the j-th column of B, that is, the summation over p going from 1 to n of A i p B p j. This sum I split into two sums: p going from 1 to k, and p going from k plus 1 to n. Now, our i and j lie between 1 and k, so 1 less than or equal to i, j less than or equal to k. Look at A i p in the first sum: i is between 1 and k and p is between 1 and k, so A i p is nothing but the (i, p)-th element of A 1 1. Then B p j: p is between 1 and k and j is between 1 and k, so B p j is nothing but the (p, j)-th element of B 1 1. In the second sum, i is between 1 and k but p is between k plus 1 and n, and p is the column index, so A i p is the (i, p)-th element of A 1 2; and B p j, with p between k plus 1 and n and j between 1 and k, is nothing but the (p, j)-th element of B 2 1. This proves C 1 1 equal to A 1 1 B 1 1 plus A 1 2 B 2 1. Now, let us use this fact for lower triangular matrices. Suppose A is an invertible matrix and it is given to us that it can be written as L into U. Let A k denote the leading principal sub matrix of order k; our claim is that the determinant of A k is not equal to 0 for k equal to 1, 2 up to n.
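The block formula C 1 1 = A 1 1 B 1 1 + A 1 2 B 2 1 can be verified numerically; here is a small Python sketch (random matrices, with n and k chosen arbitrarily):

```python
import numpy as np

# Numerical check of the block formula C11 = A11 B11 + A12 B21.
rng = np.random.default_rng(0)
n, k = 5, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = A @ B

# Partition as in the lecture: leading k rows/columns versus the rest.
A11, A12 = A[:k, :k], A[:k, k:]
B11, B21 = B[:k, :k], B[k:, :k]
assert np.allclose(C[:k, :k], A11 @ B11 + A12 @ B21)
```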
So, this is given to us: A equal to L into U, L being a lower triangular matrix with diagonal entries equal to 1, so the determinant of L will be equal to 1. Hence the determinant of U is equal to the determinant of A, which is not equal to 0 because A is given to be an invertible matrix. U being an upper triangular matrix, its determinant will be the product of the diagonal entries U 1 1 U 2 2 ... U n n, and this being non-zero implies each U i i is not equal to 0. Now, partition the matrices A, L and U as before. Because L is lower triangular, L 1 2 is going to be a 0 matrix; because U is an upper triangular matrix, U 2 1 is going to be a 0 matrix. Now, A 1 1 will be given by L 1 1 into U 1 1, because the other term, L 1 2 U 2 1, vanishes. So, in our notation, A k is going to be equal to L k into U k. The determinant of A k is equal to the determinant of U k, because again the diagonal entries of L k are equal to 1, and the determinant of U k is U 1 1 U 2 2 ... U k k, which is not equal to 0. Thus we have proved that if A is an invertible matrix (this invertibility is important) which can be written as L into U, then the determinant of the leading principal sub matrix of order k is going to be non-zero for k equal to 1, 2 up to n. Now, as a simple example, let us calculate the LU decomposition of this matrix. At this stage I would like to recall that in order to find the LU decomposition, what you can do is: consider the matrix A, use the Gauss elimination method to reduce it to an upper triangular matrix; that is going to give you the matrix U, and the matrix L will be the unit lower triangular matrix constructed using the multipliers used in the Gauss elimination method. So, we are considering Gauss elimination without partial pivoting; when we use Gauss elimination without partial pivoting we get the LU decomposition of A itself.
If you are using Gauss elimination with partial pivoting, which involves interchanges of rows, then the LU decomposition is not of the matrix A but of the matrix P into A, where P is the permutation matrix obtained by a finite number of row interchanges in the identity matrix. So, let us reduce this 3 by 3 matrix to upper triangular form and then find L and U. One can also find L and U directly by multiplying L and U and equating the corresponding entries, but the way I am proceeding is using the Gauss elimination method. I want to reduce this to upper triangular form. First, I want to introduce zeros in the first column below the diagonal. So, I do R 2 minus 2 times R 1 and R 3 minus 3 times R 1. Our multipliers are 2 and 3, so they are going to appear in the first column of L: the first column of L will consist of 1 and then these multipliers 2 and 3. When I do these operations, these two entries become 0; then 2 into 2 is 4, so 5 minus 4 gives you 1; 2 into 3 is 6, so that gives this entry to be 2; then 8 minus 3 into 2 gives you 2, and 14 minus 3 into 3 gives you 5. So, this is the first stage of the Gauss elimination method. In the second step, you look at the second column and try to make the entry below the diagonal 0. This can be achieved by R 3 minus 2 times R 2: multiply the second row by 2 and subtract from the third row. You will get 0 here; 2 into 2 is 4, 5 minus 4, so this entry will be 1. Now the multiplier is 2, so that is going to occupy this place in L. So, you get U to be the upper triangular matrix given here and L to be the lower triangular matrix given here. Now, I want to consider some properties of positive definite matrices. For a positive definite matrix, the definition is: it should be a symmetric matrix, and x transpose A x should be bigger than 0 for every non-zero vector x. Now, this second property is somewhat difficult to verify: for all non-zero vectors we want x transpose A x to be bigger than 0.
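The elimination steps just described can be written out in code. Below is a Python sketch of Gauss elimination without pivoting; the 3 by 3 matrix is reconstructed from the arithmetic quoted in the lecture (the slide itself is not shown here), so treat it as an assumed example:

```python
import numpy as np

def lu_no_pivot(A):
    """LU decomposition by Gauss elimination without row interchanges:
    U is the reduced upper triangular matrix, L collects the multipliers."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]      # multiplier m_{ik}
            L[i, k] = m                # goes into column k of L
            U[i, k:] -= m * U[k, k:]   # R_i <- R_i - m_{ik} R_k
    return L, U

# Matrix reconstructed from the numbers in the lecture (an assumption).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 8.0],
              [3.0, 8.0, 14.0]])
L, U = lu_no_pivot(A)
# Multipliers 2 and 3 in the first column of L, 2 in the second column.
assert np.allclose(L, [[1, 0, 0], [2, 1, 0], [3, 2, 1]])
assert np.allclose(U, [[1, 2, 3], [0, 1, 2], [0, 0, 1]])
assert np.allclose(L @ U, A)
```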
So, now we are going to prove a necessary condition: if the matrix A is positive definite, then its diagonal entries have to be strictly bigger than 0. So, if you are given a matrix A with one of the diagonal entries equal to either a negative real number or 0, then such a matrix cannot be positive definite. Now, for the special case of diagonal matrices, we are going to show that D is positive definite if and only if the diagonal entries are bigger than 0. For general matrices it is a necessary condition, not a sufficient one: it can happen that the diagonal entries are all bigger than 0 but the matrix is still not positive definite. But for diagonal matrices it is a necessary and sufficient condition that the entries D i i should be bigger than 0, and the proofs of both these results are simple. We are going to use the fact that E i and E j are canonical vectors: if I look at E i transpose A E j, I get the (i, j)-th entry of my matrix A. So, the diagonal entries are given by E i transpose A E i. A is positive definite; look at E i transpose A E i: E i transpose is the row vector with 1 at the i-th place, and A E i is going to be the i-th column of A, that is, A 1 i, A 2 i, ..., A n i. When I take the multiplication, I get A i i. Since E i is a non-zero vector and A is positive definite, E i transpose A E i is bigger than 0, and hence we get the diagonal entries of A to be bigger than 0. Now, let us look at a diagonal matrix. Suppose D is a diagonal matrix with diagonal entries D 1, D 2, ..., D n; we are going to show that this is positive definite if and only if each D i is bigger than 0. One implication, that D positive definite implies D i bigger than 0, we just now proved for any general matrix, so it is going to be true for a diagonal matrix also. Now, let us look at the converse. So, it is given to us that D is a diagonal matrix and the D i's are bigger than 0.
So, the first thing is D transpose equal to D, and now let us look at x transpose D x. You have got D to be a diagonal matrix, so x transpose D x will be nothing but the summation over j going from 1 to n of D j x j squared. x is a non-zero vector, that means at least one of its entries is non-zero, which makes this sum bigger than 0, and that makes the diagonal matrix D positive definite. Now I want to consider again the Gauss elimination method. For the Gauss elimination method we have proved the LU decomposition, for example, for positive definite matrices and for diagonally dominant matrices. You have got a matrix A, you do the first step of the Gauss elimination method, and then in the next step you are going to work only on the (n minus 1) by (n minus 1) sub matrix. So, I start with the matrix A positive definite; for the sub matrix which I am going to work on after the first step of the Gauss elimination method, will this property be preserved? The answer is yes, but I am going to prove only the following property: if your matrix A is a symmetric matrix and you do the first step of the Gauss elimination method, then the sub matrix on which you are going to work in the next step retains the symmetry. It is an important property because it can help us reduce our computations by half. So, here is the first step of the Gauss elimination method: this is the matrix A and we want to introduce 0 in the first column below the diagonal. Our multipliers are going to be M i 1 equal to A i 1 by A 1 1, and the operations which we perform are R i becomes R i minus M i 1 R 1: multiply the first row R 1 by M i 1 and subtract it from the i-th row. When we do this, this is the matrix which we obtain. Now, suppose A is symmetric; then is the (n minus 1) by (n minus 1) sub matrix, which is formed by the second, third, ..., n-th rows and the second, third, ..., n-th columns, also going to be symmetric? This is what we want to show.
Now, the modified entries are given by A i j superscript (1) equal to A i j minus M i 1 A 1 j, because this is our operation: we are subtracting a multiple of the first row from the i-th row. A i j is the entry in the i-th row; we take the corresponding entry in the first row, that is A 1 j, multiply by M i 1 and subtract. So, our matrix A becomes a block matrix: A 1 1 and the rest of the first row on top, a 0 column below A 1 1, and the block A superscript (1), where A superscript (1) is the (n minus 1) by (n minus 1) sub matrix. This is our question: if A is symmetric, is A superscript (1) also symmetric? We look at A i j superscript (1): by definition it is A i j minus M i 1 A 1 j. What was M i 1? It was A i 1 by A 1 1, so I substitute that, and now I use the fact that A is symmetric, that means A i j equal to A j i. So, you get A j i minus A 1 i A j 1 by A 1 1: A i 1 is the same as A 1 i, which I am writing here, and A 1 j is the same as A j 1, which I am writing here, and then we divide by A 1 1. Now, A j 1 by A 1 1 is nothing but M j 1, so this is A j i minus M j 1 A 1 i, which is nothing but A j i superscript (1). Thus, if your matrix A is symmetric, then A superscript (1) is also going to be a symmetric matrix. Now, it is also true that if A is positive definite then A superscript (1) is positive definite, and if A is a diagonally dominant matrix then A superscript (1) will also be diagonally dominant. The matrix being diagonally dominant means that we do not have to do row interchanges. When the matrix is positive definite, then in fact we have seen that we can write its Cholesky decomposition, so again no row interchanges. We had introduced row interchanges for the sake of stability: when we looked at backward error analysis, we saw that we should not divide by a small number. That is why it is important to note that if you do not have to do row interchanges, it saves computational effort.
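The symmetry-preservation result above can be illustrated numerically; here is a Python sketch of one elimination step applied to an arbitrary symmetric matrix (my own example, not from the lecture):

```python
import numpy as np

# An arbitrary symmetric matrix (illustrative choice).
A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])
assert np.allclose(A, A.T)

# First step of Gauss elimination: R_i <- R_i - m_{i1} R_1.
n = A.shape[0]
B = A.copy()
for i in range(1, n):
    m = B[i, 0] / B[0, 0]     # m_{i1} = a_{i1} / a_{11}
    B[i, :] -= m * B[0, :]

A1 = B[1:, 1:]                # sub matrix for the next step
assert np.allclose(A1, A1.T)  # symmetry is preserved
```

Since A superscript (1) is symmetric, in practice only its lower (or upper) part needs to be updated, which is the computational saving mentioned above.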
Now, these two results are a bit involved, so I am not going to prove them; instead, we are going to consider the effect on the solution if you multiply a column of the coefficient matrix by a non-zero number. Look at the system of equations A x equal to b. If I take, say, the i-th equation and multiply it throughout by a non-zero number, then I am not changing the system; my solution is going to remain the same. On the other hand, if I multiply a column of the coefficient matrix by a non-zero number, then my system is different and my solution will be different. What we are going to show is: if you multiply the j-th column of the matrix A by a number alpha, where alpha is not equal to 0, the only change in the solution is going to be in the j-th component, and that j-th component gets multiplied by 1 by alpha. Our matrix A is an invertible matrix; if I multiply a column by a non-zero number alpha, the determinant of the new matrix gets multiplied by alpha. So, if A is invertible, my new matrix will also be invertible, because its determinant will not be equal to 0. In doing this I am changing my system, so I get a new solution. When you compare the new solution with the original solution, the only difference will be in the j-th component; all other components of the original solution and the new solution are going to be the same. The effect on the j-th component is: what was earlier x j becomes 1 upon alpha times x j. When we consider, later on, the scaling of a matrix in order to make it well conditioned, this result is going to be important. The proof of this result is straightforward. What we are going to do is: A x equal to b; x is our column vector x 1, x 2, ..., x n; this x we can write as x 1 e 1 plus x 2 e 2 plus ... plus x n e n, where the e j's are the canonical vectors. Apply A and then see what you get.
So, we have a non-singular system, that means the coefficient matrix A is invertible; it is altered by multiplication of its j-th column by alpha not equal to 0, and then the solution is altered only in the j-th component, which is multiplied by 1 by alpha. We have got A x equal to b; x is the column vector x 1, x 2, ..., x n, which is equal to x 1 e 1 plus x 2 e 2 plus ... plus x n e n. So, when you apply A, then A x will be x 1 A e 1 plus x 2 A e 2 plus ... plus x n A e n, equal to b. A e j is the j-th column, and we are going to multiply the j-th column by alpha. So, the new matrix is A tilde: the columns A e 1 up to A e (j minus 1) remain as before, the j-th column becomes alpha times A e j, and from the (j plus 1)-th column onwards up to the n-th column they all remain the same. Now, from this relation I can write the j-th term as x j by alpha times alpha times A e j: I multiply by alpha and divide by alpha, so the sum is still equal to b. But now this is nothing but A tilde x tilde equal to b, and what will be x tilde? It will be x 1, x 2, ..., x (j minus 1), then x j by alpha in the j-th place, then x (j plus 1), and the last one is going to be x n. So, we had the original system A x equal to b, and when you alter it, the new solution x tilde differs only in that x j is divided by alpha. These were some of the problems based on our Gauss elimination method and also on the LU decomposition, and the next tutorial will be based on the norm of a matrix. For the Gauss elimination method, when we want to see the effect of perturbations, we need to talk about norms, and that is where norms come into the picture.
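The column-scaling result can be checked directly; here is a Python sketch (A, b, j and alpha are arbitrary illustrative choices):

```python
import numpy as np

# An arbitrary invertible system (illustrative choice).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(A, b)

# Multiply the j-th column by alpha != 0 and solve the altered system.
j, alpha = 1, 4.0
A_tilde = A.copy()
A_tilde[:, j] *= alpha
x_tilde = np.linalg.solve(A_tilde, b)

# Only the j-th component changes, and it is divided by alpha.
expected = x.copy()
expected[j] /= alpha
assert np.allclose(x_tilde, expected)
```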
So, here were some of the problems. Now, about the LU decomposition: what you can do is try to find the LU decomposition directly; you should get the same result. One of the important results which we proved in today's tutorial is that an invertible matrix has an LU decomposition if and only if the determinant of A k is not equal to 0, where A k is the leading principal sub matrix. We had a similar result for positive definite matrices, that means it was an if and only if result: A is positive definite if and only if you can write A as M M transpose, where M is an invertible matrix. So, thank you.