In this lecture, we will study some LU factorization methods. Recall from the last lecture that the Gaussian elimination process itself leads to an LU factorization of a given coefficient matrix. In this lecture, we will see three more methods for obtaining an LU factorization. We will also see how to solve a linear system once the LU factorization is available. Let us start our class with a very simple question: what are the easily solvable systems? We can think of at least three types of systems that can be solved readily. One is the invertible diagonal system, next is the invertible lower triangular system, and similarly the invertible upper triangular system. Let us take up each of these and see how to solve a system when the coefficient matrix is of one of these three types. Let us first take an invertible diagonal matrix. These are the matrices whose diagonal elements are all non-zero, and one can easily see that the system A X = B can be solved immediately, component by component, to get the solution X. Now, if our coefficient matrix is an invertible lower triangular matrix, that is, if all the entries above the diagonal are zero and all the diagonal elements are non-zero, the corresponding linear system has a special structure. Here you can see that X1 can be obtained directly as X1 = B1 / L11. Once you have X1, you can substitute it into the second equation, L21 X1 + L22 X2 = B2; since X1 is already known, you can obtain X2 from this equation. Similarly, you can go ahead and find the other components of the unknown vector one by one. So, to repeat: X1 is obtained as B1 divided by L11, and once you have X1 you substitute it into the second equation to get X2, and so on. This is called the forward substitution process.
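The forward substitution described above can be sketched in Python as follows; this is a minimal illustration, and the function name and variable names are mine, not from the lecture.

```python
def forward_substitution(L, b):
    """Solve L x = b for an invertible lower triangular matrix L."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        # x_i = (b_i - sum_{j < i} L[i][j] * x_j) / L[i][i]
        s = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / L[i][i]  # requires L[i][i] != 0 (invertibility)
    return x
```

For example, with L = [[2, 0], [1, 3]] and b = [4, 5], the first equation gives X1 = 4/2 = 2 and the second gives X2 = (5 - 1*2)/3 = 1.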
So, if the given coefficient matrix is an invertible lower triangular matrix, you can do a forward substitution and get the solution of the linear system; you need not go through the elimination process as we did in the Gaussian elimination method. Now, let us take the invertible upper triangular matrices. These are the matrices in which all the elements below the diagonal are 0 and all the diagonal elements are non-zero. The corresponding linear system A X = B is very familiar to us, because in the Gaussian elimination method we finally arrive at an upper triangular matrix. So we know how to solve this system to get the unknown vector X: we just have to do the backward substitution process, where Xn is obtained as Bn / Unn, and once you have Xn you substitute it into the previous equation, that is, the (n-1)-th equation, to get X(n-1), and so on. That is the backward substitution process. Now, our given matrix A may not be one of these three special matrices, in which case it is not possible for us to solve by a substitution process alone; one may have to go through the elimination process as we discussed in the Gaussian elimination method. At the end of this section I will show you that the Gaussian elimination process is very costly from the computational point of view; that is why the Gaussian elimination method is often not preferred and one goes for iterative methods instead. However, if we are particular that we must have a direct method, then we have no choice other than going for the Gaussian elimination method. In certain situations, for instance in the residual corrector method which we will see later, we come across a situation where the coefficient matrix is fixed and we have to solve the system A X = B with various right hand side vectors.
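The backward substitution process can be sketched in the same way; again the names here are my own illustration.

```python
def backward_substitution(U, b):
    """Solve U x = b for an invertible upper triangular matrix U."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # start from the last equation
        # x_i = (b_i - sum_{j > i} U[i][j] * x_j) / U[i][i]
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]  # requires U[i][i] != 0
    return x
```

With U = [[2, 1], [0, 3]] and b = [5, 3], the last equation gives X2 = 3/3 = 1, and substituting back gives X1 = (5 - 1)/2 = 2.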
In that case, what one generally does is factorize the matrix A as L and U once and keep it aside, and then for every B one just has to do a forward substitution and a backward substitution to get the answer. In such situations LU factorization is preferred. Now we will see how a general linear system can be solved using an LU factorization. Remember, LU factorization means you first write your matrix A as the product of a lower triangular matrix and an upper triangular matrix. Generally we denote the lower triangular matrix by L and the upper triangular matrix by U; therefore, we should find L and U such that A = L U. Now, once you have such a factorization, what is the advantage? Well, you can write A X = B as L U X = B. Take U X alone and name it as a vector Z; obviously Z is unknown, because X is not known to us. Therefore, as a first step, consider the system L Z = B. Remember Z is nothing but U X. B is the right hand side vector given to us, and this is a lower triangular system; therefore a forward substitution will give you the solution Z. Once you get Z, you plug it into U X = Z. This is again a linear system, and since Z has already been obtained, the right hand side vector of this system is known to us. So you get an upper triangular system U X = Z, and you can do a back substitution to get the required solution X. That is the idea. So, once you factorize your matrix A as the product of a lower triangular and an upper triangular matrix, then for any number of right hand sides I can simply do one forward substitution and one backward substitution and give you X.
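The two-stage solve just described can be put together as a short sketch; the function name and example numbers are mine.

```python
def solve_lu(L, U, b):
    """Solve A x = b given a factorization A = L U.

    Stage 1: forward substitution for L z = b.
    Stage 2: backward substitution for U x = z.
    """
    n = len(b)
    # Stage 1: L z = b
    z = [0.0] * n
    for i in range(n):
        z[i] = (b[i] - sum(L[i][j] * z[j] for j in range(i))) / L[i][i]
    # Stage 2: U x = z
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

For instance, with L = [[1, 0], [2, 1]] and U = [[2, 1], [0, 3]], the product A = L U is [[2, 1], [4, 5]], and solve_lu(L, U, [3, 9]) recovers the solution of A x = (3, 9) without ever forming A.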
The costliest step of the Gaussian elimination method is the elimination process, and here that elimination is done only once; with that one factorization you can solve many systems where A is fixed but B varies. Such a situation, as I told you, occurs in certain methods like the residual corrector method. Now the question is how to factorize a given matrix A into lower and upper triangular matrices. As we saw, the Gaussian elimination method itself gives us the upper triangular matrix, namely the final matrix you get out of the elimination process, and when you collect all the multipliers m_ij of each step into lower triangular form, then, as we have seen in the last class, we get A = L U. I asked you to check this; I hope you have checked and seen that it is true. Now the question is: is this LU factorization unique? The answer is that the LU factorization is not unique. Why? It is very simple to see. Once you have one factorization L U, take any invertible diagonal matrix D and write L U as L D D⁻¹ U. I am just multiplying by D and D⁻¹, which does not change the product in any way. Now bracket L D, which is again a lower triangular matrix, and bracket D⁻¹ U, which is again an upper triangular matrix; these will surely be different from the factors you originally obtained if you choose the invertible diagonal matrix D suitably. Therefore, once you have one factorization, you can generate infinitely many LU factorizations. The next question is: is the Gaussian elimination method the only way to obtain an LU factorization? The answer is no.
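The non-uniqueness argument above can be checked numerically; the concrete matrices below are my own illustration, not from the lecture.

```python
# From one factorization A = L U and any invertible diagonal D,
# A = (L D)(D^{-1} U) is another LU factorization with different factors.

def matmul(A, B):
    """Plain matrix product of two small dense matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

L = [[1, 0], [2, 1]]
U = [[3, 1], [0, 4]]
D = [[2, 0], [0, 5]]          # an invertible diagonal matrix
Dinv = [[0.5, 0], [0, 0.2]]   # its inverse

L2 = matmul(L, D)             # again lower triangular
U2 = matmul(Dinv, U)          # again upper triangular
# matmul(L2, U2) equals matmul(L, U), yet (L2, U2) differs from (L, U)
```

Since D can be chosen in infinitely many ways, one factorization generates infinitely many.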
There are at least three methods that we will learn in this course: one is the Doolittle factorization, another is the Crout factorization, and finally we will also learn the Cholesky factorization. Out of these three, the Doolittle and Crout factorizations are computationally as costly as the Gaussian elimination method. However, the Cholesky factorization is a little more efficient than all these methods, but it works only for symmetric positive definite matrices. We will learn to construct an LU factorization of a given invertible matrix using each of these three methods. Let us first consider the Doolittle factorization. What is meant by Doolittle factorization? Well, it is an LU factorization of a given matrix in which the lower triangular matrix has all its diagonal elements equal to 1. If you recall, the LU factorization we got from the Gaussian elimination method is precisely a Doolittle factorization: you can see that all the diagonal elements of the lower triangular matrix have value 1. Therefore, the Gaussian elimination method in fact gives us a Doolittle factorization. The next question is: can we always get a Doolittle factorization of a given invertible matrix? Well, we have a condition under which the Doolittle factorization surely exists. The condition is that if all the first n-1 leading principal minors are non-zero, then the matrix A will surely have a Doolittle factorization. I hope you know what is meant by a principal minor and a leading principal minor; you would have studied these in your linear algebra course. However, we will quickly recall them here. Let A be an n x n matrix. A submatrix of order k, where k < n, of the matrix A is a k x k matrix obtained by removing n-k rows and n-k columns of A. The determinant of such a submatrix of order k of A is called a minor of order k of that matrix.
Remember, for a minor you can remove any set of rows and the same number of columns to get the submatrix, whereas a principal submatrix means that whichever row indices are removed, the columns with the same indices must also be removed. Say, for instance, let A be the matrix with rows 1 2 3, 4 5 6, 7 8 9. If I remove the third row and similarly the third column, the remaining matrix, with rows 1 2 and 4 5, is a principal submatrix, and its determinant is a principal minor. Whereas if I remove the second row and, say, the third column, then it is just a submatrix and its determinant is just a minor, because you are removing the second row but not the second column. If you remove the same indices, it is a principal minor; otherwise it is just a minor. Now, what is meant by a leading principal minor? Well, the leading principal minors are the principal minors where the submatrix is obtained by removing the last n-k rows and columns. Therefore, if I remove the third row and third column, it is not only a principal minor, it is also the leading principal minor of order 2. Similarly, if I remove the second and third rows together with the second and third columns, then I am removing the last two rows and the last two columns, with the same indices; therefore, what remains gives the leading principal minor of order 1. I hope you understood the definition. Now, going back to the theorem: it says that if all the n-1 leading principal minors, that is, those of order 1, 2, up to n-1, are non-zero, then we will surely have a Doolittle factorization. We will not prove this theorem, because the proof is more or less similar to the existence proof for the Cholesky factorization. We will spend time on understanding the existence theorem for the Cholesky factorization.
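The leading principal minors used in the theorem can be computed with a small sketch; the function names and the textbook cofactor expansion are my own choices.

```python
def det(M):
    """Determinant by cofactor expansion along the first row
    (fine for the small matrices used here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def leading_principal_minors(A):
    """Minors of order 1, 2, ..., n-1: determinants of the
    top-left k x k blocks, as in the Doolittle existence theorem."""
    n = len(A)
    return [det([row[:k] for row in A[:k]]) for k in range(1, n)]
```

For the 3 x 3 example matrix with rows 1 2 3, 4 5 6, 7 8 9, the leading principal minors of order 1 and 2 are 1 and 1*5 - 2*4 = -3, both non-zero.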
Interested students can first understand the existence theorem for the Cholesky factorization and then come back and read the existence theorem for the Doolittle factorization; you will surely understand it, because the proof is almost the same. Here we will just see how to compute the Doolittle factorization of a given matrix. Again, to keep our discussion simple, we will explain the computational procedure only in the case of a 3 x 3 matrix; the generalization to an n x n matrix can be done in a similar way. Let us see how to obtain a Doolittle factorization. Well, it is pretty straightforward: you write the matrix A as a lower triangular matrix with all its diagonal elements equal to 1 (remember, that is the definition of the Doolittle factorization) times an upper triangular matrix. There is no restriction on the upper triangular matrix; the only restriction is on the lower triangular matrix, namely that all its diagonal elements are 1. Once you write this, you multiply these two matrices and compare the elements of the product with the elements of the matrix A. That is how you will get all the unknowns on the right hand side; remember, all the L's and U's are unknowns. You now multiply the first row of L with the first column of U. On the left hand side you have A11, whereas on the right hand side you get U11; therefore, you directly get U11 = A11. Similarly, you multiply the first row of L with the second column of U, and you immediately get U12 = A12. The first row of L with the third column of U gives U13 = A13. Again, the second row of L with the first column of U gives L21 = A21 / U11. Remember, for this you need U11 to be non-zero, which is nothing but saying that A11 should be non-zero.
If you recall, we have already assumed that all the leading principal minors up to order n-1 are non-zero. In particular, A11 is nothing but the leading principal minor of A of order 1, so it is non-zero under the assumption of the theorem. Therefore this condition is satisfied; if it is not, then you cannot go ahead with the Doolittle factorization. Well, we have now obtained all the elements of the first column of L and all the elements of the first row of U. Let us go ahead with this idea and see how to get the other elements; we still have to get L32 and the remaining entries of U. You multiply row 2 of L by column 2 of U, which gives the equation A22 = L21 U12 + U22. Similarly, row 2 of L with column 3 of U gives A23 = L21 U13 + U23. Here you can see that L21 is known to us and U12 is known to us; therefore U22 can be obtained immediately. Similarly, L21 and U13 are known to us; therefore U23 can be obtained. So we have also obtained these elements. Next, to get L32, we multiply the third row of L with the second column of U to get A32 = L31 U12 + L32 U22. Remember, L31 is known to us, U12 is known to us, and U22 was obtained in the previous step; therefore the unknown L32 is now obtained explicitly. Similarly, you now multiply row 3 of L with column 3 of U to get A33 = L31 U13 + L32 U23 + U33. Here also, L31 and U13 are known to us, L32 is now known from the previous equation, and U23 is also known; therefore the unknown U33 is obtained in terms of known quantities. In this way we have obtained all the elements of the lower triangular matrix and all the elements of the upper triangular matrix.
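The comparison-of-entries procedure just described generalizes directly from the 3 x 3 case to n x n, row of U and column of L alternately. A sketch, with my own function name, assuming the leading principal minors of order 1 to n-1 are non-zero as in the theorem:

```python
def doolittle(A):
    """Doolittle factorization A = L U, with unit diagonal in L.

    Assumes the leading principal minors of order 1..n-1 of A are
    non-zero, so that no U[i][i] used as a divisor vanishes.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0  # the defining restriction on L
        # Row i of U: compare row i of A with (row i of L) times U
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        # Column i of L below the diagonal: divide by the pivot U[i][i]
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```

For A = [[2, 1], [4, 5]] this reproduces the familiar Gaussian elimination factors L = [[1, 0], [2, 1]] and U = [[2, 1], [0, 3]].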
So, in that way we have obtained the Doolittle factorization. It is a very simple idea: just write A = L U, where L is a lower triangular matrix with all its diagonal elements equal to 1, then multiply out the right hand side, compare the entries with the entries of the matrix A, and you will get all the unknowns. In that process you only have to assume that U11 and U22 are non-zero, which is equivalent to assuming that the leading principal minors of order 1 and 2 are non-zero. This is exactly what our theorem demands: all the leading principal minors up to order n-1 should be non-zero, and here we have taken n = 3, so the leading principal minors of order 1 and 2 should both be non-zero. That is what this construction demands as well. Of course, this is just a construction; one still needs to prove the theorem. For that we will give the proof in the case of the Cholesky factorization, and one can easily work out the Doolittle case once the proof of the Cholesky existence theorem is understood. Well, let us take an example. Consider the matrix A given on the slide. We are looking for an LU factorization of the matrix A; remember, you have to write A = L U with L having all its diagonal elements equal to 1, which is the only key idea in the Doolittle factorization. Once you write this, you know how to obtain each of these unknowns: you have the expressions for all of them, or you can simply multiply out and compare the entries. You can readily see that U11 is just 1, U12 is also 1, and U13 is -1. Similarly you can get the two unknowns L21 and L31: L21 = A21 / U11, which is 1, and L31 = A31 / U11, which is -2. Then go on to find U22, which is 1, and U23, which can be obtained using the formula above and is -1.
And finally, we have to get L32, whose expression gives the value 3, and also U33, whose expression gives the value 2. Therefore we have obtained all the unknowns, and hence the LU factorization of the matrix A, with L and U as shown on the slide. Now, if I give you a right hand side vector B, it is very simple for you to obtain the solution of the system A x = B. How will you do that? First solve L Z = B by a forward substitution; that will give you the unknown vector Z, which in our case is (1, 0, 3). Once you have the vector Z, remember Z is nothing but U x; therefore you have to solve the system U x = Z. That again can be done by backward substitution, because now the coefficient matrix is the upper triangular matrix, and that leads to the required solution. You can notice that there is no approximation involved in this process, and that is why it is a direct method. Why is there no approximation? Because we have not done any rounding in this computation; all the calculations were done exactly, basically by keeping the numbers in fractional form. Once you understand the Doolittle factorization, the Crout factorization is not very difficult for you to understand, because the Crout factorization is the one where the upper triangular matrix has all its diagonal elements equal to 1, whereas the lower triangular matrix has no restriction, of course other than being lower triangular; otherwise there is no restriction on its elements, whereas the upper triangular matrix should have all its diagonal elements equal to 1. Now, I hope you can construct the Crout factorization, because the idea behind constructing it is very much similar to the way we constructed the Doolittle factorization.
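Putting the whole worked example together as code: the slide matrix itself is not reproduced in the transcript, so the A and b below are my reconstruction from the intermediate values quoted in the lecture (U11 = 1, U12 = 1, U13 = -1, L21 = 1, L31 = -2, U22 = 1, U23 = -1, L32 = 3, U33 = 2, and Z = (1, 0, 3)); treat them as an illustration of the procedure rather than the slide content.

```python
# Reconstructed example matrix and right hand side (see caveat above)
A = [[1, 1, -1],
     [1, 2, -2],
     [-2, 1, 1]]
b = [1, 1, 1]

# Doolittle factors with the values computed in the lecture
L = [[1, 0, 0],
     [1, 1, 0],
     [-2, 3, 1]]
U = [[1, 1, -1],
     [0, 1, -1],
     [0, 0, 2]]

n = 3
# Forward substitution for L z = b (each L[i][i] is 1)
z = [0.0] * n
for i in range(n):
    z[i] = b[i] - sum(L[i][j] * z[j] for j in range(i))
# z comes out as [1, 0, 3], matching the lecture

# Backward substitution for U x = z
x = [0.0] * n
for i in range(n - 1, -1, -1):
    x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
print(x)  # → [1.0, 1.5, 1.5]
```

Note that the fractions 3/2 appear exactly here; with exact arithmetic there is no rounding, which is the sense in which this is a direct method.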
The only thing is that you have to write U with all its diagonal elements equal to 1, whereas L is written without that condition. With this we have finished the construction of the Doolittle and Crout factorizations. We are left with the Cholesky factorization, which we will study in the next lecture. Thank you for your attention.