Hi, we have introduced some direct methods for solving non-singular linear systems: the Gaussian elimination method, the Doolittle factorization method, the Crout factorization method, and the Cholesky factorization method. Direct methods give the exact solution when no arithmetic error is involved. However, these methods are often not preferred in practical situations because they are quite costly to compute when the system is large. Now the question is: how do we measure the cost involved in computing a solution by any method? That can be done by counting the number of arithmetic operations the method involves. We will try to understand this by doing an operation count for some direct methods. We take the Cholesky factorization and the Gaussian elimination method, count the number of arithmetic operations involved in each, and see how to compare their efficiency in terms of that count. Let us take the Cholesky factorization first. Recall that in the last class we gave two ways to construct it. One is to go step by step: starting from the leading submatrix of order 1, compute its Cholesky factorization, which is immediate; using that, find the Cholesky factorization of the leading submatrix of A of order 2, which we named L2; then go on to find L3, and so on, until in this way we construct the Cholesky factorization of the given n × n matrix. The other way is by direct comparison; in the last class we also gave, in that way, the expressions for the diagonal and non-diagonal elements of the Cholesky factor.
We will use these expressions, the ones obtained by direct comparison, for the arithmetic operation count, because they are very explicit and therefore easy to count from. Recall that the non-diagonal elements of the Cholesky factor are given by this expression; these are the non-diagonal elements of the matrix L such that A = L Lᵀ. Now let us count the operations involved in this expression. Remember that l_ij is nonzero only for j = 1, 2, ..., i, and of those the non-diagonal elements are the ones with j up to i − 1. Therefore, in this expression j runs from 1 to i − 1, and that gives all the non-diagonal elements of the i-th row before the diagonal element. This has to be done for all the rows, and there are n such rows, so i runs from 1 to n. Now let us observe the expression and see how many arithmetic operations are involved in computing these quantities. For each j you can observe that there is one division, and of course this is for i > 1, because when i = 1 there is no non-diagonal element that can be nonzero; you can also see this from the range of j, which is empty when i = 1. So you have one division for each j, for i from 2 up to n. Similarly, you can see that the expression has one subtraction, for i > 2 and j > 1; you can observe this from the expression and from the way the summation is written. And now the multiplications: there is one multiplication in each term of the summation, and there are j − 1 such terms, so there are j − 1 multiplications for each j.
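For concreteness, the direct-comparison formulas discussed here are the standard ones, l_ij = (a_ij − Σ_{k<j} l_ik l_jk) / l_jj for j < i and l_ii = sqrt(a_ii − Σ_{k<i} l_ik²). A minimal sketch implementing them in pure Python, with counters that tally the operations exactly as counted in this lecture (assuming A is a symmetric positive definite matrix given as a list of lists):

```python
import math

def cholesky(A):
    # Cholesky factorization A = L L^T by the direct-comparison
    # formulas, with counters for the arithmetic operations.
    # Assumes A is symmetric positive definite (list of lists).
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    ops = {"mul_div": 0, "add_sub": 0, "sqrt": 0}
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            ops["mul_div"] += j              # j multiplications in the sum
            ops["add_sub"] += max(j - 1, 0)  # j-1 additions in the sum
            if i == j:
                L[i][i] = math.sqrt(A[i][i] - s)
                ops["sqrt"] += 1
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
                ops["mul_div"] += 1          # the division by l_jj
            if j > 0:
                ops["add_sub"] += 1          # the subtraction of the sum
    return L, ops
```

Running it on a small 3 × 3 symmetric positive definite matrix lets you check the counters against the formulas derived below (here 0-based indices are used, so "j multiplications" in the code corresponds to the lecture's j − 1 multiplications plus one division).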
Similarly, there are j − 2 additions for each j. With this you can see the operation count for each component l_ij, for each j and each i. There are i − 1 such values of j, which gives one typical row, and there are n such rows. Therefore we have to sum them all suitably, and we can see that this many multiplications and divisions plus this many additions and subtractions are involved in obtaining the non-diagonal elements of the Cholesky factor L. Now we just have to add these terms; some elementary summation formulas can be used to simplify the expressions, and you can see that the result finally simplifies to n³/3 plus a constant times n² plus a constant times n. In such operation counts we are generally not interested in the exact expression, but rather in the power of the leading term, because that tells us how fast the computational cost grows as n increases. That is the main interest in studying the efficiency of any algorithm, not only the Cholesky factorization. That is why we are not interested in the exact expression, only in the leading term. In fact, we can say that the number of arithmetic operations involved in the non-diagonal elements of the Cholesky factorization is of order n³. That is pretty costly in general. Let us keep this in mind and go ahead and count the operations involved in the diagonal elements. Recall that the diagonal elements are given by this expression. You can see that for each i we have one subtraction (of course, for i > 1), i − 2 additions, and i − 1 multiplications. Remember, when we say subtraction it may actually be an addition, because subtracting a negative number amounts to an addition.
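As a check on the simplification stated here (a sketch using exactly the per-j counts just derived: j multiplications and divisions, i.e. j − 1 multiplications plus the division, and j − 1 additions and subtractions, i.e. j − 2 additions plus the subtraction), the non-diagonal count works out via the elementary sum formulas as

```latex
\sum_{i=2}^{n}\sum_{j=1}^{i-1}
  \Bigl[\underbrace{j}_{\text{mul/div}} + \underbrace{(j-1)}_{\text{add/sub}}\Bigr]
  = \sum_{i=2}^{n}(i-1)^2
  = \frac{(n-1)\,n\,(2n-1)}{6}
  = \frac{n^3}{3} + O(n^2),
```

since the inner sum is Σ_{j=1}^{i−1}(2j − 1) = (i − 1)², which confirms the leading term n³/3.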
So when I say subtraction, I am just looking at the sign and counting; it may turn out to be an addition of two positive numbers. Let us see the total number of arithmetic operations involved in computing the diagonal elements: it is n(n − 1), which contributes order n². Apart from that, remember we also have to take a square root, but in general the square roots do not contribute at the order-n³ level. Therefore the leading-order term actually comes from the non-diagonal elements, and that is the costliest part, contributing order n³ to the Cholesky factorization. The total number of arithmetic operations involved in the Cholesky factorization is nothing but the sum of the operations for the non-diagonal elements and the operations for the diagonal elements; you can see that this comes to n³/3 plus an expression whose leading term is n². In the literature this total is also called the total flops; a flop is a floating-point operation. Now that we have an idea of how many arithmetic operations, or flops, the Cholesky factorization involves, we can understand the time it takes to obtain an LU-type decomposition of a given symmetric positive definite matrix. With this, let us go to the Gaussian elimination method and see how many arithmetic operations it involves. Recall that the Gaussian elimination method has three parts. One is the left-hand-side elimination: remember, we have to eliminate entries to get an upper triangular system U x = b̃, where some modified vector b̃ appears because every row operation you perform on the coefficient matrix is also performed on the right-hand-side vector.
Therefore, when you modify the left-hand-side coefficient matrix, the right-hand side gets modified correspondingly. So we can say that the Gaussian elimination method involves three steps: the elimination step, which reduces A to the upper triangular matrix U; the modification of the right-hand-side vector, from b to b̃; and, once you have the upper triangular system, the backward substitution to get the solution. That is how we have done the Gaussian elimination method, so the method involves three steps, and let us see how to do the operation count for each of them. Take the elimination step. In this step, to make this coefficient zero, roughly speaking, we multiply the first equation by m_21 and subtract it from the second equation. So how many multiplications are involved in making this term zero and thereby modifying the second equation at step 1? That is our first task. You can see there are n multiplications; remember, I am only doing the left-hand-side elimination here, and we will do the right-hand side later. And how many additions or subtractions are involved? Again there are n terms, so n additions or subtractions. But in all this, note that we already know what the first coefficient becomes, so we will not carry out that elimination explicitly in our code: we write the entry directly as 0, and we never ask the computer to perform that multiplication and subtraction. That is how the code is written, so this pair of operations is not included in our operation count.
Apart from this, you have n − 1 multiplications and n − 1 subtractions or additions. Now, how many divisions are involved? The multiplier m_21 is written as a_21 divided by a_11, so one division is involved. So to eliminate the first term of the second equation in step 1 we need n − 1 multiplications, n − 1 subtractions or additions, and 1 division. That is for one typical equation; how many such equations are there? There are n − 1. So you need n − 1 additions or subtractions per equation, over n − 1 equations; similarly n − 1 multiplications per equation, over n − 1 equations, giving (n − 1)² multiplications; and 1 division per equation, over n − 1 equations. Therefore, at step 1 we have (n − 1)² additions or subtractions, (n − 1)² multiplications, and n − 1 divisions. I hope you have understood how we counted the operations. Remember, we have counted only the arithmetic operations, and only for the naive Gaussian elimination method; we are not counting the pivoting operations, because pivoting involves finding a maximum. Strictly speaking one has to count that too, but the arithmetic operation count is much simpler, which is why we are counting and judging the efficiency of a method only through the arithmetic operations. Now let us go to the second step. In the second step we have only n − 2 equations, because the first column is already done: we start from the third equation and go up to the n-th, so there are n − 2 such equations, and similarly all the entries below the first pivot have already been made zero.
Therefore, we will not be doing any operation with those entries; we operate only to make this term zero, and we do not involve the already-eliminated terms in our count, because we directly set their values to 0. Starting from this term onwards, you have n − 2 additions or subtractions and n − 2 multiplications, and of course, for each equation, one division for the multiplier m. So in the second step we have (n − 2)² additions or subtractions, (n − 2)² multiplications, and n − 2 divisions. You can go on like this: in the third step you have (n − 3)² additions or subtractions, (n − 3)² multiplications, and n − 3 divisions, and so on. What, then, is the total number of operations involved in the elimination step? You add them all up, and the sums can be written using the elementary formulas for the sum of the first m integers and the sum of the first m squares. That is how we get these expressions, and the total number of operations is simply their sum. You can see that order n³ appears here again. Now let us go to the right-hand-side modification. You can do the counting in a similar way to see the number of additions and subtractions needed to obtain the vector b̃ from b, for the system U x = b̃ that we denoted on the first slide. For that we need this many operations; and as for multiplications, at each step, for each equation, we need one addition or subtraction and one multiplication.
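The step-by-step counts above can be checked with a small sketch of naive Gaussian elimination (no pivoting), written in pure Python for this illustration, which reduces A x = b to U x = b̃ in place while tallying operations exactly as counted here (the zeroed entries are written directly, not computed):

```python
def gauss_eliminate(A, b):
    # Naive Gaussian elimination (no pivoting) reducing A x = b
    # to U x = b_tilde in place, tallying arithmetic operations.
    # A is a list of lists, b a list; illustrative sketch only.
    n = len(A)
    ops = {"mul": 0, "add_sub": 0, "div": 0}
    for k in range(n - 1):                # step k+1 clears column k
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]         # multiplier m_{ik}: 1 division
            ops["div"] += 1
            A[i][k] = 0.0                 # written directly, not computed
            for j in range(k + 1, n):     # n-k-1 mults and subs per row
                A[i][j] -= m * A[k][j]
                ops["mul"] += 1
                ops["add_sub"] += 1
            b[i] -= m * b[k]              # right-hand-side modification
            ops["mul"] += 1
            ops["add_sub"] += 1
    return A, b, ops
```

For n = 3 the counters give (n − 1)² + (n − 2)² = 5 left-hand-side multiplications plus one per row for the right-hand side, and (n − 1) + (n − 2) = 3 divisions, matching the sums above.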
The division is already done in the elimination process. When it comes to back substitution, I leave it to you to go through it carefully and count the operations involved in getting each component of the vector x; you will see that it comes to n(n − 1)/2 additions or subtractions and n(n + 1)/2 multiplications or divisions. Now, coming to the total number of operations, we add the arithmetic operations involved in all the above steps: the number of additions and subtractions comes to n(n − 1)(2n + 5)/6, and the number of multiplications and divisions comes to n(n² + 3n − 1)/3. Of this, let us consider only the elimination part, and in particular the multiplications and divisions it involves; we have just seen this count. Simplifying the expression, you can see that it is also of order n³, just as in the case of the Cholesky factorization; in fact, it is approximately equal to n³/3. That is, the elimination process involves a number of operations of order n³, and in particular approximately n³/3 multiplications and divisions. It may be a little more or less, but what finally matters in understanding how costly a particular method is, is the leading order of the operation count.
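The back-substitution count left as an exercise here can be sketched the same way (pure Python, illustrative only); the counters come out to exactly n(n − 1)/2 additions or subtractions and n(n + 1)/2 multiplications or divisions:

```python
def back_substitute(U, b):
    # Backward substitution for the upper-triangular system U x = b,
    # counting operations: n(n-1)/2 add/sub and n(n+1)/2 mul/div.
    n = len(U)
    x = [0.0] * n
    add_sub = mul_div = 0
    for i in range(n - 1, -1, -1):        # solve from the last row up
        s = b[i]
        for j in range(i + 1, n):
            s -= U[i][j] * x[j]           # 1 multiplication, 1 subtraction
            mul_div += 1
            add_sub += 1
        x[i] = s / U[i][i]                # 1 division per component
        mul_div += 1
    return x, add_sub, mul_div
```

Row i needs n − i multiplications and subtractions plus one division, and summing over the rows gives the two triangular-number totals quoted above.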
Here the leading power is 3, and that is pretty big, because we generally work with linear systems of large dimension, something like 1000 or more (it can even be less in some practical applications). In such cases these methods are pretty costly. Apart from that, you also have the other parts of the Gaussian elimination method, the right-hand-side modification and the back substitution. They also involve some operations; looking only at their multiplications and divisions, they amount to order n², but again what matters is the leading order, which tells us how costly the method is. So you can clearly see that the Gaussian elimination method has three parts: the elimination step, which is the costliest part and is of order n³; the right-hand-side modification, which is of order n²; and the back substitution, which is of order n², as n tends to infinity. Hence the Gaussian elimination method is very costly, with the elimination part the costliest. This is why what people often do is compute the LU decomposition of the matrix A. In particular, if your problem involves solving many linear systems where the matrix A is fixed but you have many right-hand-side vectors b, then you do the LU decomposition once for A, and for every given b you use one forward substitution and one backward substitution to get the solution of the linear system A x = b. Let us now compare the Gaussian elimination method with the Cholesky factorization. For that we consider only the elimination part, that is, the left-hand-side elimination of the linear system, which is equivalent to the LU factorization of the matrix A by the naive Gaussian elimination method.
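The reuse pattern described here, paying the O(n³) factorization cost once and then solving each new right-hand side in O(n²), can be sketched as follows (Doolittle LU without pivoting, pure Python; an illustrative sketch, not library code):

```python
def lu_factor(A):
    # Doolittle LU factorization of A without pivoting: the O(n^3)
    # elimination work is done once here.
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k + 1, n):
                U[i][j] -= L[i][k] * U[k][j]
            U[i][k] = 0.0                 # known to vanish; set directly
    return L, U

def lu_solve(L, U, b):
    # Solve A x = b given A = L U: one forward and one backward
    # substitution, only O(n^2) work per right-hand side b.
    n = len(b)
    y = [0.0] * n
    for i in range(n):                    # forward: L y = b
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):        # backward: U x = y
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

With A fixed, `lu_factor` is called once and `lu_solve` is then called for each new b, which is exactly the saving the lecture points out.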
If you recall, the number of multiplications involved in this process is approximately n³/3, and similarly the number of additions or subtractions involved in the elimination process again amounts to approximately n³/3. Adding these two, we can say that the total number of operations (multiplications or divisions plus additions or subtractions) involved in the LU factorization of the matrix A by the naive Gaussian elimination method is n³/3 + n³/3 = 2n³/3, plus of course some lower-order terms. At the leading order, that is the arithmetic cost of the LU factorization of A by Gaussian elimination. Now recall that for the Cholesky factorization the total number of operations came to approximately n³/3, again plus some lower-order terms. So at the leading order, the Cholesky factorization needs only half the effort put into the LU factorization of A by Gaussian elimination. In fact, the Doolittle factorization and the Crout factorization are also equivalent to the Gaussian elimination method, so they too involve roughly this many arithmetic operations. This shows that when you are working with a symmetric and positive definite matrix, it is always preferable to go for the Cholesky factorization. Of course, it is not surprising that the Cholesky factorization is more efficient than the other factorizations: it uses the symmetric nature of the matrix A, and thereby it has to compute only L, not U, explicitly.
The Gaussian elimination method, the Doolittle factorization, and the Crout factorization, on the other hand, work for any suitable invertible matrix, not necessarily symmetric, and they have to compute both L and U explicitly; that is why they involve more computational time than the Cholesky factorization. So the moral of this class is: whenever you are working with a symmetric positive definite matrix, go for the Cholesky factorization. Remember that this operation count applies only to the way we have computed the Cholesky factorization, using the algorithm introduced in the last class. There are other ways to obtain it; for instance, one can also obtain the Cholesky factorization from the Doolittle or Crout factorization, but that will not give this kind of efficiency, because you will anyway spend time equivalent to the full factorization when computing the Doolittle or Crout factorization. Therefore, when you have a symmetric and positive definite matrix, go for the Cholesky factorization by the methodology we introduced in the last class. With this note we complete our discussion on direct methods. Thank you for your attention.