Good morning. Today we will study the mainstream methods for solving linear systems of equations, all belonging to the general family of Gaussian elimination methods. Gauss-Jordan elimination, Gaussian elimination with back substitution, and LU decomposition are the three methods that we will study in this lecture. Note that these methods are typically applicable to square systems: we are expecting a coefficient matrix which is square as well as non-singular, that is, invertible. Now, when we have a system A x = b which we want to solve for known A and b, we first see how Gauss-Jordan elimination handles it. The good thing about the Gauss-Jordan method is that it proceeds in the same systematic manner irrespective of how many right-hand sides you have to solve for. For example, suppose you have to solve the system A x = b for different right-hand sides b1, b2, b3, and at the same time you want to find the inverse of the matrix, which is the same thing as solving A x = b for n different right-hand sides, namely the n columns of the identity matrix: since A A^{-1} = I, A times the first column of A^{-1} equals the first column of the identity matrix, and so on. At the same time you may want to evaluate an expression like A^{-1} B, which is again like solving a matrix equation A P = B where B is a known matrix. All these tasks can be done within the same framework by the Gauss-Jordan elimination method. For that, we first assemble the matrix A, the right-hand sides b1, b2, b3, an identity matrix if we also want the inverse of A, and the matrix B. After assembling, we have a matrix of size n by (2n + 3 + p): n columns of A, n columns of the identity matrix, 3 columns for the 3 right-hand sides, and p columns of B. After we assemble this large matrix, denoted here as C, we follow the algorithm, which is actually very simple and uses what we have already studied: we apply elementary row transformations to systematically reduce the matrix A to an identity. First we divide the first row by the diagonal element a11, so that the leading entry becomes 1; as we do this, all the other entries of that row undergo changes. Next we want zeros in all the sub-diagonal locations of the first column, so we multiply the first row by a21, a31, etc., and subtract it from the second row, third row, and so on. In the process those entries become 0 and the remaining entries undergo changes, which we record. Next we want the second diagonal entry to become 1, so we divide the entire second row by a22; that entry becomes 1 and there are further changes in the subsequent entries of the row. Next we multiply the second row by the entries above and below the diagonal of the second column and subtract it from the appropriate rows, so those entries become 0 and the others, which have already changed once, keep on changing. We continue in this manner with the third diagonal entry, and so on. Now, if in this entire process we find a zero where a diagonal entry should be, which would mean that we cannot divide by that entry, then we must pivot: from that diagonal entry down the column, we look for the entry with the largest absolute value.
For example, if from that position downwards you have 0, -0.12, -7 and so on, then -7 will be the chosen pivot, because it has the largest absolute value; the sign, plus or minus, does not matter. We take that entry and accordingly make a row interchange. This pivoting is essential when a diagonal entry turns out to be 0, but as a matter of policy the algorithm does it at every step, even when the diagonal entry is not 0. That means that before you divide by the diagonal entry a_kk, the pivoting step is performed anyway. Now, as we continue with this activity, if the matrix is non-singular, that is, invertible, then by the end we will have all ones on the diagonal. Note that when we apply the row transformations related to the second row, the sub-diagonal entries of the second column become 0; and at the same time, if we multiply the second row by a12 and subtract it from the first row, the 0 already created in the first column is not disturbed, because the second row already has a 0 there, while the (1,2) entry becomes 0, and so on. That means that as we proceed, for a non-singular matrix, the left block finally turns into an identity matrix, with everything above and below the diagonal equal to 0, while a lot of changes accumulate in the remaining columns. What happens if the matrix is singular? In that case, as we proceed through this n by n block, we may find while pivoting that we are sitting at a diagonal entry which is 0 and everything below it in that column is also 0. Then we cannot pivot, we cannot find a non-zero entry to divide by, and that is the flag for us to decide that this is a singular matrix. At that stage we can decide what we are supposed to do: there may be an application in which you do not expect a singular matrix, and then you can flag an error and conclude that the data is wrong; on the other hand, there may be applications where such a zero has to be tolerated, and there you may try some other arrangement for handling the situation, which we will discuss later. For the time being we keep our attention focused on a square non-singular matrix, for which we get the identity. This is precisely the algorithm you can see here; the steps are exactly those, pivoting and elimination, along with one small extra item, which is this delta. Three lines mention delta: the initialization delta = 1; the exit clause, where if the pivoting step fails we set delta = 0 and exit; and the delta updates. What is happening is that delta is actually the determinant. It is used in two senses: first for the exit clause, where if a singularity is detected the determinant value is set to 0 and the routine exits; and second for evaluating the value of the determinant itself, as a byproduct. The process of Gauss-Jordan elimination in itself embodies a method to find the determinant of the matrix, so a separate routine is not required; the determinant turns up as a byproduct of the entire process. For that purpose we first initialize it, and then for the first, second, third diagonal entries we go on doing the pivoting and the corresponding updates of the other rows.
The pivoting step identifies l such that the absolute value of c_lk is the maximum in column k from the diagonal entry downwards, that is, over rows k to n. So when k is 3, we will be pivoting at the third diagonal location: whatever is the largest-magnitude entry from there downwards is selected as the pivot, and the k-th row and that l-th row are interchanged. That is the pivoting step. After it, you conduct the ordinary elementary transformations: divide the current pivotal row by the diagonal entry c_kk, and then subtract c_jk times that row from every other row, for all j above and below. That gives you zeros in the off-diagonal entries above and below the pivotal position. Unless a singularity is encountered, you keep doing this till k = n - 1; by the time you are through with k = n - 1, everything has been reduced except that the last diagonal entry may not yet be 1. So there is a final need for one last row scaling, dividing the last row by whatever entry is sitting there: divide row n by c_nn. Before every such division of a row by the current diagonal entry you need to update the determinant, which is why there is a determinant update before each division. At the end of the algorithm you have the determinant stored in delta, and the matrix has been reduced to C-tilde, in which the first n columns form an identity. What will be there in the remaining columns? Note that we started with A, b1, b2, b3, I_n and B. Every elementary row transformation that we applied is equivalent to a pre-multiplication with a non-singular matrix; many such matrices have been multiplied on the left, and all of them together amount to multiplication with one square matrix from that side, through which A has been reduced to identity. Then you know which matrix has been pre-multiplied: it is A^{-1}, because A^{-1} is the matrix which, multiplied with A, reduces it to identity. We applied the same row transformations to the entire large matrix, which means that effectively we multiplied it all by A^{-1}. So by the time we have the identity in the first block, the next columns hold A^{-1} b1, the solution of A x = b1, and similarly for b2 and b3; then A^{-1} times the identity, which is A^{-1} itself; and then A^{-1} B. All the expressions that we needed to evaluate are sitting there in the remaining columns. When there is a premature termination, the matrix A is singular, and then you can decide what you want to do. The outline I just gave is for partial pivoting. If you want to conduct complete pivoting, then every column interchange scrambles the variables, which means you need to store that permutation and later, after the solution, unscramble the permutation of x1, x2, x3, etc. In our opinion that is not really necessary; in most applications you can carry through with the partial-pivoting-based algorithm as outlined here. Now, a few issues we can raise regarding efficiency.
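To make the steps concrete, here is a minimal sketch in Python, assuming numpy; the function name and the test data are illustrative, not from the lecture slides. It reduces an assembled matrix [A | B] to [I | A^{-1} B] with partial pivoting and returns the determinant as the byproduct delta described above:

    import numpy as np

    def gauss_jordan(A, B):
        # reduce [A | B] to [I | A^{-1} B]; return (A^{-1} B, det A)
        n = A.shape[0]
        C = np.hstack([A.astype(float), B.astype(float)])
        delta = 1.0                               # determinant accumulates here
        for k in range(n):
            l = k + np.argmax(np.abs(C[k:, k]))   # pivot: largest |entry| from row k down
            if C[l, k] == 0.0:
                return None, 0.0                  # singular: set delta = 0 and exit
            if l != k:
                C[[k, l]] = C[[l, k]]             # row interchange
                delta = -delta                    # a swap flips the determinant's sign
            delta *= C[k, k]                      # determinant update before division
            C[k] /= C[k, k]                       # scale pivotal row: pivot becomes 1
            for j in range(n):
                if j != k:
                    C[j] -= C[j, k] * C[k]        # zeros above and below the pivot
        return C[:, n:], delta

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    B = np.hstack([np.array([[5.0], [10.0]]), np.eye(2)])  # one RHS and I together
    X, det_A = gauss_jordan(A, B)   # X holds the solution (1, 3) and A^{-1}; det_A = 5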
In this naive layout you would have noticed that in the beginning we stored an identity matrix, which was not really needed, because it has no information content: as long as we know that an identity matrix is supposed to be there, we do not need to actually store those 1s and 0s. Similarly, at the end we have another identity matrix sitting in the first block, which again need not be stored. Therefore many professional numerical routines do not allocate extra space for either of them; whatever results the calculations produce in those locations are stored in the location of A itself during the process. If you are new to this kind of computation, I would not advise you to implement that, but professional numerical algorithms can do it, because once a particular entry has been obtained in its final form, the old value is never needed again, so it is not a problem to overwrite the old number. You should know this fact, because in your actual research problems you may on many occasions use numerical libraries, and you should know very well how they handle your data and how they report the result. Therefore, when you call a subroutine from any professional library, make sure what that library's implementation of this algorithm does to A. If the original matrix A is going to be overwritten, finally by A^{-1}, then it is important on your part to store the original A under a different name before sending the matrix to the library. Another important point: many students, while solving A x = b, tend to first find the inverse of A and then multiply A^{-1} b. That is not a good idea. For evaluating A^{-1} b, never develop the complete inverse, because developing the complete inverse actually requires you to solve systems of the form A x = b for n different b's, the n columns of the identity matrix. Why should you spend so much computational effort just to solve one system for a single b? For a single right-hand side you should not develop the complete inverse; you can simply curtail the assembled matrix to [A | b] and apply the algorithm, which will be much cheaper. Now, if you write out the algorithm, or if you try a hand calculation, you will find that this Gauss-Jordan algorithm is perhaps an overkill in terms of the number of computational steps, additions, subtractions, multiplications, etc. If you want something cheaper, the option is the second algorithm of this family, and that is Gaussian elimination with back substitution.
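Before moving on, a small illustration of the advice above about not forming the inverse; a sketch assuming numpy, with a made-up matrix and vector:

    import numpy as np

    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([9.0, 8.0])

    x = np.linalg.solve(A, b)            # one elimination pass: the right way
    x_wasteful = np.linalg.inv(A) @ b    # forms the whole inverse first: avoid
    A_backup = A.copy()                  # keep a copy if a library routine is
                                         # documented to overwrite its input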
You might say that this complete reduction of the matrix A to the identity through elementary row transformations is actually not called for, because our job is done as soon as we can reduce it to an upper triangular form, with everything below the diagonal zero. If we apply elementary row transformations so that A reduces only up to the stage of a matrix A-tilde with this upper triangular structure, then our job is essentially done, because we can use the last equation first and determine x_n: in the last row everything is 0 except the last entry, so the last equation, opened up, gives a'_nn x_n = b'_n, and x_n is directly determined. Then we can handle the equation just above it, which has two unknowns, x_{n-1} and x_n; since x_n is already known, x_{n-1} can be determined. One at a time, we can determine all the unknowns if we come from the lower end upwards, and that is called back substitution. This is why the method is called Gaussian elimination, up to the triangular stage, with back substitution. The last unknown is determined first; then the equation for i = n - 1 is used to determine x_{n-1}; after x_n and x_{n-1} are known, the previous one, and the previous one, and through one complete series of back substitutions you determine all the variables. This method, Gaussian elimination with back substitution, is somewhat cheaper than the earlier method: you can say that the computational cost for a single right-hand side is roughly half that of the Gauss-Jordan method which we discussed first. However, if you need to solve the system for a large number of right-hand sides, or if you also want to find the inverse, then Gaussian elimination will not offer you any advantage. That is, between these two methods: if you want to solve the system for several right-hand sides, or you also want the inverse, you should prefer the Gauss-Jordan method; on the other hand, if you want to solve for a single right-hand side, you should prefer Gaussian elimination with back substitution. But in both of these methods, prior knowledge of the right-hand side, the vector b, is needed, because the elementary row transformations applied to A are applied to the right-hand side at the same time. Now, you might want another algorithm which does not need the right-hand side beforehand, but first processes the matrix A itself and keeps it in such a ready form that the moment a right-hand side vector appears, a little further calculation gives you the solution. It is like what they do in good restaurants: they keep the arrangements ready for a large number of dishes, and the moment a customer orders something, that dish is quickly prepared and brought out. That kind of situation emerges when you study LU decomposition, which is also a member of this family of Gaussian elimination methods. Before going into the actual method as such, let us make a little observation regarding how the Gaussian elimination method actually works: what steps were applied in order to reduce the matrix A to the upper triangular form A-tilde?
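A minimal sketch of this method, assuming numpy; names and data structure are illustrative:

    import numpy as np

    def gauss_solve(A, b):
        # forward elimination to upper triangular form, with partial pivoting
        n = len(b)
        A, b = A.astype(float).copy(), b.astype(float).copy()
        for k in range(n - 1):
            l = k + np.argmax(np.abs(A[k:, k]))
            A[[k, l]], b[[k, l]] = A[[l, k]], b[[l, k]]
            for j in range(k + 1, n):
                m = A[j, k] / A[k, k]
                A[j, k:] -= m * A[k, k:]
                b[j] -= m * b[k]
        # back substitution: last unknown first, then upwards
        x = np.empty(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x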
When we explore the anatomy of this Gaussian elimination process, we find that the matrix A was sequentially operated upon by a large number of elementary row transformations, equivalent to pre-multiplication with corresponding elementary matrices R1, R2, R3, R4, etc., and the result was an upper triangular matrix U. The steps were these: if we do not insist on ones on the diagonal, then the j-th row is updated by subtracting from it (a_jk / a_kk) times the k-th row, for j varying from k + 1 down to the bottom, that is, n. In the first case, k = 1, we conduct: second row minus (a21 / a11) times the first row, which is equivalent to placing the entry -a21/a11 in the (2,1) position of an elementary matrix, and so on. So this step, for j running from k + 1 to n, is equivalent to pre-multiplication with the matrix R1, with which A was multiplied during the step in which we get all zeros below the first diagonal entry, without bothering to get a 1 on the diagonal. Similarly, in the next step, when we want to reduce the entries a32 down to an2 to zero, we have another set of elementary row operations, for which the matrix R2 emerges as the corresponding elementary matrix, with similar entries in the second column below the diagonal and everything else the same as the identity matrix, and so on. Now, notice that this matrix, and the next one, and all the subsequent ones, are lower triangular matrices, and there is something very interesting about lower triangular matrices: the product of any number of lower triangular matrices is also a lower triangular matrix. That means the entire matrix product R is a lower triangular matrix. And there is a further interesting point: the inverse of a lower triangular matrix is also a lower triangular matrix. This is one of the exercises in this chapter of the book, which I strongly advise you to do on your own, so that you appreciate what is happening and how it appears; for example, the inverse of such an elementary matrix is exactly the matrix itself with all those minus signs becoming plus, and nothing else. This structure of the inverse of a lower triangular matrix in relation to the original matrix is tied to elementary row transformations and their inverses, which we will see. Now, if this matrix R is lower triangular and its inverse is also lower triangular, then, calling that inverse L, we have U = R A, so L U = L R A = R^{-1} R A = A. This is the basic idea behind the algorithm called LU decomposition. Let us try to appreciate this whole thing with a little example. Suppose we want to conduct a Gaussian elimination of the matrix on the slide. We will do that, and at the same time we will do a few more things: we will analyse how the Gaussian elimination takes place; in short, we will actually be studying the anatomy of the Gaussian elimination. In the first round, what do you want? You want zeros in the two sub-diagonal locations of the first column; in Gaussian elimination itself you do not bother to make the diagonal entry 1. To make those two locations zero, we say: from the second row I will subtract twice the first row, and to the third row I will add the first row; that will give me the zeros.
Let us do it. The result: the first row remains unchanged; from the second row we subtract twice the first row, which gives 0, 3, 1; to the third row we just add the first row, which gives 0, 9, 5. Now, what will you do if, say, we have lost the original matrix, but we have this reduced one, and in our hand we have the information about which elementary row transformations were conducted to get it, and we want to regain the original matrix? What did we do earlier? From the second row we subtracted twice the first row, and to the third row we added the first row. To regain the original matrix, observe that the first row was unchanged, so the old first row is still sitting there. That means if we add back twice the first row to the second row, and subtract the first row from the third row, we should get the original matrix. The reverse operations are: R2 gets back its lost part, twice R1, and R3 is restored by subtracting R1. What is the corresponding elementary matrix? It is the identity matrix changed through these same row operations, which gives 2 in the (2,1) position and -1 in the (3,1) position. Through these elementary row operations, or equivalently through pre-multiplication with this elementary matrix, we can regain the original matrix. Now we can forget the original and keep two things in hand: a partially reduced version of the matrix, and another matrix, note that it is lower triangular, which, pre-multiplied with the first, gives us the old matrix back. To get it completely into upper triangular form, we want a 0 in the (3,2) position. We should not touch the first row for this, because that would spoil the zeros already created in the first column; the second row has a 0 in its first column, so using it is not a problem. So from the third row we subtract 3 times the second row to get a 0 there: 0, 9, 5 minus 3 times 0, 3, 1 gives 0, 0, 2. Again we applied an elementary row transformation, and again we can get back the previous matrix by the reverse operation. The earlier lower triangular factor remains in its place, let it be there, and we insert another matrix, the elementary matrix corresponding to this new row transformation, which is very easy to get: it is the identity with 3 times its second row added to its third row. So now you see another lower triangular matrix, and the chain of equalities holds, because each reduced matrix, pre-multiplied by the corresponding restoring elementary matrix, gives back the previous one. Forgetting the intermediate stage, the product of these two lower triangular matrices, which is again a lower triangular matrix and which you should verify through multiplication, times the upper triangular remainder, equals the original matrix. So this is the decomposition of the original matrix into two factors, one lower triangular and the other upper triangular.
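The transcript does not reproduce the slide's matrix, but assuming a first row of 2, 1, 1, which is consistent with the intermediate rows 0, 3, 1 and 0, 9, 5 quoted above, the whole anatomy can be checked numerically; a sketch assuming numpy:

    import numpy as np

    A = np.array([[ 2.0, 1.0, 1.0],   # hypothetical first row
                  [ 4.0, 5.0, 3.0],   # row2 - 2*row1 = [0, 3, 1]
                  [-2.0, 8.0, 4.0]])  # row3 + row1   = [0, 9, 5]
    L = np.array([[ 1.0, 0.0, 0.0],   # the multipliers, signs reversed,
                  [ 2.0, 1.0, 0.0],   # collected below the unit diagonal
                  [-1.0, 3.0, 1.0]])
    U = np.array([[ 2.0, 1.0, 1.0],   # the upper triangular end product
                  [ 0.0, 3.0, 1.0],
                  [ 0.0, 0.0, 2.0]])
    assert np.allclose(L @ U, A)      # the two factors restore A exactly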
And note what we have achieved: till now there has been no talk of the right-hand side, and yet we have come up to this point. This decomposition is called LU decomposition. It means it is not necessary to apply the actual elementary row transformations on A to bring it to the upper triangular form; instead, we separate out from the matrix A those elementary row transformations, in the form of this lower triangular matrix, such that what remains is upper triangular. You have collected the effect of the elementary row transformations in the lower triangular factor, and on the other side the upper triangular factor remains. This is the spirit of LU decomposition. Now, note that we have taken an example in which pivoting was never necessary, and that is why at the beginning we discussed the Gaussian elimination process for those matrices for which no pivoting is done; this is not a serious limitation in the actual process, it was done in order to explain the situation easily. In the method of LU decomposition, the most important underlying condition is that the square matrix, when processed through such elementary row operations, does not acquire a 0 in a diagonal position; such matrices have all their leading minors non-zero. What are the leading minors? The first leading minor is the (1,1) entry itself; the second leading minor is the determinant of the top-left 2 by 2 block; the third leading minor is the determinant of the top-left 3 by 3 block, and so on. For an n by n matrix there are n such leading minors, and all those determinants should be non-zero for the matrix to be LU-decomposable in this manner. Suppose such is the case for the matrix at hand; then how do we proceed? In the procedure we make no reference to the right-hand side vector. But suppose we have decomposed the matrix like this, and we are then supplied with a right-hand side vector; how do we solve the system? The solution is very easy. To solve A x = b, we denote U x as y; with A decomposed as L U, L lower triangular and U upper triangular, writing L U in place of A and representing U x as y gives us L y = b, together with U x = y. Now, since b is known and the lower triangular matrix L is known, y can be immediately determined through a process of forward substitution: with a lower triangular coefficient matrix, the first equation involves only y1, which we can determine; the second equation involves y1 and y2, so after determining y1, y2 can be determined, and so on. These are forward substitutions, and through a series of them we determine the complete y, for i = 1 to n. After y is determined, we address the system U x = y, in which U is upper triangular, our old friend, and we conduct a series of back substitutions. So with a series of forward substitutions followed by a series of back substitutions, we can solve the system of equations for any right-hand side, once we have converted the original matrix A into its L and U factors. Now, the question still remains: how to conduct the LU decomposition process itself? You certainly would not go about it in the manner shown above, because that would be extremely costly; there we actually did double the job, in order to examine the anatomy of the entire process.
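A sketch of the two substitution sweeps, assuming numpy; the function names are ours:

    import numpy as np

    def forward_substitution(L, b):
        # solve L y = b, first equation first (L lower triangular)
        n = len(b)
        y = np.empty(n)
        for i in range(n):
            y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
        return y

    def back_substitution(U, y):
        # solve U x = y, last equation first (U upper triangular)
        n = len(y)
        x = np.empty(n)
        for i in range(n - 1, -1, -1):
            x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x

    # once A = L U is stored, each new right-hand side b costs only:
    #     x = back_substitution(U, forward_substitution(L, b))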
While decomposing a matrix into L and U factors, we first recognize that the factors will have the triangular forms shown, and their entries are the unknowns we need to determine. The number of unknowns in the first factor is n(n+1)/2, and exactly as many in the second. If we work out the product by normal matrix multiplication, since L U after multiplication should give A, the entry a_ij is the product of the i-th row of L and the j-th column of U. This has been written here in two ways, because only a few of the terms actually contribute: the sum runs not from k = 1 to n, but only up to the diagonal, since beyond the diagonal one of the factors is zero. So when i <= j, the sum runs over i terms, and when j < i, over j terms. Now, with n(n+1)/2 unknowns in each factor, the total number of unknowns is n(n+1); and how many equations does L U = A give? A has n^2 elements, so equating term by term gives n^2 equations. We have fewer equations than unknowns; there are n extra unknowns, and that many unknowns can be chosen freely. One particular choice leads to the famous algorithm called Doolittle's algorithm: choose the diagonal entries of the lower triangular factor as 1. All the diagonal entries L_ii are chosen as 1, and the other n^2 entries are determined. That determination can be done in a particular order such that at every step you solve one simple equation in one unknown only. After choosing the L_ii, what is done is written here in algorithmic form, which you should verify later at leisure; for now, look at the order. After assuming L11 = 1, you determine u11 first; then L21, L31, L41, and so on down the first column; then u12 and u22; then the sub-diagonal entries of the second column of L; then the super-diagonal and diagonal entries of the third column of U; then the sub-diagonal entries of L below the third diagonal entry, which is known to be 1; and so on. If you proceed in this manner, you will find that at every step you are solving just one equation in one unknown. Here, when you take a11, you get L11, which is 1, times u11 and nothing else, so u11 = a11. Then for a21: second row of L into first column of U gives L21 u11 = a21, and u11 is already determined, so you get L21; similarly you get the rest of the first column of L. Next you determine u12 and u22, with L11 already 1; then the next batch, and so on. When you write the equations in this order, at every stage you determine one unknown, exactly in the order listed in the algorithm. Most professional routines for LU decomposition actually give you the output in a packed form, in which the returned matrix as such has no meaning, except that its sub-diagonal entries list the non-trivial entries of L, whose remaining entries are ones and zeros, while its diagonal and super-diagonal entries give the non-trivial entries of U, whose sub-diagonal entries are all zero.
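In code, Doolittle's choice looks like this; a sketch assuming numpy and that no pivoting is needed, that is, all leading minors are non-zero. The loop computes row k of U and then column k of L, a slight reordering of the column-wise sequence described above, but each step still solves one equation in one unknown:

    import numpy as np

    def doolittle(A):
        n = A.shape[0]
        L = np.eye(n)                  # the n free unknowns L[i, i] chosen as 1
        U = np.zeros((n, n))
        for k in range(n):
            for j in range(k, n):      # row k of U: one new unknown per equation
                U[k, j] = A[k, j] - L[k, :k] @ U[:k, j]
            for i in range(k + 1, n):  # column k of L, below the diagonal
                L[i, k] = (A[i, k] - L[i, :k] @ U[:k, k]) / U[k, k]
        return L, U

A professional routine would return L and U packed into a single n by n array, as described above, rather than as two separate matrices.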
This saves storage space. Many professional routines make another economical use of storage: they return this packed matrix in place of A itself, in the very location in which you supplied A. Now, this is Doolittle's algorithm. Next we address the original question which we avoided in the beginning: what about matrices which are not LU-decomposable? At the same time, the other question arises: what about pivoting? For example, take this matrix. It is non-singular, so we should not find that a Gauss elimination family method fails on it on grounds of singularity; we should be able to apply any method of the family without hindrance. Let us proceed. First we try to determine the entry u11: L11, L22, L33 have already been fixed as 1, everything above the diagonal of L is 0, and all sub-diagonal entries of U are 0. To determine u11, we equate a11 with the product of the first row of L and the first column of U, that is, 1 times u11 plus 0 plus 0; hence u11 = a11, which here is 0. We have found it. Next we would determine L21 and L31, but the moment we try to determine L21, we write the equation for a21: a21 is the second row of L into the first column of U, that is, L21 times 0, plus 1 times 0, plus 0 times 0; all three terms are 0, so the sum is 0. But a21 here is 3, and we are stuck, stuck because of this diagonal entry u11 which is 0, which should not be sitting there but is. What can we do? For this kind of situation we must pivot. But when we pivot, the row structure changes. So in LU decomposition, whenever we do pivoting, and in actual practice we do it at every step, we must keep track of it. In this particular case, pivoting brings the second row to the top. When we conduct that row interchange between the first and second rows, we get a new matrix; but we do not yet know the right-hand side. Later, when the right-hand side is given to us, we must remember that in the right-hand side vector also the first and second entries should be swapped, and for that purpose we keep track of the permutation. The permutation can be stored, not necessarily as a matrix, but as a vector, and that vector is (2, 1, 3), meaning: second row of the identity matrix, first row of the identity matrix, third row of the identity matrix. Assembled as a matrix, with those rows in that order, it is called a permutation matrix; the corresponding permutation vector is (2, 1, 3). That means we can conduct LU decomposition with pivoting; in that case we will not be decomposing the original matrix as given, but a permutation of its rows, and which permutation it is we will store separately in a vector like this, which is equivalent to storing the permutation matrix. So now, with this permutation matrix in hand, we have a row-permuted version of the matrix, and this can be easily decomposed into L and U factors, which you can verify later: their product indeed gives the permuted matrix.
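Professional libraries package exactly this pivoted pattern. A sketch using scipy; the matrix here is hypothetical, chosen only so that its (1,1) entry is zero and a row interchange is forced:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[0.0, 2.0, 1.0],    # non-singular, but the 0 in the (1,1)
                  [3.0, 1.0, 2.0],    # position forces pivoting
                  [1.0, 1.0, 4.0]])

    lu, piv = lu_factor(A)            # factor once: L and U packed into one
                                      # array, piv recording the row permutation
    for b in (np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0])):
        x = lu_solve((lu, piv), b)    # each new RHS costs only substitutions
        assert np.allclose(A @ x, b)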
That means that rather than conducting an LU decomposition, we actually conducted a PLU decomposition, in which the matrix P is a permutation matrix, for which we store the permutation vector; later, when a right-hand side vector appears for which we need the solution, we accordingly permute the entries of the right-hand side vector as well and then conduct the rest of the forward and back substitutions. Now, you might ask: why are we stressing this point that the right-hand side vector will be given to us later? If so, can we not conduct the whole operation later? The point is that there are many, many applications and computations in which a matrix stays as it is, while at different stages of the entire large computation several right-hand side vectors appear for which we need to find solutions. Many times we cannot even set up the next right-hand side vector until we have solved the same system for an earlier right-hand side vector. In such a situation, where the same matrix A is needed for the solution of several right-hand side vectors at different stages of a large algorithm, it helps us a lot to conduct the LU decomposition once, store the L and U factors, and then, whenever a new right-hand side vector emerges in possibly subsequent iterations, conduct only the forward and back substitutions. This gives efficiency to the entire process. In this lesson, till now, we have covered the methods for solving linear systems of equations with square non-singular coefficient matrices; if by accident a singular matrix appears, we know how to detect that singularity, and the other case, in which infinitely many solutions, etc., are possible, we already covered in the previous lecture. Two more important practical issues remain: one, when the given square matrix is particularly good, how to take advantage of that good feature; and two, when the given matrix A is particularly bad, how to handle its bad features. What do we mean by particularly good? We mean a special system for which a special method can take advantage of the particular advantageous situation. Before introducing one such special situation, we quickly see a few definitions. A function of this form, a quadratic function of x1, x2, x3, up to xn, in which only quadratic terms are present, with no constant term and no linear term, is called a quadratic form; it can be written as x^T A x, which is the same thing in summation notation. Such a quadratic form is typically defined with respect to a symmetric matrix A, because even if you try to put a non-symmetric matrix there, one can quickly change it: if A were not symmetric, then in place of A we could put (A + A^T)/2, giving the same function but with a matrix which is symmetric. That is why a quadratic form is typically defined in terms of a symmetric matrix A. Such matrices appear enormously in practice, in science and engineering; for example, the stress tensor and the inertia tensor are represented by symmetric matrices. So the quadratic form is always defined with respect to a symmetric matrix. Now, the important definition to keep in mind: this quadratic form, and equivalently the corresponding underlying matrix A, is called positive definite when the form evaluates to positive values for all non-zero x; for x = 0 it is obviously 0.
It is called positive semi-definite if the form evaluates to non-negative values, positive or zero, for all non-zero x. These two definitions, of positive definiteness and positive semi-definiteness, we should keep in mind. Now, there is a test by which you can determine whether a matrix is positive definite or not: all the leading minors must be positive. With strict inequality it is the test for positive definiteness, known as Sylvester's criterion; the version with greater-than-or-equal signs relates to positive semi-definiteness (though for semi-definiteness one actually has to check all principal minors, not just the leading ones). As you can see, this test can become quite costly computationally, because you may need to evaluate a lot of determinants. Anyway, if a matrix is positive definite, then a linear system A x = b with that matrix A as coefficient matrix is easy to solve, and the method is Cholesky decomposition. If the n by n matrix A is symmetric and positive definite, then there exists a non-singular lower triangular matrix L such that A = L L^T. What is being done here? In Doolittle's algorithm for LU decomposition we made a choice regarding the diagonal entries of L: we said we want all of them to be 1. Here we make another choice, rather a choice about the nature of the L and U factors: we ask for an LU decomposition in which the U factor and the L factor are transposes of each other. You can speak of such a decomposition when the matrix A is symmetric, because L L^T always gives a symmetric matrix. So if A is symmetric, you can ask for an LU decomposition in which L is lower triangular and the upper triangular factor is exactly its transpose. When A is symmetric you can talk of such a decomposition, and when you try to conduct it, you will succeed when the matrix A is not only symmetric but also positive definite. The algorithm is essentially the same as, or similar to, the LU decomposition algorithm: one by one you find L11, L21, L31, then L22, L32, L42, and so on; the entries above the diagonal need not be found separately, because the U factor is just the transpose of the L factor. The rest of the algorithm for the solution of the linear system is the same as in LU decomposition, through a series of forward and back substitutions. This algorithm, elaborated in pseudo-code on the slide, you should verify later. For now, let us see it through an example, in which we also try to see why the matrix should be positive definite. Suppose the matrix is A with entries 4, 2, 4; 2, alpha, 1; 4, 1, 14. We want to decompose it into L and L^T factors, with the zeros in the triangular structures already known, for three different values of alpha: 0, 1 and 2. First, what should the value L11 be? It also sits in the other factor, since that is just the transpose of L. To find it, we equate the a11 entries on both sides: 4 = L11 times L11 plus 0 plus 0, so L11 squared is 4. What is L11? We take the positive root, 2. Next we find the entry L21, and for that we equate a21 to the corresponding term on the other side: 2 = L21 times 2 plus something times 0 plus something times 0, so L21 = 1, and we get 1 in both factors. Next, for a31: something times 2 plus 0 plus 0 equals 4, so that something, L31, is 2, and we get it directly. For the symmetric entry we need no computation, since the matrix is symmetric. For alpha, this is the crucial step.
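Before working that crucial step out by hand (the hand calculation continues just below), here is a minimal numerical sketch of the same example, assuming numpy; numpy's cholesky raises an error precisely when the matrix fails to be positive definite:

    import numpy as np

    def try_cholesky(alpha):
        # the lecture's example matrix, with the parametrised (2,2) entry
        A = np.array([[4.0, 2.0,   4.0],
                      [2.0, alpha, 1.0],
                      [4.0, 1.0,  14.0]])
        try:
            return np.linalg.cholesky(A)   # lower triangular L with A = L L^T
        except np.linalg.LinAlgError:
            return None                    # not positive definite

    for alpha in (0.0, 1.0, 2.0):
        L = try_cholesky(alpha)
        print(alpha, "fails" if L is None else "succeeds")
        # expected output: 0.0 fails, 1.0 fails, 2.0 succeeds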
We have here alpha = 1 times 1 plus L22 times L22 plus 0, that is, L22 squared = alpha - 1. For alpha = 0 we get L22 squared = -1, which is not possible, because we want the factors to be real. For alpha = 1 we get L22 = 0, which is fine at this stage, but at the next stage we will need to divide by that L22, and we will be stuck there. On the other hand, for alpha = 2 we succeed in getting L22 = 1 and can continue with the decomposition process. From this point onwards I advise you to complete this sequence of operations and verify that positive definiteness, which for this matrix holds when alpha exceeds 1.1 (so alpha = 2 works), is needed for a successful completion of the Cholesky decomposition. So the Cholesky decomposition also gives you a test of positive definiteness, and it is a very stable algorithm, with no pivoting necessary, which saves space and time compared to an ordinary LU decomposition process. The remaining topic regarding further special structures of matrices is sparse matrices, which we omit from this course for the time being; in the book there is a section on the handling of sparse matrices and other particularly advantageous situations, which you should go through when you want to cover advanced topics. In the next lecture we will discuss the situation of particularly bad coefficient matrices. Thank you.