Hi, in today's class we will start with a new chapter on numerical linear algebra. In this chapter we will develop methods for solving linear systems and also methods for computing some eigenvalues and eigenvectors of a given matrix. In today's class we will take up linear systems. For linear systems we have two classes of methods: one is the direct methods and the other is the iterative methods. Direct methods are those which give us the exact solution when no rounding error is involved. On the other hand, iterative methods give us a sequence of vectors which is expected to converge to the solution of the corresponding linear system. In this class we will start with direct methods for linear systems, and first we will learn the Gaussian elimination method. Before getting into the method let us quickly recall what is meant by a linear system and when we can have a unique solution for a linear system. As you know, a linear system consists of some n number of equations which are linear in the unknowns, and it is given like this: a 1 1 x 1 plus a 1 2 x 2 and so on up to a 1 n x n equal to b 1, that is the first equation, similarly the second equation and so on. Here the a i j's and b i's are known to us and we are interested in finding the x i's. We can write this system in matrix form, where you have the coefficient matrix with entries a 1 1, a 1 2 and so on, the unknown vector x 1, x 2 up to x n written in column form, and that is equal to the right hand side vector b 1, b 2 up to b n, which is also written in column form. As I told you, the matrix A and the vector b are given to us and x is to be obtained. As you know, we can write it as A x equal to b, where A is this matrix, x is this column vector and b is this column vector. Let us now see when the system A x equal to b is solvable. Well, we have a very well known theorem studied in our basic course on linear algebra.
Let us recall it here without proof. You are given an n cross n matrix A and the right hand side vector b. Then the following statements are equivalent for the system A x equal to b: the determinant of A is not equal to 0, and for any given right hand side vector b the system A x equal to b has a unique solution x. So it means you expect a unique solution for your linear system if and only if the determinant of A is not equal to 0, that is, A is an invertible matrix, right. And you can also prove the equivalence of these two statements with the statement that if you take b equal to 0 in particular, then the only solution of the system A x equal to 0 is the 0 vector. We will not prove this theorem because it is part of a linear algebra course; interested students can take any linear algebra book and recall the proof of this theorem. We will only keep this theorem in mind and assume from now onwards that any matrix we work with is an invertible matrix. The first method that we are going to learn is the well known Gaussian elimination method; it is named after the German mathematician Carl Friedrich Gauss. The basic idea of the Gaussian elimination method is to take the given system and convert it into an upper triangular system by applying certain elementary row operations. Once you have the upper triangular system it is direct for us to get its solution through a backward substitution process. As we know from our elementary linear algebra course, the given system is equivalent to any system that is obtained from it through elementary row operations. The upper triangular system's solution will therefore be the solution of our original system; that is how we obtain the solution of our original system.
Let us consider only a 3 by 3 system and try to understand the Gaussian elimination method; the method for any n by n system can be generalized very easily in a very similar way once we understand it for the 3 by 3 system. Now how are we going to achieve this upper triangular system? Well, we will go in steps. In the first step we will try to make the coefficient of x 1 in the second equation 0 and the coefficient of x 1 in the third equation 0; that gives us the first step. Once we achieve the first step, then in the second step we go to make the coefficient of x 2 in the third equation 0, the first column having been handled in the first step. Then you have an upper triangular system, right. So how are we going to do it? You can clearly see that you take a 2 1 divided by a 1 1 and multiply it with the first equation. Let us call the equations E 1, E 2 and E 3. So you multiply the first equation with this number and then subtract the result from the second equation; that will make the coefficient of x 1 in the second equation 0, right. That is the idea. Similarly, you can make the coefficient of x 1 in the third equation 0 as well. Let us make this idea more precise. You can see that for this we need a 1 1 to be nonzero, otherwise you just cannot go ahead with this idea. So if a 1 1 is not equal to 0, then define m 2 1 as a 2 1 divided by a 1 1 and similarly m 3 1 as a 3 1 divided by a 1 1, and now, as I told you, you multiply the first equation with m 2 1 and subtract it from the second equation, which will obviously make the first term cancel, right. So that is the idea. Now, to get the reduced system in step 1, we first retain equation 1 as it is, then do the operation E 2 minus m 2 1 into E 1 (recall we are calling the equations E 1, E 2 and E 3), and then replace the third equation by E 3 minus m 3 1 E 1.
Now what you do: you remove the old second equation and put this new equation in its place, and similarly you remove the old third equation and put the new one in its place; that gives you the reduced system at level 1. I am just showing you the calculations, you can do them very easily. So the second equation will look like this and it now replaces the second equation of our given system. We will call this coefficient a 2 2 superscript 2 and this one a 2 3 superscript 2; remember the right hand side vector will also change correspondingly and we will call it b 2 superscript 2, right, and that is what we are doing. We are removing our second equation and keeping this new equation here, whereas the first equation is retained as it is. In a similar way you can also compute E 3 minus m 3 1 E 1 and put it in place of the third equation of the original system. So that gives you the reduced system after the first step. Remember this is still not an upper triangular system, because this term may not have a 0 coefficient, that is, a 3 2 superscript 2 may not be 0; if it is 0 you can just stop at this level itself. If it is not equal to 0, then you go for the second step, where the idea is exactly the same: you have to eliminate this term, right. For this you will use what we call m 3 2, and that is nothing but a 3 2 superscript 2 divided by a 2 2 superscript 2, right; then you will multiply the second equation by it and subtract the result from the third equation. That is, you define m 3 2 as a 3 2 superscript 2 divided by a 2 2 superscript 2, provided a 2 2 superscript 2 is not equal to 0; if a 2 2 superscript 2 is 0 then again your Gaussian elimination method will fail at this level. Once you have this, you multiply it with the second equation and then subtract the result from the third equation, right.
So again, what is the process at this level? Well, the first two equations are retained; remember in step 1 only the first equation was retained, now we have to retain the first two equations, keeping the upper triangular structure in mind. Then the third equation will be replaced by E 3 minus m 3 2 into E 2, where we are still calling the equations E 1, E 2 and E 3. That is, this equation will now be replaced by this equation, and that will give us the upper triangular system. Well, if it is a 4 by 4 system then you have to go in a similar way for one more step, a 5 by 5 system means two more steps, and so on, but the idea is the same; that is why we are just taking the 3 by 3 system and understanding the method, and once you understand it, any dimension can be handled in a similar way. Now you see you have the upper triangular system here, right. This phase is called the forward elimination phase, and as we remarked earlier, the solution of this system is equivalent to the solution of the given system, right. Now, from here we have to get the solution of this upper triangular system. What you do is: if a 3 3 superscript 3 is not equal to 0, we can write x 3 as b 3 superscript 3 divided by a 3 3 superscript 3, right. So you got x 3. Once you have x 3 you can put that value into the second equation and get x 2 directly, right, that is the idea, and once you have x 3 and x 2 you substitute these two values into the first equation and get x 1. So that is the backward substitution phase. Therefore, the Gaussian elimination method has two phases: one is the forward elimination phase and the other is the backward substitution phase, and we call this method the naive Gaussian elimination method.
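To make the two phases concrete, here is a minimal Python sketch of the naive method for a general small system; the 3 by 3 matrix at the end is a made-up example for illustration, not one of the systems from the lecture slides:

```python
def naive_gauss_solve(A, b):
    """Naive Gaussian elimination (no pivoting) for an n x n system.
    A is a list of rows, b the right-hand side; copies are modified.
    Raises ZeroDivisionError if a zero pivot is encountered."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    # forward elimination phase
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier m_ik = a_ik / a_kk
            for j in range(k, n):
                A[i][j] -= m * A[k][j]     # row operation E_i <- E_i - m * E_k
            b[i] -= m * b[k]
    # backward substitution phase
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# a hypothetical 3 x 3 system (solution is x1 = x2 = x3 = 1)
A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
b = [4.0, 10.0, 24.0]
print(naive_gauss_solve(A, b))
```

The inner loop subtracts m times equation k from equation i, exactly the operation E i minus m i k E k described above; if any pivot is 0, the division fails, which is exactly where the naive method breaks down.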
Just to remark, the Gaussian elimination method can also give rise to an LU factorization of the given coefficient matrix very naturally, where the upper triangular matrix U is precisely the matrix which comes out of our final eliminated system, and the lower triangular matrix L has 1's on the diagonal and, below the diagonal, the multipliers m 2 1, m 3 1 and m 3 2 that we calculated in the elimination steps. Now you can write your matrix A as L into U; that is what is called an LU factorization of the matrix A. It means you find an upper triangular matrix and a lower triangular matrix and write A as the product of the lower triangular matrix and the upper triangular matrix, and the Gaussian elimination method gives such an LU factorization of a given matrix. This is just a remark here. In our next section we will be learning LU factorization, where we will learn some other methods to do this factorization; here I am just remarking that the Gaussian elimination method is also one of those methods. Now coming to the question of why we are putting the special word naive in the name naive Gaussian elimination method: well, there are two reasons for that. One is that the method may not work always, even if you have an invertible system. For instance, if you have a matrix A which is nicely invertible but a 1 1 is equal to 0, you cannot even start the method, right, because to define m 2 1 and m 3 1 we need a 1 1 to be not equal to 0, since it is sitting in the denominator. Therefore, if a 1 1 is 0, or in the second step if a 2 2 superscript 2 is 0, then also we cannot go ahead with the Gaussian elimination method; that is one problem with this method. And the other is that this method can increase the rounding error drastically.
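As a sketch of this remark, the following Python snippet stores the multipliers m i k in a unit lower triangular L while reducing A to U, and then checks the product L U entry by entry; the 3 by 3 matrix is again a hypothetical example, not one from the lecture:

```python
def naive_lu(A):
    """LU factorization via naive Gaussian elimination (no pivoting).
    Returns (L, U): L is unit lower triangular holding the multipliers
    m_ik, and U is the upper triangular eliminated matrix."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m                    # store the multiplier in L
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
L, U = naive_lu(A)
# multiply L and U back together; prod should equal A entry by entry
prod = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
print(prod)
```

Here L comes out as [[1, 0, 0], [2, 1, 0], [4, 3, 1]], that is, exactly the multipliers of the two elimination steps sitting below the unit diagonal.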
There are other ways to make the method work, in particular the partial pivoting idea, which will come in the next discussion, but as we have explained the Gaussian elimination method, these are the inbuilt problems in it; that is why we call this method the naive Gaussian elimination method. Let us give an example to get a feeling of how the naive Gaussian elimination method can increase the rounding error drastically and spoil the accuracy of the solution. For this let us take this system of three equations, where in the second equation you have the terms 2 by 3 x 2 and 1 by 3 x 3. Remember these two numbers, when written in the floating point representation, have infinitely many digits in the mantissa, right. Now we will try to calculate the solution of this system using the naive Gaussian elimination method and we will involve 4-digit rounding in our calculation. Remember from our previous lectures how to do calculations with floating point approximation, right: every time a single operation is done, you have to take the floating point approximation of the resulting number. That is how the calculations should be carried out here, with the floating point approximation using 4-digit rounding. Remember we also have to represent the matrix in 4-digit rounding, because at all levels whichever number we use, whether it is an input number or a calculated number, it has to go with 4-digit rounding only. Therefore, we will first write the given system in the 4-digit rounded form; well, all coefficients remain the same, there is no approximation involved other than these two terms. Just observe that this term has a very big rounding error, and after eliminating you get this system. Here you notice that you have 0.0001 appearing because of this rounding problem, and in the second step you can see that you are dividing this number by this number, which makes m 3 2 a very big number, and that results in a very big coefficient of x 3.
You can observe this; it itself gives us a feeling that something is going wrong here. Let us compute the solution now using the backward substitution phase, and the solution finally comes out to be this vector. Now what is the exact solution? Well, the exact solution is computed with infinite precision; this can be done by just keeping the terms in fractional form without doing any rounding, or you can simply do the rounding with a very high precision, like double precision, and then also you will get a solution pretty close to the exact one. So here the exact solution is given like this and our approximate solution is this; you can see that the components x 1 and x 2 are in no way close to the exact solution. So that is the danger of the rounding error in any method, in particular the naive Gaussian elimination method, which is very sensitive to rounding errors. What is the reason for this drastic amplification of the error? Well, I have already told you that this is precisely because you are trying to divide this term by this one, right. In the original system, that is, if you were making this calculation without any error, then this would have been 0, right, but because of the rounding error it is nonzero, and that amplified the error drastically. Otherwise the naive Gaussian elimination method would have failed at the second step itself, but now it went on and found a bad solution. How do we get rid of all these problems in the naive Gaussian elimination method? Well, that is by using an idea called partial pivoting, and we call the resulting method the modified Gaussian elimination method with partial pivoting. We can also do full pivoting, but that will be very expensive when compared to partial pivoting, and the accuracy that we achieve with full pivoting is not that great an improvement over partial pivoting. Also, once you understand the partial pivoting idea, you can understand full pivoting without much difficulty.
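The lecture's 3 by 3 example lives on the slides and is not reproduced in the transcript, so here is a minimal stand-in: a hypothetical 2 by 2 system with a tiny pivot eps equal to 0.00001, worked with simulated 4 significant digit rounding after every operation; it shows the same kind of drastic amplification:

```python
from math import floor, log10

def r4(x):
    """Round x to 4 significant digits (simulated 4-digit rounding)."""
    if x == 0.0:
        return 0.0
    return round(x, 3 - floor(log10(abs(x))))

# illustrative system (not the lecture's):
#   0.00001*x1 + x2 = 1
#         x1  + x2 = 2
# exact solution: x1 = 1.00001..., x2 = 0.99998...
eps = 1e-5

m21 = r4(1.0 / eps)              # huge multiplier, 100000, because the pivot is tiny
c22 = r4(1.0 - r4(m21 * 1.0))    # 1 - 100000 = -99999, rounds to -100000
rhs2 = r4(2.0 - r4(m21 * 1.0))   # 2 - 100000 = -99998, also rounds to -100000
x2 = r4(rhs2 / c22)              # 1.0 -- the informative last digit is lost
x1 = r4(r4(1.0 - x2) / eps)      # (1 - 1)/eps = 0.0, nowhere near 1.00001
print(x1, x2)                    # 0.0 1.0
```

The two rounded numbers -99999 and -99998 collapse to the same 4-digit value, so the back substitution produces x 1 = 0, a total loss of accuracy, exactly the kind of amplification the lecture describes.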
Therefore, in our course we will introduce only partial pivoting and we will not discuss the full pivoting technique. Let us go and see what is meant by partial pivoting. So you are given the system; again we will consider only a 3 by 3 system and try to understand the method, and generalizing it to any n cross n system is very easy and direct. The only difference with partial pivoting is that you look at the column in which you are going to do the elimination process. For instance, in step 1 you are going to do the elimination process for these two terms, right. Therefore, you take this column, find the maximum of the absolute values of these coefficients, and then whichever equation attains the maximum, you interchange that equation with the first equation; that is the idea. That is, take a 1 1, a 2 1 and a 3 1; say for instance these are 2, 4 and minus 4, then the absolute values will be 2, 4 and 4, and the maximum will therefore be 4. You can in fact see that this maximum, call it s 1, will never be 0; I leave it to you to think why, and the reason is that we are always working with an invertible matrix, right. Now, you pick the component at which the maximum is attained with the lowest index. In our example we have taken 2, 4 and minus 4 as the coefficients, so the absolute values are 2, 4 and 4 and the maximum is achieved by two of the coefficients. However, we will take only the one with the least index, that is, we will take a 2 1. This is just to make the algorithm precise; it does not matter which one you take, but for the algorithm's sake we will take the least index, and you have to interchange that kth row with the first row, and let us call the new system after interchanging with the superscript 1.
In this case, suppose the maximum is achieved at the second equation; then from the original system you push the second equation up to the first place and the first equation down to the second place, and now all the other elimination operations of step 1 go exactly as in the naive Gaussian elimination method, and we achieve the eliminated system where these terms become 0. Remember, this is the system after the pivoting is done and then the elimination process is carried out. Now, in order to go for step 2, you take these two elements and do a similar partial pivoting: you find the maximum of the absolute values of these two numbers, take the one which achieves the maximum at the least index, call it l, and that lth equation will now be interchanged with the second equation. It may be the second equation itself, in which case no swapping happens; if it is the third equation, then the third equation will go to the second place and the second to the third place. Then you call the new pivoted system like this, and again go for the elimination of this coefficient exactly as in the naive Gaussian elimination method and obtain the upper triangular system. In this way, the only difference between the modified Gaussian elimination and the naive Gaussian elimination is that we are doing this pivoting step as an extra step in each stage of the elimination process; that is the only difference between these two methods. Once you have the upper triangular system, you can go for the backward substitution. Let us take the example that we did previously; now you see, in the first step you have to take the maximum among 6, 2 and 1, so the maximum is achieved at the first equation itself. Therefore no interchange of equations happens, and the first step of the modified Gaussian elimination method leads to the same system as we obtained with the naive Gaussian elimination method.
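The pivoting step described above can be sketched in Python like this; the test matrix at the end, with a 1 1 equal to 0, is a hypothetical example chosen so that the naive method could not even start:

```python
def gauss_solve_pp(A, b):
    """Gaussian elimination with partial pivoting: before eliminating in
    column k, swap in the row (at or below k) with the largest |a_ik|."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # partial pivoting: row with maximum |a_ik|, lowest index on ties
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # backward substitution phase
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# a11 = 0, so naive elimination fails here, but pivoting handles it
# (solution is x1 = x2 = x3 = 1)
A = [[0.0, 2.0, 1.0],
     [1.0, 1.0, 1.0],
     [2.0, 1.0, 3.0]]
b = [3.0, 3.0, 6.0]
print(gauss_solve_pp(A, b))
```

Note that Python's max returns the first element attaining the maximum, which matches the rule of taking the least index when the maximum is achieved by more than one coefficient.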
Now you have to do the pivoting among these two equations by checking these two coefficients, and you can see that the maximum is now achieved at the third equation. Therefore, the third equation will go to the second place and the second equation will come to the third place, right. And once you do the elimination process, you are now going to divide 0.0001, because that came to this position, by 1.667, because that has gone to the second position, right; that makes the process well behaved and does not magnify the error drastically. And therefore your solution is now going to be x 1 equal to 2.602, x 2 equal to minus 3.801 and x 3 equal to minus 5.003. And you can compare: without pivoting, that is, in our previous example, we obtained this solution, and the exact solution is this. You can compare these three and see that the modified Gaussian elimination method in this example gave a much better approximation to the exact solution than the naive Gaussian elimination method. Therefore, the modified Gaussian elimination method will generally give you a better approximation to the solution of your original system when compared to the naive Gaussian elimination method. And this is all about the Gaussian elimination method; thank you for your attention.
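As a small stand-in for the slide example, consider a hypothetical 2 by 2 system, 0.00001 x 1 plus x 2 equal to 1 and x 1 plus x 2 equal to 2, under simulated 4 significant digit rounding; with the pivoting swap applied first, the multiplier is tiny and the computed solution lands next to the exact one, which is approximately x 1 = 1.00001, x 2 = 0.99999:

```python
from math import floor, log10

def r4(x):
    """Round x to 4 significant digits (simulated 4-digit rounding)."""
    if x == 0.0:
        return 0.0
    return round(x, 3 - floor(log10(abs(x))))

# illustrative system:  0.00001*x1 + x2 = 1,  x1 + x2 = 2
# partial pivoting swaps the rows, since |1| > |0.00001| in column 1:
#   x1 + x2 = 2,  0.00001*x1 + x2 = 1
eps = 1e-5

m21 = r4(eps / 1.0)              # multiplier is now tiny: 0.00001
c22 = r4(1.0 - r4(m21 * 1.0))    # r4(0.99999) = 1.0
rhs2 = r4(1.0 - r4(m21 * 2.0))   # r4(0.99998) = 1.0
x2 = r4(rhs2 / c22)              # 1.0
x1 = r4(2.0 - x2)                # 1.0, close to the exact 1.00001
print(x1, x2)                    # 1.0 1.0
```

Without the swap, the same 4-digit arithmetic divides by the tiny pivot 0.00001, the multiplier blows up to 100000, and the computed x 1 comes out as 0; one row interchange is all it takes to keep the rounding error from being amplified.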