We are considering iterative methods for solving a system of linear equations: n equations in n unknowns. Last time we found a sufficient condition for convergence of the Jacobi method. Today we are going to look at the Gauss-Seidel method, and we will show that the sufficient condition we obtain for the Gauss-Seidel method is satisfied whenever the sufficient condition for the Jacobi method is satisfied; the condition we get is that the coefficient matrix should be strictly diagonally dominant by row. In fact, if the matrix is strictly diagonally dominant by column, the result is also true, but I will not prove that. So we will first consider the Gauss-Seidel method and obtain a sufficient condition for its convergence, then compare it with the sufficient condition obtained yesterday for convergence of the Jacobi iterates. Then we will look at a specific case: if the coefficient matrix A happens to be upper triangular, we will show that for a system of size n, after n iterates you obtain the exact solution. Of course, if the system is upper triangular we do not need these iterative methods, because we can do back substitution; this is just to illustrate that if you apply the Jacobi or Gauss-Seidel iterates, then in a finite number of iterates you obtain the exact solution. Time permitting, we may also solve some more problems based on Newton's method, the secant method, or fixed point iteration for the solution of a non-linear equation. So this is the plan of today's lecture. Now let us look at the Gauss-Seidel iterates. We are considering n equations in n unknowns; the exact solution satisfies A x = b, where the b_i are known and the coefficient matrix A = (a_ij) is known.
So you have x_i = (b_i - Σ_{j≠i} a_ij x_j) / a_ii. Our assumption is that the coefficient matrix A is invertible and all the diagonal entries are non-zero, which is why we can divide by a_ii. We split the sum over j ≠ i as Σ_{j=1}^{i-1} plus Σ_{j=i+1}^{n}, with the understanding that the first sum is absent when i = 1 and the second is absent when i = n. So this is the equation satisfied by the exact solution. Now we define the iterative method; we had done it yesterday already, but let me recall. The k-th Gauss-Seidel iterate is x_i^(k) = (b_i - Σ_{j=1}^{i-1} a_ij x_j^(k) - Σ_{j=i+1}^{n} a_ij x_j^(k-1)) / a_ii. We proceed systematically: x_1^(k), x_2^(k), ..., x_{i-1}^(k) have already been calculated, so these most recent values are used in the first sum, while for x_{i+1} up to x_n we must use the values from the (k-1)-th iterate. Subtracting this from the equation for the exact solution, the b_i cancel and we get x_i - x_i^(k) = - Σ_{j=1}^{i-1} (a_ij / a_ii)(x_j - x_j^(k)) - Σ_{j=i+1}^{n} (a_ij / a_ii)(x_j - x_j^(k-1)). Writing E_i^(k) = x_i - x_i^(k) and taking the modulus of both sides, |E_i^(k)| ≤ α_i ||E^(k)||_∞ + β_i ||E^(k-1)||_∞, where α_i = Σ_{j=1}^{i-1} |a_ij| / |a_ii| and β_i = Σ_{j=i+1}^{n} |a_ij| / |a_ii|, with α_1 defined to be 0 and β_n defined to be 0; here we used |E_j^(k)| ≤ ||E^(k)||_∞ and |E_j^(k-1)| ≤ ||E^(k-1)||_∞. This is the relation we obtain for the error in the case of the Gauss-Seidel iterates.
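The iteration just defined can be written out in a few lines. A minimal sketch (the function name and the fixed sweep count are my choices for illustration, not from the lecture):

```python
import numpy as np

def gauss_seidel(A, b, x0, num_sweeps):
    """Gauss-Seidel iteration:
    x_i^(k) = (b_i - sum_{j<i} a_ij x_j^(k) - sum_{j>i} a_ij x_j^(k-1)) / a_ii.
    Components updated earlier in a sweep are reused immediately."""
    n = len(b)
    x = np.array(x0, dtype=float)
    for _ in range(num_sweeps):
        for i in range(n):
            s_new = sum(A[i][j] * x[j] for j in range(i))         # x_j^(k), already updated
            s_old = sum(A[i][j] * x[j] for j in range(i + 1, n))  # x_j^(k-1), not yet touched
            x[i] = (b[i] - s_new - s_old) / A[i][i]
    return x
```

Because the update is done in place, reading `x[j]` for j < i automatically picks up the new values and for j > i the old ones, which is exactly the distinction between this method and the Jacobi method.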
So things are a little more complicated than in the Jacobi method. What we have is |E_i^(k)|, the error in the i-th component of the k-th iterate, and on the right hand side there are terms involving both ||E^(k)||_∞ and ||E^(k-1)||_∞. Our aim is to find a constant such that ||E^(k)||_∞ is at most that constant times ||E^(k-1)||_∞; that is, the maximum error in the k-th iterate should be at most a constant times the maximum error in the (k-1)-th iterate. If we can obtain such a relation, then applying it successively gives the maximum error in the k-th iterate as at most the constant raised to k times the error at the 0-th stage. Call that constant η: we want ||E^(k)||_∞ ≤ η^k ||E^(0)||_∞, where E^(0) is the starting error. That will give us the condition that if η < 1 then we have convergence: the error tends to 0. Now, we have the relation |E_i^(k)| ≤ α_i ||E^(k)||_∞ + β_i ||E^(k-1)||_∞ for each i from 1 to n. The norm ||E^(k)||_∞, the maximum of |E_i^(k)| over 1 ≤ i ≤ n, is attained for some value of i; suppose it is attained at i = m. Then we use that particular inequality to obtain the desired estimate: ||E^(k)||_∞ = |E_m^(k)| ≤ α_m ||E^(k)||_∞ + β_m ||E^(k-1)||_∞.
Take the α_m term to the left hand side, and you get ||E^(k)||_∞ ≤ (β_m / (1 - α_m)) ||E^(k-1)||_∞. We will be assuming α_m < 1, so that 1 - α_m > 0; this ensures we are not dividing by zero and that the inequality is preserved when we divide. So we have obtained the estimate ||E^(k)||_∞ ≤ (β_m / (1 - α_m)) ||E^(k-1)||_∞, where m is the index at which ||E^(k)||_∞ = |E_m^(k)|. Now, E_m^(k) is the error in the m-th component: |E_m^(k)| = |x_m - x_m^(k)|, where x is the exact solution. We can compute x_m^(k), but not x_m; if we knew the exact solution, we would not need to do any of this. For the analysis it is fine to say the norm is attained at i = m, but in the bound β_m / (1 - α_m) we do not know what m is. So the best we can do is take the maximum over all indices: define η = max_{1≤i≤n} β_i / (1 - α_i). What I want is ||E^(k)||_∞ ≤ η ||E^(k-1)||_∞, and then I put the condition that η < 1, which will give us convergence. I do not know what m is, but whatever it is, with η taken as the maximum of these quotients we have β_m / (1 - α_m) ≤ η.
So we have ||E^(k)||_∞ ≤ η ||E^(k-1)||_∞, and likewise ||E^(k-1)||_∞ ≤ η ||E^(k-2)||_∞. Continuing this argument, ||E^(k)||_∞ ≤ η^k ||E^(0)||_∞. So if η < 1, that guarantees ||E^(k)||_∞ → 0 as k → ∞. This is a sufficient condition for convergence in the case of the Gauss-Seidel iterates. Next we want to compare it with the sufficient condition obtained for the Jacobi iterates. For the Jacobi iterates the condition was μ < 1, where μ = max_{1≤i≤n} Σ_{j=1, j≠i}^{n} |a_ij| / |a_ii|. For the Gauss-Seidel method we need η < 1, where η is defined in terms of the α_i and β_i, and our claim is that μ < 1 implies η < 1; in fact, we will show η ≤ μ. Observe that α_i = Σ_{j=1}^{i-1} |a_ij| / |a_ii| and β_i = Σ_{j=i+1}^{n} |a_ij| / |a_ii|, so α_i + β_i is nothing but Σ_{j≠i} |a_ij| / |a_ii|, and hence μ = max_{1≤i≤n} (α_i + β_i), while η = max_{1≤i≤n} β_i / (1 - α_i). So what we will show is that (α_i + β_i) - β_i / (1 - α_i) ≥ 0, or equivalently β_i / (1 - α_i) ≤ α_i + β_i, and the proof is going to be straightforward.
Consider (α_i + β_i) - β_i / (1 - α_i). Putting everything over the denominator 1 - α_i, the β_i and -β_i terms cancel and you are left with (α_i - α_i² - α_i β_i) / (1 - α_i), which is nothing but α_i (1 - (α_i + β_i)) / (1 - α_i). Since μ = max (α_i + β_i), we have 1 - (α_i + β_i) ≥ 1 - μ, and μ < 1 gives 1 - μ > 0. Also, μ < 1 implies α_i ≤ α_i + β_i < 1, so 1 - α_i > 0. Hence (α_i + β_i) - β_i / (1 - α_i) ≥ 0 for every i, which proves η ≤ μ < 1. Thus if μ, the maximum of these row sums, is less than 1, then we have convergence in both the Jacobi method and the Gauss-Seidel method; the condition μ < 1 means precisely that A is strictly diagonally dominant by row. So now we have a sufficient condition which guarantees convergence of both the Jacobi iterates and the Gauss-Seidel iterates. As I mentioned before, we are not claiming that if the Jacobi iterates converge then the Gauss-Seidel iterates have to converge. Because the Gauss-Seidel iterates use the most recent value of the iterate, they are likely to give better results, but one can construct a pathological example where the Jacobi method converges and the Gauss-Seidel method does not. What we have compared are sufficient conditions: if the sufficient condition for the Jacobi method is satisfied, then the sufficient condition for the Gauss-Seidel method is also satisfied.
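The quantities μ and η in this comparison are easy to compute for a given matrix. A sketch (the helper name `row_ratios` is made up; it assumes every α_i < 1, which holds whenever μ < 1):

```python
import numpy as np

def row_ratios(A):
    """Compute mu (Jacobi sufficient condition) and eta (Gauss-Seidel
    sufficient condition) from alpha_i = sum_{j<i}|a_ij|/|a_ii| and
    beta_i = sum_{j>i}|a_ij|/|a_ii|."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    alphas = [sum(abs(A[i, j]) for j in range(i)) / abs(A[i, i]) for i in range(n)]
    betas = [sum(abs(A[i, j]) for j in range(i + 1, n)) / abs(A[i, i]) for i in range(n)]
    mu = max(a + b for a, b in zip(alphas, betas))
    eta = max(b / (1 - a) for a, b in zip(alphas, betas))
    return mu, eta
```

For the tridiagonal 4 by 4 system used later in the lecture, this gives μ = 1/2 and η = 1/3, illustrating η ≤ μ < 1.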
So, if A is strictly diagonally dominant by row, then both the Jacobi iterates and the Gauss-Seidel iterates converge to the exact solution. Let us look at a simple example: a 4 by 4 system, for which we will compute the first few iterates of both methods. Here is the system: 4 equations in 4 unknowns, with diagonal entries 4 and super- and sub-diagonal entries -1. So it is a tridiagonal system, and it is diagonally dominant. The right hand side is chosen so that the exact solution is (1, 1, 1, 1), and we choose the initial iterate x^(0) to be the zero vector. For the Jacobi iterates, the first iterate is x_i^(1) = b_i / 4 for i = 1, 2, 3, 4, since all the starting values are 0. For the second Jacobi iterate, look at the first equation: x_1^(2) = (3 + x_2^(1)) / 4. Then x_2^(2) = (2 + x_1^(1) + x_3^(1)) / 4, dividing by the diagonal entry 4. Notice that when we calculate x_2^(2) we have already calculated x_1^(2); if instead of x_1^(1) we were to use the recent value x_1^(2), that would be the Gauss-Seidel method. For Jacobi we continue with x_3^(2) = (2 + x_2^(1) + x_4^(1)) / 4: even though x_1^(2) is available, we do not use it. In the Gauss-Seidel iterates, by contrast, already in the first iterate the first component is x_1^(1) = b_1 / 4, but when you calculate x_2^(1) you get x_2^(1) = (b_2 + x_1^(1) + x_3^(0)) / 4.
So the most recent value of x_1, namely x_1^(1), is being used. For Jacobi all the starting values were 0, so x_2^(1) was b_2 / 4, whereas here x_2^(1) = (b_2 + x_1^(1)) / 4, because x_3^(0) = 0. When we come to x_3^(1), again x_2^(1) is available, whereas x_4 we have not yet reached, so we are forced to use x_4^(0). This is the difference between the Gauss-Seidel and Jacobi iterates. Let us calculate the iterates. For the Jacobi iterates, starting from x^(0) = 0: x_1^(1) = 3/4, x_2^(1) = 2/4 = 1/2, x_3^(1) = 1/2, and x_4^(1) = 3/4; you are in effect replacing the coefficient matrix by its diagonal. For the second iterate, x_1^(2) = (b_1 + x_2^(1)) / 4 = (3 + 1/2) / 4 = 7/8. The second component of the second iterate is x_2^(2) = (2 + x_1^(1) + x_3^(1)) / 4 = (2 + 3/4 + 1/2) / 4 = 13/16; you can check that x_3^(2) = 13/16 as well, and x_4^(2) = 7/8. So for the Jacobi method we start with x^(0) = (0, 0, 0, 0), the next iterate is x^(1) = (3/4, 1/2, 1/2, 3/4), and the one after that is x^(2) = (7/8, 13/16, 13/16, 7/8), while the exact solution is (1, 1, 1, 1). One can continue in this way.
Now let us look at the Gauss-Seidel iterates. For Gauss-Seidel, x_1^(1) = 3/4; when you calculate x_2^(1) you use the recent value x_1^(1), while x_3^(0) you have to take to be 0, so x_2^(1) = (2 + 3/4) / 4 = 11/16. For x_3^(1), again x_2^(1) has already been calculated, so you use that value, while x_4^(0) = 0, giving x_3^(1) = (2 + 11/16) / 4 = 43/64; similarly x_4^(1) = (3 + 43/64) / 4 = 235/256. So in the Gauss-Seidel method the first iterate is x^(1) = (3/4, 11/16, 43/64, 235/256). It looks like the Gauss-Seidel iterates converge faster: the exact solution is (1, 1, 1, 1), and using the most recent values makes the error smaller. This was just an illustrative example, and as I mentioned, whatever we consider in this course is not meant for hand computation; it all makes sense when you solve big systems on a computer. It is not difficult to write a program for the Jacobi or Gauss-Seidel iterates: you give a stopping criterion and obtain an approximate solution, in contrast to a direct method. If n is big, one may have to resort to iterative methods because a direct method may be too expensive. Now I want to quickly show that if the coefficient matrix is upper triangular, then in n iterates you reach the exact solution. So consider A x = b and the Jacobi method, with A assumed to be upper triangular. The Jacobi formula is x_i^(k) = (b_i - Σ_{j≠i} a_ij x_j^(k-1)) / a_ii for i = 1, ..., n, and A upper triangular means a_ij = 0 whenever i > j.
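The side-by-side comparison above can be reproduced in a few lines. A sketch using the system and starting vector from the example (the step-function names are my own):

```python
import numpy as np

def jacobi_step(A, b, x):
    """One Jacobi sweep: every component uses only the previous iterate."""
    D = np.diag(A)
    return (b - (A - np.diag(D)) @ x) / D

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: earlier components are reused immediately."""
    x = np.array(x, dtype=float)
    for i in range(len(x)):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[4.0, -1, 0, 0], [-1, 4, -1, 0], [0, -1, 4, -1], [0, 0, -1, 4]])
b = np.array([3.0, 2, 2, 3])           # chosen so the exact solution is (1, 1, 1, 1)
xj = np.zeros(4)
xg = np.zeros(4)
for k in range(1, 6):
    xj, xg = jacobi_step(A, b, xj), gauss_seidel_step(A, b, xg)
    print(k, np.max(np.abs(xj - 1)), np.max(np.abs(xg - 1)))
```

The printed maximum errors shrink for both methods, with the Gauss-Seidel column shrinking faster, matching the hand computation (Jacobi x^(1) = (3/4, 1/2, 1/2, 3/4), Gauss-Seidel x^(1) = (3/4, 11/16, 43/64, 235/256)).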
Whatever initial approximation we take, when A is upper triangular the first equation is a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1, the second equation is a_22 x_2 + ... + a_2n x_n = b_2, and the last equation is a_nn x_n = b_n. So in the last equation there are no x_1, x_2, ..., x_{n-1}: whatever your initial approximation is, it just does not come into the picture. Putting i = n in the Jacobi formula gives x_n^(1) = b_n / a_nn, so for the n-th component you have already obtained the exact value. We start with some approximation (x_1^(0), x_2^(0), ..., x_n^(0)) and calculate the Jacobi iterate (x_1^(1), x_2^(1), ..., x_n^(1)), of which the last component x_n^(1) is exact. When you go to the next iterate, since the n-th component is already exact, the (n-1)-th component becomes exact: x_{n-1}^(2) = (b_{n-1} - a_{n-1,n} x_n^(1)) / a_{n-1,n-1}. So in the second iterate the (n-1)-th component is also exact, and continuing like this, after n iterates the n-th iterate equals the exact solution. This completes our discussion of iterative solution of systems of linear equations: so far we have looked at the solution of non-linear equations and now at iterative solution of systems of linear equations.
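This finite-termination behaviour is easy to check numerically. A small sketch with a 3 by 3 upper triangular system (the matrix and starting vector are made up for illustration; the reason it works is that the Jacobi iteration matrix -D⁻¹(A - D) is strictly upper triangular, hence nilpotent):

```python
import numpy as np

def jacobi_step(A, b, x):
    # one Jacobi sweep
    D = np.diag(A)
    return (b - (A - np.diag(D)) @ x) / D

A = np.array([[2.0, 1, 3], [0, 5, 4], [0, 0, 3]])   # upper triangular
x_exact = np.array([1.0, 2, 3])
b = A @ x_exact
x = np.array([7.0, -4, 2])          # arbitrary starting vector
for _ in range(3):                   # n = 3 sweeps for a 3 x 3 system
    x = jacobi_step(A, b, x)
print(x)                             # agrees with (1, 2, 3)
```

After the first sweep the last component is exact, after the second the last two are, and after n sweeps the whole vector is exact, regardless of the starting vector.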
Our next topic is going to be approximate solution of differential equations, but before we start it, I want to solve some problems based on Newton's method, the secant method, and the bisection method, together with some results about norms and some things we had left to be done in the tutorials. So let us look at the first problem. Suppose g : [a, b] → [a, b] is continuously differentiable and M = max_{x ∈ [a,b]} |g'(x)| < 1. Under this condition g has a unique fixed point in the interval [a, b]; call that fixed point c. We look at the Picard iterates: start with x_0 in [a, b] and define x_{n+1} = g(x_n). We want to show that |x_{n+1} - c| ≤ (M / (1 - M)) |x_{n+1} - x_n|. What is the importance of this problem? When we proved the convergence of Picard's iteration we had |x_{n+1} - c| ≤ M |x_n - c|, and continuing the argument, |x_{n+1} - c| ≤ M^{n+1} |x_0 - c|; since M < 1, M^{n+1} → 0, which proves x_n → c. So I know that if these conditions are satisfied, the Picard iterates converge. Now, in practice I want to know where to stop: if I require the error to be less than, say, 10^{-6}, how many iterates should I calculate? I cannot calculate |x_n - c|, because c is the fixed point which we do not know; it is exactly what we are trying to approximate.
So, unless I am considering an illustrative example, I cannot calculate |c - x_n|, but what I can compute is the distance between two successive iterates, |x_{n+1} - x_n|. Now, if |x_{n+1} - x_n| is small, does that mean |x_{n+1} - c| is also small? The answer is yes, and that is exactly what this result shows: |x_{n+1} - c| ≤ (M / (1 - M)) |x_{n+1} - x_n|, with M < 1. For example, if M = 1/2, then M / (1 - M) = 1, so if |x_{n+1} - x_n| < 10^{-6}, that also guarantees |x_{n+1} - c| < 10^{-6}. Since |x_{n+1} - x_n| is something one can compute, this gives a stopping criterion. The proof of this result is not difficult, so let us prove it. Our assumptions are: g is continuously differentiable on [a, b], c is the unique fixed point with g(c) = c, and |g'(x)| ≤ M < 1. Then x_{n+1} - c = g(x_n) - g(c), and by the mean value theorem this equals g'(d_n)(x_n - c), where d_n lies between x_n and c; since |g'(d_n)| ≤ M, we have |x_{n+1} - c| ≤ M |x_n - c|. Now add and subtract x_{n+1}: write x_n - c = (x_n - x_{n+1}) + (x_{n+1} - c) and use the triangle inequality.
So |x_{n+1} - c| ≤ M |x_n - x_{n+1}| + M |x_{n+1} - c|; take the last term to the other side and you get |x_{n+1} - c| ≤ (M / (1 - M)) |x_n - x_{n+1}|. The right hand side involves the difference of successive iterates, the left hand side is the error in the (n+1)-th iterate, and we have related the two. The next problem is to find how big n should be chosen in a particular situation. Let c be the smallest positive root of f(x) = 20x³ - 20x² - 25x + 4. Calculating a zero of a function f can be related to calculating a fixed point of another function: define g(x) = x³ - x² - x/4 + 1/5; then f(c) = 0 if and only if g(c) = c, which we are going to show. Take x_0 = 0 and x_{n+1} = g(x_n); these are our Picard iterates. We want to find n such that |c - x_n| < 10^{-3}. We will first look at where the zeros of f are situated: it is a cubic equation, so it has 3 roots, and we look at the smallest positive root, whose approximation we are considering. Then we will show that f(c) = 0 if and only if g(c) = c, and finally find n which guarantees that the error is less than 10^{-3}. Writing the 25x in f as 5x plus 20x, you can check that f(x) / 20 = g(x) - x: dividing f(x) by 20 gives x³ - x² - 5x/4 + 1/5, and subtracting x from g(x) gives the same thing.
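The stopping bound just proved can be tried out numerically. A sketch using g(x) = cos x on [0, 1] as an illustrative contraction (this choice of g, the starting point, and the iteration count are mine, not from the problem):

```python
import math

g = math.cos                      # an illustrative contraction on [0, 1]
M = math.sin(1.0)                 # max |g'(x)| = |-sin x| on [0, 1], which is < 1
c = 0.7390851332151607            # its fixed point, used here only for checking

x = 0.5
for _ in range(30):
    x_new = g(x)
    bound = M / (1 - M) * abs(x_new - x)   # computable a posteriori bound
    assert abs(x_new - c) <= bound          # the true error never exceeds it
    x = x_new
print(abs(x - c))                 # actual error after 30 iterates
```

The quantity `bound` uses only computable data, so in practice one stops as soon as it drops below the required tolerance.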
So g(x) = f(x)/20 + x, which gives f(c) = 0 if and only if g(c) = c. Now let us look at how the zeros of f are situated. f is a cubic polynomial, so it has 3 roots: either all three are real, or one is real and there is a pair of complex conjugate roots. If the values of f at two points are of opposite signs, then there is at least one zero between them (there can be more than one). So we will try to find 3 intervals in each of which f has a zero, by looking at the values of f at -1, 0, 1 and 2. You can check that f(-1) = -11 and f(0) = 4; these are of opposite sign, so f has at least one root in the interval (-1, 0). Next, f(1) = -21, so there is at least one root in (0, 1); and f(2) = 34, so f(1) and f(2) are of opposite signs and there has to be at least one root in (1, 2). Now, f can have at most 3 roots, so each interval contains exactly one, and we concentrate on c ∈ (0, 1), the smallest positive root of our function f. For the fixed point iteration we need to calculate the maximum of |g'(x)| over the appropriate interval, in our case (0, 1), because our sufficient condition gives |x_n - c| ≤ M^n |x_0 - c| with M = max |g'(x)|. g is a polynomial function, so we can calculate its derivative. To find the absolute maximum of a continuous function, you compare the values at the two end points and at the critical points.
The critical points are those points at which the derivative either vanishes or does not exist. In our case g' is a polynomial, so we only have to look at the points where its derivative vanishes. Hopefully one gets only finitely many such points; one compares the values at those points and at the end points and decides which is the absolute maximum (and likewise the absolute minimum). One need not consider a second derivative test, which tells you only about local maxima and minima. So we look at the values of g' at the two end points and at the points where (g')' = g'' vanishes. We have g(x) = x³ - x² - x/4 + 1/5, so g'(x) = 3x² - 2x - 1/4 and g''(x) = 6x - 2, which is 0 when x = 1/3. So we compare the values of g' at the end points 0 and 1 and at the critical point 1/3: g'(0) = -1/4, g'(1/3) = -7/12, and g'(1) = 3/4, which gives M = 3/4, and then |c - x_n| ≤ (3/4)^n |c - x_0|. One thing here: we do not know c, and x_0 = 0, but we know c lies in (0, 1), so I dominate |c - x_0| = |c| by 1, giving |c - x_n| ≤ (3/4)^n. For this to be less than 10^{-3}, n should be such that (4/3)^n > 10³, and the smallest such n is 25. In our next lecture, we are going to consider some more problems and then we will start the new topic, approximate solution of differential equations. So, thank you.
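The whole computation in this problem can be checked in a few lines; a sketch following the derivation above (the bracketing values, the constant M, the iteration count, and the iteration itself):

```python
import math

def f(x):
    return 20 * x**3 - 20 * x**2 - 25 * x + 4

def g(x):
    return x**3 - x**2 - x / 4 + 1 / 5      # g(x) = f(x)/20 + x

def g_prime(x):
    return 3 * x**2 - 2 * x - 1 / 4

# sign changes bracket the three real roots
print(f(-1), f(0), f(1), f(2))               # -11, 4, -21, 34

# M = max |g'(x)| on [0, 1]: end points plus the critical point x = 1/3
M = max(abs(g_prime(t)) for t in (0, 1 / 3, 1))
# smallest n with (3/4)^n < 1e-3, i.e. (4/3)^n > 10^3
n = math.ceil(3 * math.log(10) / math.log(4 / 3))

x = 0.0                                       # Picard iterates from x_0 = 0
for _ in range(n):
    x = g(x)
print(M, n, x, f(x))                          # M = 3/4, n = 25; f(x) is tiny
```

In practice the actual error after 25 iterates is far below 10^{-3}, because M^n is only a worst-case bound; the local contraction factor |g'(c)| is smaller than 3/4.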