numerical methods for linear systems. So far, we have studied direct methods. Direct methods give the exact solution of a linear system if no rounding error is involved in the computation. However, direct methods are often not preferred in practical applications because of their computational cost. We have seen in one of our lectures, in terms of the number of arithmetic operations involved, that direct methods are computationally expensive, especially when the number of equations in the linear system is large. An alternative choice is to go for iterative methods when the linear system is large. Iterative methods generate a sequence of vectors that is expected to converge to the exact solution of the linear system. In this way even iterative methods can give the exact solution, but it may take infinitely many steps to arrive at it. However, if you want only an approximate solution, then iterative methods are often a good choice.

In this lecture, we will start our discussion on iterative methods. We will study the simplest iterative method, called the Jacobi method. This method is named after the German mathematician Carl Gustav Jacob Jacobi. Now, if the given system is a diagonal system, then we all know that the linear system Ax = b can be solved very easily; we do not need any elimination process. However, if the given system is not diagonal, then what do we do? The idea behind the Jacobi method is to first write the coefficient matrix as A = D - C, where D is the diagonal matrix whose diagonal elements are precisely the diagonal elements of the coefficient matrix A, and -C = A - D. Once you have written A in this form, you can write your linear system Ax = b as (D - C)x = b, and that can be rewritten as Dx = Cx + b.
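The splitting just described can be sketched in a few lines of NumPy. The matrix below is a hypothetical example chosen for illustration only; it is not the system used later in the lecture.

```python
import numpy as np

# Hypothetical 3x3 coefficient matrix, for illustration only.
A = np.array([[3.0, 1.0, 1.0],
              [1.0, 4.0, 2.0],
              [2.0, 1.0, 5.0]])

D = np.diag(np.diag(A))   # diagonal part of A
C = D - A                 # chosen so that A = D - C; C has zero diagonal

# Check the splitting: A = D - C, and C carries only off-diagonal entries.
assert np.allclose(A, D - C)
assert np.allclose(np.diag(C), 0.0)
```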
Now, you see the left hand side involves only a diagonal matrix, but the system has not been reduced to a diagonal system. Why? Because the right hand side now involves the unknown vector x. Therefore, we have not gained anything by rewriting our original system Ax = b in this form; we have only separated the diagonal elements onto the left hand side and pushed all the other elements to the right hand side. Now, imagine that the right hand side were somehow known to us. Then the system would reduce to a diagonal system. But there is no way we can know this term, because knowing Cx is equivalent to knowing the exact solution itself.

Therefore, the idea is this: choose an arbitrary vector, say x_0, and plug that vector into the right hand side of the expression Dx = Cx + b. Then the right hand side is known to us, and we obtain a diagonal system that can be readily solved to get a vector x. But note that this vector will not be the solution of our original system. Why? Because we obtained it by plugging in an arbitrarily chosen vector. However, we can consider it as a first approximation to our solution, and we call this vector x_1. Remember, the exact solution of the original equation Ax = b is also the exact solution of the rewritten equation Dx = Cx + b. But since we are plugging an arbitrary vector into the right hand side, the solution of the resulting diagonal system will not be the solution of our original system; we consider it as a first approximation to the exact solution, and that is called x_1. Once you obtain x_1 by solving the system D x_1 = C x_0 + b, whose right hand side is known to us, you can plug that x_1 into the right hand side expression.
Now, instead of x_0 we plug in x_1, solve the diagonal system once again, and call the solution of that diagonal system x_2. Once you get x_2, plug the vector x_2 into the right hand side expression to get x_3, and so on. In general, assume you know the vector x_k for some k = 0, 1, 2, and so on. For instance, for k = 0 we chose the vector x_0 arbitrarily and obtained x_1; taking k = 1, you plug x_1 into the right hand side and get x_2, and so on. In general, at the k-th level you have obtained the vector x_k, and to go to the (k+1)-th level you plug x_k into the right hand side and solve the diagonal system D x_{k+1} = C x_k + b to get the vector x_{k+1}. In this way you generate a sequence: x_0 is arbitrarily chosen, from it you get x_1, plugging in x_1 you get x_2, and so on.

This process is called an iterative process, or iterative procedure. That is, you start with x_0 and you have an expression, say T(x), involving a vector; you plug x_0 into this expression to get x_1, plug x_1 in to get x_2, and so on. Now, if D is an invertible matrix, then you can write the iteration in a nicer way by multiplying both sides by D inverse: x_{k+1} = B x_k + c, where B denotes D^{-1} C and the vector c is nothing but D^{-1} b. This iterative procedure leads to a sequence x_k, called the iterative sequence, and this method of generating the iterative sequence is called the Jacobi method.
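The matrix-form iteration x_{k+1} = B x_k + c, with B = D^{-1} C and c = D^{-1} b, can be sketched as follows. The helper name and the test system are my own illustrations, assuming every diagonal entry of A is nonzero.

```python
import numpy as np

def jacobi_step(A, b, x):
    """One Jacobi step: x_new = B x + c, where B = D^{-1} C and c = D^{-1} b.

    A sketch assuming every diagonal entry of A is nonzero."""
    d = np.diag(A)
    C = np.diag(d) - A          # splitting A = D - C
    return (C @ x + b) / d      # applying D^{-1} divides row i by a_ii

# Hypothetical diagonally dominant system; Jacobi happens to converge on it.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 4.0])        # exact solution (1, 1)

x = np.zeros(2)                 # initial guess x_0
for _ in range(50):
    x = jacobi_step(A, b, x)
```

After 50 steps, `x` agrees with the exact solution (1, 1) to many digits, since the iterates of a convergent Jacobi sequence approach the solution geometrically.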
In general, an iterative method is one for which a sequence is generated in this form, x_{k+1} = T(x_k), where the function T is given to us; different choices of T lead to different methods. The function T is called the iteration function, and in the Jacobi method it is given by T(x) = Bx + c.

I hope we have understood the Jacobi method. Let us now try to understand it in a simpler way by looking at each equation of the system separately. What we did so far is the Jacobi method in matrix notation; now we will derive the Jacobi method coordinate-wise, just to have a clear picture of how the formulas look. For that, we will take a 3 by 3 system; the method can be derived in a similar way in any other dimension. Recall that the basic idea of the Jacobi method is to keep the diagonal elements on the left hand side and push all the off-diagonal terms to the right hand side. Thereby, in the first equation you keep the first term on the left hand side and push the off-diagonal terms to the right, writing the equation as a_11 x_1 = b_1 - a_12 x_2 - a_13 x_3. Similarly, in the second equation you keep the diagonal term on the left hand side and push the other two terms to the right, and in the third equation you keep the diagonal term, that is the third term, on the left and push the other two terms to the right. Then you divide both sides by the diagonal element; in the first equation you divide both sides by a_11. Remember, for this the diagonal elements must be nonzero, otherwise it is not possible for us to generate the Jacobi iteration. If a_11 is not equal to 0, then the first equation can be written as x_1 = (1/a_11)(b_1 - a_12 x_2 - a_13 x_3).
Similarly, all the other equations can be rewritten in the same way, and we get the system

x_1 = (1/a_11)(b_1 - a_12 x_2 - a_13 x_3),
x_2 = (1/a_22)(b_2 - a_21 x_1 - a_23 x_3),
x_3 = (1/a_33)(b_3 - a_31 x_1 - a_32 x_2).

Remember, this system is equivalent to the original system; we have only rewritten the terms, that is all. Therefore, the exact solution x of the original system is also the exact solution of this system. Keep this in mind, because we will use it quite often. So, at this level we have made no approximation; we have just rewritten the expressions in this form.

Where does the approximation come in? It comes when we choose an arbitrary vector, which we call the initial guess, plug that vector into the right hand side of these expressions, and obtain the corresponding values on the left hand side. In that way you only have an approximate value, which may be a bad approximation to the exact solution at the initial level. So you have x_0, you plug it into the right hand side to get x_1, then plug x_1 in to get x_2, and so on. This is the expression for the Jacobi iterative procedure in the case of a 3 by 3 system; similarly, you can derive it in any other dimension.

So, you get a sequence of vectors as the output of the Jacobi iterative procedure. Now, the next question is: will the Jacobi sequence converge? Remember, the Jacobi iterative sequence depends on two things: the system that is given to us and the initial guess that we have chosen. Therefore, the first question is: for a given non-singular linear system and a given initial guess, does the corresponding Jacobi iterative sequence converge? The next important question is: if the sequence converges, say x_k converges to x, is this vector the exact solution of our linear system? The second question is not very difficult to answer. Assume that the iterative sequence x_k converges to x.
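The coordinate-wise update above can be written with an explicit loop; note that every component of the new iterate is computed from the old iterate only, which is the defining feature of the Jacobi method. The function and the test system below are illustrative sketches of my own, not taken from the lecture.

```python
import numpy as np

def jacobi_componentwise(A, b, x, num_iters):
    """Coordinate-wise Jacobi: x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii.

    Every new component uses the OLD iterate only.
    Assumes all diagonal entries a_ii are nonzero."""
    n = len(b)
    x = np.asarray(x, dtype=float).copy()
    for _ in range(num_iters):
        x_new = np.empty(n)
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        x = x_new
    return x

# Hypothetical 3x3 diagonally dominant system, for illustration.
A = np.array([[5.0, 1.0, 1.0],
              [1.0, 4.0, 1.0],
              [1.0, 1.0, 3.0]])
b = np.array([7.0, 6.0, 5.0])   # exact solution (1, 1, 1)
x = jacobi_componentwise(A, b, np.zeros(3), 100)
```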
Then remember, the iterative sequence is defined by x_{k+1} = B x_k + c. I am just writing it in matrix notation; what we wrote before was the coordinate-wise notation. Now, if the sequence converges, then x_{k+1} converges to x, the term B x_k converges to B x, and c is not a sequence, so it remains c. So, if the sequence x_k converges to x, then from this equation you can see that the limit satisfies x = B x + c. This is precisely equivalent to satisfying our original system. Why? Recall that B is nothing but D^{-1} C and the vector c is nothing but D^{-1} b, where b is the right hand side vector. So x = B x + c can be written as D x - C x = b, that is, (D - C) x = b, and that is nothing but A x = b. Therefore, if the Jacobi iterative sequence converges, the limit will be the exact solution of our given linear system. Therefore, the only question is whether the sequence converges or not.

We will see two examples: in one example the corresponding Jacobi iterative sequence converges, and in the other the sequence actually diverges. Let us take the first system. To write the Jacobi iteration for it, remember, you have to keep the diagonal elements on the left hand side, push all the off-diagonal elements to the right hand side, and then divide both sides of each equation by its corresponding diagonal element. If you do that, you will get the expressions, and then you put the superscripts to define the iterative sequence. Let us compute a few iterations from some initial guess and see how the iterates come out. Before that, let us note the exact solution of this system, given approximately.
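The argument above, that a limit of the Jacobi sequence satisfies x = Bx + c and hence Ax = b, can also be checked numerically. The small system here is a hypothetical illustration of my own, not the lecture's example.

```python
import numpy as np

# Hypothetical strictly diagonally dominant system; Jacobi converges on it.
A = np.array([[5.0, 1.0], [2.0, 4.0]])
b = np.array([6.0, 6.0])

d = np.diag(A)
B = (np.diag(d) - A) / d[:, None]   # B = D^{-1} C, row i divided by a_ii
c = b / d                           # c = D^{-1} b

x = np.zeros(2)
for _ in range(100):
    x = B @ x + c

# The (numerical) limit is a fixed point of the iteration ...
assert np.allclose(x, B @ x + c)
# ... and a fixed point of x = Bx + c is exactly a solution of Ax = b.
assert np.allclose(A @ x, b)
```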
I am giving you the solution up to 6 significant digits just to compare it with our iterates. Now let us compute the iterations using the Jacobi iterative procedure for this system. As I said, we need an initial guess; we can take any vector as the initial guess, but here I have taken the trivial choice of the zero vector, which makes life easier at least for the first iteration. With that, you can see how the first iterate turns out. If you go back, you can see that in the first equation you put the zero vector, x_0 = (0, 0, 0); both off-diagonal terms become 0, and therefore the first component of x_1 is -1/3, which up to 6 significant digits I have written as -0.333333. Similarly, in the second equation, if you put the first and third components equal to 0, because that is our initial guess, you get the second component of x_1 as 1/4, and in the third equation, putting the first and second components equal to 0, you get 0. So the first iterate is (-1/3, 1/4, 0).

Once you get x_1, the next step is to get x_2: plug the vector x_1 into the right hand side of the Jacobi iterative formula. Once you get x_2, plug it into the right hand side to get x_3; once you get x_3, plug it in to get x_4; then x_5, and so on. Mathematically this process goes on indefinitely and generates an infinite sequence x_k, but on a computer you cannot go on doing this forever; we have to stop the process at some point. We will see later how to stop the procedure, but in this example I have stopped my computation at the fifth iteration.
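The lecture defers stopping criteria to later. One common practical rule, offered here as a hedged sketch of my own rather than the lecture's prescription, is to stop when successive iterates agree to within a tolerance:

```python
import numpy as np

def jacobi_solve(A, b, x0, tol=1e-6, max_iters=1000):
    """Jacobi iteration, stopped when successive iterates differ by less
    than `tol` in the max norm (one common practical stopping rule).

    Assumes all diagonal entries of A are nonzero."""
    d = np.diag(A)
    B = (np.diag(d) - A) / d[:, None]   # B = D^{-1} C
    c = b / d                           # c = D^{-1} b
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iters + 1):
        x_new = B @ x + c
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, k             # converged by this criterion
        x = x_new
    return x, max_iters                 # tolerance not reached

# Hypothetical diagonally dominant system, for illustration.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 4.0])                # exact solution (1, 1)
x, iters = jacobi_solve(A, b, np.zeros(2))
```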
Now let us observe coordinate-wise. In the first coordinate we got -0.3333 in the first iteration, then the subsequent values in the second, third, fourth, and fifth; compare them with the value we are supposed to get up to 6 significant digits. Looking at the first coordinate of the iterates, you can see that we go closer and closer to what we want as we keep computing iterations; that is what we mean by saying that we seem to be converging to the exact solution. The same thing happens with the second and third coordinates: as we go on computing the iterative sequence, each coordinate goes closer and closer to the corresponding coordinate of the exact solution. So, the Jacobi method seems to converge for this system and for the initial guess we have taken.

Next we go to another system. In this system we have simply interchanged the first and second equations of our previous system. Mathematically we have not changed the system, because we have only interchanged two equations; therefore, the exact solution of the previous system is the same as the exact solution of the present system, and recall that it is given up to 6 significant digits. Now, let us write the Jacobi iterative procedure for this system and see numerically whether the sequence converges. When you write the Jacobi iteration for this system, you can see that the diagonal element of the first equation has changed: what was the diagonal element of the first equation in the previous system now sits in the second equation, and similarly the diagonal element of the second equation has also changed.
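The effect of interchanging equations can be reproduced on a small hypothetical two-equation system (my own example, not the lecture's): the exact solution is unchanged, but the diagonal entries change, and Jacobi can go from converging to diverging.

```python
import numpy as np

def jacobi_iterate(A, b, x, num_iters):
    """Run num_iters Jacobi steps x <- D^{-1}(C x + b), with A = D - C."""
    d = np.diag(A)
    C = np.diag(d) - A
    for _ in range(num_iters):
        x = (C @ x + b) / d
    return x

# Hypothetical system with a diagonally dominant ordering: Jacobi converges.
A1 = np.array([[4.0, 1.0], [1.0, 3.0]])
b1 = np.array([5.0, 4.0])           # exact solution (1, 1)

# Same two equations with the rows interchanged: same exact solution,
# but the diagonal entries change and the Jacobi iterates blow up.
A2 = A1[::-1].copy()
b2 = b1[::-1].copy()

x_conv = jacobi_iterate(A1, b1, np.zeros(2), 50)   # approaches (1, 1)
x_div = jacobi_iterate(A2, b2, np.zeros(2), 20)    # grows without bound
```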
Previously one coefficient was the diagonal element; now, because we have interchanged the equations, a different coefficient plays that role. Therefore, we now have different expressions for the Jacobi iteration: the system is the same mathematically, but the Jacobi iteration is different because the positions of the equations have changed. Let us compute the iterations. For that we need an initial guess; again I choose the zero vector. With that, the first iterate x_1 is (1, -2, 0); that is very clear. Now I plug x_1 into the right hand side of the expression to get x_2, which is given by the next vector. Once you get x_2, plug it in to get x_3, plug x_3 in to get x_4, and so on. Now observe the first coordinate: it started at 1, became 9 in the second iteration, became 33 in the third iteration, and it keeps growing. That gives us the feeling that our iteration is going away from the exact value, and the same thing happens in the second and third coordinates. This clearly shows that we are going away from the exact solution, and it suggests that the Jacobi iterative sequence for this system, started from the zero vector as initial guess, diverges.

So that answers our earlier question: does the Jacobi sequence always converge? The answer is no. That also raises a good, challenging question: when does the iterative sequence converge? Obviously, that depends on the system we are working with and also on the initial guess. We will see that for some particular systems the Jacobi method converges irrespective of the initial guess; we will discuss this result in the next class. Thank you for your attention.