Hi, we are discussing iterative methods for non-singular linear systems. In the last class, we discussed an iterative method called the Jacobi method and we saw two examples. In the first example, the Jacobi iterative sequence converged towards the exact solution, while in the second example the Jacobi iterates moved away from the exact solution. That raises an interesting question: when does the Jacobi iterative sequence converge? In this lecture, we will try to answer this question. We start with the definition of the error involved in the kth iterate of an iterative method. This applies not only to the Jacobi method but to any iterative method: if x_k is the kth computed iterate, then the error in x_k relative to the exact solution x is defined as x − x_k, and we use the notation e_k for this quantity. Recall that in the last class we derived the iterative sequence of the Jacobi method in the form x_{k+1} = B x_k + c, where B is an n × n matrix and c is an n-dimensional vector; we derived the expressions for B and c there. The same equation is also satisfied by the exact solution of the linear system Ax = b, that is, x = Bx + c. Why? Because we obtained this form simply by rewriting Ax = b: we did not change the system, we only rearranged it, and then generated the terms of the iterative sequence by plugging a known vector into the right-hand side. Therefore the exact solution satisfies the same fixed-point equation, and the error in the kth iterate can be written by replacing x with Bx + c and x_k with its formula B x_{k−1} + c, giving e_k = (Bx + c) − (B x_{k−1} + c).
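The fixed-point form x_{k+1} = B x_k + c can be sketched in code. The following is a minimal illustration, not the lecture's own program: the function name jacobi_step and the 2 × 2 system are my own choices for demonstration, with B = −D⁻¹(A − D) and c = D⁻¹b, where D is the diagonal part of A.

```python
import numpy as np

def jacobi_step(A, b, x):
    """One Jacobi update x -> B x + c, with B = -D^{-1}(A - D) and c = D^{-1} b."""
    D = np.diag(np.diag(A))          # D = diagonal part of A
    B = -np.linalg.solve(D, A - D)   # Jacobi iteration matrix B
    c = np.linalg.solve(D, b)        # constant vector c
    return B @ x + c

# Hypothetical diagonally dominant 2x2 system with exact solution [1, 1]:
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([5.0, 7.0])
x = np.zeros(2)
for _ in range(50):
    x = jacobi_step(A, b, x)
# x is now very close to the exact solution [1, 1]
```

In practice one would compute B and c once outside the loop; recomputing them each step keeps the correspondence with the formula explicit.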
Since B is a matrix, this simplifies to e_k = B(x − x_{k−1}), and x − x_{k−1} is by definition the error in the (k−1)th iterate. Therefore e_k = B e_{k−1}: the error in the kth iterate is B times the error in the (k−1)th iterate, or equivalently e_{k+1} = B e_k. Now take a vector norm on both sides of this equation to get ||e_{k+1}|| = ||B e_k||; remember that B e_k is a vector, so both sides involve vector norms. Since this is a general discussion, we are not restricting ourselves to any particular vector norm such as the l1, l2 or l∞ norm; the argument works with any choice. Recall from one of the previous lectures the subordinate matrix norm, that is, a matrix norm subordinate to a given vector norm: you give me a vector norm, and from it I generate a matrix norm. Why do we do that? Because the subordinate norm satisfies important properties that we proved earlier. Whenever you see an expression of the form ||Ax||, you should immediately recall the property ||Ax|| ≤ ||A|| ||x||, where ||A|| is the subordinate matrix norm and ||x|| is the vector norm. We have exactly such an expression here, with the matrix B and the vector e_k, so ||e_{k+1}|| = ||B e_k|| ≤ ||B|| ||e_k||. This inequality holds for every k; in particular it also holds for e_k itself, that is, ||e_k|| ≤ ||B|| ||e_{k−1}||, and applying that inside the previous inequality gives another factor of ||B||.
That makes it ||e_{k+1}|| ≤ ||B||² ||e_{k−1}||. Apply the same inequality again to get ||e_{k+1}|| ≤ ||B||³ ||e_{k−2}||, and keep going until you reach e_0; at that point you will have ||e_{k+1}|| ≤ ||B||^{k+1} ||e_0||. Now observe that ||e_0|| is some finite number. Why? Because we chose x_0 arbitrarily in R^n, so ||e_0|| = ||x − x_0|| is surely finite. What is our interest? We want to know when the iterative sequence generated by the Jacobi method converges. Equivalently, the question is when e_k goes to 0 in some sense: if e_k → 0, then x_k → x. By taking a norm we are examining this condition in the sense of the given norm, that is, we want ||e_k|| → 0. From the inequality above we can see when this happens: ||e_0|| is a fixed number, so ||e_k|| → 0 whenever ||B||^k → 0. Why is that enough? A norm is always ≥ 0, so if the right-hand side goes to 0, the sandwich theorem for sequences shows that ||e_k|| → 0 as well. And when does ||B||^k go to 0? Clearly ||B||^k → 0 as k → ∞ if ||B|| < 1. To summarize: in order that the Jacobi sequence should converge, we need ||e_k|| → 0 with respect to some given norm, and this is guaranteed when ||B|| < 1.
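The chain of inequalities ||e_{k+1}|| ≤ ||B||^{k+1} ||e_0|| can be checked numerically. Here is a small sketch using the l∞ norm; the 2 × 2 system is an assumption of mine, not an example from the lectures.

```python
import numpy as np

# Hypothetical 2x2 system (an assumption, not from the lectures):
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([5.0, 7.0])
x_exact = np.linalg.solve(A, b)

D = np.diag(np.diag(A))
B = -np.linalg.solve(D, A - D)       # Jacobi iteration matrix
c = np.linalg.solve(D, b)

norm_B = np.linalg.norm(B, ord=np.inf)            # subordinate infinity norm, here 0.4 < 1
x = np.zeros(2)
e0 = np.linalg.norm(x_exact - x, ord=np.inf)      # ||e_0||
for k in range(1, 21):
    x = B @ x + c
    ek = np.linalg.norm(x_exact - x, ord=np.inf)  # ||e_k||
    assert ek <= norm_B**k * e0 + 1e-12           # ||e_k|| <= ||B||^k ||e_0||
```

Since ||B||_∞ = 0.4 here, the error shrinks by at least that geometric factor per step, and after 20 iterations it is already below 10⁻⁶.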
We have just seen that if ||B|| < 1, then the error converges to 0 with respect to the chosen vector norm. Remember: you choose a vector norm and you want e_k → 0 with respect to it; for that, the Jacobi iteration matrix should have its corresponding subordinate matrix norm less than 1. Now, when can we say that a matrix is diagonally dominant? Suppose you have a matrix A = (a_ij), where i and j each run from 1 to n. For instance, take the 3 × 3 matrix with first row (7, −3, 2), second row (0, 2, 5) and third row (−1, −1, 3). To check whether this matrix is diagonally dominant, for each row i you take all the entries other than the diagonal entry (that is what j ≠ i means), take their absolute values and sum them up; this sum should be less than the absolute value of the diagonal entry. Fix the first row: the off-diagonal sum is |−3| + |2| = 5, and that should be less than the diagonal entry 7, which indeed holds. Now take the second row: the off-diagonal entries are 0 and 5, so 0 + 5 = 5 should be less than the diagonal entry 2. This does not hold, and since the condition must hold for all rows, this matrix is not diagonally dominant. So: remove the diagonal entry, sum the absolute values of the remaining entries of the row, and check whether Σ_{j≠i} |a_ij| < |a_ii|; if this holds for every row, we call the matrix A diagonally dominant. Our convergence theorem then says: if the coefficient matrix A is diagonally dominant, then the Jacobi method converges, meaning the iterative sequence x_{k+1} = B x_k + c converges.
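The row-wise check described above can be written compactly. This is a sketch; the helper name is my own, and the third row of the example matrix is reconstructed here as (−1, −1, 3) since the transcript is unclear, though the conclusion depends only on the second row.

```python
import numpy as np

def is_diagonally_dominant(A):
    """Strict row diagonal dominance: sum_{j != i} |a_ij| < |a_ii| for every row i."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off_diag_sums = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(off_diag_sums < diag))

# The 3x3 matrix from the lecture (third row reconstructed as (-1, -1, 3)):
A = np.array([[ 7.0, -3.0, 2.0],
              [ 0.0,  2.0, 5.0],
              [-1.0, -1.0, 3.0]])
print(is_diagonally_dominant(A))  # False: in row 2, 0 + 5 is not less than 2
```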
The idea of the proof is to use the condition that the coefficient matrix is diagonally dominant to show that ||B|| < 1. We will present the proof with a particular choice of norm, the l∞ norm, because that way the proof goes most smoothly and is easy to understand; so in the proof we compute the quantity ||B|| only in the subordinate infinity norm. The theorem says: if A is diagonally dominant, then the Jacobi iteration matrix B (this is what the matrix is called) has subordinate matrix norm less than 1. The proof is more or less the same derivation as before, except that we carry it out coordinate-wise, in order to see clearly how the diagonal dominance of A makes ||B|| less than 1. Recall how the iteration looks in each coordinate. Suppose the first equation of a 3 × 3 system is a_11 x_1 + a_12 x_2 + a_13 x_3 = b_1. In the Jacobi method you keep the diagonal term, here a_11 x_1 since this is the first equation, on the left-hand side, move all the other terms a_12 x_2 + a_13 x_3 to the right-hand side, and finally divide both sides by a_11. That is what I am writing here in general: the ith equation can be written as x_i = (1/a_ii)(b_i − Σ_{j≠i} a_ij x_j). We divide by the diagonal entry a_ii, keep b_i as it is, and move all the off-diagonal terms to the right-hand side; that is why there is a minus sign, and why the summation excludes the diagonal term.
That is why we write j ≠ i. This is one typical equation, the ith, and i runs from 1 to n. This is precisely the Jacobi method: x_i^(k+1) = (1/a_ii)(b_i − Σ_{j≠i} a_ij x_j^(k)). Remember, as we remarked, the same equation also holds for the exact solution: x_i = (1/a_ii)(b_i − Σ_{j≠i} a_ij x_j). Now subtract the two. The difference x_i − x_i^(k+1) is the ith component of the error vector, e_i^(k+1). When you subtract, the b_i terms cancel, the coefficient a_ij remains, and you are left with x_j − x_j^(k) (note there is no superscript k on the exact solution), which is e_j^(k). Therefore e_i^(k+1) = −Σ_{j≠i} (a_ij / a_ii) e_j^(k), and this holds for each equation, that is, for i = 1, 2, …, n. So the ith component of the error at the (k+1)th iteration is given by this expression. Now take the modulus on both sides and move it inside the summation; the modulus of a sum is at most the sum of the moduli, so |e_i^(k+1)| ≤ Σ_{j≠i} |a_ij / a_ii| |e_j^(k)|. Remember that the error vector is e^(k) = (e_1^(k), e_2^(k), …, e_n^(k)), and its infinity norm is ||e^(k)||_∞ = max over j = 1, …, n of |e_j^(k)|.
Each term in the sum involves some |e_j^(k)|. What if we replace each of them by the infinity norm ||e^(k)||_∞? Since we are replacing each term by its maximum, we get something at least as large, so |e_i^(k+1)| ≤ (Σ_{j≠i} |a_ij| / |a_ii|) ||e^(k)||_∞; and since ||e^(k)||_∞ depends on neither i nor j, it comes out of the summation. The inequality still involves the row quantity Σ_{j≠i} |a_ij| / |a_ii|: for each i = 1, 2, …, n you get one such number. Now take the maximum over all these numbers and call it μ, that is, μ = max over i = 1, …, n of Σ_{j≠i} |a_ij| / |a_ii|. Then |e_i^(k+1)| ≤ μ ||e^(k)||_∞, and this holds for all i = 1, 2, …, n. Remember the right-hand side is independent of i while the left-hand side depends on i, and the inequality holds for every i; in particular it holds for the index at which the maximum of |e_i^(k+1)| over i = 1, …, n is achieved. Therefore we can replace the left-hand side by that maximum, which is precisely ||e^(k+1)||_∞, and conclude that ||e^(k+1)||_∞ ≤ μ ||e^(k)||_∞.
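The quantity μ defined above is in fact exactly the subordinate infinity norm of the Jacobi iteration matrix B, since row i of B has entries −a_ij/a_ii off the diagonal and 0 on the diagonal, and the infinity norm of a matrix is its maximum absolute row sum. A quick numerical check of this identity; the matrix here is a made-up example with nonzero diagonal, not one from the lectures.

```python
import numpy as np

# Made-up 3x3 matrix with nonzero diagonal (an assumption, not from the lecture):
A = np.array([[ 7.0, -3.0, 2.0],
              [ 1.0,  5.0, 2.0],
              [-1.0,  1.0, 4.0]])

d = np.abs(np.diag(A))
mu = np.max((np.sum(np.abs(A), axis=1) - d) / d)  # mu = max_i sum_{j != i} |a_ij| / |a_ii|

D = np.diag(np.diag(A))
B = -np.linalg.solve(D, A - D)                    # Jacobi iteration matrix
assert np.isclose(mu, np.linalg.norm(B, ord=np.inf))  # mu equals ||B||_inf
```

For this matrix the row ratios are 5/7, 3/5 and 1/2, so μ = 5/7 < 1 and the Jacobi iteration for this matrix would converge.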
So finally we have the inequality ||e^(k+1)||_∞ ≤ μ ||e^(k)||_∞, and it is recursive: just as in the previous slide, you can apply it to the right-hand side to get μ · μ ||e^(k−1)||_∞ = μ² ||e^(k−1)||_∞, then μ³ ||e^(k−2)||_∞, and so on until you reach e^(0), which gives ||e^(k+1)||_∞ ≤ μ^(k+1) ||e^(0)||_∞. Now recall that in tutorial 1 we solved a problem showing that if a sequence x_n satisfies |x_n − l| ≤ μ^n |x_0 − l|, then x_n converges to l as n → ∞ provided μ < 1. We have the same kind of inequality here, so if μ < 1 then ||e^(k)||_∞ → 0 as k → ∞. You can also see this using the sandwich theorem: the norm is always ≥ 0, ||e^(0)||_∞ is a fixed number, and μ^k → 0, so ||e^(k)||_∞ → 0 as k → ∞. Therefore all we need is μ < 1, and from the way μ is defined this holds if A is diagonally dominant. Remember: if A is diagonally dominant, then |a_ii| > Σ_{j≠i} |a_ij| for every i, so each ratio Σ_{j≠i} |a_ij| / |a_ii| is less than 1; since μ is the maximum of numbers all less than 1, μ itself is less than 1.
So this is precisely what we wanted to show: if A is diagonally dominant, then the Jacobi sequence converges, that is, x_k → x as k → ∞. What we did: we took the infinity norm, so convergence is equivalent to ||e^(k)||_∞ → 0 as k → ∞; working coordinate-wise (with one index more, at k + 1) we obtained ||e^(k+1)||_∞ ≤ μ^(k+1) ||e^(0)||_∞; and by imposing the condition that A is diagonally dominant we saw that μ < 1, so this bound goes to 0 as k → ∞ and, by the sandwich theorem, ||e^(k)||_∞ → 0 as well. That completes the convergence proof for the Jacobi method. Remember that this theorem gives only a sufficient condition for convergence. With this, let us return to the two examples from the last class. In the first example, the Jacobi iterates moved closer and closer to the exact solution as we kept computing; in the second example, they moved away from the exact solution x. It is now clearer why the first system behaved well under Jacobi iteration: its coefficient matrix is diagonally dominant, whereas the second system's is not. But that does not mean the Jacobi iteration must diverge for the second system: our theorem says only that diagonal dominance of A implies convergence; if the matrix is not diagonally dominant, the theorem simply says nothing.
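The contrast between the two classroom examples can be reproduced with stand-in systems; the matrices below are my own choices, not the ones from the last class: one diagonally dominant, and one not, with the latter chosen so that the Jacobi iteration actually diverges.

```python
import numpy as np

def jacobi(A, b, x0, iters):
    """Run `iters` Jacobi iterations x -> B x + c starting from x0."""
    D = np.diag(np.diag(A))
    B = -np.linalg.solve(D, A - D)
    c = np.linalg.solve(D, b)
    x = x0
    for _ in range(iters):
        x = B @ x + c
    return x

# Stand-in systems (my own choices, not the systems from the last class):
A_good = np.array([[4.0, 1.0], [2.0, 5.0]]); b_good = np.array([5.0, 7.0])  # diagonally dominant
A_bad  = np.array([[1.0, 2.0], [3.0, 1.0]]); b_bad  = np.array([3.0, 4.0])  # not diagonally dominant

err_good = np.linalg.norm(jacobi(A_good, b_good, np.zeros(2), 50) - np.linalg.solve(A_good, b_good), np.inf)
err_bad  = np.linalg.norm(jacobi(A_bad,  b_bad,  np.zeros(2), 50) - np.linalg.solve(A_bad,  b_bad),  np.inf)
# err_good is tiny; err_bad is astronomically large, so the second sequence diverges
```

Note that divergence for A_bad does not follow from the theorem, which is silent for non-dominant matrices; here it happens because the iteration matrix [[0, −2], [−3, 0]] has spectral radius √6 > 1.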
In other words, what we proved is only a sufficient condition for the convergence of the Jacobi method. Our theorem is therefore simply silent for the second system, because it is not diagonally dominant; numerically, at least for the terms we computed, the iterates of that sequence were moving away from the exact solution. In fact, one can also obtain a necessary and sufficient condition for the convergence of the Jacobi iteration sequence; we will see all this in the coming lectures. Thank you for your attention.