Now, we are discussing iterative methods for linear systems. So far we have covered the Jacobi method and the Gauss-Seidel method, and we have learned sufficient conditions under which the iterative sequences generated by these two methods converge. In the last class, we also saw through an example that an iterative sequence may converge rather slowly, or it may not converge at all. In this lecture, we will prove another theorem which gives a necessary and sufficient condition under which an iterative method converges. This condition depends on the spectral radius of the iteration matrix of the method. We will consider a general form of the iterative method and prove this necessary and sufficient condition for that general form.

For this, we need a concept called the spectral radius of a matrix. Let us first define it. Let A be an n × n matrix. The spectral radius of A is defined as the maximum of the absolute values of the eigenvalues of A, and it is denoted by ρ(A).

Now, there is an interesting result concerning the spectral radius, which says: give me any subordinate matrix norm; the norm of A with respect to that subordinate matrix norm will surely be greater than or equal to the spectral radius of A. Not only that, the spectral radius is in fact the greatest lower bound of the set of all subordinate matrix norms of A. What does that mean? You collect all possible subordinate matrix norms. Of course, this is a mathematical concept; you cannot do it practically. Mathematically speaking, take every subordinate matrix norm, find its value when applied to the matrix A, which is fixed for you, and the spectral radius will be the greatest lower bound of that set. That is a little difficult for us to prove, so we will not get into the proof of that result; anyway, we will not be using it. It is only an interesting side remark.

The result stated here, however, is both interesting and easy to prove, so let us prove this lemma. Take any eigenvalue λ of A and a corresponding eigenvector of that eigenvalue; normalize it to a unit vector and call it v. Then you can write |λ| = |λ| ‖v‖. Why can we write this? Because we have taken v as a unit vector, so ‖v‖ = 1. Now use one of the properties of the vector norm and push the modulus of λ into the norm to get ‖λv‖; if you go to the definition of a vector norm, you will see this is one of its properties, and we are using that property. Once you see λv, you can immediately write it as Av. Why? Because λ is the eigenvalue and v is the corresponding eigenvector, so λv = Av. Now, whenever you see this expression, one important property of the subordinate matrix norm should come to mind: the vector norm ‖Av‖ is less than or equal to the subordinate norm ‖A‖ times the vector norm ‖v‖. Again, ‖v‖ = 1 because v is a unit vector with respect to this norm, so this is equal to ‖A‖. Now, you see, we chose this eigenvalue arbitrarily. Therefore the inequality |λ| ≤ ‖A‖ holds for every eigenvalue of A. In particular, it also holds for the eigenvalue at which the maximum is achieved.
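To make the definition concrete, here is a minimal NumPy sketch, not from the lecture, that computes the spectral radius and numerically checks the lemma ρ(A) ≤ ‖A‖ for two standard subordinate norms; the matrix A is an arbitrary example of my own.

```python
import numpy as np

def spectral_radius(A):
    """Spectral radius: max of |lambda| over all eigenvalues of A."""
    return max(abs(np.linalg.eigvals(A)))

# A small numerical check of the lemma rho(A) <= ||A|| for two common
# subordinate norms: the column-sum 1-norm and the row-sum inf-norm.
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])

print("rho(A)    =", spectral_radius(A))
print("||A||_1   =", np.linalg.norm(A, 1))       # >= rho(A)
print("||A||_inf =", np.linalg.norm(A, np.inf))  # >= rho(A)
```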
That is, it also holds for the spectral radius of A, since the spectral radius is nothing but the maximum of |λ| over all such eigenvalues λ. Therefore the spectral radius is also less than or equal to ‖A‖. That is what we wanted to prove in this lemma. This lemma gives us an encouraging message: if we use the spectral radius in our analysis, we may get a better estimate for the error. That is what we are going to do.

Before that, instead of considering the Jacobi method or the Gauss-Seidel method separately, we will make our discussion a bit more general and consider any iterative method that can be written in the form x_{k+1} = B x_k + c. Recall that in the last class we saw that the Gauss-Seidel method can be written in this matrix form, and so can the Jacobi method. Therefore, in our discussion, Jacobi and Gauss-Seidel can be taken as particular cases.

Now, let us state the important theorem, which gives the necessary and sufficient condition for convergence of any iterative sequence that can be written in this form. What does the theorem say? Take any initial guess — that is the important part, any initial guess. The sequence (x_k) generated using this formula converges to the solution of x = Bx + c. Why are we taking this as the system? Because, if you recall, we had Ax = b, and we generally rewrite this system in the equivalent form x = Bx + c. Therefore any solution of our original system is also a solution of this system; that is how we generally derive the iterative methods. With that in mind, we say that this sequence converges to the solution of this system, which in that sense is equivalent to the solution of our original system. And the theorem says that this convergence happens for every initial guess if and only if the spectral radius of the iteration matrix is less than 1.

Remember, the condition is imposed on the iteration matrix. Recall that in our previous two theorems, the condition was imposed on the coefficient matrix — the coefficient matrix should be diagonally dominant — and those were sufficient conditions. Here we are imposing a condition on the iteration matrix. Students often make the mistake of checking whether ρ(A) < 1 for the coefficient matrix A. That is not correct; you should not check this condition for the coefficient matrix, but for the iteration matrix B.

Now our interest is to prove this theorem, but to prove it we need two important results from linear algebra. We will recall these results without proving them. The first result is for any n × n matrix B — any matrix, it need not be an iteration matrix; this result is from linear algebra, and of course for us everything has real entries. It says: the spectral radius of B is less than 1 if and only if lim_{n→∞} B^n x = 0 for every x in R^n. This is one result we will use in the proof of our theorem. The next result says that if B is an n × n matrix and its spectral radius is less than 1, then (I − B)^{-1} exists and can be written as the series I + B + B^2 + ⋯.
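In code, the general form of the iteration is just a matrix-vector loop. Below is a hedged sketch; the function name, tolerance default, and stopping rule are my own choices, not from the lecture.

```python
import numpy as np

def fixed_point_iteration(B, c, x0, tol=1e-10, max_iter=10_000):
    """Iterate x_{k+1} = B x_k + c until successive iterates are close.

    Any method of this form (Jacobi, Gauss-Seidel, ...) converges for
    every initial guess exactly when the spectral radius of B is < 1.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = B @ x + c
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1   # approximate solution, iteration count
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")
```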
This is a familiar result for us when B is a real number, but it also holds for matrices, and that is what we are seeing here. You would have seen these results in a linear algebra course, so we will not prove them here; we will simply use them in our main theorem.

With these two results in mind, let us prove our main theorem. First assume that the spectral radius of B is less than 1, and let us prove that the sequence (x_k) converges for any given initial guess x_0. How do we prove this? Start with the expression for the iterative sequence, which is given by x_{k+1} = B x_k + c. Now, the same expression was used in computing x_k as well, so we can apply it to x_k: we are just substituting the expression from which x_k is computed. With a simplification, clubbing the two terms involving c, we can write this as x_{k+1} = B^2 x_{k-1} + (B + I)c. Once you have this, apply the same expression again to x_{k-1}, and keep going recursively until you hit the last step with x_0. At that stage you will have B^{k+1} x_0 in the first term, and the second term will have become the finite sum (B^k + B^{k-1} + ⋯ + B + I)c.

Now we have this nice form for x_{k+1}, and we have scope to use the two lemmas stated before. How can we use them? Take the limit as k tends to infinity on both sides. Since we have assumed that the spectral radius of B is less than 1, the first lemma says that lim_{k→∞} B^{k+1} x = 0 for every x; in particular, whatever initial guess x_0 you have chosen will also satisfy that condition. With that, the first term goes to 0. As for the second term, when you take the limit the finite sum becomes the series I + B + B^2 + ⋯, and the second lemma says that if the spectral radius of B is less than 1, then this series converges and its limit is nothing but (I − B)^{-1}. Putting these two lemmas into the expression — remember, the first term tends to 0 and the second factor equals (I − B)^{-1} — we get lim_{k→∞} x_{k+1} = (I − B)^{-1} c. So the limit exists; that is what we have shown using the two lemmas.

Now, what is this limit? Call it x. Then you can see that (I − B)x = c, which is nothing but x = Bx + c. This is precisely what we demanded in the statement of our theorem. If you go back, we said the sequence should converge to the solution of this system, and that is exactly what we are seeing: the limit is nothing but the solution of this system. So that completes the proof of one direction: if the spectral radius of the iteration matrix B is less than 1, then the corresponding iterative sequence converges to a vector x which is the solution of the system x = Bx + c.

We have finished the proof of one side. Let us now assume that the iterative sequence converges for any initial guess x_0 — that is very important, the sequence converges for any initial guess — and we have to prove that the spectral radius of B is less than 1. How do we prove this?
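The sufficiency direction can be checked numerically. The sketch below is my own illustration, with an arbitrary B chosen so that ρ(B) < 1; it verifies that the iterates approach (I − B)^{-1} c, as the proof predicts.

```python
import numpy as np

# Arbitrary example with rho(B) < 1 (eigenvalues 0.2 and 0.3).
B = np.array([[0.2, 0.1],
              [0.0, 0.3]])
c = np.array([1.0, 2.0])

assert max(abs(np.linalg.eigvals(B))) < 1  # rho(B) < 1

x = np.zeros(2)            # any initial guess x_0 works
for _ in range(100):
    x = B @ x + c          # x_{k+1} = B x_k + c

x_star = np.linalg.solve(np.eye(2) - B, c)  # (I - B)^{-1} c
print(np.allclose(x, x_star))               # True: same limit
```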
Well, there are many ways to prove it; let us take a simple approach. Assume the contrary, that the spectral radius of B is greater than or equal to 1. That is, we are given that the iterative sequence converges for every initial guess, but we are also assuming that ρ(B) ≥ 1. With this assumption, we will see that we can arrive at a contradiction. What is that contradiction? I will pick one initial guess for which the corresponding iterative sequence diverges; that is what I am going to prove. For that, I have to choose a particular initial guess conveniently, so as to make the sequence diverge.

How am I going to choose this initial guess? I will choose x_0 in such a way that x_0 − x — remember, x is the exact solution of the system x = Bx + c, and this difference is a vector — is an eigenvector. Corresponding to which eigenvalue? The eigenvalue λ_i at which the maximum is achieved, that is, the eigenvalue whose modulus equals the spectral radius of the iteration matrix. This can always be done: you are given the eigenvalue, so take any corresponding eigenvector and write it in this form by conveniently choosing x_0. You have freedom to choose x_0, but no freedom on x, because x comes as the solution of your system; with the freedom on x_0, however, you can always choose such a vector.

Now let us see how we arrive at a contradiction. Recall that x_{k+1} = B x_k + c; that is the way we have defined the iterative sequence. The same equation also holds for x, so x = Bx + c. Subtracting these two, the c terms cancel and we have x_{k+1} − x = B x_k − B x, which I can write as B(x_k − x). The idea now is similar to what we did in the previous case: apply the same expression to x_k and write this as B^2(x_{k-1} − x). Keep continuing recursively until you hit the end, where you have x_{k+1} − x = B^{k+1}(x_0 − x). (The x* appearing here on the slide is a typo; it should be x.) Now, what is x_0 − x? It is nothing but the eigenvector we chose. Therefore you can write B^{k+1}(x_0 − x) = λ_i^{k+1}(x_0 − x) — again, the star should not be there, it is a typo — and you have this precisely because x_0 − x is an eigenvector.

Now recall what we assumed to the contrary: |λ_i| ≥ 1, and therefore |λ_i|^{k+1} ≥ 1^{k+1} = 1. Taking norms, ‖x_{k+1} − x‖ = |λ_i|^{k+1} ‖x_0 − x‖. If |λ_i| = 1, you can see that the right-hand side never goes to 0 as k tends to infinity; it stays a fixed nonzero number. On the other hand, if |λ_i| is strictly greater than 1, then the right-hand side goes to infinity, and therefore the left-hand side also goes to infinity. Remember, in this derivation we only have equality, there is no inequality, so whatever happens on the right-hand side surely happens to the left-hand side as well. If it were a '≤', you could not conclude that the left-hand side goes to infinity when the right-hand side does.
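The contradiction argument can also be watched numerically. The sketch below is my own illustration, not from the lecture: B is an arbitrary diagonal example with ρ(B) = 1.5 ≥ 1, and the initial guess is x_0 = x + v with v an eigenvector for the dominant eigenvalue, so the error grows like 1.5^k.

```python
import numpy as np

# Arbitrary example with rho(B) = 1.5 >= 1.
B = np.array([[1.5, 0.0],
              [0.0, 0.5]])
c = np.array([1.0, 1.0])

x_star = np.linalg.solve(np.eye(2) - B, c)  # exact solution of x = Bx + c
v = np.array([1.0, 0.0])                    # eigenvector for lambda = 1.5
x = x_star + v                              # chosen initial guess x_0

for k in range(5):
    x = B @ x + c
    # x_k - x_star = lambda^k (x_0 - x_star), so the error grows as 1.5^k
    print(k + 1, np.linalg.norm(x - x_star))
```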
But in this case you can say it, because it is an equality. That means our assumption that |λ_i| ≥ 1 allowed us to choose an initial guess for which the corresponding iterative sequence does not converge, which contradicts our hypothesis that the iterative sequence converges for every initial guess. Therefore, our assumption that the spectral radius of B is greater than or equal to 1 is not correct; that is, the spectral radius of B must be less than 1 whenever the iterative sequence converges for every x_0 — that is the important point: if it converges for every x_0, that surely implies the spectral radius is less than 1. This completes the proof of this important theorem.

Let us also make an important observation, which we have already made in the case of the previous theorem on the Jacobi and Gauss-Seidel methods. If you start with an x_0 such that x_0 − x is an eigenvector (again, there is a typo on the slide), then the equality x_{k+1} − x = λ_i^{k+1}(x_0 − x) shows that if |λ_i| is very close to 0, the convergence is going to be fast. On the other hand, if the spectral radius — which in this case is |λ_i| — is very close to 1, then the convergence is going to be very slow. You can see it directly from here: as k tends to infinity, convergence happens because λ_i^{k+1} goes to 0 when |λ_i|, which is nothing but your spectral radius, is less than 1. If it is very close to 1, the convergence will be very slow; if it is very close to 0, it goes much faster. That is a clear observation from here, and in fact it holds for any choice of x_0.

With that in mind, let us revisit the example that we discussed in the last class and at the beginning of this class. If you recall, we considered a system which is not diagonally dominant; moreover, you cannot exchange any of its rows to make it diagonally dominant. We formulated the Jacobi method and the Gauss-Seidel method, and with the initial guess taken as x_0 = (1, 1, 1) we observed that the Jacobi method was converging, but very slowly. Why was it so? Now we can try to answer that. Take the Jacobi iteration matrix B_J, find all its eigenvalues, say λ_1, λ_2, λ_3, take the modulus of each, and then take the maximum of these numbers. In this case it happens to be 0.94444. Going back to our previous observation: the spectral radius of the Jacobi iteration matrix was pretty close to 1, and that is why we witnessed such slow convergence.

We also observed that the iterative sequence of the Gauss-Seidel method was oscillating and never converged. By computing the spectral radius of the Gauss-Seidel iteration matrix, you can also understand why that was happening: its spectral radius is exactly 1. That may be the reason why it was not converging but oscillating. From here you can see more clearly the behavior of these two methods, and we can also see that for a given system you may not always expect these methods to converge. The convergence of one method need not imply the convergence of the other, and the sequences may converge very slowly.
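In practice, this diagnosis is easy to automate. Below is a hedged sketch that forms the standard Jacobi and Gauss-Seidel iteration matrices from a coefficient matrix A (with the usual splitting A = L + D + U into strictly lower, diagonal, and strictly upper parts, B_J = −D^{-1}(L + U) and B_GS = −(D + L)^{-1}U) and computes their spectral radii. The matrix A shown is only a placeholder; for the system of the lecture, which is not reproduced here, one would plug in its coefficient matrix instead.

```python
import numpy as np

def iteration_matrices(A):
    """Jacobi and Gauss-Seidel iteration matrices from A = L + D + U."""
    D = np.diag(np.diag(A))   # diagonal part (assumed nonzero diagonal)
    L = np.tril(A, -1)        # strictly lower part
    U = np.triu(A, 1)         # strictly upper part
    B_J = -np.linalg.solve(D, L + U)       # -D^{-1} (L + U)
    B_GS = -np.linalg.solve(D + L, U)      # -(D + L)^{-1} U
    return B_J, B_GS

def spectral_radius(B):
    return max(abs(np.linalg.eigvals(B)))

# Placeholder matrix; replace with the coefficient matrix of interest.
A = np.array([[2.0, -1.0, 1.0],
              [2.0,  2.0, 2.0],
              [-1.0, -1.0, 2.0]])
B_J, B_GS = iteration_matrices(A)
print("rho(B_J)  =", spectral_radius(B_J))
print("rho(B_GS) =", spectral_radius(B_GS))
```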
These are some of the disadvantages of iterative methods. Therefore, before applying these methods in your applications, you have to have an idea of whether they are going to converge and, if so, how fast. For this, all the analysis that we did in this section is very important. This is typical of numerical analysis: analyzing methods gives us a lot of insight into how to use them more efficiently and what their limitations are. With this note, let us finish this lecture. Thank you for your attention.