So, we have been looking at convergence of iterative methods for solving linear algebraic equations, and we have made some progress: we have identified the difference equation that needs to be analyzed to settle the convergence question. Let me briefly review what we have done. We have a matrix A which we split as A = S - T. We first wrote A = L + D + U, and using L, D and U we formulated the two matrices S and T. We then showed that essentially any iteration scheme, the Jacobi scheme, the Gauss-Seidel scheme, the relaxation scheme, can be written as x_{k+1} = S^{-1}T x_k + S^{-1}b, and the solution we want to converge to, say x*, satisfies x* = S^{-1}T x* + S^{-1}b. x* is called a fixed point of the equation, because when you substitute it on the right hand side you get back the same vector. Subtracting the fixed-point equation from the iteration, we obtained the error equation E_{k+1} = S^{-1}T E_k, where the error at iteration k is defined as E_k = x_k - x*, the distance from the true solution. Of course we do not know the true solution, we are trying to reach it; nevertheless, the error between the guess and the true solution is governed by this difference equation, and it tells you how the iterations progress. You start with some error, and the analysis gives E_k = (S^{-1}T)^k E_0, where E_0 is the initial error. What we logically deduced was this: if (S^{-1}T)^k tends to the null matrix as k tends to infinity, that is, if S^{-1}T is a nice enough matrix that multiplying it with itself repeatedly drives the product towards the null matrix, then the error will go to 0, and the error going to 0 is the same as the guess converging to the true solution. So what sits at the heart of this is an equation of the form E_{k+1} = S^{-1}T E_k, and I am going to analyze equations of this type. I am going to abstract it; I am not going to keep working with S^{-1}T specifically, and we will apply the results back to this particular problem later. I am going to look at a generic problem, z_{k+1} = B z_k. This is a linear difference equation, where z belongs to R^n and z_0 is the initial condition. This is my abstract problem, and in today's class I am going to analyze the behavior of equations of this kind; the error equation above is a special case of it. What do I mean by analyze? I want to come to a judgment about how the solution behaves asymptotically, as k tends to infinity, without actually computing the solution.
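As a concrete illustration of this setup, here is a minimal Python sketch; the 3x3 matrix A and the vector b are made-up values chosen only for illustration. It forms the Jacobi splitting S = D, T = S - A, builds the iteration matrix S^{-1}T, and runs x_{k+1} = S^{-1}T x_k + S^{-1}b to watch the error shrink.

```python
import numpy as np

# Illustrative 3x3 system A x = b (values made up; A is diagonally dominant)
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

# Jacobi splitting: A = S - T with S = D (diagonal part of A)
S = np.diag(np.diag(A))
T = S - A                       # so that A = S - T
M = np.linalg.solve(S, T)       # iteration matrix S^{-1} T
c = np.linalg.solve(S, b)       # constant term S^{-1} b

x_star = np.linalg.solve(A, b)  # true solution, used only to measure the error

x = np.zeros(3)                 # arbitrary initial guess
for k in range(25):
    x = M @ x + c               # x_{k+1} = S^{-1} T x_k + S^{-1} b

# The error E_k = x_k - x* obeys E_k = (S^{-1} T)^k E_0, so it shrinks here.
print("error after 25 iterations:", np.linalg.norm(x - x_star))
```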
I do not want to compute the solution and only then realize whether it is diverging or converging; I want to analyze it without having to do explicit computations. Well, you might say that this equation is easy: just as before, if you know z_0 you can show that z_k = B^k z_0. It is very easy to show, just apply the equation repeatedly; that is the computational solution if you know the matrix B and the initial condition z_0. But in this particular case we do not know z_0. Mapped back to our problem, I do not know the initial error; if I knew the initial error I would know the solution, and the whole question of doing iterations would not arise. So I want to analyze this without knowing z_0. I just want to relate the behavior to properties of the matrix B: if B has certain nice properties, then I can claim that asymptotically the difference equation behaves in a particular way. That is where I want to reach. So for the time being let us forget about iterative methods, concentrate on this equation, and start getting insight into what happens. How do I analyze this? This is a very fundamental equation, a linear difference equation; it arises in many forms in computations and in solving real engineering problems, and it is good to get insight into how it behaves. Let us first look at a scalar equation, and then see how the insight from the scalar case translates to the vector-matrix case. So my first analysis is z_{k+1} = beta z_k, where z belongs to R, beta is a real number, and I start from some initial condition z_0. You can very easily show, by using the equation repeatedly, that z_k = beta^k z_0. Now I want to talk about how z_k behaves as k goes to infinity. Will it depend on beta or on z_0? It will depend only on beta. Why? I need support for that. Suppose z_0 is very large; will it matter? What really matters is what changes with k: z_0 does not change with k, and what changes with k is beta^k. Case 1: |beta| is strictly less than 1. In that case beta^k goes to 0 as k goes to infinity, so |z_k| tends to 0. What is important is that this happens for any z_0: z_0 could be 0.1, a thousand, a million, a billion, whatever. Why? Because |beta| is less than 1, beta^k shrinks every time and eventually goes to 0, for any z_0. So I can look at just beta and say what the asymptotic behavior is going to be. Case 2: |beta| is equal to 1, so beta could be plus 1 or minus 1. What happens in this case? At any point in time |z_k| equals |z_0|, so z_k will neither grow nor decay. Case 3: |beta| is greater than 1. What happens here? |z_k| will go to infinity, whatever z_0 is.
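To see the three scalar cases numerically, here is a small sketch; the specific values of beta and z_0 are arbitrary choices for illustration. It shows that the fate of z_k = beta^k z_0 is decided by |beta| alone, not by z_0.

```python
# Scalar difference equation z_{k+1} = beta * z_k, so z_k = beta**k * z0.
for beta in (0.9, -1.0, 1.1):        # |beta| < 1, |beta| = 1, |beta| > 1
    for z0 in (1e-10, 1.0, 1e6):     # wildly different starting points
        z = z0
        for _ in range(500):
            z = beta * z
        print(f"beta={beta:+.2f}  z0={z0:.0e}  |z_500|={abs(z):.3e}")
# |beta| < 1: |z_k| -> 0 for every z0;  |beta| = 1: |z_k| stays at |z0|;
# |beta| > 1: |z_k| grows without bound, however small z0 is.
```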
That is another very important point: for any z_0, even if I start with z_0 = 10^{-10}, this equation will eventually diverge and z_k will go to infinity. Just because you start from a small number, the equation is not going to converge to a small number. The same thing holds the other way: even if you start from a very large number, come what may, if |beta| is less than 1 then z_k is going to go to 0 after some time. And if |beta| is greater than 1, even if it is only slightly greater, say 1.0001, it might take a longer time to go to infinity, but it will go to infinity: as k goes to infinity, beta^k goes to infinity when |beta| is greater than 1, and we say the solution diverges. So I get this qualitative behavior of the solution without actually solving the equation for a particular z_0; I can just look at beta and say that if beta is like this, this is what is expected to happen. That is going to be my basis. Now, how do I extend this analysis to the matrix case? The tough part is that B is a full matrix, and we are in trouble there: we cannot do a straightforward extension, because what does it mean for a matrix to be "less than 1" or "greater than 1"? We do not have such notions; a matrix is an array of numbers, and we have to work out some tricks to use this kind of analysis. What I am going to show next is that in the attempt to analyze this problem in the matrix case, what pops out, surprisingly or not so surprisingly, is the eigenvalue problem. In fact, you can see the origin of the eigenvalue problem when you start solving these linear difference equations. Probably when it was taught to you for the first time, you wondered why a matrix times a vector should equal a scalar times the same vector, why somebody thought of this, and what the logical way of arriving at such an equation is. Take, for example, positive definite matrices. When you are first introduced to a positive definite matrix you do not know why somebody thought about it; but when you understand the sufficient condition for optimality, you see that there is a natural need for a matrix with the special property that x^T A x is always greater than 0: if the Hessian at the local point is positive definite, then you get a minimum, and so on. The same thing happens here: in the attempt to solve this problem, the eigenvalue problem will pop out. So now I move to the vector case, and I want to carry these ideas over; they are very nice, and I want to use them to analyze the vector case. The solution in the scalar case was beta^k times z_0. Taking motivation from the scalar case, I propose a solution for the vector case. My problem now is z_{k+1} = B z_k, and I propose the guess solution z_k = lambda^k v, where lambda is some scalar, which I do not know yet, and v is some constant vector.
If you look at the scalar solution, it was a scalar raised to the power k times some number, which turned out to be z_0. We still have to worry about how z_0 comes into the picture here, but this is my guess solution, and if it is to be a solution it has to satisfy the linear difference equation. So I substitute: the left hand side becomes lambda^{k+1} v and the right hand side becomes B times lambda^k v, that is, lambda^{k+1} v = B (lambda^k v). Here v is an n x 1 vector, lambda is a scalar, and lambda^k v is my proposed solution at iteration k. Do you agree with me? If this is to be a solution to the problem, it should obey the difference equation; that is the first criterion. Now I rearrange this equation: lambda^k times lambda is lambda^{k+1}, so the first term is covered, and since lambda is a scalar I can take B v to the same side and pull lambda^k outside, which gives lambda^k (lambda v - B v) = 0. I am not interested in the trivial choice lambda = 0, because then I just get 0 = 0. Obviously, we rule out lambda = 0; there will be some situations where lambda is equal to 0, but we will talk about that a little later, since right now we are interested in the non-trivial solution in general. So a general non-trivial solution is obtained when lambda v - B v = 0 holds, that is, when (lambda I - B) v = 0 has a non-trivial solution. Have you seen equations of this type before? Where? Not as an eigenvalue problem; I want to relate it to some other context. A matrix times a vector equal to 0: A x = 0. When does A x = 0 have a non-trivial solution, a solution x that is not 0? You get a non-trivial solution when A has a non-zero null space. What can you say about the rank of A? If A is full rank, the only solution you will get is x = 0; so A should not be full rank. Everyone with me on this? If the columns of A are linearly dependent, and only then, you will get a non-trivial, that is non-zero, solution x of A x = 0. And when the columns of A are linearly dependent, what do we call the matrix A? A singular matrix. Now compare that equation with (lambda I - B) v = 0: I want a non-trivial solution v, a non-zero solution v.
If v = 0, then 0 = 0 and I am not interested in that solution; I am interested in a non-trivial one. So when will I get a non-trivial solution? Only when lambda I - B is singular: only then will I get a non-zero vector v satisfying (lambda I - B) v = 0. And what is the algebraic condition for a matrix to be singular? Its determinant is zero: det(lambda I - B) = 0. That is the origin of your eigenvalue problem, and I am just continuing from there. Normally when you start studying eigenvalues and eigenvectors you start from this point, which is actually the end point: you are taught that the eigenvalues of a matrix B are defined by this equation. Why this? It comes out because you are trying to solve a linear difference equation, and this equation pops out when you want a non-trivial solution for the linear difference equation. Actually, the same thing happens when you solve linear differential equations; this equation pops out there too. Look at Strang's book, it gives a beautiful derivation of how that happens. So this is a fundamental equation, it keeps popping out, and that is why we keep studying eigenvalues and eigenvectors. Let us look at this equation in a little more detail; here is another viewpoint on the same equation. In (lambda I - B) v = 0, how many unknowns are there? The matrix B is known, but lambda is not known and the elements of v are not known; v is an n x 1 vector, so there are n + 1 unknowns. How many equations do I have? These are n equations. So I have n equations in n + 1 unknowns; to solve exactly, I would need n + 1 equations in n + 1 unknowns. Is this a linear or a nonlinear set of equations? Think about what the unknowns are: lambda and v_1, v_2, ..., v_n, and lambda multiplies v, so these are n nonlinear equations in n + 1 unknowns, and you have to solve them. To solve them, somebody might say, I will use Newton-Raphson or something. Instead, we have used a very intelligent argument: a solution will exist only when the matrix lambda I - B is singular. How many solutions should you get? A nonlinear equation in general has multiple solutions, and it will become evident here that this nonlinear equation has multiple solutions. So now you have one additional equation, det(lambda I - B) = 0: those were n equations in n + 1 unknowns, and this is the (n + 1)-th equation, and the two together you can solve. And what do you get? When you expand the determinant, you get a polynomial of degree n in lambda, and this polynomial has roots lambda_1, lambda_2, and so on.
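A quick numerical check of this line of reasoning, using an arbitrarily chosen 3x3 matrix B (not one from the lecture): the roots of det(lambda I - B) = 0 are exactly what np.linalg.eig returns, and at each root the matrix lambda I - B is singular, with the corresponding eigenvector in its null space.

```python
import numpy as np

B = np.array([[0.5, 0.2, 0.0],
              [0.2, 0.3, 0.1],
              [0.0, 0.1, 0.6]])        # arbitrary symmetric matrix, for illustration only

eigvals, eigvecs = np.linalg.eig(B)    # eigenvalues and eigenvectors of B
n = B.shape[0]
for lam, v in zip(eigvals, eigvecs.T): # columns of eigvecs are the eigenvectors
    M = lam * np.eye(n) - B
    print("lambda =", np.round(lam, 4),
          "| det(lambda I - B) =", np.round(np.linalg.det(M), 12),        # ~ 0: singular
          "| ||(lambda I - B) v|| =", np.round(np.linalg.norm(M @ v), 12)) # v is a null vector
```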
This polynomial has n roots lambda_1, lambda_2, up to lambda_n; some of them could be repeated, but we do not worry about that right now. It is an n-th order polynomial, so it has n roots. Now, corresponding to each root you will get one vector v, because for every such lambda the equation holds: if you plug in lambda_1, you get one matrix which is singular, that matrix has a null space, and v_1 will belong to that null space; if you plug in lambda_2, there is one more singular matrix and v_2 lies in its null space, and so on. This is called the characteristic equation of the matrix B, and its roots are called the eigenvalues; all that you already know. So we have (lambda_1 I - B) v_1 = 0, and v_1, which we call the first eigenvector, is actually a vector in the null space of lambda_1 I - B, which is singular for this value of lambda. There are n different numbers which make lambda I - B singular, and for each of those cases we have a vector: (lambda_2 I - B) v_2 = 0, and so on. So I get n eigenvectors. Now, what did I start with? I started with my linear difference equation z_{k+1} = B z_k; I wanted to solve it, and I took the guess solution z_k = lambda^k v. It turns out that there are multiple such lambdas and multiple such vectors v, not just one. So I have multiple solutions; I can call them fundamental solutions or eigen solutions. Let me call solution number 1 the one given by z^(1)_k = lambda_1^k v_1; this obeys the difference equation. Then z^(2)_k = lambda_2^k v_2 will also obey the difference equation. What is the first fundamental criterion? The solution should obey the difference equation, and this one obeys it, and that one obeys it; there seem to be n such solutions which obey the difference equation. In fact, since this is a linear difference equation, you can show that any linear combination of these eigen solutions, or fundamental solutions, also obeys the difference equation; just plug it in and check. So I can construct a solution which is a linear combination of the fundamental solutions, using the same notation as before. The first criterion is satisfied. What is the second thing we need to do? One more thing: we need to relate this solution to the initial condition, because a solution is found for a particular initial condition z_0. z_0 has not come into the picture till now, but do you agree that this guess solution exactly satisfies the linear difference equation?
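Continuing with the same illustrative matrix B as in the previous sketch, one can verify numerically that each fundamental solution z_k = lambda_i^k v_i, and any linear combination of them, satisfies z_{k+1} = B z_k; the coefficients alpha used here are arbitrary.

```python
import numpy as np

B = np.array([[0.5, 0.2, 0.0],
              [0.2, 0.3, 0.1],
              [0.0, 0.1, 0.6]])          # same illustrative matrix as before
eigvals, eigvecs = np.linalg.eig(B)

k = 7
# Each fundamental solution z_k = lambda_i**k * v_i obeys z_{k+1} = B z_k:
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(B @ (lam**k * v), lam**(k + 1) * v)

# ... and so does any linear combination, with arbitrary coefficients alpha:
alpha = np.array([2.0, -1.0, 0.5])
z_k   = sum(a * lam**k       * v for a, lam, v in zip(alpha, eigvals, eigvecs.T))
z_kp1 = sum(a * lam**(k + 1) * v for a, lam, v in zip(alpha, eigvals, eigvecs.T))
assert np.allclose(B @ z_k, z_kp1)
print("fundamental solutions and their linear combinations obey z_{k+1} = B z_k")
```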
The next thing is to match it with the initial condition. When I write an arbitrary linear combination, that is the general solution. What is known and what is unknown here? Once the matrix B is known, I can get the eigenvalues, so lambda_1 to lambda_n are known; they do not depend on the initial condition z_0, they depend only on B. What about v_1 to v_n? The moment you have the matrix, the eigen space is fixed, the eigen directions are fixed. You might choose a different eigenvector, she might choose a different eigenvector, but they are the same eigen directions: (1, 1) and (-1, -1) are the same eigenvector in the sense of being the same eigen direction. So the eigen directions are fixed. What I still need to know is alpha_1, alpha_2, up to alpha_n, and what we will see is that alpha_1 to alpha_n are determined by the initial condition z_0. If this is to be the solution, then at k = 0 we must have z_0 = alpha_1 v_1 + alpha_2 v_2 + ... + alpha_n v_n. Everyone with me on this? I am going to write this as a matrix equation: put v_1, v_2, ..., v_n next to each other as columns; each of them is an n x 1 column vector, so together they form an n x n matrix, call it Psi, multiplying the vector of coefficients alpha_1 to alpha_n. Let us for the time being assume the simplified case where the eigenvectors are linearly independent. So you have to solve Psi alpha = z_0; you know z_0, because when you solve a linear difference equation you know the initial condition. It turns out that alpha_1 to alpha_n are determined by the initial condition: if the eigenvectors are linearly independent, I can simply write alpha = Psi^{-1} z_0. Everyone with me on this? What I wanted to say is: look, we started solving this equation, we came up with a guess solution, and we found that there is not just one guess solution but n fundamental solutions. What are these n fundamental solutions? lambda_1^k v_1, where v_1 is the first eigenvector, lambda_2^k v_2, where v_2 is the second eigenvector, and so on; lambda_1, lambda_2, lambda_3, ... are the eigenvalues and v_1 to v_n are the eigenvectors. This seems to be a fundamental property of the given matrix: give me a matrix, and for any initial condition this will be the form of the general solution. It is a linear difference equation, so you can easily show that if each fundamental solution satisfies the difference equation, then any linear combination of the fundamental solutions also satisfies it. So the general solution is a linear combination of the fundamental solutions. Now this general solution should match the initial condition, and the initial condition will give me the unknowns alpha_1, alpha_2, and so on. How do I get these unknowns?
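Here is a small sketch of this step, again with the same illustrative B; the eigenvector matrix is denoted Psi and the initial condition z_0 is an arbitrary choice. It solves Psi alpha = z_0 for the coefficients, assembles z_k = sum_i alpha_i lambda_i^k v_i, and checks the result against brute-force iteration.

```python
import numpy as np

B = np.array([[0.5, 0.2, 0.0],
              [0.2, 0.3, 0.1],
              [0.0, 0.1, 0.6]])
eigvals, Psi = np.linalg.eig(B)     # columns of Psi are the eigenvectors v_1 ... v_n

z0 = np.array([3.0, -1.0, 2.0])     # arbitrary initial condition
alpha = np.linalg.solve(Psi, z0)    # solve Psi alpha = z0, i.e. alpha = Psi^{-1} z0

def z(k):
    # General solution: z_k = sum_i alpha_i * lambda_i**k * v_i
    return Psi @ (alpha * eigvals**k)

# Check against direct iteration z_{k+1} = B z_k starting from z0:
z_direct = z0.copy()
for _ in range(10):
    z_direct = B @ z_direct
print(np.allclose(z(10), z_direct))   # True: both give the same z_10
```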
If the eigenvectors are linearly independent, alpha = Psi^{-1} z_0, and I have completely solved the problem. Now the next question arises: do I get any insight by solving it this way? You might say, well, I could have just written the solution as z_k = B^k z_0; what do I gain by writing it in such a complicated way? That is what I want to come to: how does it relate to the analysis of this equation? Let me see how we translate the wisdom gained from the scalar case to the vector case. This form actually helps us in more ways than B^k, because it gives you insight into what is happening internally. First of all, look carefully: I have written the general solution of this problem as a linear combination of some fixed vectors. Do you see this? For any z_0, the solution is given by this expression; the lambdas are fixed, because the moment you give me the matrix B the eigenvalues are fixed and the eigen directions are fixed, so v_1 to v_n are determined. What is specific to the initial condition is only alpha_1 to alpha_n; how you combine the fundamental solutions is the only difference. Otherwise the solution always behaves according to this: fixed directions, a linear combination of fixed directions, and lambda_1 to lambda_n do not vary with z_0, they only multiply constant terms. Is that clear? Great. Now let us look at what is changing with k. Lambda_i^k is the only term that changes with k, because once I have the matrix B the eigen directions are fixed, and once you give me the initial condition, alpha_1 to alpha_n get fixed by the equation above, which depends only on the eigenvectors. So the only thing changing with k is lambda_i^k, and I now need to analyze how a scalar raised to the power k behaves, and I am very good at that. Compare with B^k: if B is a 100 x 100 matrix, it is difficult to analyze how 100 x 100 elements behave as a function of k; here I have reduced the problem to analyzing just n numbers, and those n numbers are the eigenvalues. So let us try to get insight into lambda_i^k. How does lambda_i^k behave? Can you tell me something? If you just say "lambda less than 1", then minus 10 is also less than 1, and that will certainly not converge; so it is the modulus that matters. So this seems to be important: |lambda_i| should be strictly less than 1, and if it is equal to 1 you have trouble. Let us look at the different cases. My case 1, which is analogous to the scalar case, is |lambda_i| strictly less than 1 for all i = 1 to n: all eigenvalues are like that. There is one more thing that crops up here which is different from the scalar case: eigenvalues need not always be real, they can be complex. So when you do this analysis your space is not the real line, it is the complex plane. In the complex lambda plane, let me draw the unit circle,
with the imaginary axis and the real axis marked. What I am saying is this: whatever the matrix is, 100 x 100, 10,000 x 10,000 or 5 x 5, if all the eigenvalues of B are strictly inside the unit circle, what will happen to z_k? Since z_k is a vector, we should now talk about its norm, ||z_k||. Doing the analysis properly, ||z_k|| = ||alpha_1 lambda_1^k v_1 + ... + alpha_n lambda_n^k v_n|| <= |alpha_1| |lambda_1|^k ||v_1|| + ... + |alpha_n| |lambda_n|^k ||v_n||. In this bound, the |alpha_i| and the ||v_i|| are not changing with k; what changes is |lambda_i|^k. If |lambda_i| is strictly less than 1 for all i, the right hand side shrinks, and since the left hand side is less than or equal to the right hand side, and a norm is always greater than or equal to 0, if the right hand side shrinks to 0 as k goes to infinity, the left hand side also goes to 0. (There is one small step I have skipped in between, where you should show that |lambda_i^k| equals |lambda_i|^k and so on, but you can do that, it is not difficult.) Why a unit circle? Because eigenvalues are complex numbers, so "strictly less than 1 in modulus" means strictly inside the unit circle. Now, what if even one eigenvalue does not obey this? Say there are 100 eigenvalues, 99 of them inside the unit circle and one of them on the unit circle. What will happen? All the other terms will shrink, but that one term will not shrink. So ||z_k|| will not go to 0; it will be bounded, it will neither grow nor shrink after some time, it will settle between 0 and some value, bounded above and bounded below. So even one eigenvalue lying on the unit circle gives me trouble as far as convergence goes. Still, there is something we have achieved here; let us not worry about the convergence problem for a moment. What we have achieved is that just by looking at the eigenvalues I can tell you how the difference equation is going to behave: I can talk about the qualitative behavior by knowing the relative position of the eigenvalues in the complex plane, whether they are inside the unit circle, on the unit circle, or outside it. What if one eigenvalue is outside the unit circle? The effect of the other eigenvalues will go to 0, but that one term will keep growing, and ||z_k|| will go to infinity come what may. And note, this is what I want to emphasize: this analysis tells us how the solution will behave for any initial condition, because the initial condition only determines alpha_1 to alpha_n.
Even if just one of the eigenvalues is on the unit circle and the remaining are inside, then the solution will only be bounded as k goes to infinity; it will not go to 0. And if even one eigenvalue is outside the unit circle, I am guaranteed divergence. So I can just look at the eigenvalues. To summarize: my second case is that some eigenvalues are strictly inside the unit circle and the remaining are on the unit circle; then all I can say is that ||z_k|| is bounded as k goes to infinity. My third case is obviously that some eigenvalues have |lambda_i| strictly greater than 1. It could be 1.0001, as I said; it does not matter, if it is outside the unit circle it is outside the unit circle. Then what we can guarantee is that ||z_k|| goes to infinity as k goes to infinity. It does not matter that a few eigenvalues are inside the unit circle; one eigenvalue outside the unit circle can make the solution go to infinity at some time or the other, so the solution becomes unbounded as k goes to infinity. And this analysis we could do without having to solve for a given z_0: I do not have to solve for z_0, I just take the matrix B, look at its eigenvalues, and if the eigenvalues have certain nice properties, I am done, I can tell whether the solution is going to converge or diverge. Now, we started by looking at E_{k+1} = S^{-1}T E_k; we started by analyzing this difference equation. So what is the moral of the story? The eigenvalues of S^{-1}T should be strictly inside the unit circle. I should choose the split A = S - T in such a way that the eigenvalues of S^{-1}T are strictly inside the unit circle. If that happens, I am guaranteed that convergence will occur: wherever I start from, even from an absurd initial guess, my iterations will converge to the solution. So this is the key. Now we will have to develop this further, because you might say that I am just transferring one difficult problem into another difficult problem: if you have a 1000 x 1000 matrix, finding the eigenvalues is an equally tough problem. Finding eigenvalues is not a joke; you have to solve a polynomial of order 1000, and while MATLAB can do it, there is a limit, and as the matrix size grows that too will run into trouble. It is not a trivial problem to get the eigenvalues. So we have a nice analysis, but we still have a problem, because this still needs a lot of computation; instead of computing eigenvalues you could almost as well have solved the equation for your z_0 and watched how the solution behaves. So is there something else, some other properties we can use? We will look at that in our next lecture.
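To tie this back to the iterative methods, here is a closing sketch using the same made-up A and b as at the start of the section. It computes the eigenvalues of the Jacobi iteration matrix S^{-1}T, checks that they all lie strictly inside the unit circle, and confirms that the iteration then converges even from a deliberately absurd initial guess.

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

S = np.diag(np.diag(A))               # Jacobi splitting A = S - T
T = S - A
M = np.linalg.solve(S, T)             # iteration matrix S^{-1} T
c = np.linalg.solve(S, b)

spectral_radius = max(abs(np.linalg.eigvals(M)))
print("largest |eigenvalue| of S^{-1}T:", spectral_radius)  # < 1 here, so convergence is guaranteed

x = np.array([1e6, -1e6, 1e6])        # deliberately absurd initial guess
for k in range(200):
    x = M @ x + c                     # x_{k+1} = S^{-1} T x_k + S^{-1} b
print("distance from true solution:", np.linalg.norm(x - np.linalg.solve(A, b)))
```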