In our last lecture, we defined the power method for calculating the eigenvector associated with the dominant eigenvalue. Then, if the matrix is invertible and the eigenvalue of lowest modulus, that is |lambda n|, is strictly less than the moduli of the remaining eigenvalues, we can apply the power method to A inverse and obtain an approximation to the eigenvector associated with that least eigenvalue. Thus we can approximate the eigenvector associated with the largest eigenvalue in modulus, or with the least eigenvalue in modulus. Now, what about some intermediate eigenvalue? For that we need an approximation to, say, the l-th eigenvalue, and then we can extend our power method. That extension is known as the inverse power method, which we are going to consider today. After defining the inverse power method, we are going to consider what is known as the QR decomposition of a matrix. We have considered various decompositions of a matrix: the LU decomposition, then the Cholesky decomposition, and today we are going to consider the decomposition of A into Q times R, where Q is an orthogonal matrix and R is an upper triangular matrix. Once we have this QR decomposition, we can define the QR method for finding eigenvalues of a matrix, and then we are going to consider the relation between the QR decomposition of a matrix and the Gram-Schmidt orthonormalization process applied to its column vectors. So, this is roughly the plan of today's lecture. Let us first recall the power method. Our A is an n by n matrix, and our assumption is that the eigenvectors u 1, u 2, ..., u n form a basis for C^n. The eigenvalues lambda j are arranged so that |lambda 1| > |lambda 2| >= ... >= |lambda n|. These are our assumptions. Not all matrices have a basis of eigenvectors, but we have seen that if the matrix is normal, then there is a basis of eigenvectors.
If our matrix A has n distinct eigenvalues, then it also has a basis of eigenvectors. So this restriction, that the eigenvectors form a basis, is not too restrictive; there is a big class of matrices for which it is satisfied. Thus our assumptions are that A has a basis of eigenvectors and that it has a dominant eigenvalue: |lambda 1| > |lambda 2| >= |lambda 3| >= ... >= |lambda n|. After the first inequality we may have greater than or equal to, but the eigenvalue which is biggest in modulus should be a simple eigenvalue, strictly bigger in modulus than all the remaining eigenvalues. Then we start with a non-zero vector z. This vector can be expressed as a linear combination of u 1, u 2, ..., u n, because they form a basis, and we assume that its component in the direction of u 1 is not zero, that is, alpha 1 is not equal to 0. With these assumptions, and taking lambda 1 > 0 (that is, lambda 1 real and positive), we form A^k z divided by the norm of A^k z. Under these assumptions, A^k z / ||A^k z|| converges to alpha 1 u 1 / ||alpha 1 u 1||, which is a unit eigenvector associated with the eigenvalue lambda 1. It is easy to calculate these iterates: start with an arbitrary vector z not equal to 0 and form them one by one. If your matrix A is sparse, that is, has a lot of zeros, this will not be computationally too expensive, and the iterates converge to a unit eigenvector corresponding to the eigenvalue lambda 1. So this is the power method. Now we want to look at an extension of this method which will allow us to find an eigenvector associated with an intermediate eigenvalue. Again, our assumption is that A has n linearly independent eigenvectors.
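As a concrete illustration of these iterates, here is a minimal power-method sketch in Python. This is my own illustration, not part of the lecture: the 2 by 2 matrix, the starting vector, and the iteration count are all assumed choices. The matrix [[4, 1], [2, 3]] has eigenvalues 5 and 2, so lambda 1 = 5 is dominant, with eigenvector proportional to (1, 1).

```python
def mat_vec(A, x):
    # Multiply a matrix (list of rows) by a vector
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def norm2(x):
    # Euclidean norm
    return sum(xi * xi for xi in x) ** 0.5

def power_method(A, z, steps=100):
    """Iterate z_k = A z_{k-1} / ||A z_{k-1}||; under the lecture's
    assumptions z_k converges to a unit eigenvector for lambda_1."""
    x = [zi / norm2(z) for zi in z]
    for _ in range(steps):
        y = mat_vec(A, x)
        n = norm2(y)
        x = [yi / n for yi in y]
    return x

A = [[4.0, 1.0], [2.0, 3.0]]   # eigenvalues 5 and 2
v = power_method(A, [1.0, 0.0])
# v should be close to (1, 1) / sqrt(2), the unit eigenvector for lambda_1 = 5
```

The convergence speed is governed by the ratio |lambda 2| / |lambda 1| = 0.4 here, so a hundred steps is far more than enough for this toy example.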
A u j = lambda j u j for j = 1, 2, ..., n, and mu is given as an approximation to lambda l, some l-th eigenvalue in between; it need not be the largest or the least eigenvalue in modulus. Since mu is an approximation to lambda l, its distance from lambda l is smaller than its distance from the remaining eigenvalues: |mu - lambda l| < |mu - lambda j| for j not equal to l. Now, the eigenvalues are roots of the characteristic polynomial, and for a polynomial we can find an interval in which a root lies. For example, we had considered the bisection method: find two real numbers at which the polynomial has opposite signs; then that interval contains a root of the polynomial, and you go on subdividing it, getting smaller and smaller intervals. In such a manner, or by some other method, we can obtain an approximation to the eigenvalue lambda l. So the starting point of our inverse power method is that you are given mu, an approximation to the l-th eigenvalue, and our aim is to find an approximation to an eigenvector corresponding to this l-th eigenvalue lambda l. We have A u j = lambda j u j, j = 1 to n, with mu an approximation to lambda l, |mu - lambda l| < |mu - lambda j| for j not equal to l. Now, from A u j = lambda j u j we get (A - mu I) u j = (lambda j - mu) u j. Assume that mu itself is not an eigenvalue of A. Then A - mu I is invertible, and from this relation we obtain (A - mu I) inverse u j = 1/(lambda j - mu) u j: apply (A - mu I) inverse throughout, so the left hand side becomes u j = (lambda j - mu) (A - mu I) inverse u j, and take lambda j - mu to the other side. Now, look at this.
We have |mu - lambda l| < |mu - lambda j| for j not equal to l. So, taking reciprocals, 1/|mu - lambda l| is strictly bigger than 1/|mu - lambda j| for j not equal to l, which means that 1/(lambda l - mu) is a dominant eigenvalue of the matrix (A - mu I) inverse. So the idea now is to apply our power method to (A - mu I) inverse: to apply the power method we need a dominant eigenvalue, and (A - mu I) inverse has the dominant eigenvalue 1/(lambda l - mu), since |1/(lambda l - mu)| is strictly bigger than 1/|lambda j - mu| for j not equal to l. So here are our iterates. Choose z not equal to 0, an arbitrary vector, define z 0 = z / ||z||, and then the k-th iterate z k will be (A - mu I) inverse z k-1 divided by its norm. It is exactly the power method applied to the matrix (A - mu I) inverse. Then z k converges to some vector w. This w will be an eigenvector of the matrix (A - mu I) inverse with eigenvalue 1/(lambda l - mu). Now, what we are interested in are eigenvalues of the matrix A and the associated eigenvectors, and this w, which is an eigenvector of (A - mu I) inverse, is also going to be an eigenvector of A associated with the eigenvalue lambda l. Indeed, (A - mu I) inverse w = 1/(lambda l - mu) w means that (A - mu I) w = (lambda l - mu) w; the mu w terms cancel, and you are left with A w = lambda l w. So w is an eigenvector associated with lambda l.
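To make the iteration concrete, here is a small inverse-power-method sketch in Python. This is my own illustration, not part of the lecture; the matrix, the shift mu, and the step count are assumed. For A = [[4, 1], [2, 3]], with eigenvalues 5 and 2, taking mu = 1.9 as an approximation to lambda l = 2 should drive the iterates towards the eigenvector (1, -2) of the eigenvalue 2. As the lecture describes, each step solves (A - mu I) r = z rather than forming the inverse.

```python
def solve2(M, b):
    # Cramer's rule for a 2 x 2 system M r = b (fine for this tiny demo)
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

def norm2(x):
    return sum(xi * xi for xi in x) ** 0.5

def inverse_power_method(A, mu, z, steps=50):
    """z_k = (A - mu I)^{-1} z_{k-1} / ||(A - mu I)^{-1} z_{k-1}||,
    implemented by solving (A - mu I) r = z_{k-1} at every step."""
    M = [[A[0][0] - mu, A[0][1]],
         [A[1][0], A[1][1] - mu]]
    x = [zi / norm2(z) for zi in z]
    for _ in range(steps):
        r = solve2(M, x)
        n = norm2(r)
        x = [ri / n for ri in r]
    return x

A = [[4.0, 1.0], [2.0, 3.0]]     # eigenvalues 5 and 2
w = inverse_power_method(A, 1.9, [1.0, 0.0])
# w should be close to (1, -2) / sqrt(5), the unit eigenvector for lambda = 2
```

Here the dominant eigenvalue of (A - mu I) inverse is 1/(2 - 1.9) = 10, against 1/(5 - 1.9) for the other eigenvalue, so the convergence per step is very fast.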
Thus the iterates we define in the inverse power method involve the matrix (A - mu I) inverse, and the limiting vector w is an eigenvector of A associated with the eigenvalue lambda l. So, if lambda 1 is the dominant eigenvalue, the power method gives us an eigenvector associated with lambda 1; applying the power method to A inverse gives an approximation to the eigenvector associated with lambda n; and if mu, an approximation to the eigenvalue lambda l, is available, then we obtain an approximation to an eigenvector associated with lambda l, where lambda l can be an intermediate eigenvalue. You need to have the approximation mu available. Now, here there can be some problem, or we want to see whether there is going to be a problem. What is the problem? We are saying that mu is an approximation to lambda l. Lambda l is an eigenvalue, so A - lambda l I is not an invertible matrix. So when mu becomes a better and better approximation to the eigenvalue lambda l, the matrix A - mu I can be ill conditioned, and the question is whether this is going to pose a problem. We will see that it does not matter in this particular case. In the inverse power method we need to find z k = (A - mu I) inverse z k-1 divided by ||(A - mu I) inverse z k-1||. Now, we are not going to calculate (A - mu I) inverse explicitly. Instead, (A - mu I) inverse z k-1, which I denote by r k-1, will be obtained by solving the system of linear equations (A - mu I) r k-1 = z k-1, where z k-1 is known, coming from the previous iteration step.
So we need to calculate this r k-1 and then divide by its norm, so that we get the next iterate. Now, what I was saying is: are we going to have some problem about stability? Here is a basic fact: if A is an invertible matrix and lambda is an eigenvalue of A, then |lambda| <= ||A||. It follows from a basic inequality. Suppose you have A x = lambda x with x not equal to the 0 vector. Take norms of both sides: ||A x|| = ||lambda x|| = |lambda| ||x|| by the property of norms. Hence |lambda| = ||A x|| / ||x||, and this is less than or equal to ||A||, where ||A|| is the maximum of ||A z|| / ||z|| over z not equal to the 0 vector. Thus |lambda| <= ||A||. Next, consider A y = mu y, another eigenvalue mu with associated eigenvector y. If A is invertible, then A inverse y = (1/mu) y. Hence, using the same inequality again, we get |1/mu| <= ||A inverse||. So the condition number ||A|| ||A inverse|| is bigger than or equal to |lambda| / |mu|, where lambda and mu are any eigenvalues of the matrix A: we have |lambda| <= ||A|| and 1/|mu| <= ||A inverse||, so take the product. And if the eigenvalues are arranged in descending order of modulus, then we have |lambda 1| / |lambda n| <= ||A|| ||A inverse||. So let us go back to our (A - mu I) inverse, where mu is an approximation to lambda l.
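A quick numerical check of this bound, as my own illustration: since |lambda| <= ||A|| holds for any induced operator norm, I use the 1-norm (maximum absolute column sum), which is easy to compute by hand. For A = [[4, 1], [2, 3]], the eigenvalues are 5 and 2, so |lambda 1| / |lambda n| = 2.5, and the 1-norm condition number should come out at least that large.

```python
def one_norm(M):
    # Induced 1-norm of a matrix: maximum absolute column sum
    n = len(M)
    return max(sum(abs(M[i][j]) for i in range(n)) for j in range(n))

def inv2(M):
    # Explicit inverse of a 2 x 2 matrix
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

A = [[4.0, 1.0], [2.0, 3.0]]      # eigenvalues 5 and 2
kappa = one_norm(A) * one_norm(inv2(A))
ratio = 5.0 / 2.0                  # |lambda_1| / |lambda_n|
# The lecture's bound says ratio <= kappa
```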
So ||(A - mu I) inverse|| is going to be bigger than or equal to 1 / |mu - lambda l|. If mu is a good approximation to lambda l, the denominator is small, so 1 / |mu - lambda l| is big, and then your matrix A - mu I is going to be ill conditioned. Now, recall the perturbation theory we considered: if your matrix A is ill conditioned, that is, if ||A|| ||A inverse|| is big, then the solution is sensitive to perturbations in the right hand side. We had looked at A x = b, perturbed b slightly, and considered the nearby system A(x + delta x) = b + delta b; then even though ||delta b|| / ||b|| is small, ||delta x|| / ||x|| can be big. In fact, we had proved the inequality ||delta x|| / ||x|| <= ||A|| ||A inverse|| ||delta b|| / ||b||. The last factor can be small, but if you are multiplying it by a big number, then ||delta x|| / ||x||, which is the relative error in the computed solution, can be big: x is the exact solution, and because of finite precision, instead of b we have b + delta b, so x + delta x is the computed solution. Now, this situation is going to occur in our case. We are calculating z k, where the k-th iterate is (A - mu I) inverse z k-1 divided by its norm. So we need to compute (A - mu I) inverse z k-1, which we denoted by r k-1; that means we need to solve (A - mu I) r k-1 = z k-1. This A - mu I will be ill conditioned, and hence our solution r k-1 will be sensitive to perturbations. In general, at each stage we will be solving (A - mu I) r = q.
Now, in practice, instead of this you are going to solve (A - mu I) r-hat = q-hat, where q-hat is a nearby vector. So even though ||q - q-hat|| / ||q|| is small, because A - mu I is ill conditioned, ||r - r-hat|| / ||r|| can be big. This is something we need to worry about, because when a better approximation mu to lambda l is available, our task should be simplified; it should not happen that better and better approximations of mu to lambda l make the problem more and more ill conditioned, so that the relative error in the computed solution becomes bigger and bigger. Now, in this particular application it does not matter, because we are not interested in the computed solution itself but in a direction: we are trying to find an eigenvector, so what is important is the eigendirection. Let me make this more specific. We are looking at the system (A - mu I) r = q, and instead we are going to solve the nearby system. Our A has a basis of eigenvectors u 1, u 2, ..., u n, and hence our q will be a linear combination c 1 u 1 + c 2 u 2 + ... + c n u n; the nearby vector q-hat will also be a linear combination c 1-hat u 1 + c 2-hat u 2 + ... + c n-hat u n. Then r = (A - mu I) inverse q = c 1/(lambda 1 - mu) u 1 + ... + c l/(lambda l - mu) u l + ... + c n/(lambda n - mu) u n, and for r-hat you are going to have c 1-hat/(lambda 1 - mu) u 1 and so on. Now, q and q-hat are near, so the distances between c 1 and c 1-hat, c 2 and c 2-hat, and so on are small. But when you look at r - r-hat, consider the l-th component: here you have c l/(lambda l - mu) u l, and there you have c l-hat/(lambda l - mu) u l.
So c l - c l-hat is small, but you are multiplying it by the big number 1/(lambda l - mu). Our r is approximately equal to c l/(lambda l - mu) u l, because that is the significant part, and r-hat is approximately c l-hat/(lambda l - mu) u l. So ||r - r-hat|| will be approximately |c l - c l-hat| / |lambda l - mu|. Now, this can be big, but we are not interested in what r-hat is; we are interested only in the eigenvector, that is, the direction. So even when you calculate r-hat, what you do is normalize it. When the exact solution is r, you consider r / ||r||, because when applying the inverse power method you calculate (A - mu I) inverse z k-1, which is r k-1, and divide by its norm. Instead of r k-1 you are going to have some r-hat k-1; their values may differ, but for both r k-1 and r-hat k-1 the significant part is in the direction of u l, and we are dividing by the norm. So the numerical values do not really matter: for both the exact solution and the computed solution, the component in the direction of u l becomes dominant. That is why there is no contradiction in our method: a better and better approximation to lambda l gives us faster convergence to the eigenvector corresponding to the eigenvalue lambda l. So far we have been talking about approximations to eigenvectors. In the power method we had an approximation to the eigenvector associated with lambda 1; applied to A inverse, it is the eigenvector associated with lambda n; and in the inverse power method it is the eigenvector associated with the eigenvalue lambda l. So I have obtained an approximation to an eigenvector. Now, what about the eigenvalue?
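The point about directions can be seen numerically. In this sketch, which is my own illustration with an assumed diagonal matrix, shift, and perturbation size, A = diag(5, 2) and mu = 2.001, so A - mu I is badly conditioned. The solutions r and r-hat for two nearby right hand sides differ noticeably in value, yet after normalization they agree to many digits, because both are dominated by the component along u l = (0, 1).

```python
def norm2(x):
    return sum(xi * xi for xi in x) ** 0.5

# A = diag(5, 2); eigenvectors u1 = (1, 0), u2 = (0, 1); shift mu near lambda_2
mu = 2.001
q = [1.0, 1.0]                     # exact right hand side
q_hat = [1.0001, 0.9999]           # nearby (perturbed) right hand side

# (A - mu I) is diagonal here, so the solve is componentwise division
d = [5.0 - mu, 2.0 - mu]           # d[1] = -0.001: the source of ill conditioning
r = [qi / di for qi, di in zip(q, d)]
r_hat = [qi / di for qi, di in zip(q_hat, d)]

abs_err = norm2([a - b for a, b in zip(r, r_hat)])      # noticeable difference
rn = [ri / norm2(r) for ri in r]                        # normalized solutions
rn_hat = [ri / norm2(r_hat) for ri in r_hat]
dir_err = norm2([a - b for a, b in zip(rn, rn_hat)])    # tiny difference
```

Both normalized vectors are essentially (0, -1), the direction of u l, which is exactly what the inverse power method needs.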
When we talked about exact eigenvalues and eigenvectors, we said that if I know an eigenvector, then finding the eigenvalue is easy: you just have to find the constant of proportionality. If v is an eigenvector of the matrix A, then the two vectors A v and v are proportional, so I find the constant of proportionality between the two. Now, here our eigenvector is only approximate. So what best eigenvalue approximation can I choose? That is, suppose I give you an approximate eigenvector and I want to know: this is an eigenvector corresponding to which eigenvalue? The best way to answer this is by considering the Rayleigh quotient, which has a minimization property that we now consider. You have q, an approximate eigenvector, and the question is what should be chosen as the approximate eigenvalue. The answer is: choose the Rayleigh quotient eta = q* A q / q* q. Now, let me tell you what the minimization property of the Rayleigh quotient is. With eta = q* A q / q* q, consider the inner product of A q - eta q with q. This is q* (A q - eta q) = q* A q - eta q* q, since eta is a complex number, and by the definition of eta this is equal to 0. That means A q - eta q is perpendicular to the vector q. Now, let z be any complex number and look at the square of the two-norm of A q - z q. Adding and subtracting eta q, this is the squared norm of (A q - eta q) + (eta q - z q): here is one vector, and here is another; eta and z are complex numbers, so the second vector is (eta - z) q. Since A q - eta q is perpendicular to q, it is perpendicular to any multiple of q, which means these two vectors are perpendicular.
So we can use the Pythagoras theorem. We get ||A q - z q||^2 = ||A q - eta q||^2 + ||(eta - z) q||^2 for any z belonging to C, using the orthogonality of A q - eta q and (eta - z) q. What does this relation tell us? It tells us that the two-norm of A q - z q is bigger than or equal to the two-norm of A q - eta q. Our q is only an approximate eigenvector, so we cannot hope to find a complex number lambda such that A q = lambda q exactly; but what we are saying is: look at the Rayleigh quotient q* A q / q* q, which we denote by eta, and consider ||A q - eta q||_2. This norm is less than or equal to ||A q - z q||_2, where z is any complex number. So the Rayleigh quotient eta minimizes the two-norm of (A - z I) q as z varies over the complex plane, and that is why, if you are given q as an approximate eigenvector, the best you can do is choose the eigenvalue approximation to be the Rayleigh quotient q* A q / q* q. If q happens to be an exact eigenvector, say A q = lambda q, then q* A q = q* lambda q = lambda q* q, since lambda is a complex number; so eta has lambda q* q in the numerator and q* q in the denominator, and you get eta = lambda. So the Rayleigh quotient associated with an exact eigenvector is nothing but the eigenvalue itself, and if q is not an eigenvector, then eta minimizes the two-norm of the vector (A - z I) q. Next, I want to consider the QR decomposition.
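The minimization property is easy to check numerically. This sketch is my own example, not from the lecture: the matrix and the trial values of z are illustrative, and since the matrix and vector are real I only sample real z (for real data the complex minimizer is real anyway).

```python
def norm2(x):
    return sum(xi * xi for xi in x) ** 0.5

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def rayleigh_quotient(A, q):
    # eta = (q^T A q) / (q^T q) for a real matrix and real vector
    Aq = mat_vec(A, q)
    return sum(a * b for a, b in zip(q, Aq)) / sum(b * b for b in q)

def residual(A, q, z):
    # || A q - z q ||_2
    Aq = mat_vec(A, q)
    return norm2([a - z * b for a, b in zip(Aq, q)])

A = [[4.0, 1.0], [2.0, 3.0]]
q = [1.0, 0.9]                     # only an approximate eigenvector
eta = rayleigh_quotient(A, q)
# residual(A, q, eta) should be <= residual(A, q, z) for every trial z
```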
The reason I am going to consider the QR decomposition is that I want to describe the QR method for calculating eigenvalues of a matrix A, that is, for finding approximations to the eigenvalues of A. Now, the description of the QR method is easy; you will see that we can describe it quickly. What is difficult is to show its convergence and to explain why it works. Unfortunately, these things are involved, so we are not going to do them in detail, but I want to tell you what the QR method is, since it is the currently used method, the best available, for calculating eigenvalues of a matrix. There is a relation between the power method and the QR method. What we have done is use the power method to find an approximation to one eigenvector; instead of one eigenvector you can consider several eigenvectors together, which gives rise to what is known as simultaneous iteration, and the best implementation of simultaneous iteration is done by the QR method. Anyway, let us first look at what the QR decomposition of a matrix A is. For simplicity, let me assume A to be an invertible matrix, and also let us take the matrix to be real. The QR decomposition is available for complex matrices also, but just for the sake of simplicity let us restrict ourselves to real matrices. So our assumption is that A is a real n by n invertible matrix, and the aim is to write A = Q R, where Q is an orthogonal matrix, that is, Q transpose Q = Q Q transpose = I, and R is an upper triangular matrix. Let me write Q in terms of its columns q 1, q 2, ..., q n: Q is an n by n matrix, q 1 is the first column, q 2 is the second column, and q n is the n-th column.
Now, what will Q transpose be? Q transpose is given by the rows q 1 transpose, q 2 transpose, ..., q n transpose: q j is an n by 1 vector, and when I take its transpose it becomes a row vector; q 1 is the first column of Q, so q 1 transpose is the first row of Q transpose, and so on. When I consider Q transpose Q, it will be first row into first column, first row into second column, and so on, the usual matrix multiplication. So the (i, j)-th element of Q transpose Q is the i-th row of Q transpose times the j-th column of Q, that is, q i transpose q j, which in our notation is the inner product of q j with q i. Now, Q transpose Q = I, and the identity matrix means 1 along the diagonal and 0 elsewhere. So Q transpose Q = I is equivalent to saying that the inner product of q j with q i is 1 if i = j and 0 if i is not equal to j, which means that the columns of Q are orthonormal. We had seen a similar relation for unitary matrices, when we considered Q star Q = I, where Q star means conjugate transpose: Q unitary means the columns of Q are orthonormal, and the real case Q transpose Q = I is actually a special case of that. Now, an orthogonal matrix also satisfies Q Q transpose = I, and from that we can deduce that the rows of Q are orthonormal. So an orthogonal matrix is a matrix in which any column has Euclidean norm equal to 1 and is perpendicular to every other column, and the same property holds for the rows of Q. Thus we have Q transpose Q = I, meaning the inner product of q j with q i is 1 if i = j and 0 if i is not equal to j.
So the columns of Q are orthonormal, and Q Q transpose = I means the rows of Q are orthonormal. Now, we are trying to write A = Q R where Q is orthogonal and R is upper triangular. Let me write the columns of A as c 1, c 2, ..., c n. So [c 1 c 2 ... c n] = [q 1 q 2 ... q n] R, where R is upper triangular, so below the diagonal all the entries are 0. Now let me equate the columns. The first column gives c 1 = r 1 1 q 1; the second column gives c 2 = r 1 2 q 1 + r 2 2 q 2, and so on; this is just column-by-column matrix multiplication. Look at the first relation, c 1 = r 1 1 q 1. We have just seen that the columns of Q form an orthonormal set, so the Euclidean norm of q 1 is equal to 1. Hence ||c 1||_2 = ||r 1 1 q 1||_2 = |r 1 1| ||q 1||_2 = |r 1 1|. So for r 1 1 we have a choice: either r 1 1 = ||c 1||_2 or r 1 1 = -||c 1||_2. If you choose the first, you get q 1 = c 1 / ||c 1||_2. The matrix A is given to us, so its first column c 1 is known, and so is ||c 1||_2. Thus the first column q 1 is nothing but c 1 normalized.
The column c 1 may not have Euclidean norm equal to 1, so you divide by its norm. Looking at c 1 = r 1 1 q 1: first of all, for r 1 1 we have to make a choice, either positive or negative; for the sake of definiteness let us choose r 1 1 > 0. Then r 1 1 = ||c 1||_2 and q 1 = c 1 / ||c 1||_2. So we have determined the first column of Q and the entry r 1 1 of the upper triangular matrix. Next, we have the relation c 2 = r 1 2 q 1 + r 2 2 q 2, and q 1 is already determined. So c 2 - r 1 2 q 1 = r 2 2 q 2, and the three things to be determined are r 1 2, r 2 2 and q 2. I make use of the fact that q 1 and q 2 are perpendicular to each other. Taking the inner product of c 2 with q 1, the r 2 2 q 2 term drops out, and since the inner product of q 1 with q 1 is 1, the inner product of c 2 with q 1 is exactly r 1 2. This determines r 1 2: c 2 is known, q 1 is known, take their inner product. After this, go back to the relation and take norms: ||c 2 - r 1 2 q 1||_2 = |r 2 2|, because ||q 2||_2 = 1. Once again choose r 2 2 > 0; that determines r 2 2, and then q 2 = (c 2 - r 1 2 q 1) / r 2 2. In this manner we can determine the matrix Q and the matrix R column by column. So any invertible matrix can be written as a product Q R. In our next lecture we will consider the relation between A = Q R and the Gram-Schmidt orthonormalization process.
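The column-by-column construction just described can be sketched in code. This is my own minimal classical Gram-Schmidt implementation, with an assumed test matrix: each r i j is an inner product of a column of A with an earlier q i, and each r j j > 0 normalizes what is left of the column, exactly as in the relations c 1 = r 1 1 q 1 and c 2 = r 1 2 q 1 + r 2 2 q 2 above.

```python
def qr_gram_schmidt(A):
    """Classical Gram-Schmidt QR for a small real invertible matrix,
    following the lecture's column relations c_j = sum_i r_ij q_i.
    A is a list of rows; returns Q and R as lists of rows."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]   # columns of A
    qcols, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(j):
            # r_ij = <c_j, q_i>, then subtract that component
            R[i][j] = sum(qi * cj for qi, cj in zip(qcols[i], cols[j]))
            v = [vk - R[i][j] * qk for vk, qk in zip(v, qcols[i])]
        R[j][j] = sum(vk * vk for vk in v) ** 0.5            # choose r_jj > 0
        qcols.append([vk / R[j][j] for vk in v])
    Q = [[qcols[j][i] for j in range(n)] for i in range(n)]  # columns back to rows
    return Q, R

A = [[1.0, 1.0], [1.0, 0.0]]
Q, R = qr_gram_schmidt(A)
# Q R should reproduce A, with Q orthogonal and R upper triangular
```

Note that classical Gram-Schmidt, as written here, can lose orthogonality in floating point for badly conditioned matrices; the reflector-based approach promised for the next lecture is the numerically preferred way.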
Then I will describe the QR method, and then we will consider an efficient way of finding the QR decomposition of a matrix by using what are known as reflectors. This we are going to do in the next lecture. So, thank you.