Good morning. As discussed in the previous lecture, today we start the first of a few lessons in the module on the algebraic eigenvalue problem. Let me remind you again that in order to follow the lectures in this module, it is very important that the subject matter of the earlier segment is thoroughly absorbed. By this stage you should have completed most of the exercises of that segment, because some of the background necessary for the following lectures is actually developed through the exercises in the textbook that I have referred you to. The problems of the book listed in the tutorial plan should be complete by now; they will help you follow the coming module, which covers chapters 8 to 14.

In this lecture on eigenvalues and eigenvectors, we will first introduce the eigenvalue problem, and then the generalized eigenvalue problem, which will also expose you to one of the practical problems from which eigenvalue problems emerge. Next we will discuss some basic theoretical results, which will be utilized later in sophisticated methods for solving the eigenvalue problem, and towards the end we will briefly discuss a quick and easy method of solving it, the power method.

To begin with, I again draw your attention to a mapping A from R^n to itself, that is, from the n-dimensional space to itself; the corresponding matrix is n by n, a square matrix. When we multiply a vector by such a matrix, the vector gets mapped to another vector, in this case in the same space, and the mapping produces two effects on the vector. One is a magnification, or scaling, which may be less than one, in which case the vector actually shrinks. The other effect is a turning, a rotation. This is the general way in which a vector gets mapped through multiplication with a matrix.

Now, for every matrix, some vectors are special. They are special in the sense that they undergo only magnification or scaling, and do not rotate, under multiplication with that particular matrix. These vectors are in some sense the own vectors of that matrix, and they are called eigenvectors; the German word "eigen" means "own", as if these vectors belong to that particular matrix. So if you multiply the matrix A with one such special vector v, the result of the mapping is nothing other than a pure scaling: A v = λ v. In that case we call v an eigenvector, and the scale factor λ is called the eigenvalue, or the characteristic value. Together, λ and v, eigenvalue and eigenvector, are quite often referred to as an eigenpair. Determination of all the λ's and the corresponding v's, the eigenvalues and eigenvectors of a given matrix, is called the algebraic eigenvalue problem.

Now, how can we find the values λ and the corresponding vectors v from only this much? The underlying concept is actually very simple. You can take λv to the other side, though you cannot write the difference as (A - λ)v, because A is a matrix and λ is a scalar.
What you can do, however, is write v as Iv, with I the identity matrix, and then take λI and A together: moving Av to the other side, you get λIv - Av, where λI is a matrix and A is also a matrix, so you have the system of linear equations (λI - A)v = 0. Note that this is a system of n homogeneous equations in the n components of v; the right-hand side is zero. You know that for a homogeneous system of equations to have a non-trivial, non-zero solution, the coefficient matrix must be singular. That is, the coefficient matrix must have a null space, and v will actually be a member of the null space of the matrix λI - A. For singularity of this matrix, you must set its determinant to zero: det(λI - A) = 0.

Notice that we have now reached a stage where a large number of unknowns has suddenly been reduced to one. In the equation Av = λv you had one scalar unknown λ and one vector unknown v, that is, n + 1 unknowns in total. The condition that the coefficient matrix is singular gives you a single equation in a single unknown. In addition, det(λI - A) is a polynomial of degree n in the unknown λ. So the question boils down, to begin with, to finding the roots of that polynomial, that is, to solving the polynomial equation; and we know it will have n roots, counting multiplicities. The polynomial det(λI - A) is called the characteristic polynomial of the matrix A, the corresponding equation det(λI - A) = 0 is called the characteristic equation, and its solutions are the eigenvalues. The characteristic equation thus gives you the n roots of this nth-degree polynomial; these are the n eigenvalues, and for each of them you then try to find the corresponding eigenvectors. That part should not be very difficult, because as you insert the eigenvalues one by one into (λI - A)v = 0, you get, for every eigenvalue, a homogeneous system in which the coefficient matrix is completely known. All you need to do is find the null space of that known matrix λI - A, which we have studied earlier. A small numerical illustration of this direct route follows.
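If you want to experiment with this route on a computer (form the characteristic polynomial, find its roots, then extract a null-space vector for each root), a minimal sketch in Python with NumPy might look as follows. The 2 by 2 matrix here is a made-up example, not one from the lecture, and this direct route is shown only to make the concept concrete.

```python
import numpy as np

# Hypothetical 2x2 example, small enough that the
# characteristic-polynomial route is still practical.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

coeffs = np.poly(A)        # coefficients of det(lambda*I - A), highest degree first
lams = np.roots(coeffs)    # the n eigenvalues, with multiplicity

# For each eigenvalue, an eigenvector is any nonzero member of the null
# space of (lambda*I - A); the last right-singular vector of that
# (numerically singular) matrix spans the null space.
for lam in lams:
    M = lam * np.eye(2) - A
    v = np.linalg.svd(M)[2][-1]
    print(lam, v, np.allclose(A @ v, lam * v))
```

For this matrix the characteristic polynomial is λ² - 7λ + 10, giving eigenvalues 5 and 2, and the printed checks confirm Av = λv for each extracted null-space vector.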
Now, we have been talking about the number of eigenvalues. The total count will certainly be n, but some may be repeated. For example, a 3 by 3 matrix may turn out to have eigenvalues 2, 2 and 4. Here the eigenvalue 2 is said to have an algebraic multiplicity of 2, because it occurs twice in the characteristic polynomial, which in this case is (λ - 2)²(λ - 4): the factor for 2 appears twice, the factor for 4 only once.

We also talk of geometric multiplicity. When we insert such an eigenvalue into (λI - A)v = 0 and try to find v, say λ = 2 in this example, we might expect up to two such vectors: one eigenvector belonging to λ = 2 in its first instance and a second belonging to λ = 2 in its second instance. You may succeed in finding two such eigenvectors, or you may not; that depends on the particular matrix A. In other words, if the algebraic multiplicity of an eigenvalue is more than 1, that eigenvalue may give you one eigenvector, or two, or three, up to the number equal to the algebraic multiplicity. In a larger matrix, say 7 by 7, suppose the eigenvalue 2 appears 5 times, so the eigenvalues are 2, 2, 2, 2, 2 and two others. Then, corresponding to the eigenvalue 2, you may find only one eigenvector, or two, or three, up to five; more than five you cannot get. The number of linearly independent eigenvectors you can find corresponding to that particular eigenvalue is called its geometric multiplicity.

Note the names: one is algebraic, the other geometric. The algebraic multiplicity comes from the polynomial, namely how many times the factor (λ minus that particular eigenvalue) appears in the characteristic polynomial; it comes from an algebraic source, and that is why it is called algebraic. On the other hand, the corresponding eigenvectors span a subspace of R^n whose dimension equals the number of linearly independent eigenvectors you can find for that eigenvalue, and a subspace is a geometric entity; that is why that number is called the geometric multiplicity of the eigenvalue.

Note also that when we count different eigenvectors, linearly dependent eigenvectors are not considered different. If you find one eigenvector, then obviously twice that vector is also an eigenvector, but it is not counted as different from the first. Similarly, if you have already found two eigenvectors corresponding to a particular eigenvalue, then a linear combination of the two is certainly also an eigenvector for that same eigenvalue, but it is not considered anything different. So when we hunt for eigenvectors, we look for linearly independent eigenvectors.

When, for a particular eigenvalue, the algebraic and geometric multiplicities mismatch, the algebraic multiplicity being higher and the geometric lower, we call the matrix defective. In what sense it is defective, what the defect is, and what to do in such a situation, we will discuss in detail in the coming lectures. When the algebraic and geometric multiplicities are equal for every eigenvalue, we can do certain interesting things very easily: we can diagonalize the matrix. That means we can change the basis for representing the mapping in such a way that the resulting matrix representation of the same linear transformation turns out to be diagonal; the directions get completely decoupled. Such matrices are called diagonalizable. The direct, straightforward way to recognize a diagonalizable matrix is to check the algebraic and geometric multiplicities of every eigenvalue: if they all match, the matrix is diagonalizable; if even a single eigenvalue has a mismatch between its algebraic and geometric multiplicities, it is not. A small check below makes the two multiplicities concrete.
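This check uses a hypothetical 3 by 3 matrix with eigenvalues 2, 2 and 4, given a coupling entry that makes it defective; the geometric multiplicity is computed as n minus the rank of A - 2I.

```python
import numpy as np

# Hypothetical example with eigenvalues 2, 2, 4, but with a coupling
# entry that makes the matrix defective.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 4.0]])

print(np.linalg.eigvals(A))   # 2 appears twice: algebraic multiplicity 2

# Geometric multiplicity of lambda = 2 is the null-space dimension of
# (A - 2I), i.e. n - rank(A - 2I).
geo = 3 - np.linalg.matrix_rank(A - 2.0 * np.eye(3))
print(geo)                    # 1 < 2, so this matrix is defective
```

Without the coupling entry in position (1, 2) the same diagonal would give geometric multiplicity 2 and the matrix would be diagonalizable.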
In the defective case, the space cannot be decoupled in terms of individual eigenvalues in the way it can be for diagonalizable matrices. Note, by the way, that diagonalizability is not really a property of the matrix as such. It certainly is a property of the matrix, but it is actually a property of the much more fundamental thing underlying the matrix, namely the linear transformation. Diagonalizability is a property of the linear transformation, of which the matrix is just one representation.

These considerations apart, does this outline suggest that the solution method for the eigenvalue problem is complete? It may look so: finding the determinant of λI - A as a polynomial in λ is something we can think of doing; setting it to zero to obtain a polynomial equation does not sound very dangerous; solving a polynomial equation is something we are acquainted with; and, after finding each λ, finding the corresponding eigenvectors as a sub-problem is not very difficult. But does all discussion of the eigenvalue problem end here? The answer is no. The reason is that when the degree of the polynomial equation is very high, solving it is actually not easy at all. In fact, one of the most used methods for solving a polynomial equation does the reverse: it solves the polynomial equation through the methods of the eigenvalue problem, via the eigenvalues of the polynomial's companion matrix. Therefore, solving an eigenvalue problem with polynomial equation solving as a sub-problem is not an attractive proposition, because as the degree of the polynomial grows, it becomes computationally very difficult. People therefore look for other ways of cracking the eigenvalue problem directly, without first taking recourse to the polynomial equation. In that attempt, mathematicians have developed a host of interesting tools to handle matrices, express them in canonical forms, and draw great advantage from these theoretical developments in several fields of applied mathematics. We will be studying these interesting developments in the coming lectures, including this one. To prepare the ground, I will first need to develop some basic theoretical results.

Even before that, it is a good idea to see a practical problem from which an eigenvalue problem appears. There are many such problems; in almost all branches of science and engineering, eigenvalue problems turn up. One such problem is the free vibration of a mechanical system. For example, consider a one-degree-of-freedom mass-spring system, for which the dynamic equation is m x'' + k x = 0, where m is the mass and k is the stiffness of the spring. You know what kind of solution this equation will have, a sinusoidal one, so you write the assumed solution x = a sin(ωt + α), differentiate it twice (which produces a factor of -ω², a being a constant), and insert it into the equation. From that, you very easily work out the natural frequency ω = sqrt(k/m) at which this mass-spring system will undergo natural vibration.
Now, when you formulate and solve the same problem for a multi-degree-of-freedom system, you do not get such a nice, simple scalar equation, but a matrix-vector equation. The free vibration of an n-degree-of-freedom system is governed by M x'' + K x = 0, where M is the inertia matrix, K is the stiffness matrix, x is the vector of generalized coordinates, and x'' is the corresponding acceleration. In this problem we ask: at what natural frequencies can this particular mechanical system execute natural vibration, and along what vectors x do those vibrations take place? For example, in a 3-degree-of-freedom system it might happen that x1, x2, x3 give you a particular direction, a particular vector, along which vibration takes place at one frequency; there is a second direction in which the system may vibrate at a second frequency; and similarly a third direction with a third frequency. What are these vibration modes, and what are the corresponding frequencies? That becomes the problem for solution in this free-vibration setting.

In analogy with the scalar case, we assume a vibration mode: x = φ sin(ωt + α), where φ is a constant amplitude vector. Again, we differentiate twice with respect to time and insert x'' into the equation; since sin(ωt + α), differentiated twice, produces a factor of -ω², we get (K - ω²M) φ sin(ωt + α) = 0. Then we use the same argument as in the scalar case: for this to be zero for all time, since sin(ωt + α) is not always zero, the other factor has to be zero. When we do that, we get the corresponding equation K φ = ω² M φ.
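Written out compactly, with the same symbols, the derivation we have just gone through is:

```latex
\[
  M\ddot{x} + Kx = 0, \qquad
  x(t) = \varphi \sin(\omega t + \alpha)
  \;\Longrightarrow\;
  \ddot{x}(t) = -\omega^{2}\varphi\sin(\omega t + \alpha),
\]
\[
  \bigl(K - \omega^{2}M\bigr)\varphi\,\sin(\omega t + \alpha) = 0
  \quad \text{for all } t
  \;\Longrightarrow\;
  K\varphi = \omega^{2}M\varphi .
\]
```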
Now, this resembles the eigenvalue problem we discussed just now: earlier we had Av = λv, and here we have K φ = ω² M φ. You can identify ω² with λ, but this problem is not exactly the same, because on the right-hand side a matrix M is sitting where the identity implicitly sat. That is why it is called not just an eigenvalue problem but a generalized eigenvalue problem: in the original problem Av = λv, there was effectively an identity matrix on the right, which indeed we inserted when taking λv to the other side; here, in place of the identity, a non-trivial matrix sits. How do we handle this? If we take everything to one side, the coefficient matrix is K - ω²M, not the straightforward A - λI of the ordinary eigenvalue problem.

One might suggest: premultiply both sides of the equation by M⁻¹, and immediately we get the problem M⁻¹K φ = ω² φ. Why not solve that? We know M, we know K, we can evaluate M⁻¹K, take it as A, and it becomes an ordinary eigenvalue problem. Indeed it is possible to do that, but it is not a good idea. The reason follows from the nature of the matrices appearing in these positions. K is not just some matrix, and M is not just some matrix: one is a stiffness matrix and the other is an inertia matrix, and such matrices, when they appear in practical problems, have a certain structure. A stiffness matrix is always symmetric, and an inertia matrix is always symmetric and positive definite. If we evaluate M⁻¹K, we may lose the symmetry present in the original problem, and it is not a good idea to take a step in the solution of a problem that actually makes the original problem more difficult. Later we will study in detail how the eigenvalue problem of a symmetric matrix is much simpler and much more straightforward than that of a general non-symmetric matrix. Therefore it would be a bad idea to spoil the symmetry of the problem as originally given; rather, we should take a measure that utilizes this particular structure.

So what we do is take the symmetric positive definite matrix M and recall that such a matrix has a Cholesky decomposition M = L Lᵀ. We conduct the Cholesky decomposition of M in this form and then apply a coordinate transformation: the original coordinates φ are transformed to φ̃ = Lᵀφ. Insert M = L Lᵀ into K φ = ω² M φ, and you get K φ = ω² L Lᵀ φ; the factor Lᵀφ is exactly what we are defining as φ̃. On the left side too we would like to have φ̃, since we are applying that coordinate transformation: if φ̃ = Lᵀφ, then φ in terms of φ̃ is found through premultiplication by (Lᵀ)⁻¹, so φ = L⁻ᵀ φ̃. Substituting, K L⁻ᵀ φ̃ = ω² L φ̃. Now we can get rid of the L on the right by premultiplying both sides with L⁻¹; L⁻¹L gives the identity, and we are left with L⁻¹ K L⁻ᵀ φ̃ = ω² φ̃. Calling the whole matrix K̃ = L⁻¹ K L⁻ᵀ, the original generalized eigenvalue problem has been transformed into the new problem K̃ φ̃ = ω² φ̃. So in the new coordinate system, in which φ̃ is the vector, we have an ordinary eigenvalue problem, and the matrix K̃ is actually symmetric: K was originally symmetric, and we have multiplied it with L⁻¹ on one side and with the transpose of L⁻¹ on the other, which preserves symmetry; take the transpose of L⁻¹K(L⁻¹)ᵀ and you get the same thing back.

One remark on notation: when we write L⁻ᵀ, it is not immediately clear whether we mean the transpose of L⁻¹ or the inverse of Lᵀ. Still, the notation is valid, because in the two cases the result is the same; L⁻ᵀ means either of the two, as they are always equal. A small computational sketch of this whole reduction follows.
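Here is a minimal sketch of the reduction in Python with NumPy: Cholesky factorization, the ordinary symmetric problem in φ̃, and the back-transformation to φ. The 2-degree-of-freedom M and K are made-up toy numbers, and forming L⁻¹ explicitly is acceptable only at this demonstration scale; real code would use triangular solves.

```python
import numpy as np

def to_standard_form(K, M):
    """Reduce K @ phi = w2 * M @ phi to an ordinary symmetric eigenvalue
    problem via the Cholesky factor M = L @ L.T, then back-transform."""
    L = np.linalg.cholesky(M)            # M must be symmetric positive definite
    Linv = np.linalg.inv(L)              # explicit inverse: fine at demo scale only
    Kt = Linv @ K @ Linv.T               # K-tilde = L^{-1} K L^{-T}, still symmetric
    w2, Phit = np.linalg.eigh(Kt)        # ordinary symmetric eigenvalue problem
    Phi = np.linalg.solve(L.T, Phit)     # phi = L^{-T} phi-tilde, column by column
    return w2, Phi

# Hypothetical 2-DOF system: M symmetric positive definite, K symmetric.
M = np.array([[2.0, 0.0], [0.0, 1.0]])
K = np.array([[3.0, -1.0], [-1.0, 1.0]])

w2, Phi = to_standard_form(K, M)
print(np.sqrt(w2))                           # natural frequencies omega
print(np.allclose(K @ Phi, (M @ Phi) * w2))  # K phi = w2 M phi for each column
```

For these toy numbers the generalized eigenvalues ω² come out as 0.5 and 2, and the final check confirms each column of Phi satisfies K φ = ω² M φ.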
So this is one practical problem from which you get an eigenvalue problem; there are many other situations throughout science and engineering from which eigenvalue problems appear. We will now start with some basic theoretical results on the eigenvalue problem, over which we will later build methods of solving it. Apart from that, as a by-product, these theoretical results will provide us with tools to handle matrices in nice, elegant, canonical ways, useful in many areas of applied mathematics wherever matrices appear.

The first important result to keep in mind is that the eigenvalues of the transpose of a matrix are the same as those of the original matrix. This is very easy to see, because the determinant of a transpose is the same as the determinant of the original matrix, and the characteristic polynomial is found just by expanding a determinant: det(λI - Aᵀ) = det((λI - A)ᵀ) = det(λI - A). Of course, the eigenvectors need not be the same; in general they are different.

The next important point to remember is the situation for diagonal and block diagonal matrices. You know what a diagonal matrix is: a 3 by 3 matrix, say, with entries a1, a2, a3 on the diagonal and zeros everywhere else. It is very clear that the diagonal entries are the eigenvalues of this matrix, and the corresponding eigenvectors are the natural basis members. For example, multiply this matrix with (1, 0, 0)ᵀ and you obviously get a1 times (1, 0, 0)ᵀ, which shows Av = λv with v = e1: a1 is an eigenvalue, and the first natural basis member e1 is the corresponding eigenvector. Similarly, a2 and a3 are the other eigenvalues, with e2 and e3 as the corresponding eigenvectors. This much is obvious.

Now suppose this is actually a much larger matrix: the scalar a1 is replaced by a square matrix A1, the scalar a2 by a square matrix A2, and similarly a3 by A3. What you get is no longer a diagonal matrix, because the blocks may have off-diagonal entries; it is called a block diagonal matrix. For the eigenvalues of a block diagonal matrix there is a very interesting result: the eigenvalues of the large matrix are the eigenvalues of A1, together with the eigenvalues of A2, together with the eigenvalues of A3. If A1 is r by r, A2 is s by s, A3 is t by t, and everything outside these blocks is zero, then the r eigenvalues of A1, the s eigenvalues of A2 and the t eigenvalues of A3, obtained separately, can all be put in one list, and these r + s + t numbers are the eigenvalues of the large matrix.
The corresponding eigenvectors are also very easy to find; they are just coordinate extensions. For example, if the small block A2 has an eigenvalue λ2 with corresponding eigenvector v2, then above v2 you put as many zeros as required to fit the size of A1, and below it as many zeros as required to fit the size of A3. Multiplying out, you find that the block diagonal matrix applied to this extended vector gives λ2 times that same vector. That means an eigenvalue of A2 is also an eigenvalue of the large matrix A, and the corresponding eigenvector of A is found through a coordinate extension of v2: put the required zeros above and below, and you get the big vector, which is an eigenvector of the large matrix for that same eigenvalue.

For diagonal and block diagonal matrices the situation is thus very simple. The matter gets a little more complicated for triangular matrices. A triangular matrix has nonzero entries on one side of the diagonal, but still the diagonal entries are the eigenvalues, because on the other side everything is zero. When you write the characteristic matrix λI - A, you get λ - a1, λ - a2, λ - a3 on the diagonal, various entries on one side, and zeros on the other. Expanding the determinant from the first column, you get (λ - a1) times something, plus all zeros; that something in turn gives (λ - a2) times something, plus all zeros; and so on. So for a triangular matrix, the characteristic polynomial emerges as the product of these factors; you get the characteristic polynomial already in factorized form, which immediately gives a1, a2, a3 and so on, the diagonal entries of the original matrix, as the eigenvalues. But the eigenvectors are a different question: for those you have to do a fair amount of calculation; they are not so obviously visible. So when we handle triangular matrices, we talk directly in terms of the eigenvalues only; the eigenvectors can be found with some further processing.

Now take a block triangular matrix, that is, the scalars are replaced with matrices, with big blocks of zeros sitting on one side of the diagonal blocks and blocks of other entries, many of them nonzero, on the other. A block triangular matrix H with four blocks looks like H = [[A, B], [0, C]]: a square block A, a block B which is not necessarily square, a zero block which is also not necessarily square (it has the size of Bᵀ), and a block C which has to be square. The claim is that the eigenvalues of H are the eigenvalues of A together with the eigenvalues of C. Here, however, the statement is made only for the eigenvalues, not for the eigenvectors. Half of the claim can be seen in the same way as the block diagonal case: if A has an eigenvalue λ with eigenvector v, apply the complete matrix H to the coordinate extension (v, 0)ᵀ, and the product gives λ(v, 0)ᵀ; so the coordinate extension of v turns out to be an eigenvector of the complete matrix H with the corresponding eigenvalue λ. A quick numerical check of the block triangular result appears below.
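With hypothetical blocks A, B and C, the check looks like this in NumPy; it also verifies the coordinate-extension statement for an eigenvector of A.

```python
import numpy as np

# Hypothetical blocks: A and C square, B arbitrary, zeros below the diagonal.
A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = np.array([[7.0, 8.0], [9.0, 6.0]])
C = np.array([[5.0, 1.0], [1.0, 5.0]])

H = np.block([[A,                B],
              [np.zeros((2, 2)), C]])

# Eigenvalues of block triangular H = eigenvalues of A plus those of C.
print(np.sort(np.linalg.eigvals(H)))                           # [1. 3. 4. 6.]
print(np.sort(np.r_[np.linalg.eigvals(A), np.linalg.eigvals(C)]))

# The coordinate extension (v, 0) of an eigenvector v of A is an eigenvector of H.
lam, V = np.linalg.eig(A)
v0 = np.r_[V[:, 0], 0.0, 0.0]
print(np.allclose(H @ v0, lam[0] * v0))
```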
For C, however, we cannot immediately apply H to a coordinate extension, because the block B creates trouble: the zero that helped in the product in the first case will not help in this one. So in this situation we take μ as an eigenvalue of C and argue that it is then also an eigenvalue of Cᵀ, so that Cᵀw = μw for some vector w. Then we apply not H but Hᵀ = [[Aᵀ, 0], [Bᵀ, Cᵀ]] to the appropriate coordinate extension (0, w)ᵀ, and at the end we find that the product is μ(0, w)ᵀ. That means μ is an eigenvalue of Hᵀ, and therefore μ is an eigenvalue of H as well.

Apart from these results, there are a few points to keep in mind that will be very useful in many of the methods. One is that if we add a scalar multiple of the identity to a matrix, then all the eigenvalues get shifted by that scalar, with the eigenvectors unchanged: A + σI has eigenvalues λ + σ. This is called the shift theorem. It is very easy to verify, so I am not going into it; I leave it to you.

The other important fact is applicable only to a symmetric matrix, which has mutually orthogonal eigenvectors, a fact we will verify in the next lecture. For an eigenvalue λⱼ with corresponding unit eigenvector vⱼ, if we construct another matrix B = A - λⱼ vⱼ vⱼᵀ, subtracting that part from A, then this resulting matrix B has exactly the same eigenstructure as A, that is, the same eigenvalues with the same corresponding eigenvectors, except that the eigenvalue corresponding to that particular eigenvector vⱼ is no longer λⱼ but is reduced to 0. The information worth of that one eigenvalue has been removed from A; all the rest of the eigenstructure remains as it is. This is an important point to which we will come back after studying symmetric matrices in detail in the next lecture.

Before that, let me expose you right now to an important quick and easy method for solving the eigenvalue problem, called the power method. It helps when you are not interested in finding all the eigenvalues of a large matrix but only a few of the largest magnitude, or perhaps the largest-magnitude and smallest-magnitude eigenvalues. It is a very quick and easy method, easy to understand and easy to implement, but note that it works only for matrices that have a full set of n eigenvectors, that is, which are diagonalizable, and for which a single eigenvalue has the largest magnitude. That means the largest-magnitude eigenvalue has a magnitude strictly larger than all the rest; it is not that two are tied at the top. In that case the power method gives you the largest-magnitude eigenvalue very easily. To understand how it operates, consider that if the matrix A possesses a full set of n eigenvectors, then these eigenvectors span the entire space R^n, and any vector x that you can think of can be expressed as a linear combination of them.
It does not matter which particular vector we pick: any x will have a representation x = α1 v1 + α2 v2 + ... + αn vn, with α1, α2 and so on the corresponding coefficients. We do not yet know the eigenvectors or the coefficients, but we know this much: any vector x we have picked has some such representation, with the α's and v's currently unknown to us. Now multiply both sides by the matrix A. On one side, x is a known vector we have picked, so we can work out Ax. On the other side, we do not know the numbers in detail, but we know that Av1 = λ1 v1, Av2 = λ2 v2, and so on; so through one multiplication by A, each coefficient in the representation picks up an additional factor of its λ. If we go on multiplying the resulting vector with A, once more and once more, then after p such multiplications we have on one side Aᵖx, which is known, the result of multiplying x by A p times, and on the other side α1 λ1ᵖ v1 + α2 λ2ᵖ v2 + and so on. Taking λ1ᵖ outside, the rest remains inside: Aᵖx = λ1ᵖ [α1 v1 + α2 (λ2/λ1)ᵖ v2 + ...].

Under the assumption that λ1 is the single largest-magnitude eigenvalue and the next one is somewhat below it, as p goes high, after many, many multiplications, the ratios λ2/λ1, λ3/λ1 and so on, all being of magnitude less than one, tend to zero when raised to a sufficiently large power. That means that after many such multiplications, the vector sitting inside the brackets is in the same direction as v1; and once that direction has stabilized, one more application of the same multiplication with A means that an eigenvector is being multiplied by A, which gives λ1 times that vector, that is, the vector in the converged direction with λ1 as the scale between two successive values. So as p tends to infinity, Aᵖx tends to λ1ᵖ α1 v1. After the process has converged, the result Aᵖx and the result of the previous step are two vectors in the same direction: the ratio of the first components, the ratio of the second components, the ratio of the third components are all the same, and that common ratio is λ1. Indeed, all n ratios coming out equal is the test that convergence has taken place. In this way you quickly get the largest-magnitude eigenvalue; note that it may be negative, which does not matter. You get the largest-magnitude eigenvalue, and the converged vector is the corresponding eigenvector. A minimal implementation sketch follows.
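Here is a minimal power-iteration sketch in Python with NumPy. The start vector, tolerance and small symmetric test matrix are illustrative choices; rescaling the iterate so that its pivot component stays 1 prevents overflow and underflow without changing the direction, and the eigenvalue estimate is exactly the componentwise ratio described above.

```python
import numpy as np

def power_method(A, tol=1e-12, max_iter=1000):
    """Largest-magnitude eigenvalue and eigenvector by power iteration.
    Assumes A is diagonalizable with a single dominant eigenvalue, and
    that the start vector has a component along the dominant eigenvector."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        j = np.argmax(np.abs(y))     # pivot component (assumed nonzero in x)
        lam_new = y[j] / x[j]        # componentwise ratio -> lambda_1
        x = y / y[j]                 # rescale so the pivot stays 1
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # hypothetical small test matrix
lam1, v1 = power_method(A)
print(lam1)                              # approx. 3.618, the dominant eigenvalue
print(np.allclose(A @ v1, lam1 * v1))    # v1 is the corresponding eigenvector
```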
Two points are worth making here. First, suppose that besides the largest you also need the least-magnitude eigenvalue. For this purpose we can use the shift theorem. After finding the largest-magnitude eigenvalue, we see its sign; it is a ratio which may be positive or negative. Suppose, for example, λ1 turns out to be positive, say the largest-magnitude eigenvalue is 23. Then from the original matrix we subtract 23 from all the diagonal entries, that is, we subtract 23I from the matrix, an application of the shift theorem. All the eigenvalues get shifted leftward by 23: whatever was 23 earlier becomes 0, whatever was 21 becomes -2, whatever was 2 becomes -21, and so on. The algebraically smallest eigenvalue, which when all the eigenvalues are positive is also the smallest in magnitude, now turns out to be the largest in magnitude in the shifted matrix. We then apply the same power method once more, find the largest-magnitude eigenvalue of the shifted matrix, and shift the result back 23 steps to the right to get the correct eigenvalue of the matrix A, with the corresponding eigenvector. This is one way to find the largest- and least-magnitude eigenvalues, which has a lot of practical significance.

The second point concerns another important question: suppose we are not interested in finding all eigenvalues but a top few of largest magnitude, λ1, λ2, λ3, λ4 and so on, say the top 6, with their eigenvectors. The matrix may be 100 by 100, and we want not all 100 eigenvalues and eigenvectors but only the top few, as some requirement dictates. Then, after finding the largest one, we can use deflation. This works in the case of a symmetric matrix, which is quite often what is encountered in practical situations. By deflation, we subtract the part contributed by the eigenvalue λ1 and the corresponding eigenvector, as above; the resulting matrix then has λ2 as its largest-magnitude eigenvalue, which can be found through the power method, and so on; see the sketch below. This is a very straightforward procedure, applicable if you are sure that the matrix does satisfy the stated requirements; otherwise the process may not operate as expected or as desired.
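A sketch of that deflation loop, reusing the power_method function from the sketch above, could look like this; it assumes a symmetric matrix, so that the eigenvectors are mutually orthogonal and the subtracted term removes exactly one eigenvalue.

```python
import numpy as np
# Reuses the power_method sketch above; valid for a symmetric matrix,
# whose eigenvectors are mutually orthogonal (Hotelling deflation).

def top_k_eigenpairs(A, k):
    B = A.copy()
    pairs = []
    for _ in range(k):
        lam, v = power_method(B)         # dominant eigenpair of the current B
        v = v / np.linalg.norm(v)        # deflation formula needs a unit vector
        pairs.append((lam, v))
        B = B - lam * np.outer(v, v)     # that eigenvalue is now replaced by 0
    return pairs

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # hypothetical symmetric test matrix
for lam, v in top_k_eigenpairs(A, 2):
    print(lam, np.allclose(A @ v, lam * v))
```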
Now, apart from these things, there are two important concepts which will go a long way in our discussion in the coming lectures. One is the eigenspace. This term is used for a subspace of R^n composed of the eigenvectors of a matrix corresponding to the same eigenvalue λ. For example, suppose A has an eigenvalue λ corresponding to which there are k eigenvectors v1, v2, v3 up to vk. Then any linear combination of these eigenvectors is also going to be an eigenvector, and you can verify that very easily. Suppose corresponding to the eigenvalue λ there are two eigenvectors v1 and v2, so that Av1 = λv1 and Av2 = λv2. If we apply A to a linear combination a1 v1 + a2 v2, then, a1 being a scalar we can take out, we get a1 (Av1) + a2 (Av2) = a1 λ v1 + a2 λ v2, and taking the common scalar λ outside, A(a1 v1 + a2 v2) = λ(a1 v1 + a2 v2). That is, the matrix A applied to this combination gives λ times the same combination: if v1 and v2 are two eigenvectors corresponding to the same eigenvalue λ (note that this applies only for the same eigenvalue), then any linear combination of them is obviously also an eigenvector of that particular matrix A. It is not a new linearly independent eigenvector, so it does not enter the counting of eigenvectors, but whenever required this vector does operate like an eigenvector. That means that if these k eigenvectors correspond to the same eigenvalue λ, the complete subspace spanned by them is a subspace in which every vector is an eigenvector; this subspace is called the eigenspace of A corresponding to that eigenvalue.

The other important theoretical point, which will come up often in the coming lectures, is the similarity transformation. This is something we have seen once before; here we look at some of its important properties. If we decide to represent the vectors of the space R^n in a different, new basis S, then the matrix representation of a linear transformation changes from A to B = S⁻¹AS. This we have seen earlier. Now consider det(λI - A), the characteristic polynomial of the matrix A. We already know that the determinant of a matrix and the determinant of its inverse are reciprocals of each other. That means that if we multiply det(λI - A) both by det(S) and by det(S⁻¹), we actually make no change, because the one is the reciprocal of the other. We also know that the determinant of a product of square matrices of the same size is the product of the determinants: det(P) det(Q) det(R) = det(PQR). What we have here is precisely det(S⁻¹) det(λI - A) det(S), which is therefore the single determinant det(S⁻¹(λI - A)S), with S⁻¹ inserted from one side and S from the other. On the term λI, the inserted S⁻¹ and S cancel each other, because the identity sits in between; it stays λI. On A, the effect is different: S⁻¹AS, which is B. So the whole thing is the same as det(λI - B), and what is that? The characteristic polynomial of the matrix B. This shows that through the similarity transformation the matrix may have changed, but its characteristic polynomial remains the same as earlier: the characteristic polynomials of A and B turn out to be identical. If the entire polynomial is the same for A and B, then all its roots are the same. That means the eigenvalues remain unchanged through a similarity transformation, which is natural, because a similarity transformation comes about only as a result of a change of basis: no geometric entity is being changed, only its representation, and the eigenvalues are a property of the underlying linear transformation, not of the basis. A quick numerical confirmation appears below.
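Here is a small NumPy experiment with a random, purely hypothetical A and basis matrix S; solve is used instead of forming S⁻¹ explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)       # hypothetical random example
A = rng.standard_normal((4, 4))
S = rng.standard_normal((4, 4))      # a random basis change; generically invertible

B = np.linalg.solve(S, A @ S)        # B = S^{-1} A S, without forming S^{-1}

# Same characteristic polynomial, hence the same eigenvalues.
print(np.allclose(np.poly(A), np.poly(B)))
```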
And how do the eigenvectors change? Geometrically, the eigenvectors do not change either, but their representation in the new basis will change, just as any other vector changes its representation in the new basis through multiplication by S⁻¹, which we have already studied. In the same manner, an eigenvector v of A will transform to S⁻¹v in the new basis: if v is an eigenvector of A, the corresponding eigenvector of B will be S⁻¹v, because the new basis S has appeared there. The basis change of vectors takes place through this relationship, and the same applies to eigenvectors as well.

Now let us quickly summarize the points we have discussed in this lecture. First, the meaning and context of the algebraic eigenvalue problem. Second, the fundamental relationships and deductions which are vital for the solution of the algebraic eigenvalue problem. Third, we have been exposed to a quick and easy method, the power method, as an inexpensive procedure to determine the extremal-magnitude eigenvalues only: the largest, or the largest and smallest, or the largest few. In all these situations we can use the power method with a little help from the shift theorem or the deflation technique. But while applying the power method you must be careful: it does not apply to arbitrary matrices, only to matrices having particular kinds of eigenstructure. If your matrix falls in that category, then the power method will be very handy in many situations; otherwise it may not operate as desired.

In the next lecture we will build on what we have developed so far and see a detailed theoretical development of the eigenvalue problem, which will then be used in different categories of methods for solving the algebraic eigenvalue problem. Thank you.