Welcome. In this lecture we will study diagonalization and similarity transformations. First we will take up the issue of diagonalizability, which I initiated in the previous lecture, and then we will go through two very important topics in the algebraic eigenvalue problem: one is canonical forms, and the other is the special advantage that we can take of the symmetry of a matrix.

First, diagonalizability. Consider an n by n matrix A which has n linearly independent eigenvectors v_1 to v_n with corresponding eigenvalues lambda_1 to lambda_n. Some of these eigenvalues may be repeated; for instance lambda_2 and lambda_3 can be equal, they need not all be different. What matters is that the eigenvectors are all linearly independent; only then do we speak of n different eigenvectors. Now pack all these n eigenvectors into one n by n matrix S with the vectors as columns, and examine the product A S. What is the first column of this product? It is A times the first column of S, that is A v_1; and since v_1 is an eigenvector with eigenvalue lambda_1, A v_1 = lambda_1 v_1. Similarly the second column of the product will be A v_2 = lambda_2 v_2, and so on up to the n-th column.

Now, we claim that we would get the same product if we multiplied the matrix S from the right with a diagonal matrix having the eigenvalues as its diagonal entries. Let us verify this a little carefully. The first column of this product is S multiplied with the first column of the diagonal matrix, which gives v_1 times lambda_1 plus v_2 times 0 plus v_3 times 0 and so on; thus the first column of this product is also lambda_1 v_1. Similarly the second column will be v_1 times 0 plus v_2 times lambda_2 plus v_3 times 0 and so on, that is lambda_2 v_2, and in the same manner we find all the columns up to the n-th identical to those of A S. Calling this diagonal matrix of eigenvalues Lambda, we have obtained A S = S Lambda.

Now, if we post-multiply this equality with S inverse, then on the left side S gets cancelled and we get A = S Lambda S inverse, where Lambda is the diagonal matrix with the eigenvalues at the diagonal locations. Similarly, if we pre-multiply both sides with S inverse, we get S inverse A S = Lambda, which means that the matrix A, expressed in the new basis S, gets diagonalized. This process of changing the basis of a linear transformation so that its new matrix representation is diagonal is called diagonalization; in this process the transformation, the mapping, gets decoupled among its coordinates. What was the necessary condition for this to happen? Just this: the matrix has n linearly independent eigenvectors, or we can say the matrix possesses a full set of n linearly independent eigenvectors. That was the only requirement for the diagonalization to be possible. So we can say this about diagonalizability: a matrix having a complete set of n linearly independent eigenvectors is diagonalizable; that is, an n by n matrix can be diagonalized if it has a full set of n linearly independent eigenvectors.
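As a quick numerical check of A S = S Lambda and the two similarity relations, here is a minimal sketch using numpy; the 2 by 2 matrix A below is an arbitrary illustration, not one from the lecture:

```python
import numpy as np

# A small diagonalizable matrix (an arbitrary example with distinct eigenvalues 5 and 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of S are the eigenvectors; lam holds the eigenvalues.
lam, S = np.linalg.eig(A)
Lam = np.diag(lam)

# Verify A S = S Lambda and the similarity relations derived from it.
print(np.allclose(A @ S, S @ Lam))                 # True
print(np.allclose(A, S @ Lam @ np.linalg.inv(S)))  # True: A = S Lambda S^{-1}
print(np.allclose(np.linalg.inv(S) @ A @ S, Lam))  # True: S^{-1} A S = Lambda
```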
The converse is also true: a diagonalizable matrix possesses a complete set of n linearly independent eigenvectors. If you want to prove this, you say that we already know there exists a basis S in which the matrix representation is diagonal, that is S inverse A S = Lambda. Pre-multiply with S and you recover the relationship A S = S Lambda. Equating the two sides column by column, you can figure out that A v_1 = lambda_1 v_1, A v_2 = lambda_2 v_2, and so on, which means that all these linearly independent vectors v_1, v_2, v_3, v_4 etcetera are indeed eigenvectors; that convinces you of the existence of n linearly independent eigenvectors. So the statement and its converse are both true: if a matrix possesses a full set of n linearly independent eigenvectors then it is diagonalizable, with those same eigenvectors sitting as the columns of the similarity transformation matrix giving the new basis; and on the other side, if the matrix is diagonalizable, then you can claim that it does have a full set of n linearly independent eigenvectors.

Now, a few important reminders, a few points which are actually quite simple but are sometimes confused; note these small, simple statements and do not get confused. One: all distinct eigenvalues directly imply diagonalizability, because if all the eigenvalues are distinct then for every eigenvalue there must be an eigenvector; that is necessary from the definition of an eigenvalue itself, an eigenvalue must have at least one eigenvector associated with it. So if all n eigenvalues are distinct, each has one eigenvector associated with itself, n eigenvectors are guaranteed, and that implies diagonalizability. On the other side, diagonalizability does not imply distinct eigenvalues, because a matrix can be diagonalizable even with repeated eigenvalues; in that case a repeated eigenvalue will have as many linearly independent eigenvectors associated with it as its algebraic multiplicity. So from diagonalizability we cannot conclude distinct eigenvalues, but from distinct eigenvalues diagonalizability can be directly concluded. However, if the matrix is not diagonalizable, then from the first statement itself we know that there is certainly some multiplicity mismatch, and for a multiplicity mismatch to occur the algebraic multiplicity must be greater than 1. These points we need to remember when we deal with matrices; the sketch after this paragraph shows both cases side by side.

Now, we note that diagonalization is not possible for all matrices, and that gives rise to two questions: what simplified representation is possible for all matrices, and for what kind of matrices is diagonalizability guaranteed? The first is the issue of canonical forms, and the second is the question of symmetric matrices. These are the two important topics that we study in this lesson. First, canonical forms. There are three forms to which a matrix can be reduced, that is, through which a linear transformation can be expressed via change of basis. They are known as the canonical forms: the Jordan canonical form, the diagonal canonical form and the triangular canonical form. First, the Jordan canonical form, which is always possible.
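To see the repeated-eigenvalue point concretely, here is a minimal numpy sketch; both matrices are standard textbook examples chosen here only for illustration. The identity has a repeated eigenvalue yet a full set of eigenvectors, while the second matrix has the same repeated eigenvalue but only one independent eigenvector, so it is defective:

```python
import numpy as np

# Identity: repeated eigenvalue 1, but a full set of eigenvectors -> diagonalizable.
I2 = np.eye(2)

# Defective matrix: eigenvalue 1 repeated, only one independent eigenvector.
D = np.array([[1.0, 1.0],
              [0.0, 1.0]])

for M in (I2, D):
    lam, V = np.linalg.eig(M)
    print(lam, "independent eigenvectors:", np.linalg.matrix_rank(V))
# I2 -> eigenvalues [1. 1.], rank 2 (diagonalizable despite repetition)
# D  -> eigenvalues [1. 1.], rank 1 (multiplicity mismatch, not diagonalizable)
```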
The Jordan canonical form is the simplest matrix representation of a linear transformation that is possible for all transformations, all square matrices. The form is composed of diagonal blocks, each a small square matrix; above and below these diagonal blocks all the other entries of the matrix are 0. The diagonal blocks themselves are also specialized in shape. Along the diagonal of such a block, say the r-th block J_r, we have the same eigenvalue repeated; that is, one Jordan block is associated with a single eigenvalue, though for a particular eigenvalue more than one such block is possible. A Jordan block looks like this: the corresponding eigenvalue along the diagonal, ones just on the superdiagonal, and everything above the superdiagonal and below the diagonal equal to 0. Such blocks sitting as the diagonal blocks of a block diagonal matrix make up the Jordan canonical form J.

Now, unlike the diagonalization we were studying earlier, where we looked for a suitable matrix S such that S inverse A S = Lambda with Lambda diagonal, here we look for S such that this J sits in place of that Lambda. The reduction to diagonal form will not always be possible, but the reduction to the Jordan canonical form is always possible. The associated similarity transformation matrix S will have its columns in bunches: if J_1 is 3 by 3, then the first bunch of 3 columns of S, which we call S_1, has 3 columns; if J_2 is 5 by 5, then the corresponding bunch S_2 has 5 columns; and so on. If there are k Jordan blocks, there will be k such bunches of appropriately clubbed columns, and we will have A S = S J, with J the Jordan canonical form of the matrix A.

Now, what is there in such a bunch of vectors S_r? The number of column vectors in S_r is the same as the number of rows or columns of the square block J_r. The first of these columns is an eigenvector of the matrix corresponding to the eigenvalue lambda. Next we have other vectors w_2, w_3, w_4 etcetera, the requisite number to fill up the columns, which are called generalized eigenvectors; they are not eigenvectors, but in some sense they resemble eigenvectors. What are those vectors? To find out, we consider the product A S_r, a bunch of columns taken from the correct location in the product A S. On the other side the same bunch is S_r J_r. Why? Because if we look for the r-th block of columns of S J, we get S_1 multiplied with a zero block, plus S_2 multiplied with another zero block, and so on, until we reach S_r multiplied with the nonzero block J_r, then again S_{r+1} times a zero block and so on. So the only nonzero component is S_r J_r, and A S_r = S_r J_r.
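For a concrete look at the form, here is a small sketch using sympy's jordan_form routine; the 3 by 3 matrix is an arbitrary example constructed so that it has a 2 by 2 Jordan block for eigenvalue 2 and a 1 by 1 block for eigenvalue 3:

```python
from sympy import Matrix

# Arbitrary example: eigenvalue 2 with algebraic multiplicity 2 but only one
# independent eigenvector, plus a simple eigenvalue 3.
A = Matrix([[2, 1, -1],
            [0, 2,  1],
            [0, 0,  3]])

P, J = A.jordan_form()        # A = P * J * P^{-1}
print(J)                      # block diag: [[2, 1], [0, 2]] and [3]
print(P * J * P.inv() == A)   # True: similarity transformation recovers A
```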
So, equating block by block, we have A S_r = S_r J_r. Using the expression for S_r with columns v, w_2, w_3 etcetera, the left side of the product has columns A v, A w_2, A w_3 and so on; on the right side, multiplying S_r with the shape of J_r gives v times lambda as the first column, while the second column has two terms, v times 1 plus w_2 times lambda, and so on. Equating column by column: the first column gives A v = lambda v; the second column gives A w_2 = 1 v + lambda w_2; the third gives A w_3 = 0 v + 1 w_2 + lambda w_3; and so on. The first of these is already familiar to us: it is the equation from which we determined the eigenvalue and then the eigenvector. Once the eigenvector v is determined, the next vector, the first generalized eigenvector, coming immediately after the eigenvector, is found from (A - lambda I) w_2 = v. Once w_2 is found, we can find w_3 from (A - lambda I) w_3 = w_2, and so on, for as many columns as there are in S_r. Compared to that number, one fewer generalized eigenvector will be found, because the first slot is taken by the eigenvector itself. If after that we try to find one extra generalized eigenvector, we find that the system of equations that comes out is inconsistent. So we find exactly as many generalized eigenvectors as are really required to fill the block; the sketch after this paragraph shows such a chain. With these eigenvectors and generalized eigenvectors sitting as columns, we get the full matrix S, which through this kind of basis change will not in general transform A to diagonal form, but will reduce it to the Jordan canonical form; and this canonical form is always possible for all square matrices.

The second canonical form is the diagonal canonical form, which we have already seen, but now we take another quick look at it to see its relation to the Jordan canonical form. In the Jordan canonical form, what happens if all Jordan blocks are of 1 by 1 size? A 1 by 1 block is just a lambda; there is no place for that superdiagonal one. That is precisely the case of the diagonal canonical form. A few associated points we can quickly note. First, the diagonal form is a special case of the Jordan form with each Jordan block of 1 by 1 size. This immediately means that, with those superdiagonal ones absent, you have only diagonal entries in the canonical form, and the matrix is diagonalizable. It also means that each bunch S_r has a single vector, the eigenvector itself, sitting there; so the similarity transformation matrix S is composed entirely of eigenvectors, n linearly independent eigenvectors as columns, and hence n linearly independent eigenvectors exist in this case, when the Jordan blocks are of 1 by 1 size, that is, when the matrix is diagonalizable. And in that case, if you try to find a generalized eigenvector corresponding to an eigenvector already found, you will find that none of the eigenvectors admits any generalized eigenvector.
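Continuing with the same example matrix, here is a sketch of the chain computation just described: the eigenvector first, then a generalized eigenvector from (A - lambda I) w_2 = v, and then the attempt at one more, which fails as the lecture says. The free parameter in w_2 reflects the fact that any multiple of v may be added to a generalized eigenvector:

```python
from sympy import Matrix, eye

A = Matrix([[2, 1, -1],
            [0, 2,  1],
            [0, 0,  3]])
B = A - 2 * eye(3)                    # lambda = 2, algebraic multiplicity 2

v = B.nullspace()[0]                  # eigenvector: [1, 0, 0]^T
w2, params = B.gauss_jordan_solve(v)  # first generalized eigenvector: (A - 2I) w2 = v
w2 = w2.subs(params[0], 0)            # fix the free parameter; any choice works
print(v.T, w2.T)                      # Matrix([[1, 0, 0]]) Matrix([[0, 1, 0]])

# One more chain vector would need (A - 2I) w3 = w2, but the block is full,
# so that system is inconsistent, exactly as stated above.
try:
    B.gauss_jordan_solve(w2)
except ValueError as e:
    print("inconsistent:", e)
```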
The corresponding equation will turn out to be inconsistent. This also means that for every eigenvalue the geometric and algebraic multiplicities are the same.

There is a third canonical form which is of very important practical significance, and that is the triangular canonical form. We have already come across triangular matrices in our study of systems of linear equations. Here, when we say the triangular canonical form of a matrix, we are actually referring to the triangular form of the linear transformation: we are talking about converting a given matrix to triangular form through a similarity transformation. The special significance of the triangular form arises from triangularization, which is always possible. What is triangularization? Triangularization of a matrix, or of a linear transformation, is basically a change of basis of the linear transformation in such a manner that its matrix is in triangular form. That is, apply some suitable similarity transformation on the matrix such that the resulting matrix in the new basis is triangular: nonzero entries only on the diagonal and above, and below the diagonal everything is 0. The practical significance is that this is always possible, and it is always possible through an orthogonal similarity transformation. The Jordan canonical form is also always possible, but for the Jordan canonical form the S that you need is not necessarily orthogonal. Triangularization you can always conduct with an orthogonal S, and change of basis through orthogonal transformations has a lot of practical advantage, both analytical and computational.

So we find that the triangular form is always possible. In particular, if the eigenvalues are all real, then it is always possible through an orthogonal similarity transformation. Even if the eigenvalues are not real, even if you have complex eigenvalues for the real matrix, you can take recourse to complex arithmetic in your calculations, and you will always be able to triangularize the matrix with a unitary similarity transformation. Whatever holds for orthogonal similarity transformations in the case of real eigenvalues holds with unitary similarity transformations in the case of complex eigenvalues; the S in that case will not necessarily be orthogonal, but it will be unitary, which is the complex analogue of an orthogonal matrix. Now, if you insist on working with real arithmetic only and orthogonal similarity transformations only, even though there are complex eigenvalues, then you can almost do the triangularization, except that for each pair of complex eigenvalues you may be left with a real diagonal block of 2 by 2 size. Such a block is actually equivalent to its triangularized version, and you will never be able to reduce it further unless you allow complex arithmetic in your calculations. But with this kind of diagonal block sitting there, you will be able to recognize that you have a pair of complex eigenvalues; other than that, the rest of the reduction you can do even when there are complex eigenvalues. Now, if through orthogonal similarity transformations you reduce a matrix to this shape, you have not completely solved the eigenvalue problem, in the sense that you have not determined the eigenvectors, but the eigenvalues are all there along the diagonal.
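This is exactly what the Schur decomposition routines in scipy compute; a minimal sketch follows, where the matrix A is an arbitrary example with one complex-conjugate eigenvalue pair so that the real quasi-triangular form shows a 2 by 2 block:

```python
import numpy as np
from scipy.linalg import schur

# Arbitrary real matrix with eigenvalues +-i*sqrt(2) and 4.
A = np.array([[0.0, -2.0, 1.0],
              [1.0,  0.0, 3.0],
              [0.0,  0.0, 4.0]])

# Real Schur form: T is quasi-triangular (one 2x2 block for the complex pair),
# Z is orthogonal, and A = Z T Z^T.
T, Z = schur(A, output='real')
print(np.allclose(A, Z @ T @ Z.T))       # True: orthogonal similarity
print(np.allclose(Z.T @ Z, np.eye(3)))   # True: Z is orthogonal

# Allowing complex arithmetic, T becomes fully upper triangular (unitary Z).
Tc, Zc = schur(A, output='complex')
print(np.allclose(np.tril(Tc, -1), 0))   # True: strictly lower part is zero
```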
So, if you are interested primarily in the determination of eigenvalues, then with much less computation you can reduce the matrix to triangular form and read off the eigenvalues. Once the eigenvalues are determined, there are methods which help you find the eigenvectors at low cost.

Other than these three canonical forms, there are two other forms which are actually not canonical forms but which have important advantages in computational methods for solving the eigenvalue problem: one is the tridiagonal form and the other is the Hessenberg form. These forms are advantageous in the sense that they can be obtained through a predetermined number of arithmetic operations; the reduction to these two forms is not iterative, it can be done with a fixed number of calculations. The reduction to the other forms, the Jordan canonical form, the diagonal form or the triangular form, might need iterations, and there is then a question of convergence. These two forms, though not canonical, are useful and important forms to which we try to reduce a matrix through similarity transformations, and they are advantageous because the reduction to tridiagonal or Hessenberg form can be accomplished with a predetermined number of arithmetic operations, a straightforward procedure applied only once, not iteratively.

The tridiagonal form, as the name suggests, has nonzero entries only along three diagonals, the main diagonal, the superdiagonal and the subdiagonal; everything else is 0. The reduction to this form, for those matrices to which we apply it, is a matter of a fixed number of calculations depending only on the size of the matrix; no iteration is involved and there is no question of convergence. The situation is similar with the Hessenberg form, in which, beyond the upper triangular part, one subdiagonal is additionally present; reduction to this stage can be done in a fixed number of arithmetic operations, and it is the further similarity transformations, those that reduce the remaining entries to 0, that may take many iterations. The Hessenberg form is typically used for handling non-symmetric matrices; symmetric matrices, on the other hand, we typically handle through the tridiagonal form.

Next we come to the most important topic of this lesson, the eigenvalue problem of symmetric matrices. Central to this issue is a very important result: a real symmetric matrix has all real eigenvalues and is diagonalizable through an orthogonal similarity transformation. A similar result exists for complex Hermitian matrices, of which this is actually a special case. Since in our course we will be mostly concerned with real matrices, in all the discussions I am concentrating on the real versions of the theorems rather than going into the complex versions; but the Hermitian version is very similar, and would read: a Hermitian matrix has all real eigenvalues and is diagonalizable through a unitary similarity transformation, with no other change. The steps through which you establish that result are also similar to the ones we are going to go through right now. Now, this one sentence actually has several smaller statements built into it. The first is that a real symmetric matrix has all real eigenvalues; that is, the first issue is that the eigenvalues must all be real.
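scipy provides the Hessenberg reduction directly; a small sketch with a random test matrix, showing that the reduction is an orthogonal similarity transformation and that, for symmetric input, the Hessenberg form collapses to tridiagonal:

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

# Hessenberg reduction: a fixed, non-iterative sequence of orthogonal
# similarity transformations; H has zeros below the first subdiagonal.
H, Q = hessenberg(A, calc_q=True)
print(np.allclose(A, Q @ H @ Q.T))      # True: similarity preserved
print(np.allclose(np.tril(H, -2), 0))   # True: Hessenberg pattern

# For a symmetric matrix the same reduction yields a tridiagonal H.
S = A + A.T
Hs, Qs = hessenberg(S, calc_q=True)
print(np.allclose(np.tril(Hs, -2), 0) and np.allclose(np.triu(Hs, 2), 0))  # True
```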
How do we convince ourselves of the truth of this statement? Let us consider it as an independent proposition: the eigenvalues of a real symmetric matrix must be real. We assume that there is an n by n matrix A which is symmetric, that is A = A transpose, and that one of its eigenvalues is lambda = h + i k. What do we need to prove? That the eigenvalue must be real, that is, that its imaginary part k must be 0. That is the hypothesis, and that is what we want to establish.

Since A has the eigenvalue lambda, the matrix lambda I - A is singular: there is a vector v in its null space with (lambda I - A) v = 0. If lambda I - A is singular, then any other matrix multiplied with it gives a singular product. So the product B = (lambda bar I - A)(lambda I - A) is also singular. Note what we have put here to multiply with lambda I - A: it is lambda bar I - A, because we want to establish something in terms of real quantities, we want to kill whatever imaginary stuff is here, and therefore we bring in the complex conjugate. Now use lambda = h + i k, so that lambda bar = h - i k, and expand: B = (h I - A - i k I)(h I - A + i k I). The product of (h I - A) with itself gives (h I - A) squared; the cross terms with i k I and minus i k I cancel; and the product of minus i k I with i k I gives minus i squared k squared times identity, and since i squared is minus 1, that is plus k squared I. So B = (h I - A) squared + k squared I.

Now, we found that since A has the eigenvalue lambda, lambda I - A is singular, and therefore B, a product of it with something else, is also singular. If B is singular, then its null space has at least one vector in it: there must be a nonzero vector x to which B multiplies and gives 0. Consider that vector: B x = 0, and pre-multiplying both sides with x transpose gives the scalar equation x transpose B x = 0. In this relationship, in place of B we insert the expression above, and at this insertion point we use the symmetry of A. The second part is very easy: k squared is a scalar, and the rest is x transpose identity x. In the first part, x transpose (h I - A)(h I - A) x, we leave the second factor h I - A as it is and replace the first with its transpose; this is valid because identity is anyway symmetric, and if A is also symmetric, then h I - A and its transpose are the same. Now note: we have (h I - A) x on one side and exactly its transpose on the other; a vector's transpose multiplied with the vector itself is its norm squared. So the first part is the norm squared of (h I - A) x, and the second part is k squared times the norm squared of x.
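Written compactly, the argument above is just this chain, restated in symbols with nothing new added:

```latex
\begin{align*}
B &= (\bar{\lambda} I - A)(\lambda I - A) = (hI - A)^2 + k^2 I \\
Bx &= 0,\quad x \neq 0 \quad\Rightarrow\quad x^T B x = 0 \\
0 &= x^T (hI - A)^T (hI - A)\, x + k^2\, x^T x \qquad\text{(using } A = A^T\text{)} \\
  &= \| (hI - A)x \|^2 + k^2 \|x\|^2 \quad\Rightarrow\quad k = 0 .
\end{align*}
```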
So we have got the norm of (h I - A) x squared plus k squared times the norm of x squared equal to 0. Now, a squared norm is certainly a non-negative quantity, so we have the sum of two non-negative quantities equal to 0; since neither of them can be negative, for the sum to be 0 each of them must individually be 0. And since x is a nonzero vector, the norm of x squared is not 0, so k must be 0, and we come to the conclusion of our proof: k = 0, which means that lambda is real. What does that show? It shows that purely due to the symmetry of the matrix, which was used at exactly that one step, we conclude that k must be 0, that is, lambda is real.

Now that the first point of the statement is established, what else is there in it? The statement says the matrix is diagonalizable, which means a full set of n eigenvectors, a complete set of eigenvectors, exists for a symmetric matrix. We consider this statement separately: a symmetric matrix possesses a complete set of eigenvectors. For this, we consider a repeated real eigenvalue lambda of A, because if all eigenvalues are distinct, then we already know that each distinct eigenvalue is associated with at least one eigenvector, which immediately tells us that n eigenvalues give n distinct eigenvectors; there is nothing to examine there. So we consider a repeated eigenvalue, which might create some trouble, and we examine its Jordan blocks. What we want to establish is that all Jordan blocks are of 1 by 1 size, leaving no place to write that superdiagonal one; because if all Jordan blocks are of 1 by 1 size, then the Jordan form is a diagonal matrix. That is what we want to establish.

So, suppose that corresponding to that eigenvalue lambda the eigenvector is v, so that A v = lambda v, and we try to determine the first generalized eigenvector w, which must satisfy (A - lambda I) w = v. If it has to satisfy this relationship, then pre-multiplying both sides with v transpose, that is, taking the dot product or inner product with v, we get v transpose (A - lambda I) w = v transpose v. Now we open up the left side: in v transpose A w, in place of A let us write A transpose; that is valid because A is symmetric, and this is the place where we are utilizing the symmetry of the matrix. So the left side becomes v transpose A transpose w minus lambda v transpose w. The right side, v transpose v, is the norm of v squared; that is obvious. Here v transpose A transpose is the transpose of A v; but since v is the eigenvector corresponding to the eigenvalue lambda, A v is the same as lambda v. So the left side is lambda v transpose w minus lambda v transpose w, which is 0, and that means the norm of v squared is 0. What does that mean? It means v is a zero vector, but that cannot be the case, because v is an eigenvector, and being nonzero is a must to qualify as an eigenvector; the direction is the only information an eigenvector carries, so you cannot have an eigenvector which is 0.
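Again in compact symbols, the step we just went through:

```latex
\begin{align*}
(A - \lambda I)\, w = v \quad&\Rightarrow\quad v^T (A - \lambda I)\, w = v^T v = \|v\|^2 \\
v^T A w - \lambda v^T w &= v^T A^T w - \lambda v^T w \qquad\text{(using } A = A^T\text{)} \\
 &= (Av)^T w - \lambda v^T w = \lambda v^T w - \lambda v^T w = 0 \\
\Rightarrow\quad \|v\|^2 &= 0, \quad\text{contradicting } v \neq 0 .
\end{align*}
```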
So this is absurd; and it is not only absurd, it means that the equation gives rise to an inconsistency: there is no w, no generalized eigenvector, which will satisfy it. That means the eigenvector will not admit any generalized eigenvector, which means all Jordan blocks are of 1 by 1 size, which means the matrix is diagonalizable. So we have established two points of that composite statement: a real symmetric matrix has all real eigenvalues, and it is diagonalizable. But up to now the matrix S related to the diagonalization, the similarity transformation matrix, can be anything. The further statement says that the matrix is diagonalizable through an orthogonal similarity transformation, that is, in the diagonalization process we can use a matrix S which is orthogonal. That means the matrix S which houses the eigenvectors should have all mutually orthogonal columns. Do the eigenvectors have to be mutually orthogonal? We answer that in two parts. First: eigenvectors corresponding to distinct eigenvalues are necessarily orthogonal.

So here the proposition is that eigenvectors of a symmetric matrix corresponding to distinct, unequal eigenvalues are always orthogonal; they must be orthogonal. To show this, we take two eigenpairs: eigenvalues lambda 1 and lambda 2 with corresponding eigenvectors v 1 and v 2, with the stipulation that lambda 1 and lambda 2 are not equal, and we want to show that v 1 and v 2 must be orthogonal. For that we use a very simple means: we evaluate v 1 transpose A v 2 in two different ways. In the first way, we simply take A v 2 as lambda 2 v 2, because lambda 2 and v 2 are an eigenvalue-eigenvector pair; lambda 2, being a scalar, comes out, and we get lambda 2 times v 1 transpose v 2. In the second way, in place of A we use A transpose, and that is the place where we use symmetry; then v 1 transpose A transpose is the transpose of A v 1, but A v 1 is lambda 1 v 1, and that tells us the expression equals lambda 1 times v 1 transpose v 2. Now note that the same expression, evaluated in two different ways, without utilizing symmetry and utilizing symmetry, gives us two different-looking results. So we subtract: on one side the subtraction gives 0, and on the other side it gives (lambda 1 minus lambda 2) times v 1 transpose v 2 = 0. We have already taken the assumption that lambda 1 and lambda 2 are not equal, which means the factor lambda 1 minus lambda 2 cannot be 0. So the only way this can happen is that the other factor is 0: v 1 transpose v 2 = 0, that is, v 1 and v 2 are necessarily orthogonal.

So this is the case for distinct eigenvalues. What is the situation for equal eigenvalues, that is, an eigenvalue appearing twice and giving us two eigenvectors? Do they also have to be orthogonal? Not necessarily; but then see this. What we want to establish here is that corresponding to a repeated eigenvalue of a symmetric matrix, an appropriate number of mutually orthogonal eigenvectors can be selected. What is the idea behind it? If lambda 1 and lambda 2 are the same, unlike the previous case, then the entire subspace spanned by v 1 and v 2 is an eigenspace. So if there are two eigenvectors corresponding to the same eigenvalue, it is not necessary that they are orthogonal, but the entire plane formed by them is an eigenspace, which means any vector in that plane is an eigenvector.
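A quick numerical illustration of both facts, a sketch with an arbitrary random symmetric matrix; numpy's eigh is the routine meant for symmetric input and returns an orthonormal set of eigenvectors directly:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((4, 4))
A = G + G.T                          # an arbitrary real symmetric matrix

# Even the general-purpose eig yields (numerically) real eigenvalues here.
lam_general = np.linalg.eig(A)[0]
print(np.max(np.abs(lam_general.imag)))        # ~0

# eigh exploits symmetry: V has orthonormal columns and diagonalizes A.
lam, V = np.linalg.eigh(A)
print(np.allclose(V.T @ V, np.eye(4)))         # True: mutually orthogonal eigenvectors
print(np.allclose(V @ np.diag(lam) @ V.T, A))  # True: A = V Lambda V^T
```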
So, if in a plane we have infinitely many possible eigenvectors available, then we can always pick two mutually orthogonal eigenvectors from it. That means we can select any two mutually orthogonal eigenvectors for use in the basis, that is, for filling up the appropriate columns of the matrix S. So, corresponding to repeated eigenvalues, orthogonal eigenvectors are available: the eigenvectors that we happen to pick up do not have to be orthogonal, but if we want, we can always get orthogonal eigenvectors. That is why the statement says "is diagonalizable through an orthogonal similarity transformation": it is possible to work out an orthogonal similarity transformation matrix with which we can diagonalize the symmetric matrix.

So you see that in all cases of a symmetric matrix we can form an orthogonal matrix V such that V transpose A V = Lambda, a real diagonal matrix; in place of the S inverse that we were writing earlier, since we are now talking about an orthogonal matrix V in place of S, and for an orthogonal matrix the inverse is the same as the transpose, which is a lot easier. Further, pre-multiply this equation with V and post-multiply with V transpose, and you get A = V Lambda V transpose, where V is an orthogonal matrix and Lambda is the diagonal form of A. That diagonal form is always possible, and it is always achievable through an orthogonal similarity transformation V. This gives us a lot of facilities, and it is greatly helpful because symmetric matrices appear in many, many places in analysis in applied science and engineering; such nice, interesting and useful properties of symmetric matrices, together with the fact that symmetric matrices appear in most applications again and again, make most of our problems comparatively easy, and the representation affords us an enormous amount of facility.

First of all, the expression A = V Lambda V transpose can be written in expanded form. If you multiply the three matrices and open the product into an expression, you will find the following: multiplying Lambda with V transpose first, the first row is lambda 1 times v 1 transpose plus 0 times the second row plus 0 times the third and so on; so in the rows of Lambda V transpose you find lambda 1 v 1 transpose, lambda 2 v 2 transpose, lambda 3 v 3 transpose and so on. Multiplying V with this gives v 1 times lambda 1 v 1 transpose plus v 2 times lambda 2 v 2 transpose and so on. So finally you get the summation A = lambda 1 v 1 v 1 transpose + lambda 2 v 2 v 2 transpose + ... + lambda n v n v n transpose.

This gives rise to a further lot of possibilities. One is this: suppose a particular huge matrix, a 4000 by 4000 matrix, has eigenvalues organized in descending absolute values, the first 10 large and, compared to them, the rest extremely small. Then for the storage of that 4000 by 4000 matrix you can simply store the first 10 eigenvalues, lambda 1 to lambda 10, and throw away lambda 11 to lambda 4000; and you store the corresponding first 10 eigenvectors, throwing the other eigenvectors away as well.
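As a sketch of this truncated storage idea; the sizes and the synthetic spectrum below are assumptions made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 10

# Build a symmetric matrix whose top-k eigenvalues dominate (synthetic example):
# A = Q diag(lam) Q^T with an orthogonal Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.concatenate([np.linspace(100.0, 50.0, k), 1e-6 * rng.random(n - k)])
A = (Q * lam) @ Q.T

# Keep only the k largest-magnitude eigenpairs and reconstruct from
# the truncated sum of lam_i * v_i v_i^T.
w, V = np.linalg.eigh(A)
idx = np.argsort(np.abs(w))[::-1][:k]
A_k = (V[:, idx] * w[idx]) @ V[:, idx].T

print(np.linalg.norm(A - A_k) / np.linalg.norm(A))  # tiny relative error
```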
Later, when you need to reconstruct the matrix, these 10 eigenvectors with their corresponding eigenvalues will reconstruct the matrix almost completely, by a summation of not 4000 terms but just 10, because the contribution of the rest is extremely small. This is one advantage: efficient storage with only the large eigenvalues and the corresponding eigenvectors; the rest you do not have to store.

In the deflation technique we have already seen an application of this expression, and this works only for symmetric matrices. If the matrix is symmetric, then there is a representation like this, and after we have determined v 1, if from A we subtract lambda 1 v 1 v 1 transpose, then what remains has the same eigenstructure as A, except that its eigenvalue corresponding to the eigenvector v 1 turns out to be 0 rather than lambda 1; the rest of the eigenvalues and eigenvectors are unchanged. That is the deflation technique, which helps us in finding a few top eigenvalues and the corresponding eigenvectors; a small sketch follows this paragraph.

Apart from that, the orthogonal diagonalizability of symmetric matrices helps us in working out practical algorithms which are stable, in which the numerical errors do not grow fast as iterations proceed. Therefore, whenever there is a choice between applying a general similarity transformation and applying an orthogonal similarity transformation, computationally we always prefer the orthogonal one; and in the case of symmetric matrices, orthogonal similarity transformations alone suffice to reduce the matrix completely to diagonal form. In the case of non-symmetric matrices, first of all diagonalizability is not always guaranteed; and even when the matrix is diagonalizable, reduction to diagonal form through orthogonal similarity transformations alone is in general not possible, so you have to take the help of similarity transformations which are not orthogonal. For symmetric matrices you can conduct the entire operation with orthogonal similarity transformations only. This is why the solution of the symmetric eigenvalue problem is a lot easier compared to the general non-symmetric one.

The complete picture of the different forms, some raw and some the final desired forms, we can see in a schematic diagram, in which any block which is to the right or below is typically easier to handle for the eigenvalue problem than the corresponding blocks to the left or above. With that understanding we find that, compared to a general matrix (general meaning possibly non-symmetric), all other forms are easier to handle, and the diagonal matrix is the one for which the eigenvalue problem is actually already solved. So, if we have a matrix in one of the forms on the left side, then any algorithm, any partial algorithm, which helps us move from that end towards the right or below, the south-east side, makes one contribution to the solution of the algebraic eigenvalue problem. That is, from a general matrix we might try to reduce to either the Hessenberg form or a symmetric form; if we have a symmetric matrix, then we try to reduce it to symmetric tridiagonal form.
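A minimal sketch of the deflation step, with an arbitrary random symmetric test matrix; the dominant eigenpair here is computed with eigh just to keep the illustration short, whereas in practice it would come from something like the power method:

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((5, 5))
A = G + G.T                          # arbitrary symmetric test matrix

lam, V = np.linalg.eigh(A)
i = np.argmax(np.abs(lam))           # dominant eigenpair (lam1, v1)
lam1, v1 = lam[i], V[:, i]

# Deflation: subtracting lam1 * v1 v1^T zeroes out that eigenvalue
# while leaving the rest of the spectrum untouched.
B = A - lam1 * np.outer(v1, v1)
print(np.round(np.sort(np.linalg.eigvalsh(A)), 6))
print(np.round(np.sort(np.linalg.eigvalsh(B)), 6))  # same, with lam1 replaced by 0
```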
If we have the matrix already in Hessenberg form, then we try to reduce it to tridiagonal form, which is comparatively easier to handle, or, more often the case, to triangular form. From the triangular, tridiagonal or symmetric tridiagonal form, another round of reduction takes us to the diagonal form, if a diagonal form exists for that matrix; in the symmetric case it certainly exists. Any movement along the arrows means that we have accomplished one more stage in the solution process of this eigenvalue problem. And all these reductions must be carried out through similarity transformations only: we must multiply the matrix A on the right side with one matrix and on the left side with its inverse; only then is it a similarity transformation, and only then does it mean that it is the expression of the same linear transformation in a new basis S. A straightforward reduction like Gaussian elimination, applied from one side only, will damage the eigenstructure. So the similarity transformations should be applied on both sides, and any step is preferable if it helps us move in the favourable direction, towards the right or below. So the question arises: how to find suitable similarity transformations which help us move in that direction, that is, towards reduction of the problem? There are four standard ways of working out suitable similarity transformations: they are based on rotation, reflection, matrix decomposition or factorization, and elementary transformation. These four methods of finding suitable similarity transformations we will study in the coming lectures.

For the time being, to summarize the important issues that we have seen in this discussion: the generally possible reduction, possible for all matrices, is up to the Jordan canonical form; the condition of diagonalizability and the diagonal form we have studied; and we have studied the triangular form, which is always possible with an orthogonal similarity transformation. Note here that in the previous chapter of the book there is an exercise which gives you the steps necessary to establish this important result, that any square matrix can be reduced to a triangular form with only orthogonal similarity transformations; this is an important result, and I strongly advise that you go through that particular exercise. The other useful non-canonical forms are the tridiagonal and Hessenberg forms, which we have come across briefly. And the most important lesson of this particular lecture is that orthogonal diagonalization of symmetric matrices is always possible, and that all these reductions must be carried out through similarity transformations only. I would also advise you to go through some of the exercises of this chapter before proceeding to the next lecture. Thank you.