Last time we proved the following result, and I want to emphasize it: every self-adjoint operator on a finite dimensional inner product space has an eigenvalue, and hence an eigenvector. Showing that an operator T has an eigenvalue is the same as showing that there exists a vector x ≠ 0 such that Tx = λx.

What is important in this result is that it holds for a finite dimensional inner product space over either field. For a finite dimensional complex vector space we have already proved that any operator has an eigenvalue; but on a real vector space there are operators which do not have eigenvalues, for example the rotation matrix, which has no real eigenvalues unless the rotation angle is 0 or 180 degrees. For a complex vector space, the existence of an eigenvalue is a simple application of the fundamental theorem of algebra: the characteristic polynomial has roots, but those roots may be real or complex, and there is no guarantee that they are real. So this theorem says more for a real finite dimensional inner product space than for a complex one; the complex case has been settled already.

There are also one or two comments I need to make. First, suppose you have a complex finite dimensional inner product space and a self-adjoint operator on it. For a self-adjoint operator, the matrix relative to any orthonormal basis is Hermitian: if T is the operator and A is the matrix of T relative to some orthonormal basis, then A = A*. We are still in a complex space, so the entries of A could all be complex. But the theorem says the characteristic polynomial has only real roots, and hence only real coefficients: it factorizes into linear factors (λ − λ₁)(λ − λ₂)⋯(λ − λₙ), where each λ₁, λ₂, ..., λₙ is real. So the matrix may be completely complex, but if it is self-adjoint, its characteristic polynomial is real. This is not a trivial observation; it is a consequence of the proof of the previous theorem.

Finally, finite dimensionality is important: if the space is not finite dimensional and the operator is self-adjoint, we need not have an eigenvalue. Let me give that example; I am saying the theorem is not true for an infinite dimensional inner product space. For us the familiar infinite dimensional space is C[0, 1], this time the real-valued continuous functions on [0, 1], with the inner product ⟨f, g⟩ = ∫₀¹ f(t) g(t) dt. This is a real inner product space; I am not taking the complex conjugate. Let us look at the operator T on V defined by
(Tf)(t) = t·f(t): the multiplication operator, which we have encountered before. Tf must be a continuous function, and obviously it is, being a product of two continuous functions; so T is a well-defined operator on V. That T is linear can be verified, and that T is self-adjoint is a simple exercise.

Now I want to show that this T does not have an eigenvalue. Suppose there exists an f such that Tf = λf. Looking at the definition of T, this means Tf − λf = 0, which I can write as (t − λ) f(t) = 0 for all t ∈ [0, 1]; if the equation Tf = λf holds for some fixed λ, then this identity must hold at every t. Now λ is just one number. If λ ∉ [0, 1], then t − λ ≠ 0 for every t ∈ [0, 1], so f is identically 0 at once. If λ ∈ [0, 1], then f(t) = 0 whenever t ≠ λ; but if a continuous function is 0 at all points of [0, 1] except possibly one, what must its value be at that point? Also 0: take the left limit or the right limit, depending on whether you are to the left or to the right of λ. So it simply follows that f must be identically 0, and f cannot be an eigenvector (an eigenfunction here, since it is a continuous function we are seeking). Remember the condition on an eigenvector: in Tx = λx we require x ≠ 0, and f = 0 is the only function satisfying Tf = λf. So T has no eigenvalues. But we have proved that on a finite dimensional real inner product space a self-adjoint operator does have eigenvalues; so finite dimensionality is important.
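As a small numerical sanity check, not part of the lecture, one can test the self-adjointness exercise above by computing ⟨Tf, g⟩ and ⟨f, Tg⟩ for sample functions; the particular choices f = sin and g = exp below are arbitrary illustrations. A minimal sketch, assuming scipy is available for the quadrature:

```python
import numpy as np
from scipy.integrate import quad

def inner(u, v):
    # <u, v> = integral over [0, 1] of u(t) v(t) dt (real inner product)
    val, _ = quad(lambda t: u(t) * v(t), 0.0, 1.0)
    return val

f, g = np.sin, np.exp
Tf = lambda t: t * f(t)   # (Tf)(t) = t f(t), the multiplication operator
Tg = lambda t: t * g(t)

print(inner(Tf, g), inner(f, Tg))  # the two values agree: <Tf, g> = <f, Tg>
```

Of course, equality for sample functions is only evidence; the exercise asks for the one-line proof that ∫₀¹ t f(t) g(t) dt is symmetric in f and g.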
The next result concerns the following question: given a subspace invariant under a linear transformation, how does its orthogonal complement behave? The question arises because all eigenspaces corresponding to a given eigenvalue are invariant subspaces, as we have seen before; in an inner product space, what more can be said? The answer: if W is a subspace invariant under a linear transformation T, then W⊥ is invariant under T*. This result will prove useful, and we prove it only for finite dimensional spaces.

So let T be a linear operator on a finite dimensional inner product space, which I will call V, and let W be a subspace of V invariant under T; for instance, you could take an eigenspace. Then W⊥ is invariant under T*. The proof is really straightforward: given T(W) ⊆ W, I want to show T*(W⊥) ⊆ W⊥. Take y ∈ W⊥ and set x = T*y; this x belongs to the left hand side set, and I must show that x is perpendicular to W. So take an arbitrary vector u ∈ W and consider the inner product of x with u, which I must show is 0: ⟨x, u⟩ = ⟨T*y, u⟩ = ⟨y, Tu⟩. Now u is in W and W is invariant, so Tu is in W; write Tu = u′ with u′ ∈ W. Then ⟨y, u′⟩ is the inner product of a vector in W⊥ with a vector in W, which is 0 by definition. So x ⊥ u for every u ∈ W, and hence x ∈ W⊥.

In particular, we will apply this result to a self-adjoint operator: if T is self-adjoint and W is invariant under T, then W⊥ is also invariant under T. We will make use of this in the next result.
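Here is a small numpy illustration of the statement, with an ad hoc example of my own: for an upper triangular real matrix A, the span of e₁ is invariant under A, so the theorem predicts that A* = Aᵀ maps span{e₂, ..., eₙ} into itself.

```python
import numpy as np

# A e1 = e1 (zeros below the top of the first column), so W = span{e1}
# is invariant under A; the theorem says A* maps W-perp into W-perp.
A = np.array([[1.0, 2.0, 5.0],
              [0.0, 3.0, 1.0],
              [0.0, 4.0, 2.0]])
A_star = A.conj().T

for v in np.eye(3)[:, 1:].T:      # e2 and e3 span W-perp
    w = A_star @ v
    print(w, "| component along e1:", w[0])   # 0: stays inside W-perp
```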
The next result is an important cornerstone. Let T be a self-adjoint operator on a finite dimensional inner product space V. We have shown that all eigenvalues of T are real; what we assert further is that there exists an orthonormal basis for V such that each basis vector is an eigenvector of T.

Remember that we proved the converse of this result already; that is how we started this section. We started with the following assumption: T is a linear operator on a finite dimensional inner product space, real or complex, with the property that there exists an orthonormal basis B such that the matrix of T relative to B is a diagonal matrix. In the real case we saw that T must then be self-adjoint; in the complex case we saw that T must be normal, TT* = T*T. (In the real case the conclusion is not merely normality; only self-adjointness will do.) The question one could ask is whether the converses hold: normality in the complex case, self-adjointness in the real case. For the self-adjoint case I am saying the answer is yes, and you can see the easy direction at once: if there exists a basis B each of whose vectors is an eigenvector, then the matrix of T relative to that basis is a diagonal matrix.

Let us take up the complex case a little later, but let me mention presently that the equation analogous to normality over the reals, A Aᵀ = Aᵀ A, does not necessarily imply that A is diagonalizable. Over the complex numbers the definition of normality is A A* = A* A, and the claim I am making, which we will see is true, is that a normal complex matrix can always be diagonalized. For the real case, remember that normality with star replaced by transpose does not suffice; the example is again the rotation operator. The rotation matrix satisfies A Aᵀ = Aᵀ A = I in fact for every angle θ, but for θ not a multiple of π we know it has no real eigenvalues, so there is no question of even asking for eigenvectors.

So let us prove the theorem; it is a result for both the real and the complex case (I have not mentioned anything about the underlying field): a self-adjoint operator is diagonalizable by means of a unitary matrix or an orthogonal matrix, depending on whether the space is complex or real. That is what this theorem says. The proof will make use of the two results proved earlier for a self-adjoint operator: all its eigenvalues are real, and it has an eigenvalue. These two results are important, and of course I will also make use of the invariance result just proved.
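To make the rotation example concrete, here is a quick numpy check of my own: every 2×2 rotation matrix satisfies the real normality equation A Aᵀ = Aᵀ A = I, yet its eigenvalues are the non-real pair e^{iθ}, e^{-iθ} whenever θ is not a multiple of π.

```python
import numpy as np

theta = 1.0   # any angle that is not a multiple of pi
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(A @ A.T, np.eye(2)))  # True: A A^T = A^T A = I
print(np.linalg.eigvals(A))             # e^{+i theta}, e^{-i theta}: not real
```

So A is "normal" in the real sense but has no real eigenvalues, hence no hope of an orthonormal basis of eigenvectors over R.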
The proof is by induction on the dimension of V. Let us take first the case dim V = 1. I know that T has an eigenvalue, and so an eigenvector. What I mean by this is: in the complex case this is immediate, and in the real case let us remember once again that we have shown a self-adjoint operator has a real eigenvalue, which actually means the corresponding eigenvector can be taken to be real. So say Tx = λx, where λ is the eigenvalue and x the eigenvector, and set x₁ = x/‖x‖ (x is not 0, so ‖x‖ is not 0). Then the basis consisting of this single vector x₁ is a basis for V, and it is an eigenvector by construction; so the first step of the induction principle is satisfied.

Now assume the result is true for all finite dimensional inner product spaces of dimension less than dim V: whenever there is a self-adjoint operator on a finite dimensional inner product space of dimension less than dim V, there is an orthonormal basis each of whose vectors is an eigenvector. The construction above can be done in any case: T has a real eigenvalue, and in the real case x is a real eigenvector. Let W be the subspace spanned by this vector x₁. Since x₁ is an eigenvector, obviously T(W) ⊆ W: W is invariant under T. By the previous theorem, T*(W⊥) ⊆ W⊥; but T* = T, as T is self-adjoint, so T(W⊥) ⊆ W⊥. Moreover, the dimension of W⊥ is one less than the dimension of V: remember V = W ⊕ W⊥ for a finite dimensional inner product space, and dim W = 1.

Now I will define an operator U on W⊥ using the operator T: let U: W⊥ → W⊥ be T restricted to W⊥. For U to be a linear operator, the domain and codomain must be the same space, and this needs verification: when you restrict T to W⊥ you restrict your attention in the domain, but what is the guarantee that the codomain is W⊥? That guarantee comes precisely from T(W⊥) ⊆ W⊥: T takes an element x of W⊥ back into W⊥, it does not go into W, so U is a well-defined operator on W⊥. Furthermore, T self-adjoint implies U self-adjoint, T = T* implies U = U*. I am going to leave that as an easy exercise;
you have to use again the fact that V = W ⊕ W⊥, that is all.

So U is a self-adjoint operator on the finite dimensional space W⊥, whose dimension is less than dim V. So the induction hypothesis applies to U: there exists an orthonormal basis, which I will call B′ since I have already used B, with elements x₂, x₃, ..., xₙ, for the space W⊥ (it is the space W⊥ we are concerned with), with the extra property that each basis vector is an eigenvector of U: U xᵢ = λᵢ xᵢ for 2 ≤ i ≤ n. The index starts at 2 because x₁ has been used for the first vector. Being an orthonormal basis, these vectors are mutually perpendicular and each has norm 1, and each is an eigenvector of U, the operator we are talking about.

The natural thing is to ask whether these vectors are also eigenvectors for T. If they are, then I am through: there is one eigenvector x₁ already, and these are n − 1 more eigenvectors; the dimensions add, 1 + (n − 1) = dim V, so the union will give an orthonormal basis for V, and the matrix of T with respect to this basis will be a diagonal matrix, each vector of the basis being an eigenvector. And it does follow, essentially by definition: each xᵢ belongs to W⊥ and U is T restricted to W⊥, so immediately T xᵢ = U xᵢ = λᵢ xᵢ. Some of these λ's may repeat, but that does not matter to us; what we are interested in is the vectors: do I have an orthonormal basis? So x₁ together with B′ gives an orthonormal basis for V with the desired property; I have repeated this too many times.

The story stops here for the real inner product space, because you must take this theorem along with the rotation operator to conclude that you need self-adjointness in order to get an orthonormal basis each of whose vectors is an eigenvector. The rotation operator on R² is normal with respect to the real inner product, T Tᵀ = Tᵀ T = I, but it cannot be diagonalized, and it fails in the worst possible sense: for θ not a multiple of π it does not even have real eigenvalues. So for a real space this is the result. And remember that the question of diagonalizability has been specialized here: the original question of diagonalization, for a finite dimensional real vector space, asks only for some basis of eigenvectors; but in an inner product space it is only natural to require something extra from the basis, which is orthonormality.
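The induction argument is really an algorithm: find one unit eigenvector, pass to the orthogonal complement, and recurse. Below is a minimal numpy sketch of that deflation idea for a real symmetric matrix; it is my own illustration, with shifted power iteration standing in for the existence theorem (in practice one would simply call numpy.linalg.eigh).

```python
import numpy as np

def one_eigenvector(A, iters=500):
    """One unit eigenvector of real symmetric A via shifted power
    iteration (a stand-in for the existence theorem)."""
    n = A.shape[0]
    B = A + (np.linalg.norm(A, 2) + 1.0) * np.eye(n)  # shift: top eigenvalue dominates
    x = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        x = B @ x
        x /= np.linalg.norm(x)
    return x

def symmetric_eigenbasis(A):
    """Columns form an orthonormal eigenbasis of real symmetric A,
    built by the deflation that mirrors the induction proof."""
    n = A.shape[0]
    if n == 1:
        return np.array([[1.0]])           # base case: dim V = 1
    x1 = one_eigenvector(A)                # spans W
    Q, _ = np.linalg.qr(np.column_stack([x1, np.eye(n)]))
    P = Q[:, 1:]                           # orthonormal basis of W-perp
    U = P.T @ A @ P                        # matrix of T restricted to W-perp
    return np.column_stack([x1, P @ symmetric_eigenbasis(U)])

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
V = symmetric_eigenbasis(A)
print(np.round(V.T @ V, 6))      # identity: the basis is orthonormal
print(np.round(V.T @ A @ V, 6))  # diagonal: eigenvalues 4, 2, 1 of A
```

The step U = Pᵀ A P is exactly the operator U of the proof: it is symmetric because A is, and its eigenvectors, pushed back into the big space by P, are eigenvectors of A lying in W⊥.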
So for orthonormality you need A = A*: if you want an orthonormal basis of eigenvectors, the operator must be self-adjoint, especially when it is a real inner product space.

As we always do, there is a matrix version, a corollary of this result. In the complex self-adjoint case the word Hermitian is used. Let A be a complex Hermitian matrix of order n. Then there exists a unitary matrix, I will call it P, such that P⁻¹AP = D, where D = diag(λ₁, λ₂, ..., λₙ), with λ₁, ..., λₙ the eigenvalues of A. If A is real symmetric, then there exists an orthogonal matrix, which I will call Q to distinguish it from P (when I say an orthogonal matrix I mean a real one; if it is complex we call it unitary), such that Q⁻¹AQ = D, where D is diagonal as before, the diagonal entries of D being the eigenvalues of A. Here I need only emphasize that P⁻¹ = P* because P is unitary, and similarly Q⁻¹ = Qᵀ because Q is orthogonal.

What is the proof? This is a corollary of the previous result, so we can appeal to it. You are given a complex Hermitian matrix A, and through a matrix you can always define a linear transformation: take V = C^n with the usual inner product and define T on V by T(x) = Ax. This definition means that the matrix of T relative to the standard orthonormal basis is A. A is complex Hermitian, so A = A*, hence T = T*: I have a self-adjoint operator on a complex inner product space. By the previous theorem there is an orthonormal basis for C^n with the property that each vector in it is an eigenvector of T; an eigenvector of T means Tx = λx, but Tx = Ax, so Ax = λx. Collect all these eigenvalues and arrange them as a diagonal matrix; we know this is the same as writing down the matrix of T relative to the new orthonormal basis we have constructed. So I will simply say: appeal to the previous result to construct an orthonormal basis, this time called B, consisting of x₁, x₂, ..., xₙ, for V = C^n. Each of these vectors is an eigenvector of the operator T, so the matrix of T relative to this basis is the diagonal matrix diag(λ₁, ..., λₙ).

The proof is complete once I give you one choice for P: let P be the matrix whose first column is x₁, second column x₂, and so on up to xₙ, the vectors given by the previous theorem (existence, not an explicit construction); this is something we have done even in the ordinary case, without the inner product structure. This matrix P has the property that its columns are mutually orthogonal and the norm of each column is 1, so P is a unitary matrix, P⁻¹ = P*. Finally the equation P⁻¹AP = D must be verified, but as before, we have seen this computation: look at
AP. By definition AP = A[x₁ x₂ ⋯ xₙ], and A can be brought inside to give [Ax₁ ⋯ Axₙ]. Each xᵢ is an eigenvector, so the eigenvalues come in: [λ₁x₁ λ₂x₂ ⋯ λₙxₙ]. The last step is a little exercise for you: verify that this equals PD, which is almost obvious once you write out first P and then D. So AP = PD; P is invertible, so you can premultiply by P⁻¹ to get the equation P⁻¹AP = D.

The real case is similar: there you know that the eigenvalues are real and the corresponding eigenvectors can be taken to be real, so the construction gives a basis consisting of real vectors; these real vectors give you Q, which will then be an orthogonal matrix, and the rest of the proof is as before. So this is just the matrix version of this important theorem.

The last part is really about normal operators, which I will do in the next class: an operator on a complex inner product space is diagonalizable by means of a unitary matrix if and only if it is normal. So there is a significant difference between the question for a real symmetric matrix and for a complex symmetric matrix. If A is real and A = Aᵀ, then this theorem says A can be diagonalized. Take A complex with A = Aᵀ: there is no theorem which can guarantee that A is diagonalizable. Whereas if you take A complex with A = A*, the conjugate transpose, then A is diagonalizable. So the question is really about what the corresponding operation for the transpose is in the complex case, and the answer is the conjugate transpose. Remember that the statement "a complex symmetric matrix is diagonalizable" is wrong; what is true is that a real symmetric matrix is diagonalizable, and a complex Hermitian matrix is diagonalizable.
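To see the distinction concretely, here is a small numpy check of my own. numpy.linalg.eigh unitarily diagonalizes a Hermitian matrix, as the corollary promises; on the other hand the classic complex symmetric matrix [[1, i], [i, -1]] squares to zero without being zero, so its only eigenvalue is 0 and it cannot be diagonalized (a diagonalizable matrix whose only eigenvalue is 0 would be the zero matrix).

```python
import numpy as np

# Hermitian case: A = A*, so there is a unitary P with P* A P = D.
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
lam, P = np.linalg.eigh(A)                 # eigh assumes A is Hermitian
print(lam)                                 # real eigenvalues, as proved
print(np.allclose(P.conj().T @ A @ P, np.diag(lam)))  # True

# Complex symmetric case: B = B^T but B != B*.
B = np.array([[1.0, 1.0j],
              [1.0j, -1.0]])
print(np.allclose(B, B.T))                 # True: complex symmetric
print(np.allclose(B @ B, np.zeros((2, 2))))  # True: B^2 = 0, yet B != 0
```

So let me stop here.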