We are discussing unitary operators; more generally we will discuss what are called normal operators. But before that there is one large result, which I mentioned briefly last time and will now prove: what is the relationship between the matrices of a linear transformation corresponding to two orthonormal bases?

Here is the framework. Let V be a finite dimensional inner product space, and let B1 = (W1, W2, ..., Wn) and B2 = (V1, V2, ..., Vn) be two ordered orthonormal bases for V. For any two bases we have already derived one or two relationships. For instance, for a fixed arbitrary vector x, the coordinate matrix of x relative to B2 is related to the one relative to B1 by

  [x]_{B2} = [I]_{B1,B2} [x]_{B1},

where [I]_{B1,B2} is the matrix of the identity transformation between the bases: write W1 as a linear combination of V1, ..., Vn, and that gives the first column; writing Wn in terms of V1, ..., Vn gives the last column. If B1 is not equal to B2, this matrix of the identity transformation is not the identity matrix, but in any case it is invertible. Why? Call this matrix Q and suppose Qx = 0 for some x in R^n (or C^n; for simplicity take R^n). Then the vector of V whose B1-coordinates are x has all B2-coordinates zero, so it is the zero vector, and hence x = 0. Thus the square homogeneous system Qx = 0 has only the trivial solution, so Q is invertible. We have seen all of this before; I am recalling it quickly because we know these things for an ordinary finite dimensional vector space and are now specializing them to an inner product space. Set P^{-1} = [I]_{B1,B2}; we have just seen that this matrix is invertible, and I am simply calling its inverse P.

Now define a linear map U in L(V) by U(W_i) = V_i for each i; defining a linear map on a basis determines it completely. Since B1 and B2 are orthonormal and U takes one orthonormal basis to another, the results we have seen before tell us that U is unitary. Let us compute the matrix of U relative to B1 alone (the same basis on both sides):

  [U]_{B1} = [ [V1]_{B1}  [V2]_{B1}  ...  [Vn]_{B1} ].

From the coordinate relation above, [x]_{B1} = P [x]_{B2} for every x, so [V_i]_{B1} = P [V_i]_{B2}. But the V_i are themselves the second basis, so [V_i]_{B2} = e_i. Hence

  [U]_{B1} = [ P e_1  P e_2  ...  P e_n ] = P [ e_1  e_2  ...  e_n ] = P I = P.

Does it follow that P is a unitary matrix? Yes: the matrix of a unitary operator relative to an orthonormal basis is a unitary matrix, and U is unitary, so P is a unitary matrix.

Finally, recall how the matrix of a general linear operator transforms under a change of basis: for T in L(V),

  [T]_{B2} = P^{-1} [T]_{B1} P.

(Earlier we used the notation M [T]_{B1} M^{-1}, where M = [I]_{B1,B2}; go back to that notation and verify that this is what we have here, with M = P^{-1}.) But P is unitary, so P^{-1} = P* and

  [T]_{B2} = P* [T]_{B1} P.

This is the relationship between the matrices of a linear operator with respect to two orthonormal bases: the matrix P that connects them is unitary. That is the complex case; for a real inner product space, P is a real orthogonal matrix and

  [T]_{B2} = P^T [T]_{B1} P.

There is a definition coming out of this relationship: unitary equivalence (and, in the real case, orthogonal equivalence). Two matrices A and B are said to be unitarily equivalent if there exists a unitary matrix P such that B = P* A P; similarly, two real matrices A and B are orthogonally equivalent if there exists an orthogonal matrix P such that B = P^T A P. This is more specialized than ordinary similarity: A and B are similar if B = P^{-1} A P for some invertible matrix P, and similarity is what leads to the definition of diagonalizability, namely that A is diagonalizable if A = P^{-1} D P for some diagonal matrix D. Correspondingly, a matrix A is said to be diagonalizable by an orthogonal (respectively unitary) transformation if there exists an orthogonal matrix P with A = P^T D P, or, in the complex case, a unitary matrix P with A = P* D P, for some diagonal matrix D. Diagonalization was discussed when we studied operators on finite dimensional vector spaces; we will now discuss unitary equivalence. We will be interested in this question: given a complex matrix A, when does there exist a diagonal matrix D such that A = P* D P with P unitary? To address it we will look at more general operators called normal operators, but first at the slightly more particular case of self-adjoint operators, and then move to normal operators.
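The change-of-basis facts above can be checked numerically. The following is a small sketch of my own (the matrices and variable names are illustrative, not from the lecture): for two orthonormal bases of R^2 we form P^{-1} = [I]_{B1,B2}, verify that P is orthogonal, and confirm that the similarity [T]_{B2} = P^{-1} [T]_{B1} P is in fact the orthogonal equivalence P^T [T]_{B1} P.

```python
import numpy as np

# Two ordered orthonormal bases of R^2, stored as columns:
# B1 is the standard basis; B2 is a rotated orthonormal basis.
theta = 0.3
B1 = np.eye(2)
B2 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])

# P^{-1} = [I]_{B1,B2}: column j holds the B2-coordinates of the
# j-th B1 vector, i.e. the solution c of B2 @ c = W_j.
P_inv = np.linalg.solve(B2, B1)
P = np.linalg.inv(P_inv)

# Matrix of some operator T relative to B1, and its matrix relative to B2.
T_B1 = np.array([[1.0, 2.0],
                 [0.0, 3.0]])
T_B2 = P_inv @ T_B1 @ P

# P is orthogonal, so the similarity is an orthogonal equivalence:
orthogonal = np.allclose(P.T @ P, np.eye(2))
equivalent = np.allclose(T_B2, P.T @ T_B1 @ P)
```

Since both bases are orthonormal, P^{-1} coincides with P^T, which is exactly the point of the derivation above.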
These are the operators which will give the correct answer to that question: given a complex matrix A, when is it unitarily equivalent to a diagonal matrix? But before that, let us look at the underlying problem, which is really the final problem for finite dimensional inner product spaces. The problem: given a linear operator T in L(V), where V is a finite dimensional inner product space, when does there exist an orthonormal basis B for V such that each vector in B is an eigenvector of T? We asked this question for ordinary vector spaces, not inner product spaces; we will now try to answer it for inner product spaces, where you will see that the notions of self-adjoint and normal operators arise naturally. Before answering the question, let us see what happens if T satisfies such a property; that is, let us first look at the necessary condition (the question itself asks for sufficiency). Suppose there exists an orthonormal basis B = (V1, V2, ..., Vn) such that each vector in the basis is an eigenvector of T, say

  T V_i = lambda_i V_i  for i = 1, ..., n.

What is the meaning of this? It is related to the problem of diagonalization.
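As a numerical illustration of the property in question (a sketch with a toy example of my own, not something from the lecture), NumPy's `eigh` produces, for a Hermitian matrix, exactly such an orthonormal basis of eigenvectors:

```python
import numpy as np

# A Hermitian matrix, playing the role of [T]_B for a self-adjoint T.
T = np.array([[2.0, 1j],
              [-1j, 2.0]])

# eigh returns eigenvalues lam (ascending) and a matrix V whose columns
# form an orthonormal basis of eigenvectors: T @ V[:, i] == lam[i] * V[:, i].
lam, V = np.linalg.eigh(T)

eigen_ok = all(np.allclose(T @ V[:, i], lam[i] * V[:, i]) for i in range(2))
orthonormal = np.allclose(V.conj().T @ V, np.eye(2))
```

Here the eigenvalues come out as 1 and 3, both real, in line with the self-adjoint case discussed below.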
So if you look at the matrix of T relative to this basis B, it must be diagonal: just look at the right-hand side. T V1 = lambda_1 V1, so the first column is lambda_1 followed by zeros, and so on. Thus

  [T]_B = diag(lambda_1, lambda_2, ..., lambda_n),

the diagonal matrix whose diagonal entries are the eigenvalues. Some of these eigenvalues may repeat; I am just writing them as lambda_1, ..., lambda_n, and repetition is possible. What about the matrix of T* relative to B? We can ask this because we are in an inner product space. By a result we discussed earlier, if A is the matrix of T relative to an orthonormal basis, then the matrix of T* relative to the same orthonormal basis is A*. So

  [T*]_B = diag(lambda_1-bar, lambda_2-bar, ..., lambda_n-bar),

since in general V is a complex vector space. If V is real, the eigenvalues are real, so [T*]_B = [T]_B; that is, T = T* (I can even write T = T^T): T is self-adjoint. If V is complex, T = T* need not hold, but any two diagonal matrices commute, so

  T T* = T* T.

An operator T satisfying such an equation is called a normal operator. So if I have a complex finite dimensional inner product space V which has an orthonormal basis with this property, that each vector of the basis is an eigenvector of T, then T must be a normal operator. This is only a necessary condition, but what is interesting is that the condition is also sufficient: what we will show is that if V is a complex finite dimensional inner product space and T is a normal operator, then T is diagonalizable by means of a unitary matrix.

By the way, can you see the unitary matrix? I have only written the diagonal form explicitly, but a unitary matrix is hidden in it; just think about it. Whenever the matrix of a linear transformation relative to some basis is a diagonal matrix D, there is an invertible matrix P such that A = P^{-1} D P, where A is the matrix relative to another basis. Here both bases are orthonormal, so by the relationship we established, A = P* D P with P unitary; that is, A is unitarily equivalent to the diagonal matrix D. What we will see is that the converse is also true: if T is a normal operator, then there is a diagonal matrix D such that this equation holds for some orthonormal basis B. That is why I wanted to write down the necessity part first; it is also a sufficient condition. And for the real case we will show that if T = T*, then the matrix of T relative to an orthonormal basis is orthogonally equivalent to a diagonal matrix.

Just to summarize what we have seen: in the real case, if there is a basis B with the property that each vector from the basis is an eigenvector of T, then T is self-adjoint; in the complex case, if this happens, then T must be normal. We will prove the converses as well, but, as I told you, first some properties of self-adjoint operators before we prove these results.
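Both directions of this summary can be sketched numerically. Below is an illustration of my own: the diagonal matrix D commutes with D* (the necessity argument), and a plane rotation, which is normal but not self-adjoint, is unitarily diagonalized by its eigenvector matrix (a preview of sufficiency).

```python
import numpy as np

# Necessity: in the eigenbasis T is diagonal, T* is the conjugate
# diagonal, and any two diagonal matrices commute, so T T* = T* T.
D = np.diag([2.0, 1 + 3j, -0.5j])
Dstar = D.conj().T
normal_D = np.allclose(D @ Dstar, Dstar @ D)
self_adjoint_D = np.allclose(D, Dstar)   # False: normal but not Hermitian

# Sufficiency, previewed on a normal, non-self-adjoint operator: a plane
# rotation R satisfies R R^T = R^T R, its eigenvalues e^{+-i theta} are
# distinct, so its unit eigenvectors are orthogonal, V is unitary, and
# R = P* D2 P with P = V*.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
normal_R = np.allclose(R @ R.T, R.T @ R)
vals, V = np.linalg.eig(R)
P = V.conj().T
R_rebuilt = P.conj().T @ np.diag(vals) @ P
```

The rotation example also shows why the complex case needs "normal" rather than "self-adjoint": R has no real eigenvalues, yet it is unitarily diagonalizable over C.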
Let us look at the self-adjoint case. For a self-adjoint operator we have the following; this part is essentially dimension-free. Let T be a self-adjoint operator on an inner product space V. Then:

(1) All the eigenvalues of T are real numbers.
(2) Eigenvectors corresponding to distinct eigenvalues are orthogonal.

Can you make a guess at (2)? We studied the general case earlier: if T is a linear operator on a finite dimensional vector space, then eigenvectors corresponding to distinct eigenvalues are linearly independent. For a self-adjoint operator on an inner product space we can say something more: the eigenvalues are real in the first place, and eigenvectors corresponding to distinct eigenvalues are orthogonal.

Proof. First I want to show that the eigenvalues are real. Take Tx = lambda x, so that x is an eigenvector corresponding to the eigenvalue lambda; remember that an eigenvector is by definition a non-zero vector. I want to show lambda is real. Look at the inner product

  lambda <x, x> = <lambda x, x> = <Tx, x> = <x, T* x> = <x, Tx> = <x, lambda x> = lambda-bar <x, x>.

Here I have used lambda x = Tx, the definition of the adjoint, T* = T, and the fact that a scalar comes out of the second argument with a complex conjugate. Since x is a non-zero eigenvector, <x, x> cannot be 0; in fact it is a positive number. So lambda = lambda-bar; that is, lambda is real. All eigenvalues of a self-adjoint operator must be real numbers.

(A question from the class: must lambda be non-zero? No. The condition is on x, which must be non-zero; lambda can be 0 without any problem, for instance when A is a singular matrix. Look at the matrix case: (A - lambda I)x = 0. This homogeneous system has a non-trivial solution if and only if det(A - lambda I) = 0, and it is non-trivial solutions we are seeking; that is exactly what gives rise to the definition, namely that A - lambda I is singular. For an operator, T - lambda I is singular, and you can show this happens if and only if the matrix of T - lambda I with respect to any basis is singular. That is why the equation Tx = lambda x just gets passed on to the matrix equation Ax = lambda x, where A is a matrix of T.)

Next, eigenvectors corresponding to distinct eigenvalues are orthogonal. So take Tx = lambda x and Ty = mu y, where of course x is not 0, y is not 0, and lambda is not equal to mu. I must show that x and y are orthogonal. Consider, as before,

  lambda <x, y> = <lambda x, y> = <Tx, y> = <x, T* y> = <x, Ty> = <x, mu y> = mu-bar <x, y> = mu <x, y>,

entirely similar to the previous proof; the last step holds because mu is real by the first part. So lambda <x, y> = mu <x, y>, and lambda and mu are distinct numbers, which means <x, y> = 0. This is something more than saying the eigenvectors are independent.

Of course, none of this says anything about the existence of eigenvalues; it says that if eigenvalues exist, then for a self-adjoint operator they must be real, and eigenvectors corresponding to distinct eigenvalues must be orthogonal. Let us now prove that for a self-adjoint operator eigenvalues do exist: if T is self-adjoint, then T has an eigenvalue. The proof will use the fundamental theorem of algebra: any polynomial of degree n has at least one root, real or complex.

Proof. Let B be an ordered orthonormal basis for V and let A be the matrix of T relative to B. Since the basis is orthonormal and T is self-adjoint, A = A*; that is, A is a Hermitian matrix. Consider the equation (A - lambda I)x = 0. I want to show that this equation has a non-trivial solution for some lambda. Consider

  det(A - lambda I) = 0.

Can you justify that this holds for some lambda in F, where F is R or C depending on whether the space is real or complex? The fundamental theorem of algebra again: this is a polynomial equation of degree n in lambda, so it has at least one root lambda_0, real or complex. For that lambda_0, A - lambda_0 I is singular, so there exists a non-trivial solution x, that is, x not equal to 0 with

  (A - lambda_0 I)x = 0.

In the complex case the proof is over: there exists lambda_0 in C with det(A - lambda_0 I) = 0, so by the fundamental theorem of algebra we conclude that A has an eigenvalue. By the way, lambda_0 has to be real, because we have just shown that the eigenvalues of a self-adjoint operator are real; so, just to specify, lambda_0 belongs to R.

In the real case there is something to explain. The root lambda_0 could, a priori, be complex; but A is real and symmetric, hence Hermitian as a complex matrix, so by the previous result its eigenvalues are real, and again lambda_0 belongs to R. So in the real case also there exists lambda_0 in R such that Ax = lambda_0 x with x not equal to 0. The only remaining problem is that x could be complex. But remember that A is real and lambda_0 is real. If x is complex, then for each coordinate of x take the real part and the imaginary part: the real parts form a vector z and the imaginary parts form a vector y, both with real entries, and

  A(z + iy) = lambda_0 (z + iy).

Equating real and imaginary parts gives Az = lambda_0 z and Ay = lambda_0 y. Since x is not 0, the real part and the imaginary part cannot both be zero, so at least one of z, y is a non-zero real vector, and hence a real eigenvector corresponding to lambda_0. By taking, for instance, the real part (or the imaginary part, whichever is non-zero), we conclude that A has an eigenvector with real entries, and so lambda_0 is an eigenvalue in the real case as well. That is the explanation for the real case. Think about the other properties; I will prove them in the next class.
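The real-case argument at the end can be sketched numerically with a toy example of my own: A is real symmetric, lambda_0 = 3 is an eigenvalue, and we deliberately take a complex eigenvector x; its real and imaginary parts then satisfy the same eigenvalue equation separately.

```python
import numpy as np

# A real symmetric matrix with eigenvalue 3 (eigenvector (1, 1)).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam0 = 3.0

# A deliberately complex solution of A x = lam0 x.
x = (1 + 2j) * np.array([1.0, 1.0])

# Equating real and imaginary parts of A(z + iy) = lam0 (z + iy):
z, y = x.real, x.imag
z_is_eigvec = np.allclose(A @ z, lam0 * z) and not np.allclose(z, 0)
y_is_eigvec = np.allclose(A @ y, lam0 * y) and not np.allclose(y, 0)
```

Here both z and y happen to be non-zero; in general at least one of them is, which is all the argument needs.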