Ok, so I will state and prove the cyclic decomposition theorem. I have mentioned the problem before; I will make the precise statement a little later, but first I need to tell you what the problem is. The question is: can we write a finite-dimensional vector space V as Z(x_1; T) ⊕ Z(x_2; T) ⊕ ... ⊕ Z(x_k; T)? That is, can I decompose V into a direct sum of subspaces each of which is cyclic with respect to the operator T — can I find vectors x_1, x_2, ..., x_k for which this decomposition is possible? The answer is yes, and it is related to the following problem. The reason one looks for such a decomposition is that dealing with an operator on a cyclic subspace is easier than dealing with it on the general space. So one would like to look at the restriction of the operator T to a cyclic subspace; we have already derived some consequences of this — for example, the matrix of the restriction of T to a cyclic subspace is a companion matrix (I did not phrase it there as a restriction operator, but it is essentially that). So there are simplifications possible when you study an operator T by restricting it to certain subspaces, in this instance the cyclic subspaces. This problem is related to another problem, which is the following.
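As an aside before turning to that problem: the companion-matrix remark can be made concrete. A minimal sketch, assuming the standard convention that p = x^n + c_{n-1} x^{n-1} + ... + c_0 is the T-annihilator of x and {x, Tx, ..., T^(n-1) x} is the cyclic basis of Z(x; T); the example polynomial is my own:

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial
    p = x^n + c_{n-1} x^{n-1} + ... + c_0,
    given the list coeffs = [c_0, ..., c_{n-1}].  In the cyclic basis
    {x, Tx, ..., T^(n-1) x} of Z(x; T), the restriction of T to Z(x; T)
    has exactly this matrix."""
    c = np.asarray(coeffs, dtype=float)
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)  # T sends the basis vector T^i x to T^(i+1) x
    C[:, -1] = -c               # last column: T^n x = -c_0 x - ... - c_{n-1} T^(n-1) x
    return C

# Example: p = x^2 - 5x + 6 = (x - 2)(x - 3); the companion matrix
# therefore has eigenvalues 2 and 3.
C = companion([6.0, -5.0])
eigs = sorted(np.linalg.eigvals(C).real)
```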
Given a finite-dimensional vector space V and a subspace W, there is a subspace W' such that V is the direct sum of these two subspaces; for a finite-dimensional vector space this is always true, even though we will not prove it in this course. Now, what is also possible is that, given a subspace W of V which is not the whole of V, there are infinitely many choices for W' — there are examples where a single subspace W admits infinitely many such W'. I can give a simple example motivated by the geometry of R^2. Take the horizontal axis, the subspace of all points lying on that line through the origin, and take a slanted line, say y = x, with unit vector (1/√2, 1/√2). The vectors (1, 0) and (1/√2, 1/√2) are linearly independent, so these two subspaces give a direct sum decomposition of R^2; you can verify this easily. In fact, take the horizontal axis together with any slanted line through the origin: the set of all points on that line is a subspace, and the two together give rise to a direct sum decomposition of R^2. This can be done in R^n also.
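The R^2 picture can be checked directly; a small sketch (the slopes tested below are arbitrary choices of mine):

```python
import numpy as np

# W = horizontal axis = span{(1, 0)};  W' = the line y = x = span{(1, 1)}
# (the unit vector (1/sqrt(2), 1/sqrt(2)) spans the same line).
w = np.array([1.0, 0.0])

# R^2 = W (+) W'  iff the two spanning vectors are linearly independent,
# i.e. the 2x2 matrix with them as columns is invertible.
M = np.column_stack([w, np.array([1.0, 1.0])])
assert abs(np.linalg.det(M)) > 1e-12

# The same holds for ANY slanted line span{(1, m)} with m != 0 -- so the
# single subspace W has infinitely many complementary subspaces.
for m in [0.5, 1.0, 2.0, -3.0]:
    Mm = np.column_stack([w, np.array([1.0, m])])
    assert abs(np.linalg.det(Mm)) > 1e-12
```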
So, given a subspace W, it is possible that there are infinitely many subspaces W' satisfying this condition. We call such a W' a subspace complementary to W — a complement of W. The question now is: if you have an operator T, can we extend this to T-invariant subspaces? Suppose T(W) ⊆ W, that is, W is invariant under T. Does there exist a complementary subspace W' that is also invariant under T? This is rather too much to expect, and the answer in general is no, but we will give a condition under which it holds. To summarize: given a subspace W of a finite-dimensional vector space, a complementary subspace W' always exists; but given a subspace W that is invariant under T, a complementary W' that is also invariant under T need not exist. I will give an example so that you will be convinced; to get an affirmative answer you need to impose something more on W, and that is what I will discuss next. But first the example showing that the answer in general is no. Look at the following operator T on R^3; I will write the matrix of T straight away. Take the upper triangular matrix with diagonal entries 2, 2, 3 and a single 1 above the diagonal,

2 1 0
0 2 0
0 0 3

so that 2, 2, 3 are the eigenvalues. Look at the null space of (T − 2I); I call that W. I am going to leave this as an exercise: this W is an eigenspace, so T(W) ⊆ W — W is invariant under T, that is not a problem — but there exists no W' such that W ⊕ W' = R^3 together with the condition that T(W') is contained
in W'. So if you are given an invariant subspace W and you are seeking an invariant complement W', the answer in general is no; you need some more condition on W for this to be satisfied. That condition is called T-admissibility. Let me give the definition. The framework: V is a finite-dimensional vector space, T is an operator on V, and W is a subspace of V. The subspace W is called T-admissible if the following two conditions are satisfied: first, W must be invariant under T; second, if f(T)y belongs to W, where f is any polynomial, then there exists z in W such that f(T)y = f(T)z. That is T-admissibility. Where does this come from — how is it related to the notion we discussed just now, of seeking a complement W' that is also invariant under T, given an invariant subspace W with V = W ⊕ W'? Let me make the statement; it is easy to see, a little lemma. Let V = W ⊕ W' with T(W) ⊆ W and T(W') ⊆ W'. Then W is T-admissible. This direction is very easy to see; the converse is not at all easy — it is a non-trivial consequence of the cyclic decomposition theorem, and the converse is exactly the question I asked you to begin with. How does the lemma follow? Let me prove it quickly: I want to show that W is T-admissible, and note that the two conditions in the definition do not involve W' at all.
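Before the proof, here is a numerical sanity check of the 3×3 example above. The transcript's matrix entries are garbled, so the matrix below is my reconstruction from the stated eigenvalues 2, 2, 3 and the claimed failure (a single Jordan block for the eigenvalue 2) and should be treated as an assumption:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])   # eigenvalues 2, 2, 3; NOT diagonalizable

# W = null(T - 2I) is the eigenspace for 2, spanned by e1; it is invariant.
e1 = np.array([1.0, 0.0, 0.0])
assert np.allclose((A - 2 * np.eye(3)) @ e1, 0)  # e1 lies in W
assert np.allclose(A @ e1, 2 * e1)               # T(W) is contained in W

# If an invariant complement W' existed, A would be similar to the block
# matrix [2] (+) B with B having the distinct eigenvalues 2 and 3, hence
# A would be diagonalizable.  But the eigenspace of 2 is only
# 1-dimensional while 2 has algebraic multiplicity 2:
geometric = 3 - np.linalg.matrix_rank(A - 2 * np.eye(3))
assert geometric == 1   # => A is not diagonalizable, so no invariant W'
```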
I want to show W is T-admissible, so let me start like this: take an arbitrary vector y in the vector space. Because of the direct sum decomposition, I can write y = y_1 + y_2 with y_1 in W and y_2 in W', in a unique way. For any polynomial f, look at f(T)y. Since f(T) is linear, f(T)y = f(T)y_1 + f(T)y_2, and since both subspaces are invariant under T, f(T)y_1 belongs to W and f(T)y_2 belongs to W'. Now suppose f(T)y belongs to W. What is the consequence? Then f(T)y_2 = f(T)y − f(T)y_1 lies in both W and W', so it has to be 0. And if f(T)y_2 = 0, then f(T)y = f(T)y_1, with the extra provision that y_1 belongs to W. This is exactly the second condition: if f(T)y belongs to W then there must exist z such that f(T)y = f(T)z — in this case z = y_1. So this is a simple consequence of the fact that both W and W', forming a direct sum decomposition of V, are invariant under T. The converse is not that easy; it is a consequence of the cyclic decomposition theorem.
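The lemma can be traced numerically on a block-diagonal example; the matrix and the polynomial f(x) = x − 3 below are my own choices for illustration:

```python
import numpy as np

# Block-diagonal operator: W = span{e1, e2} and W' = span{e3} are both
# T-invariant, and R^3 = W (+) W'.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

y = np.array([1.0, 2.0, 5.0])
y1 = np.array([1.0, 2.0, 0.0])   # component of y in W
y2 = np.array([0.0, 0.0, 5.0])   # component of y in W'

# Choose f(x) = x - 3, so that f(T) kills the W' block: f(T)y2 = 0,
# which is exactly why f(T)y lands inside W.
fT = A - 3.0 * np.eye(3)
assert np.allclose(fT @ y2, 0)
assert np.allclose((fT @ y)[2], 0)      # f(T)y is in W

# The witness demanded by T-admissibility is z = y1, the W-component of y:
assert np.allclose(fT @ y, fT @ y1)     # f(T)y = f(T)z with z in W
```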
So T-admissibility of a subspace is the notion relevant to the statement of the decomposition theorem. Let me write down the statement. Let T be a linear operator on a finite-dimensional vector space V, and let W_0 be a T-admissible proper subspace of V; so W_0 is not the whole of V, but W_0 could be the singleton {0}. Then there exist nonzero vectors x_1, x_2, ..., x_r in V such that the following conditions hold. Condition (1): V is the direct sum of W_0 — the T-admissible invariant subspace I start with — and the cyclic subspaces Z(x_1; T), ..., Z(x_r; T); that is, V = W_0 ⊕ Z(x_1; T) ⊕ ... ⊕ Z(x_r; T). Condition (2): there exist T-annihilators p_k, 1 ≤ k ≤ r, of the vectors x_k — that is, p_k is the T-annihilator of x_k — such that p_k divides p_{k−1} for k = 2, 3, ..., r. The last part I will write here itself: further, the integer r — that is, the number of vectors x_1, ..., x_r — and the polynomials p_k for which (1) and (2) hold are unique. That is the complete statement. As I mentioned before, the proof has four steps; the last step is uniqueness, which is not very important for us, so I will skip it and take the other three steps to prove this theorem. You could ask how this decomposition answers the question we started with, V = Z(x_1; T) ⊕ ... ⊕ Z(x_r; T): I mentioned that you could take W_0
to be the singleton {0}; then the W_0 term is not there, and V is a direct sum of the cyclic subspaces alone — this is called the cyclic decomposition of the vector space V — and we also get the extra information about the annihilators and how they are related. There are three steps here, as I mentioned. Step 1 is to show the following. First: there exist nonzero vectors y_1, y_2, ..., y_r in V such that V is — note, not the direct sum, just the sum — W_0 + Z(y_1; T) + ... + Z(y_r; T). Second: if W_k = W_0 + Z(y_1; T) + ... + Z(y_k; T) is the subspace I get by adjoining these k cyclic subspaces to W_0, for 1 ≤ k ≤ r, then the conductor p_k = s(y_k; W_{k−1}) — a notation I have got to explain, which I will do a little later — has maximum degree among all T-conductors into W_{k−1}. So p_k is a polynomial, a conductor, with the following maximality property; let me write down the formulation:

deg p_k = max over x in V of deg s(x; W_{k−1}).

The statement is complicated, but the proof of this first step is easy. I have still not defined what this little s is; I will do that now, and then prove the first step. Once I define s, the second part should be clear.
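To collect the statements in symbols — this is my transcription of the theorem and the step-1 claim as stated above:

```latex
\textbf{Cyclic Decomposition Theorem.}
Let $T$ be a linear operator on a finite-dimensional vector space $V$ and
let $W_0$ be a proper $T$-admissible subspace of $V$.  Then there exist
nonzero vectors $x_1,\dots,x_r \in V$ with $T$-annihilators
$p_1,\dots,p_r$ such that
\[
  V = W_0 \oplus Z(x_1;T) \oplus \cdots \oplus Z(x_r;T),
  \qquad p_k \mid p_{k-1} \quad (k = 2,\dots,r),
\]
and the integer $r$ and the annihilators $p_1,\dots,p_r$ are uniquely
determined by these two conditions.

\medskip
\textbf{Step 1.}  There exist nonzero vectors $y_1,\dots,y_r \in V$ with
\[
  V = W_0 + Z(y_1;T) + \cdots + Z(y_r;T) \quad (\text{a sum, not yet direct}),
\]
and, writing $W_k = W_0 + Z(y_1;T) + \cdots + Z(y_k;T)$, the conductor
$p_k = s(y_k;\, W_{k-1})$ satisfies
\[
  \deg p_k \;=\; \max_{x \in V} \, \deg s(x;\, W_{k-1}).
\]
```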
So what is this s? Recall the set S(T; x; W) for a subspace W: the T-conductor of x into W, the set of all polynomials g such that g(T)x belongs to W. We know that this is an ideal in the principal ideal domain F[x], so it is generated by a unique monic polynomial. That monic polynomial I denote by little s, and to record that it depends on x and the subspace W, I write s(x; W); so little s always denotes the unique monic generator of a particular ideal of polynomials, determined by x and W. To be specific, in our situation the ideal is S(T; x; W_{k−1}) and s(x; W_{k−1}) is its unique monic generator. Now go back and see what the definition in step 1 says: fix an x, look at the ideal S(T; x; W_{k−1}), take its unique monic generator s(x; W_{k−1}), vary x over V, and take the maximum of the degrees of all those polynomials. (V is an infinite set, but we will see in a moment that these degrees are bounded, so the maximum makes sense.) What I am claiming is that this maximum degree equals deg p_k, where p_k is the particular conductor obtained from y_k — the unique monic generator of the ideal S(T; y_k; W_{k−1}). So what is the property that p_k has with W_k
minus 1, that is, in relation to W_{k−1}? It is this: among all vectors x, compute the unique monic conductors s(x; W_{k−1}) and take the maximum of their degrees; that maximum equals the degree of this particular polynomial p_k. Now, the proof of step 1. Rather than starting with W_0 directly, I will prove the statement for a general invariant subspace W and then apply it to W_0. So let W be an invariant subspace of V, T(W) ⊆ W, and suppose W is a proper subspace of V. Then we have the following two inequalities:

0 < max over x in V of deg s(x; W) ≤ dim V.

Can the maximum be 0? Look at what this is: s(x; W) is the unique monic generator of S(T; x; W), and its degree is 0 exactly when x already belongs to W. So if the maximum were 0, every x would lie in W and W would have to be the whole of V; since W is not V, there exists y in V with y not in W, and the maximum is strictly positive. For the other inequality, the degree cannot exceed the dimension of V: the characteristic polynomial of T sends every vector to 0 (Cayley–Hamilton), hence into W, so every conductor s(x; W) divides it and has degree at most dim V. The maximum can actually equal dim V — that is possible, because the maximum could be attained by the characteristic polynomial itself. Now let y be a vector in V for which this maximum is attained; in principle such a y can be found — there is a y that attains this
maximum. All that I will do is consider the new subspace W + Z(y; T). By the way, this y cannot be in W — I have not mentioned that: if y were in W, then deg s(y; W) would be 0, whereas y was chosen so that this degree is the maximum, which is strictly positive. Since y does not belong to W, remember that we can write down a cyclic basis for the subspace Z(y; T); this subspace is at least one-dimensional, and the vector y in that cyclic basis is independent of W because it does not belong to W. So the dimension of W + Z(y; T) is strictly greater than the dimension of the subspace W that we started with. Now, this is true for any proper invariant subspace W, in particular for the invariant subspace W_0 that we are given. Applying the statement we just proved to W_0: there exists a vector — instead of y I call it y_1, so that I am consistent with my notation (if there are k cyclic subspaces, the sum will be W_k; here there is only one) — which does not belong to W_0, such that dim W_1 > dim W_0, where W_1 is the subspace W_0 + Z(y_1; T). Now look at W_1: W_0 is invariant under T, and the cyclic subspace Z(y_1; T) is invariant under T, so the sum W_1 is also invariant under T. If W_1 is the whole of the
space, we are done; otherwise I apply this step to W_1. If W_1 ≠ V, then — applying the previous little result to W_1 — we construct y_2 such that W_2 = W_0 + Z(y_1; T) + Z(y_2; T), where dim W_2 > dim W_1 > dim W_0. At every step the dimension increases by at least one, and V is finite-dimensional, so this procedure has to terminate. Since the process must terminate, after at most dim V steps we have V = W_0 + Z(y_1; T) + Z(y_2; T) + ... + Z(y_r; T) — no direct sum to begin with; in step 1 it is just the sum. I must still show that the polynomials satisfy those conditions; that is easy. This is the first place where we have used W_k to denote, for any k with 1 ≤ k ≤ r, the subspace W_0 + Z(y_1; T) + ... + Z(y_k; T): we apply the procedure we started with to the subspace W_k to get this formula, V is just the sum of these subspaces. Is it now clear how the p_k's have been chosen? (Whether p_k must divide p_{k−1} — that comes later; we will do that later.) What is the condition we have imposed on y_1, for instance? Go back to the step where we started with the invariant subspace W: y is that vector in V for which the maximum is attained. So in the first step, y_1 is the vector for which that maximum is attained; that is, if you look at little s(y_1; W_0) — this time the subspace is W_0 — I am calling this p
1. So by definition p_1 has maximum degree among all T-conductors into W_0 — is that not how we chose it? I am applying the construction with W_0: among all x in V, I look at the subspace W_0, take the unique monic generator of S(T; x; W_0), and maximize the degree; doing that with y_1 gives p_1, and similarly p_2, and so on. So this is really a consequence of how we have chosen y_1, y_2, etc.; let me just record that the choice of the y_k's ensures that p_k satisfies the maximality property. As I told you, this is an easy consequence of the construction of the vectors — I have illustrated it for the first vector — and that p_k has maximum degree among all T-conductors into W_{k−1} comes from the construction of the vectors y_1, ..., y_k in the same way. That is step 1; is that fine? Let us move to step 2, proceeding from step 1. Let y_1, y_2, ..., y_r be the nonzero vectors coming from step 1, satisfying conditions (1) and (2). The fact that these are nonzero I have not mentioned explicitly, but none of these vectors can be 0: remember that Z(0; T) is just the singleton {0}, so if one of them were 0 the dimension could not increase at that step — and the strict increase of the dimension was the crucial point. That is easy to see. Now, for a fixed k, let me set f as follows: I fix k and look at the ideal S(T; y_k; W_{k−1}); my little s is the unique monic generator of that ideal, and for this step I am calling that polynomial f for simplicity, instead of writing the whole thing. Then what do I know about this f? This f has the property that if
you look at f(T)y_k, it must belong to W_{k−1}: by definition f comes from the ideal S(T; y_k; W_{k−1}). I have not yet written down what it is we are going to prove in step 2; I am just fixing notation. So f(T)y_k is in W_{k−1}, and from the previous step I know what W_{k−1} is, so I can write f(T)y_k in terms of those subspaces. W_{k−1} is W_0 plus k − 1 cyclic subspaces, so I have a representation: the first term is in W_0 — I will call it y_0 — and then

f(T)y_k = y_0 + sum from i = 1 to k − 1 of g_i(T) y_i.

I do not know what these component vectors are, but I know they lie in cyclic subspaces, and there is an obvious basis for each: for instance for Z(y_1; T) we know it is y_1, T y_1, T^2 y_1, etc., so every element is a polynomial in T applied to y_1. Accordingly I write the i-th term as g_i(T)y_i; this is the most general expression for a vector in Z(y_i; T), and remember that each of these cyclic subspaces is invariant under T. So I have written down the most general representation of f(T)y_k: it belongs to W_{k−1}, the first term is in W_0, and the rest of the terms are in those k − 1 cyclic subspaces. Then what is it that I want to state in step 2? If this happens, then f divides each g_i — that is a remarkable property — and there exists z_0 in W_0 such that y_0 = f(T)z_0. This should remind you of the T-
admissibility property — the existence of z_0 is an immediate consequence of the T-admissibility of W_0. The rest, that f divides each g_i, is what we have to show, and that is quite non-trivial: step 2 is probably the most non-trivial part of this proof, and even within step 2 the second part is the easy consequence of T-admissibility; it is the part that f divides each g_i that is hard. So let me prove step 2. I have the polynomials f and g_i. By the Euclidean algorithm there exist polynomials h_i and r_i such that g_i = f h_i + r_i, where either r_i = 0 or deg r_i < deg f, for i = 1, ..., k − 1. (I am sorry — it is not f_i, just f: the product is f times h_i, where f is the polynomial I started with and the g_i are the polynomials coming from the general representation of f(T)y_k.) I want to show that each r_i = 0; if each r_i is 0, then f divides g_i for all i, and the second assertion, as I mentioned, is an easy consequence of the T-admissibility of W_0. So we need to show that each r_i = 0. The proof is by contradiction — suppose some r_i is not 0; we will derive a contradiction — and it is organized as an induction on k. For the induction I need a basis step, k = 1. For k = 1, what is given and what do I need to prove? What I have is that f(T)y_1 belongs to W_0. Does f divide some g_i? No — for k = 1 that part is vacuous: W_{k−1} is W_0 itself, the sum has no terms at all, and f(T)y_1 is simply equal to y_0. So the only thing I need to verify is whether the second condition is satisfied, and it is satisfied
because W_0 is T-admissible. So the case k = 1 is really just the T-admissibility of W_0. I think I have to stop here and continue tomorrow: we assume the statement is true for k ≥ 1 and prove it for k + 1. We stop here, in the middle of step 2.
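The Euclidean division used at the start of step 2 can be carried out explicitly; a small sketch with sympy, where f and g_i are hypothetical stand-ins (my own examples, not the actual conductors of the lecture):

```python
from sympy import symbols, div, Poly

x = symbols('x')

# Division with remainder in F[x]: g_i = f*h_i + r_i with r_i = 0 or
# deg r_i < deg f.  Step 2 will show that in fact every r_i = 0.
f = Poly(x**2 - 2*x + 1, x)   # hypothetical f = s(y_k; W_{k-1})
g = Poly(x**4 + x + 1, x)     # hypothetical g_i from the representation
h, r = div(g, f, x)

assert g == f * h + r             # the division identity
assert r.degree() < f.degree()    # remainder has strictly smaller degree
```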