Okay, the proof of the primary decomposition theorem. I have rewritten the statement with slight changes; please note that the notation W_i has been introduced, and that p_1, p_2, ..., p_k are distinct monic irreducible polynomials. The plan of the proof is to construct projections and then identify the range spaces of those projections with the subspaces W_i. To construct the projections I will need the extended division algorithm for polynomials.

Let me introduce some polynomials first. I will call f_i the polynomial obtained from the minimal polynomial m after removing the factor corresponding to i, that is, after removing p_i^{r_i}. So I am defining polynomials f_1, f_2, ..., f_k: for example f_1 = p_2^{r_2} p_3^{r_3} ... p_k^{r_k}, f_2 = p_1^{r_1} p_3^{r_3} ... p_k^{r_k}, and so on; f_i is the minimal polynomial divided by the prime power you have for i. There is another notation for this product: with i fixed and j as the running index, f_i = product over j ≠ i of p_j^{r_j}, for 1 ≤ i ≤ k.

Now consider this collection of polynomials f_1, f_2, ..., f_k. Remember that the set of all polynomials over a field is a principal ideal domain; you can think of it as generalizing the integers, so whatever the Euclidean algorithm gives you there, you can also get in this principal ideal domain. I claim these polynomials are relatively prime. Can you see that? Numbers a and b are said to be relatively prime if their greatest common divisor is 1, and numbers a_1, a_2, ..., a_k are relatively prime if their greatest common divisor is 1. Is that the case with these polynomials? It is, because f_1 misses p_1^{r_1}, f_2 misses p_2^{r_2}, and in general f_i misses p_i^{r_i}. What are the prime-power divisors of the minimal polynomial? They are exactly p_1^{r_1}, p_2^{r_2}, ..., p_k^{r_k}, and each p_i^{r_i} divides k − 1 of the polynomials f_1, ..., f_k but fails to divide the remaining one, namely f_i. So no prime power divides all of them, and these are relatively prime polynomials.

Similar to what happens for integers, we can now write down the following statement; this is really the extended Euclidean algorithm, so I will not go into the details of how it is carried out. Since f_1, ..., f_k are relatively prime, there exist polynomials g_1, g_2, ..., g_k such that f_1 g_1 + f_2 g_2 + ... + f_k g_k = 1. This is the analogue of the integer statement: if a and b are relatively prime, there exist c and d such that ac + bd = 1, and c and d are obtained by the repeated division algorithm. Remember that on the left we have polynomial products and polynomial addition: the first term is f_1(t) g_1(t), the second is f_2(t) g_2(t), and so on up to f_k(t) g_k(t), and the sum equals the constant polynomial 1 for all t.

Using these, I define new polynomials h_i(t) = f_i(t) g_i(t); then the sum from j = 1 to k of h_j equals 1, which is just a different notation. Now set E_j = h_j(T), evaluating h_j at the operator T. We will show that these E_j are projections, and we know that projections give rise to direct sum decompositions; the only thing left after that will be to identify the range spaces of these projections, and we will show that they turn out to be the subspaces W_i. Hence the proof. What also 
follows is that each W_i is invariant under T, and this needs no separate proof: if you look at the definition of W_i, it is the null space of the operator p_i(T)^{r_i}, which is a polynomial in T and hence commutes with T. We know that for any operator U that commutes with T, both the null space and the range of U are invariant under T. So for the second part there is really no need for a proof; it is only the first part that we need to prove.

Now, E_j = h_j(T), so from the formula sum of the h_j equals 1 it follows immediately that E_1 + E_2 + ... + E_k is the identity. What about E_i E_j? We must show that the product E_i E_j is 0 when i ≠ j. Look at the product h_i h_j; I will keep writing it without the t's. We have h_i h_j = f_i g_i f_j g_j, which can be rewritten as f_i f_j g_i g_j. Now look at the product f_i f_j. The polynomial f_i misses p_i^{r_i} and f_j misses p_j^{r_j}, but the product f_i f_j is divisible by m, the minimal polynomial. Do you agree? The product in fact has more factors, but all I want to observe is divisibility by m. The minimal polynomial requires p_i^{r_i} for every i; the only factor missing from f_i is p_i^{r_i}, but that factor is present in f_j whenever i ≠ j, and I am doing all this for i ≠ j because that is when I want to show E_i E_j = 0. So for i ≠ j the product f_i f_j contains m and other factors as well; it is divisible by m. Then E_i E_j is by definition h_i(T) h_j(T) = f_i(T) f_j(T) g_i(T) g_j(T), and since the minimal polynomial divides this product, the product is an annihilating polynomial for T, so E_i E_j = 0. Is that clear? The product is 0 because m(T) = 0 and m(T) sits inside this product along with other factors.

So the sum of the E_j is the identity and E_i E_j = 0 for i ≠ j. Now multiply the equation E_1 + ... + E_k = I by E_i: all terms on the left vanish except the i-th, which is E_i², while on the right I have E_i. So E_i² = E_i; the E_i are idempotent. Idempotents with pairwise products 0 that sum to the identity are projections, and I know that projections give rise to a direct sum decomposition. All that remains is to show that the range of E_i is W_i, the null space of p_i(T)^{r_i}; once we show this, the decomposition in the statement is valid.

So we have two subspaces, and we must show that each is contained in the other. Let me start with x in the range of E_i; then x = E_i x, and I want to show that x belongs to the null space of p_i(T)^{r_i}. Look at p_i(T)^{r_i} x; I want to show this is 0. Since x = E_i x, this equals p_i(T)^{r_i} E_i x, and E_i is by definition h_i(T) = f_i(T) g_i(T). Now p_i^{r_i} · f_i = m, the minimal polynomial, so this is m(T) g_i(T) x. But g_i(T) is a polynomial in T, so these two operators commute and I can flip them around (the same commuting argument was in fact available earlier as well): this is g_i(T) m(T) x. Since m is the minimal polynomial, m(T) is identically the zero operator, and so p_i(T)^{r_i} x = 0. So I have shown that if x is in the range of E_i, then x is in the null space of p_i(T)^{r_i}. Conversely, suppose x is taken from the null space of p_i(T)^{r_i}. I must show that x = E_i x, that is, that x belongs to the range of E_i, which is the same as 
saying x = E_i x. Is it clear that it is then enough to show that E_j x = 0 for all j ≠ i? Because if E_j x = 0 for all j ≠ i, then looking at the equation E_1 + E_2 + ... + E_k = I, we get x = E_1 x + E_2 x + ... + E_i x + ... + E_k x, all terms except the i-th are 0, and so x equals the i-th term E_i x, which comes from the range of E_i, and I am through. This argument we have seen before. So we show E_j x = 0 for all j ≠ i; and indeed, for j ≠ i the polynomial f_j contains the factor p_i^{r_i}, so E_j x = g_j(T) f_j(T) x is some polynomial in T applied to p_i(T)^{r_i} x, which is 0 since x lies in the null space of p_i(T)^{r_i}. That completes the proof.

There is one consequence I want to look at: a particular case of the primary decomposition theorem. Suppose each p_i is a linear polynomial, that is, p_i(t) = t − λ_i. This happens, for example, when the underlying field is the field of complex numbers, or any algebraically closed field. In this case let us observe that W_i is the null space of (T − λ_i I)^{r_i}: W_i is the null space of p_i(T)^{r_i} and p_i(t) = t − λ_i. Let us now define D to be the operator λ_1 E_1 + λ_2 E_2 + ... + λ_k E_k. I am assuming the minimal polynomial factors into a product of powers of linear polynomials, so I know the numbers λ_1, ..., λ_k, and I also know how to construct the projections. Remember, in fact, that the projections are constructed as above, so they are polynomials in T; this is an important observation coming from the previous theorem. So I have a finite dimensional vector space and the operator D defined in this manner, where E_1, ..., E_k satisfy sum E_j = I, E_i² = E_i, and E_i E_j = 0 for i ≠ j. Then, from one of the results that we proved earlier, it follows that D is diagonalizable, that λ_1, ..., λ_k are the distinct eigenvalues of D, and that the range of E_i equals W_i. So I am appealing to that result.

Remember we started with an operator T and have defined a new operator D. I will now define another operator N by N = T − D. Then it is immediate that T = D + N: the operator T has been decomposed into a sum whose first part is diagonalizable. What property does the second operator have? D is diagonalizable, and N is what is called a nilpotent operator. Look at the definition of N: it is T − D. I can write T as T·I, and the identity can be decomposed as E_1 + E_2 + ... + E_k, while D is λ_1 E_1 + ... + λ_k E_k. So N = T(E_1 + ... + E_k) − (λ_1 E_1 + ... + λ_k E_k) = (T − λ_1 I) E_1 + ... + (T − λ_k I) E_k. Let me write this in summation notation: N equals the sum from j = 1 to k of (T − λ_j I) E_j. I want to look at powers of N: N², N³, and in general N^L. For N² I need to multiply N by N, which means carrying out many multiplications: (T − λ_1 I) E_1 times N, plus (T − λ_2 I) E_2 times N, and so on. I will do one of these and then observe the pattern. Consider (T − λ_1 I) E_1 · N, the first term of N²; this is 
(T − λ_1 I) E_1 times the sum from j = 1 to k of (T − λ_j I) E_j. The E_1 can be brought inside each term, because E_1 is a polynomial in T and so commutes with each (T − λ_j I). Now E_1 E_j = 0 when j ≠ 1, and when the running index j takes the value 1 I get E_1², which is E_1. So the whole thing simplifies to (T − λ_1 I)² E_1: that is the first term of N². In general, then, N² = sum from j = 1 to k of (T − λ_j I)² E_j, and by induction it can be shown that N^L = sum from j = 1 to k of (T − λ_j I)^L E_j for any positive integer L.

Now suppose I choose L greater than or equal to all of the numbers r_1, r_2, ..., r_k. What are these numbers? They are the exponents of the primes occurring in the minimal polynomial. What happens then? Look at N^L x: this is the sum from j = 1 to k of (T − λ_j I)^L E_j x, and E_j x lies in the range of E_j. Take the first term, say with L = r_1 + 1: it is (T − λ_1 I)^{r_1 + 1} E_1 x = (T − λ_1 I) (T − λ_1 I)^{r_1} E_1 x. But E_1 has the property that the range of E_1 is contained in the null space of (T − λ_1 I)^{r_1}; do you remember this property? The range of E_i is the null space of p_i(T)^{r_i}, and here we have taken the p_i to be linear, so p_i(t) = t − λ_i and anything in the range of E_i is annihilated by (T − λ_i I)^{r_i}. In the same way, the range of E_j is contained in the null space of (T − λ_j I)^{r_j}, so when L is greater than or equal to each of the r_j, every term is 0, and N^L x = 0. Since this holds for all x, N^L is the zero operator. Such an operator is called a nilpotent operator. It is possible that N^L = 0 already for some smaller integer, but if I choose an integer L greater than or equal to all of the positive integers r_1, ..., r_k, then for that integer it must be true. So N is nilpotent.

This is the best one could do for a general operator: write T as D + N where D is diagonalizable and N is nilpotent. What also follows is this: look at the definition of N. We have N = T − D, and D is defined in terms of E_1, ..., E_k, which are in particular polynomials in T. So D is a polynomial in T, and hence N = T − D is a polynomial in T as well. Polynomials in T commute with each other, so we have the property DN = ND. This is sometimes referred to as the Jordan decomposition of the operator T. Let me summarize what we have shown: if T is in L(V), then there exist a diagonalizable operator D and a nilpotent operator N such that T = D + N and ND = DN. This is what we have shown just now.
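The whole argument can be collected into one display; this is simply a restatement of the formulas derived above, in the lecture's own notation, with m(t) the minimal polynomial of T.

```latex
% Jordan decomposition of T, where m(t) = \prod_{i=1}^{k} (t - \lambda_i)^{r_i}
\begin{aligned}
I &= E_1 + \cdots + E_k, \qquad E_i E_j = 0 \ (i \neq j), \qquad E_i^2 = E_i,\\
D &= \sum_{j=1}^{k} \lambda_j E_j \quad (\text{diagonalizable}),\\
N &= T - D = \sum_{j=1}^{k} (T - \lambda_j I)\, E_j,\\
N^L &= \sum_{j=1}^{k} (T - \lambda_j I)^L E_j = 0 \quad \text{for } L \ge \max(r_1, \dots, r_k),\\
T &= D + N, \qquad DN = ND \quad (\text{both are polynomials in } T).
\end{aligned}
```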
Now, what can also be shown, although I will not do it, is uniqueness: if I have any other pair of operators D′, N′ such that T = D′ + N′, with D′ diagonalizable, N′ nilpotent, and D′N′ = N′D′, then D′ = D and N′ = N. So the decomposition is unique in this respect. This follows from the fact that two diagonalizable operators are simultaneously diagonalizable if and only if they commute; since I have not proved that result I am not going to state it formally, but it is for your information that any other decomposition satisfying these two conditions must be the one we constructed. Notice also that if T is diagonalizable, then D turns out to be T itself, so that N = 0; if T is not diagonalizable, this decomposition is the best one could do. What is the use of the condition ND = DN? Have you studied matrix exponentials in your differential equations course? Let me give a quick review. In solving systems of linear differential equations you need the notion of e^A, the exponential of a matrix. There is a formula similar to the scalar exponential: the Maclaurin series of e^x is 1 + x + x²/2! + ..., and there is a similar series expression for e^A for any square matrix A.
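As a small illustration of why the commuting condition matters, here is a Python sketch (my own, not from the lecture) for the 2×2 matrix A = [[2, 0], [1, 2]]. Its Jordan decomposition is A = D + N with D = 2I diagonal and N = [[0, 0], [1, 0]] nilpotent (N² = 0), and since DN = ND we get e^A = e^D e^N = e²(I + N), which we can check against the truncated Maclaurin series.

```python
# Sketch: e^A via the commuting splitting A = D + N versus the Maclaurin series.
# A = [[2, 0], [1, 2]], D = 2I, N = [[0, 0], [1, 0]] with N^2 = 0 and DN = ND,
# so e^A = e^D * e^N = e^2 * (I + N).  Plain-Python 2x2 matrix helpers only.
import math

def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def expm_series(A, terms=30):
    """Truncated Maclaurin series I + A + A^2/2! + ... for e^A."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
    power = [[1.0, 0.0], [0.0, 1.0]]    # running power A^n
    for n in range(1, terms):
        power = mat_mul(power, A)
        result = mat_add(result, mat_scale(1.0 / math.factorial(n), power))
    return result

A = [[2.0, 0.0], [1.0, 2.0]]

# e^A via the commuting splitting: e^(D+N) = e^D (I + N), with D = 2I.
I_plus_N = [[1.0, 0.0], [1.0, 1.0]]
exp_split = mat_scale(math.exp(2.0), I_plus_N)

# e^A via the truncated series; the two computations agree closely.
exp_series = expm_series(A)
```

The point of the sketch is the design choice the lecture is heading toward: because D and N commute and N is nilpotent, e^N collapses to a finite sum (here just I + N), so the infinite series only ever has to be evaluated on the easy diagonal part.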
Now, this series is what is used in solving a system of differential equations, so you need to compute powers of A. The ideal situation is that the matrix is diagonalizable; then computing powers is easy. Let me quickly mention this. If A is diagonalizable, A = PDP⁻¹ with D diagonal, then A² = PDP⁻¹PDP⁻¹ = PD²P⁻¹. So to compute A² you really need only two multiplications, P times D² and that product times P⁻¹; P and P⁻¹ are computed once and kept. And since D is diagonal, say with diagonal entries d_1, d_2, ..., d_n, D² is just the diagonal matrix with entries d_1², d_2², ..., d_n². The same works for any power: A^r = PD^rP⁻¹. This greatly simplifies the computation of powers of a matrix, and one of the places where that is useful is in computing the exponential of a matrix. What happens if A is not diagonalizable? Then A can be written as D + N as above, and you can still compute the powers. It will not be quite this simple, but it is still not bad, and here is where the commuting is practically important: to compute (D + N)^L, since D and N commute you get a genuine binomial expansion, and after some stage the powers of N contribute nothing because N is nilpotent. The computation is not as simple as in the diagonalizable case, but it is still manageable. So in computing powers, especially when solving simultaneous differential equations, it is useful to know whether the matrix is diagonalizable; and even if it is not, this decomposition is good enough.

I want to illustrate this result, the Jordan decomposition, and the primary decomposition theorem that leads to it, by means of an example. I am looking at the 3×3 matrix A with rows (2, 0, 0), (1, 2, 0), (0, 0, −1). Please verify that the characteristic polynomial of this matrix is (t − 2)²(t + 1); the matrix is lower triangular, so the eigenvalues can be read off the diagonal: 2, occurring twice, and −1. I should write λ_1 = 2 and λ_2 = −1, because I am always writing λ_1, ..., λ_k as distinct; sometimes one writes λ_1 = λ_2 = 2 and λ_3 = −1, but I will simply say that 2 occurs twice. I want to compute the E_j, then D, then N. By the way, this matrix is not diagonalizable: you can verify that the eigenspace corresponding to the eigenvalue 2 has dimension only 1. You can also verify that the minimal polynomial is the characteristic polynomial itself, m(t) = (t − 2)²(t + 1). Please verify those facts. What is f_1 in this example? It is m(t) with the prime power for the first eigenvalue removed: taking 2 as the first eigenvalue and −1 as the second, f_1(t) = t + 1 and f_2(t) = (t − 2)². I know there exist polynomials g_1, g_2 such that f_1 g_1 + f_2 g_2 = 1; I will give you one choice, writing from memory: g_1(t) = (5 − t)/9 and g_2(t) = 1/9. Then f_1 g_1 + f_2 g_2 = 1; please verify this. From this I can calculate E_1 = f_1(A) g_1(A) = (A + I)(5I − A)/9; please verify that E_1 turns out to be the diagonal matrix with diagonal (1, 1, 0). Similarly E_2 = f_2(A) g_2(A) = (A − 2I)²/9, which works out to the diagonal matrix with diagonal (0, 0, 1). Use these calculations to verify that E_1 + E_2 = I. By definition, 
D = λ_1 E_1 + λ_2 E_2 = 2E_1 − E_2, and in this example that is the diagonal matrix with diagonal (2, 2, −1). Remember, D is in general only diagonalizable, but in this example it turns out to be an actual diagonal matrix, because E_1 and E_2 are already diagonal matrices; that happened here because A is triangular, and it does not happen in general. In any case D is obviously diagonalizable. What is N? N = T − D, in this case A − D, which is the matrix with a single nonzero entry, a 1 in position (2, 1). N must be nilpotent, and indeed you can verify that N² = 0. Okay, let me stop here.
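The verifications the lecture asks for in this example can be done mechanically. Below is a small Python sketch (my own check, not part of the lecture) using exact rational arithmetic and plain nested lists, building E_1 and E_2 from the formulas above and confirming the projection, direct-sum, and nilpotency properties.

```python
# Sketch verifying the lecture's 3x3 example with exact arithmetic.
# A has rows (2,0,0), (1,2,0), (0,0,-1); m(t) = (t-2)^2 (t+1).
# E1 = (A + I)(5I - A)/9 and E2 = (A - 2I)^2 / 9 are the projections,
# D = 2*E1 - E2 is the diagonalizable part, N = A - D the nilpotent part.
from fractions import Fraction

def mul(X, Y):
    """Multiply two square matrices given as nested lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X))] for i in range(len(X))]

def scale(c, X):
    return [[c * X[i][j] for j in range(len(X))] for i in range(len(X))]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A  = [[2, 0, 0], [1, 2, 0], [0, 0, -1]]

# Projections from the primary decomposition: E_i = f_i(A) g_i(A).
E1 = scale(Fraction(1, 9), mul(add(A, I3), add(scale(5, I3), scale(-1, A))))
A_minus_2I = add(A, scale(-2, I3))
E2 = scale(Fraction(1, 9), mul(A_minus_2I, A_minus_2I))

D = add(scale(2, E1), scale(-1, E2))   # here even literally diagonal
N = add(A, scale(-1, D))               # single 1 in position (2, 1)
```

Running the checks confirms E_1 = diag(1, 1, 0), E_2 = diag(0, 0, 1), E_1 + E_2 = I, E_1 E_2 = 0, D = diag(2, 2, −1), and N² = 0, exactly as computed in the lecture.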