So, now we have seen that each of these factors contributes (x - λ_i)^{d_i} to the characteristic polynomial and (x - λ_i)^{f_i} to the minimal polynomial. But we do not yet know how the numbers d_i and f_i compare — which of these positive integers is greater and which is smaller. Let us look at A_ii - λ_i I and call it N_i. What do you think the minimal polynomial of N_i is? Remember, λ_i is now a given, fixed number, the i-th eigenvalue. Let us make it simpler: what are the possible eigenvalues of N_i? What kind of property does N_i have? I am now going to claim that N_i is a nilpotent matrix. What is a nilpotent matrix? A matrix which, when raised to higher and higher powers, eventually becomes the zero matrix. So the claim is that N_i is nilpotent, and it is not that difficult to see. Why? What is the minimal polynomial of A_ii? It is (x - λ_i)^{f_i}, that is, μ_{A_ii} = (x - λ_i)^{f_i}. So the only eigenvalue of A_ii is λ_i and nothing else — we have just seen that, because the minimal polynomial has that form. So what are the eigenvalues of N_i? For the eigenvalue λ_i of A_ii and any corresponding eigenvector v you must have (A_ii - λ_i I) v = 0, which means N_i v = 0. So the only eigenvalue N_i can have is 0, you agree? The eigenvalues of N_i are all 0. Then the minimal polynomial of N_i must be x raised to some power. What power exactly? Let us get into that question. Can this power be more than d_i? What is the size of N_i? It is d_i × d_i. So consider the following set built from any vector.
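To see this claim concretely, here is a small numerical sketch. The 3 × 3 block A_ii and the eigenvalue λ_i = 2 below are hypothetical choices made just for illustration — any block whose only eigenvalue is λ_i would do.

```python
import numpy as np

# Hypothetical 3x3 block A_ii whose only eigenvalue is lambda_i = 2
# (upper triangular, so the eigenvalues are the diagonal entries).
lam = 2.0
A_ii = np.array([[2.0, 1.0, 0.0],
                 [0.0, 2.0, 1.0],
                 [0.0, 0.0, 2.0]])

N_i = A_ii - lam * np.eye(3)   # N_i = A_ii - lambda_i * I

# N_i has all eigenvalues 0, and some power of it vanishes: it is nilpotent.
print(np.linalg.eigvals(N_i))          # all zero
print(np.linalg.matrix_power(N_i, 3))  # the 3x3 zero matrix
```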
So, for any vector v in F^{d_i}, consider v, N_i v, N_i² v, and so on up to N_i^{d_i} v. How many vectors are here? There are d_i + 1 of them sitting inside a d_i-dimensional vector space, so they must be linearly dependent, and that linear dependence is exactly how we cook up minimal polynomials. But here we already know a priori that the minimal polynomial is x^{h_i} for some h_i, let us just say — actually it is going to be just f_i; we will see that h_i = f_i. Nonetheless, the linear independence in this case is lost in a very special manner. Why? Because if the minimal polynomial has this structure, at some stage you get exactly zero. So no matter what vector v you choose, you do not have to go to powers higher than d_i — d_i is the maximum it gets. So N_i raised to some power less than or equal to d_i must vanish. Please follow the stream of reasoning; I will repeat. What am I saying? First we saw that A_ii has minimal polynomial (x - λ_i)^{f_i}. Therefore the only eigenvalue of A_ii is λ_i. If the only eigenvalue of A_ii is λ_i, then the only eigenvalue of N_i, defined as above, has to be 0. If all the eigenvalues of N_i are 0, then its minimal polynomial must have the form x^{h_i} for some h_i. We want to find out what this number h_i must be. Now we see that it cannot be greater than d_i, because no matter what vector you take — how do we find the minimal polynomial? Just go back to the construction: choose any basis, and for every element of the basis keep hitting it until you arrive at a linearly dependent set. Here the linear dependence will be of a very special kind: you do not actually need a combination of multiple vectors, because the minimal polynomial has this specific form.
So it means that N_i raised to some power, acting on that particular vector, is 0. Take the highest such power over all the vectors in the basis: first let N_i act on v_1, then on v_2, then on v_3, and so on up to v_{d_i}, and look at the highest power to which N_i must be raised before linear independence is lost — which here is exactly when it becomes 0; that is why it is a nilpotent operator. So can this number h_i be any more than d_i? It cannot. This means h_i ≤ d_i. But what is h_i? It is nothing but f_i, the exponent of the corresponding factor of the original minimal polynomial. We constructed this N_i from A_ii; if h_i were any different from f_i we would have a contradiction: N_i^{h_i} = 0 means (A_ii - λ_i I)^{h_i} = 0, so (x - λ_i)^{h_i} would also annihilate A_ii at that same h_i, and f_i is by definition the smallest such exponent. So I just used h_i as an artifice, but really h_i must equal f_i. So essentially what we have is f_i ≤ d_i. Is this part clear? It is very important; if not, I will repeat. When we transferred the argument from A_ii to N_i, I said for argument's sake let us take another variable h_i, but really this h_i cannot be anything other than f_i, because N_i is related to A_ii after all: the moment you raise A_ii - λ_i I to some power, you are raising N_i to that same power, because they are equal.
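The bound h_i ≤ d_i can be checked on a small example. The 4 × 4 nilpotent matrix below is a hypothetical illustration (two chains of length 2), chosen so that the exponent f in its minimal polynomial x^f is strictly smaller than its size d = 4:

```python
import numpy as np

# A hypothetical 4x4 nilpotent matrix N (d = 4) whose minimal polynomial
# is x^2, i.e. f = 2 < d: it is built from two Jordan-like blocks of size 2.
N = np.zeros((4, 4))
N[0, 1] = 1.0
N[2, 3] = 1.0

# Smallest h with N^h = 0; by the argument above, h <= d = 4.
h = next(k for k in range(1, 5)
         if np.allclose(np.linalg.matrix_power(N, k), 0))
print(h)  # 2, illustrating f = h <= d
```

The loop mirrors the construction in the text: keep raising the power until the operator vanishes, and the exponent found is the minimal-polynomial exponent.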
So, if h_i were different from f_i, then you had better replace f_i with h_i there instead — do you see that? I just chose h_i on the go like that because I did not want to prolong the argument at that point, but the point is: h_i must equal f_i, and since h_i cannot be bigger than the algebraic multiplicity, f_i also cannot be bigger than the algebraic multiplicity. What does that mean in this context? Each factor x - λ_i appears in the characteristic polynomial raised to the algebraic multiplicity d_i, and in the minimal polynomial you need not raise it to a power greater than that. That means the minimal polynomial definitely divides the characteristic polynomial — so let me erase this part now. So the minimal polynomial divides the characteristic polynomial, which leads us to the celebrated Cayley-Hamilton theorem: the characteristic polynomial definitely belongs to the annihilating ideal of A, so χ(A) must be 0. In fact, the role of the minimal polynomial is not highlighted much, because, as you see, it takes a lot more linear algebra work even to compute the minimal polynomial; the characteristic polynomial is rather easy, because you know how to find a determinant, like an algorithm — but it suffices. So, for example, one of the applications people cite of the characteristic polynomial, or of the Cayley-Hamilton theorem, is the following.
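The statement χ(A) = 0 can be verified numerically. The 3 × 3 matrix below is a hypothetical sample; `np.poly` returns the coefficients of the characteristic polynomial (computed from the eigenvalues), and we substitute A into that polynomial:

```python
import numpy as np

# A sample 3x3 matrix, chosen only for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# Coefficients of chi(x) = det(xI - A), highest degree first, monic.
coeffs = np.poly(A)

# Evaluate chi(A) = A^n + c1*A^{n-1} + ... + cn*I.
n = A.shape[0]
chi_A = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))

# Cayley-Hamilton: the result is the zero matrix (up to floating-point error).
print(np.allclose(chi_A, 0))  # True
```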
So, I have not written it out step by step, but I have argued why the Cayley-Hamilton theorem must be true — this is exactly the Cayley-Hamilton theorem. One of its utilities is this: if a matrix is invertible, you can find its inverse without all those determinant computations. Why? Because we know the Cayley-Hamilton theorem is true, we have A^n + α_1 A^{n-1} + α_2 A^{n-2} + ... + α_{n-1} A + α_n I = 0. If the matrix has an inverse, which is to say it is not singular, then it has no zero eigenvalue — if it has a zero eigenvalue it cannot be inverted. So if it has indeed got an inverse, multiply through by A^{-1}: A^{n-1} + α_1 A^{n-2} + ... + α_{n-1} I + α_n A^{-1} = 0, and now you have A^{-1} = -(1/α_n) (A^{n-1} + α_1 A^{n-2} + ... + α_{n-1} I), that is, A^{-1} = -(1/α_n) Σ_{i=0}^{n-1} α_i A^{n-1-i}, where α_0 = 1 because the characteristic polynomial is monic. So without doing any determinants, provided the matrix is invertible — if it is not invertible there is nothing you can do even with determinants — you can get the inverse just by multiplying out powers of the matrix, by dint of the Cayley-Hamilton theorem. So now you might say, okay, there has been enough work done on this; we have got it down to its smallest
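The inverse formula just derived can be sketched in a few lines. The helper name and the 2 × 2 test matrix below are my own hypothetical choices; the formula itself is the one from the derivation above, with `np.poly` supplying the monic characteristic-polynomial coefficients:

```python
import numpy as np

def inverse_via_cayley_hamilton(A):
    """A^{-1} = -(1/a_n) * (A^{n-1} + a_1 A^{n-2} + ... + a_{n-1} I),
    where chi(x) = x^n + a_1 x^{n-1} + ... + a_n and a_0 = 1."""
    n = A.shape[0]
    c = np.poly(A)  # [1, a_1, ..., a_n], highest degree first
    assert not np.isclose(c[n], 0), "zero eigenvalue: A is not invertible"
    # Sum a_i * A^{n-1-i} for i = 0..n-1 (only powers of A, no determinants).
    S = sum(c[i] * np.linalg.matrix_power(A, n - 1 - i) for i in range(n))
    return -S / c[n]

A = np.array([[2.0, 1.0], [1.0, 3.0]])  # a small invertible example
print(np.allclose(inverse_via_cayley_hamilton(A) @ A, np.eye(2)))  # True
```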
possible ingredients, so apparently we have achieved quite a lot. We started with an n × n matrix, looked at the A-invariant subspaces constructed through coprime factorizations of the minimal polynomial, and along the way saw how to cook up the minimal polynomial as well. Once you have these coprime factors, we could prove the Cayley-Hamilton theorem without using determinants. We have also seen that any matrix can be brought down to a block-diagonal structure, each diagonal block of size d_i × d_i, where d_i happens to be the algebraic multiplicity of λ_i and the λ_i are the distinct eigenvalues of the particular matrix. So why are we still not satisfied? What do we want to look for further? You see, what is sitting inside each individual diagonal block? What if an individual block does not look very nice? Say we are talking about a 100 × 100 matrix in which one eigenvalue is repeated 10 times; then you still have a 10 × 10 diagonal block, with a lot of coupling between those 10 variables, and that may not be to our liking — we might still want to disintegrate it further. So hereafter what we are going to do is zoom in on that block. Let us forget the original representation; we have gotten as far as blocks A_11, A_22, ..., A_kk, where A_ii is of size d_i × d_i, and there is nothing further we can do to the overall structure — this is what A looks like, subject to the choice of basis based on the kernels of those irreducible, coprime factors of the minimal polynomial. Now we want to take a closer look at each individual A_ii, and we have already seen certain things. What have we seen? The minimal polynomial of A_ii is of the form (x - λ_i)^{f_i}, while the characteristic polynomial of A_ii is of the form (x - λ_i)^{d_i}. The question is: can we do
something more here? We know that N_i, defined as A_ii - λ_i I — these are some of the observations we have made already — satisfies N_i^{f_i} = 0 and N_i^k ≠ 0 for k < f_i; that is to say, the minimal polynomial of N_i is x^{f_i}. We have seen all this; it is a summary of what we have so far. Based on this, we now want to get further down into the structure of A_ii and see if we can bring it to a nice-looking form. You would consider that an endeavor worth our while — why? Because again, obviously, if d_i is pretty large in itself, then we may not have achieved too much; we would still have to solve a large-size system. Okay, so let us get down to the business of studying these N_i-like objects. From here on we will completely focus our attention on nilpotent matrices, whose minimal polynomial happens to be of the form x raised to some power, and therefore, of course, whose characteristic polynomial is also x raised to some (possibly other) power. That is our focus, because if we study this, we can then fit it back in here like a puzzle: we already have a basis giving the block structure, and based on this, can we make some smart choice of basis to transform it further? That will lead us to the so-called Jordan canonical form. So what I will do, in whatever time we have left, is first give you the statement of the Jordan canonical form. Maybe we will not have time for the proof, but we will try to understand the implications of the Jordan canonical form, and, if time permits, we will see some applications — maybe in the next module or the next lecture, because the proof takes quite a while. But at least that is the
reason for us to look at the statement of the theorem and some of its interesting implications now. So here is the setting for the Jordan canonical form. Our basic vector space is of size d_i, but we will really consider our entire space to be n-dimensional: when we fit the block back in, it is surely of size d_1 × d_1, say, but if this block is all we are interested in, there is no point in carrying the subscripts around. Let us give a numerical example to motivate this better. Suppose you have the matrix with rows (0, 1, 0), (0, 0, 1), (-1, -3, -3). What do you think its characteristic polynomial is? This could, by the way, be one small block sitting inside a larger matrix, but for me now this is my entire vector space, so I might as well consider it my three-dimensional vector space; I shall be using n to denote its dimension. You can check that the characteristic polynomial is (x + 1)³. Therefore its only eigenvalue is -1, so its minimal polynomial must look like x + 1, or (x + 1)², or (x + 1)³ — nothing more, surely, because we have already proved the Cayley-Hamilton theorem, and in that proof we saw that the power to which every factor gets raised in the minimal polynomial is at most the algebraic multiplicity; you do not need any higher power than that. We can check this here. Take A + I: it has rows (1, 1, 0), (0, 1, 1), (-1, -3, -2). Surely this is not 0, so x + 1 is ruled out. But try (A + I)² — is that 0? The very first check tells you it is not: the (1,1) entry, the first row multiplied with the first column, leaves you with a 1, so this is also not the zero operator. And now, of course, there is no magic in it: you can just go ahead and check that (A + I)³ must be 0; that is essentially nothing but the Cayley-Hamilton
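This worked example can be reproduced directly. The checks below follow the lecture's steps: compute the characteristic polynomial, then test A + I and its powers for vanishing.

```python
import numpy as np

# The example matrix from the lecture.
A = np.array([[ 0.0,  1.0,  0.0],
              [ 0.0,  0.0,  1.0],
              [-1.0, -3.0, -3.0]])

# Characteristic polynomial: (x + 1)^3 = x^3 + 3x^2 + 3x + 1,
# so the coefficients are approximately [1, 3, 3, 1].
print(np.poly(A))

M = A + np.eye(3)
print(np.allclose(M, 0))                             # False: A + I != 0
print(np.allclose(np.linalg.matrix_power(M, 2), 0))  # False: (A + I)^2 != 0
print(np.allclose(np.linalg.matrix_power(M, 3), 0))  # True:  (A + I)^3 == 0
```

So the minimal polynomial of this A is the full (x + 1)³, exactly as argued in the text.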
theorem. So just check this as an exercise. My point behind trying this exercise is to tell you: forget about d_i, f_i, and all the i-dependence, and take our basic operators to be of just one kind — operators with a single repeated eigenvalue — because if the eigenvalues are not repeated, we know exactly what to do with them; we are now zooming in on each individual block. With this motivation, we forget all the subscripts and superscripts and consider only operators A : V → V with χ_A = (x - λ)^n, where dim V = n, and μ_A = (x - λ)^f — to what power exactly? Let us just give it a name, f, where of course f ≤ n. These are the only operators we are going to look at now. With this in mind, define N = A - λ I. Surely this is nilpotent: when you raise it to the power f it becomes the zero operator, and all its eigenvalues are 0 — just as A has all its eigenvalues at λ, N has all its eigenvalues at 0. So, in this setting, here is the statement of the Jordan canonical form. There exist v_1, v_2, ..., v_k in V such that the list N^{m_1} v_1, N^{m_1 - 1} v_1, ..., N v_1, v_1 (that ends the story of v_1; next starts the story of v_2), N^{m_2} v_2, N^{m_2 - 1} v_2, ..., N v_2, v_2, and so on until N^{m_k} v_k, N^{m_k - 1} v_k, ..., N v_k, v_k, is a basis for V, with N^{m_i + 1} v_i = 0 for each i from 1 through k. What does that last condition tell us exactly? Look at these
last vectors — or rather the first vectors — in each of these chains. What do you think they form a basis for? If I hit them with one more N, by the premise above they get taken to zero. So these vectors from which each block starts form a basis for the kernel of N: that is, N^{m_1} v_1, ..., N^{m_k} v_k is a basis for ker N. It is almost magical. It says that you will be able to find a certain number of vectors inside this vector space such that if you keep hitting each of them with N repeatedly until a point where it devolves to zero, and then stack them up like this, you end up with exactly n vectors. How to choose those v_1 through v_k in this special manner may not be very obvious at this point, but the theorem says they do exist. This is the statement of the existence of the Jordan canonical form for any operator with a single repeated eigenvalue. So even if the operator is not diagonalizable, at least you can get it down to the Jordan form, and if you can, then a basis that takes you to the Jordan form looks exactly like this. This N, by the way, need not look like any special nilpotent matrix; it could be just the matrix I had chosen earlier — A + I for the matrix A with rows (0, 1, 0), (0, 0, 1), (-1, -3, -3), which was a nilpotent matrix. So the only question that remains — besides the proof, which is a big question we will not have time to address in this module — is the more important one: supposing you believe this is true, how do you choose these v's? It turns out that this choice of v's is critical to our ability to bring the matrix down to this form. We will see that in the next module.
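For the nilpotent N = A + I from the example above, the Jordan-chain statement can be checked explicitly. Here there is a single chain (k = 1, m_1 = 2), and the starting vector v_1 below is a hypothetical choice — any v with N² v ≠ 0 would serve:

```python
import numpy as np

# N = A + I for the lecture's example matrix; N^3 = 0 but N^2 != 0.
N = np.array([[ 1.0,  1.0,  0.0],
              [ 0.0,  1.0,  1.0],
              [-1.0, -3.0, -2.0]])

v1 = np.array([1.0, 0.0, 0.0])        # hypothetical choice with N^2 v1 != 0
chain = [np.linalg.matrix_power(N, p) @ v1 for p in (2, 1, 0)]

# The chain {N^2 v1, N v1, v1} is a basis for the 3-dimensional space...
print(abs(np.linalg.det(np.column_stack(chain))) > 1e-9)  # True
# ...and its first vector N^2 v1 is killed by one more application of N,
# so it spans ker N, matching the statement above.
print(np.allclose(N @ chain[0], 0))  # True
```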