So far, to quickly summarize what we have seen: we have learnt how to choose an appropriate basis so that any matrix can be block diagonalized, where each block A_ii has the property that its minimal polynomial is of the form (x − λ_i)^{f_i} and its characteristic polynomial is of the form (x − λ_i)^{d_i}. This is the overall picture for any operator A. Having seen that this can be done, we argued that it makes sense to take a closer look at the individual blocks, because nothing more can be done beyond simplifying the structure of the blocks themselves. So from here on we decided to look only at operators of the type A_ii, all of which eventually stem from an overall characteristic polynomial of the form Π_i (x − λ_i)^{d_i} and a minimal polynomial Π_i (x − λ_i)^{f_i} for the same operator. From that given structure we brought the matrix down to block-diagonal form; no further simplification is possible apart from simplification within the little blocks. Now, these "little" blocks can in fact be pretty big, which is why it makes sense to dive deeper into the structure of operators satisfying this property. That is what led us to the statement of the theorem on the Jordan canonical form. How? Since A_ii has only a single eigenvalue repeated multiple times, we cast our attention on operators with only one repeated eigenvalue — that is, the spectrum, the set of all distinct eigenvalues, is just a singleton, repeated as many times as the size of the matrix, i.e., the dimension of the ambient space on which the operator acts.
So we are only going to look at such operators, and it turns out that A_ii − λ_i I is exactly what we defined as N_i, which we thereafter saw is a nilpotent matrix: if you raise it to a sufficiently large power, this entire matrix N_i^{r_i}, say, becomes exactly the zero matrix. What is the justification? It stems directly from the minimal polynomial: substitute A_ii into (x − λ_i)^{f_i} and you get (A_ii − λ_i I)^{f_i} = 0, so it is only obvious. Now, if we consider the minimal polynomial of this nilpotent matrix, we argued that the minimal polynomial of N_i is nothing but x^{f_i} — that is exactly what we saw. I am just quickly recapitulating what we have covered so far; please ask if at any point you have forgotten how things fit into place. So this is the minimal polynomial. Based on this, I can now drop the index i, because I am focusing on a single block — irrespective of whether it is the (1,1) block or the (2,2) block — and it is well understood at this point that we have in mind only operators for which these conditions hold: a single repeated eigenvalue λ, with N = A − λI nilpotent. Based on this nilpotent matrix, we then said that there is a special choice of basis which gives such operators a very special appearance when represented as a matrix, and that is exactly the Jordan canonical form. Of course, we did not yet tell you explicitly what that form was; we only made the statement of the theorem.
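As a quick numerical sanity check of the nilpotency claim — this is my own illustrative example, not one from the lecture — take an upper-triangular matrix with the single eigenvalue λ = 2 repeated three times, and verify that N = A − λI is killed by the power f = 3:

```python
import numpy as np

# An example block A with the single eigenvalue lam = 2 repeated
# f = 3 times (upper triangular, so the eigenvalues sit on the diagonal).
lam, f = 2.0, 3
A = np.array([[2.0, 1.0, 4.0],
              [0.0, 2.0, 5.0],
              [0.0, 0.0, 2.0]])

# N = A - lam*I is nilpotent: (A - lam*I)^f = 0, straight from the
# minimal polynomial (x - lam)^f evaluated at A.
N = A - lam * np.eye(3)
print(np.allclose(np.linalg.matrix_power(N, f), 0))  # True
```

Here N is strictly upper triangular, so its cube vanishes even though N itself and N² do not.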
So I will repeat the statement of the Jordan canonical form a little less formally than I stated it the other day. Suppose the operator A acts on the vector space V, from V to itself — of course, when the operator acts on an n-dimensional space, the dimension d is just n, no doubts on that — and satisfies the condition above; call it (★). You know that from such an A you can always cook up the nilpotent N = A − λI. Then there exist vectors v_1, v_2, ..., v_k such that the chains
N^{m_{v1}} v_1, N^{m_{v1}−1} v_1, ..., N v_1, v_1, then N^{m_{v2}} v_2, ..., v_2, and so on until N^{m_{vk}} v_k, ..., v_k
together form a basis for V, and the chain-tops
N^{m_{v1}} v_1, N^{m_{v2}} v_2, ..., N^{m_{vk}} v_k
form a basis for — remember what this was a basis for? — the kernel of N, right. What does this tell you about the numbers m_{v1}, m_{v2}, ..., m_{vk}? How do you arrive at them? Count the entries — good point: the first chain has m_{v1} + 1 vectors, the second m_{v2} + 1, and so on, because each chain starts from N^0 v_i = v_i; there are k such N^0 vectors v_1 through v_k. So in total there are Σ_{i=1}^{k} m_{vi} + k vectors, and a simple calculation leads you to conclude that n = Σ_{i=1}^{k} m_{vi} + k. What is the next conclusion you can draw? These vectors are parts of a basis, so they must be linearly independent; in particular, none of them can be 0.
However, the moment you hit a chain-top with one more N, the result must be 0, because the chain-tops form a basis for the kernel of N. So if you hit v_1 with powers of N up to m_{v1}, none of the results can be zero, but the moment you hit it with one more power, that is m_{v1} + 1, it becomes 0; similarly for m_{v2}, for m_{v3}, and so on. Those are the characteristic defining features of the numbers m_{v1}, m_{v2}, m_{v3}, .... So that, very loosely speaking, is the statement; I gave a more formal version the previous day, but this is more or less what it means. Before we dive into the proof, let us try to understand the consequence — what is so special about such a basis? Assuming I accept that this is my ordered basis for the vector space V, what would the matrix A, or rather first the matrix representation of N, look like under this basis? Once we know what N looks like, we can just add λI, represented in the same basis, and we get the representation of A. In other words, before we even invest our time and effort in proving the Jordan canonical form, we should at least convince ourselves that it is a desirable form to have: it falls just short of a diagonal form, and apparently it is the best you can do. So let us test that claim. Take the chain vectors above as the ordered basis. The representation of N subject to this basis is built column by column: first N acting on N^{m_{v1}} v_1, represented in the basis; then N acting on N^{m_{v1}−1} v_1, represented in the same basis; are you with me on this? That is how you understand the representation of an operator as a matrix — its action on members of the basis, represented in terms of that same basis; these columns are coordinate vectors. It goes all the way up to N acting on v_k. What do you think happens in the first column? It is all zeros, right, since N^{m_{v1}} v_1 lies in the kernel of N.
So the first column is entirely zero. What about the second column? N acting on N^{m_{v1}−1} v_1 gives N^{m_{v1}} v_1, which is now the first basis vector, and the coordinate representation of the first basis vector is just (1, 0, 0, ...). Agreed? Let me just complete the first chain. By the same argument, if you let N act on N^{m_{v1}−2} v_1, it becomes N^{m_{v1}−1} v_1, the second basis vector, whose coordinate representation is (0, 1, 0, ...) with a 1 in the second position; so this next column is (0, 1, 0, ...), and so on, until you arrive at the last column of the chain. What is that, in terms of the ordering of the basis? Counting 1, 2, 3, 4, ... along the chain, the action of N on v_1 gives N v_1, which is the entry just preceding v_1 — the m_{v1}-th element of the basis. So that column has zeros everywhere except a 1 in the m_{v1}-th position. The next column starts the second chain: you take N^{m_{v2}} v_2 and let N act on it, which gives 0, because each of these chain-top fellows belongs to the kernel of N, does it not? So again you have another full column of zeros. Now, what is the size of this first block, by the way? Be careful with the counting here: the first chain contributes m_{v1} + 1 basis vectors and hence m_{v1} + 1 columns, one extra beyond m_{v1} because the chain starts at N^0 v_1.
So the first block is square of size (m_{v1} + 1) × (m_{v1} + 1), and by the same token, if you carry on, the subsequent block is (m_{v2} + 1) × (m_{v2} + 1), and so on. Each block has a very special structure: the diagonal entries are all zero, and the first superdiagonal is all ones. Is the counting clear? See, the first column is zero; the second column picks out the first vector; the third picks out the second vector; the fourth the third; and so on. How many vectors were generated in this first chain? There are m_{v1} + 1 of them — v_1 hit with N zero times, up to v_1 hit with N m_{v1} times — so in total m_{v1} + 1 fellows, and you have to study the effect of the action of N on all m_{v1} + 1 of them. (I was not very sure of the counting myself for a moment; sorry about that — it is m_{v1} + 1.) And then it adds up: this block has size m_{v1} + 1, the second will be m_{v2} + 1, and if you keep adding, it is just Σ m_{vi} + k, which is exactly what the size of the matrix should be, because that is what n is.
So the point I am making is that this first block is of size (m_{v1} + 1) × (m_{v1} + 1), but the same holds for every other block: every time you let N act on one of the chains generated by an individual v_i, this is exactly what you get. Do you follow the argument? I am showing it for the first chain only; rather than the general case, which gets too messy, let us take a special example. Say n = 7 and k = 2, so there are two such vectors v_1 and v_2, and for the numbers to add up, let us take m_{v1} = 3 and m_{v2} = 2 (indeed 3 + 2 + 2 = 7). Let us first write down the basis: N³v_1, N²v_1, N v_1, v_1, then N²v_2, N v_2, v_2 — seven elements, as you need for a 7-dimensional vector space. By the Jordan theorem — which, granted, we have not yet proved, but let us agree that it is valid — this is a basis. Now compute the columns of N's representation. N acting on N³v_1 gives 0, so the first column is all zeros. N acting on N²v_1 gives N³v_1, so the second column is (1, 0, 0, 0, 0, 0, 0); then comes (0, 1, 0, 0, 0, 0, 0); and finally, for N acting on v_1, the column (0, 0, 1, 0, 0, 0, 0). Next you start the second chain with N^{m_{v2}} v_2 = N²v_2; but N³v_2 = 0 — that is how m_{v2} is defined with respect to v_2: if you keep hitting it with N, up to two times you still do not get zero, but the third hit is when you first encounter zero. So when you let N act on N²v_2 it just becomes 0, and the fifth column is again (0, 0, 0, 0, 0, 0, 0).
Next, N acting on N v_2 gives N²v_2 — which element is that? Counting 1, 2, 3, 4, 5 along the basis, it is the fifth element, so in the fifth position I must have a 1 and zeros elsewhere: the sixth column is (0, 0, 0, 0, 1, 0, 0). Similarly, the last column, for N acting on v_2, is (0, 0, 0, 0, 0, 1, 0). So that is it — you agree that this is going to be the representation of N under this basis. Any nilpotent matrix has a basis representation like this; nothing so abstract anymore, just a numerical example to illustrate the case (you can play around with these numbers; I am just choosing a specific one). I can now split it up into blocks: the first block is 4 × 4, and 4 is m_{v1} + 1, so it is (m_{v1} + 1) × (m_{v1} + 1); the second is 3 × 3, and 3 is m_{v2} + 1; and the numbers indeed add up and give you 7, since m_{v1} + m_{v2} + 2 = 7, right? So this is the typical form, and now I have a representation of N. How does this help in getting a representation for A? In other words, what would any operator A look like whose associated nilpotent matrix was basically A − λI? Remember, I am only looking at operators with one repeated eigenvalue — all eigenvalues equal — whose algebraic multiplicity equals the dimension of the space itself. That means I have A − λI, which after all is N, in this special form.
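The 7 × 7 example above can be checked numerically. The sketch below (my own construction, following the lecture's numbers n = 7, k = 2, m_{v1} = 3, m_{v2} = 2) builds N block by block and confirms the power at which it vanishes:

```python
import numpy as np

# The 7x7 example: n = 7, k = 2, m_v1 = 3, m_v2 = 2.
# Under the Jordan basis, N has two nilpotent blocks of sizes
# m_v1 + 1 = 4 and m_v2 + 1 = 3, each with ones on its first
# superdiagonal and zeros everywhere else.
N = np.zeros((7, 7))
N[0:4, 0:4] = np.eye(4, k=1)   # block for the chain of v1
N[4:7, 4:7] = np.eye(3, k=1)   # block for the chain of v2

# The larger block dies only at the 4th power, so N^3 != 0 but N^4 = 0.
print(np.allclose(np.linalg.matrix_power(N, 3), 0))  # False
print(np.allclose(np.linalg.matrix_power(N, 4), 0))  # True
```

Notice that the minimal polynomial of this N is x⁴, governed by the larger block, which is a point we return to below.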
In this example I just have 2 blocks, but in general I will have k such blocks; I could split the matrix up the same way, and only these diagonal blocks would matter, each with its very special structure of ones on the first superdiagonal and zeros everywhere else. So now, if I want the representation of A in terms of this basis, what do you think it is going to be? Just λ added to the diagonals, is it not? Why? Because representation respects sums of operators: if φ_1 and φ_2 are linear operators on a vector space, you can check that φ_1 + φ_2 represented under a basis is nothing but φ_1's representation under that basis plus φ_2's representation under that basis — it does not matter whether you take the sum in the abstract sense first and then represent it, or take the sum of the two representing matrices. So similarly here: I want A's representation, I have the representation of A − λI, so let me just add the representation of λI to it. But λI's representation under any basis — it does not matter which, since it is λ times the identity map — is just λ sitting on the diagonals. Therefore the Jordan form of A looks like this: λ on the diagonals, with ones at certain special locations on the superdiagonal — not necessarily at every superdiagonal position, mind you, because at the point where one block branches off into the next, there is a 0 on the superdiagonal. You may even have a single isolated λ as a 1 × 1 block; that is possible. So, for instance, you might have a 6 × 6 representation like the one on the board, but in general the blocks can come in any arbitrary shape.
So this is always going to be the best you can do: it is not exactly diagonal, but it is as close to diagonal as it gets. That is the reason we are investing in the Jordan form. Once you have gotten the matrix down to this form, it is really good. Think even in terms of differential equations: the rows corresponding to the last row of each block are completely decoupled, and every other row is coupled with only one other variable — one which you have already solved for. So it is like a triangular solution: solve the completely decoupled equation first (you know its solution, since it is after all a first-order differential equation), plug it back into the equation preceding it, solve that one, and so on, and then you have solved the entire block. Look at the big picture now: even within those individual A_ii blocks you have managed to break things down into simpler blocks, and each of those simpler blocks has the best possible representation you can think of — not just any arbitrary triangular representation, but a very special triangular representation where nothing other than the first superdiagonal is allowed to be nonzero, and even there only ones, not arbitrary numbers. It is a fantastic result as far as decoupling goes. Of course, these days people do not invest too much time in studying this, because they are more into the numerical side of things. But it is a very useful result, because if you are trying to prove something about eigenvalues, you cannot in general assume the matrices to be diagonalizable.
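The decoupling point can be made concrete. For a single Jordan block J = λI + N, the series for e^{tN} terminates because N is nilpotent, so the solution of dx/dt = Jx is available in closed form. The sketch below — my own illustration, with the helper names `jordan_block` and `expm_jordan` being my choices, not the lecture's — checks the 2 × 2 case against the hand-computed answer e^{λt} [[1, t], [0, 1]]:

```python
import numpy as np
from math import factorial, exp

def jordan_block(lam, size):
    """lam on the diagonal, ones on the first superdiagonal."""
    return lam * np.eye(size) + np.eye(size, k=1)

def expm_jordan(lam, size, t):
    """e^{tJ} for a single Jordan block J = lam*I + N.

    Since N^size = 0, the series for e^{tN} terminates after `size`
    terms, and e^{tJ} = e^{lam*t} * e^{tN} because lam*I commutes with N."""
    N = np.eye(size, k=1)
    S = sum(np.linalg.matrix_power(t * N, k) / factorial(k) for k in range(size))
    return exp(lam * t) * S

# For a 2x2 block, solving row by row from the bottom gives the
# closed form e^{lam*t} [[1, t], [0, 1]].
lam, t = -1.0, 0.5
closed = exp(lam * t) * np.array([[1.0, t], [0.0, 1.0]])
print(np.allclose(expm_jordan(lam, 2, t), closed))  # True
```

The bottom row of the block is the fully decoupled equation; each row above it picks up exactly one already-solved variable, which is where the polynomial-in-t factors come from.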
So the first assumption you make is: let us start with the Jordan canonical form of this matrix, which looks like so, and then test out your hypothesis. If you can prove it for the Jordan canonical form, it means you have proved everything there was to prove about it. I will just quickly summarize a couple more things before we get into the proof during the next module; I will retain the statement of the Jordan canonical form on the board. You see, nilpotent matrices can go to zero in arbitrary ways, but not these fellows. Rather than make it complicated for myself, let me take a small size: the 3 × 3 matrix N with ones on the first superdiagonal and zeros elsewhere. Note that not every nilpotent matrix has to look like this — it is only in the Jordan canonical form that a nilpotent matrix takes this shape. What do you think N² is going to be? If you just try it out, you will see the ones are shifted up by one superdiagonal, and if you hit it once more, the band goes off the edge of the matrix entirely. So the band of ones is knocked up one superdiagonal at a time, and it is very clearly visible what is going on; that is another great thing about this particular form of a nilpotent matrix — do well to remember it. Now, suppose a matrix A, with all of its eigenvalues identical and repeated, has a representation like this under the Jordan canonical form, that is, under the Jordan basis. I can split it up into, say, 3 blocks — it could be 5, 10, any number; I am just considering 3 — and suppose one of them is the largest Jordan block. By the way, each chain starting with N^{m_{v1}} v_1 and running down to v_1 is what we call the Jordan chain generated by v_1, and the corresponding block in the matrix is the Jordan block generated by v_1, of size (m_{v1} + 1) × (m_{v1} + 1).
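The "one superdiagonal at a time" behaviour is easy to check; here is a small sketch of my own, using a slightly larger 4 × 4 block so the shifting is visible for two steps before the band falls off:

```python
import numpy as np

# A single nilpotent Jordan block: ones on the first superdiagonal.
N = np.eye(4, k=1)

# Each power shifts the band of ones up by one superdiagonal...
print(np.array_equal(np.linalg.matrix_power(N, 2), np.eye(4, k=2)))  # True
print(np.array_equal(np.linalg.matrix_power(N, 3), np.eye(4, k=3)))  # True
# ...until the fourth power knocks the band off the matrix entirely.
print(np.array_equal(np.linalg.matrix_power(N, 4), np.zeros((4, 4))))  # True
```

So a k × k Jordan nilpotent block dies exactly at the k-th power, which is what makes the largest block govern the minimal polynomial.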
So look at the largest of those blocks, and consider A − λI represented in this basis: what do you think its minimal polynomial is? I am putting it to you that it is governed exactly by the largest block size, because every time you raise the matrix to higher and higher powers, the first superdiagonal band of each block gets pushed up one step at a time; the largest block is the last to become zero — all the other blocks have already vanished by then. So, in fact, the degree of the minimal polynomial, which is f, determines the size of the largest Jordan block for the given eigenvalue λ. I will repeat that once more. We have plenty of numbers attached to an eigenvalue: one is the algebraic multiplicity, another is the geometric multiplicity, and a third is the multiplicity of that eigenvalue as a root of the minimal polynomial, which is f. What I am saying now is that once you have the Jordan canonical form, f is telling you something about it: it is the size of the largest Jordan block for the eigenvalue. So suppose (x − 2) appears three times in the characteristic polynomial — that is, the factor is (x − 2)³ — but in the minimal polynomial you have (x − 2)². What does that mean? It means the largest Jordan block for the eigenvalue 2 is of size 2. However, the algebraic multiplicity is 3, so the blocks for 2 together occupy a total size of 3, with the largest single block among them of size 2. In that case you can uniquely specify how the Jordan canonical form will look just from the characteristic polynomial and the minimal polynomial.
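For the (x − 2)³ versus (x − 2)² example just described, the only possible block structure is one 2 × 2 block plus one 1 × 1 block, and the claim about the minimal polynomial can be verified directly (my own sketch; `jordan_block` is my helper name):

```python
import numpy as np

def jordan_block(lam, size):
    """lam on the diagonal, ones on the first superdiagonal."""
    return lam * np.eye(size) + np.eye(size, k=1)

# Characteristic polynomial (x-2)^3, minimal polynomial (x-2)^2:
# forced to be one 2x2 block and one 1x1 block for the eigenvalue 2.
J = np.block([
    [jordan_block(2.0, 2), np.zeros((2, 1))],
    [np.zeros((1, 2)),     jordan_block(2.0, 1)],
])
N = J - 2.0 * np.eye(3)

# The smallest power that kills N equals the largest block size, 2.
print(np.allclose(N, 0))                             # False
print(np.allclose(np.linalg.matrix_power(N, 2), 0))  # True
```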
However, I leave it to you as a thought exercise: if I just tell you the characteristic polynomial and the minimal polynomial, is it always possible to uniquely determine how the Jordan form will look? It is not — just ramp the numbers up. Let the algebraic multiplicity be 5, so the characteristic polynomial is (x − 2)⁵, and let the minimal polynomial be (x − 2)². That means the largest block size is 2. So you could technically have one 2 × 2 block sitting here together with three 1 × 1 blocks; that corresponds to a 5 × 5 matrix in Jordan canonical form (diagonals are special cases of Jordan blocks). On the other hand, someone else might come up with a 2 × 2 block, then another 2 × 2 block, then a 1 × 1 block: 2 + 2 + 1 = 5, so that is also 5 × 5. Both of these satisfy the same properties: minimal polynomial (x − 2)² and characteristic polynomial (x − 2)⁵. But the Jordan forms — permutations of the blocks are of course allowed, but even allowing permutations, can you ever make one look like the other? No: these are completely different Jordan forms, even though they share the same characteristic and minimal polynomials. So be very careful: those two polynomials alone are not enough. But suppose I also tell you the geometric multiplicity — can you then pin it down? The geometric multiplicity is also hiding somewhere in here; can you tell me where? It is the number of blocks, exactly, because the number of blocks is exactly the dimension of the kernel of A − λI, and the dimension of the kernel is exactly the geometric multiplicity. So the arrangement with blocks 2 + 1 + 1 + 1 has geometric multiplicity 4, whereas the arrangement 2 + 2 + 1 has geometric multiplicity 3.
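The two competing 5 × 5 arrangements can be built and compared. In this sketch of mine (the helper names `jordan_form`, `min_poly_degree`, and `geometric_multiplicity` are my own), both forms share the degree-2 minimal polynomial, but the kernel dimensions tell them apart:

```python
import numpy as np

def jordan_form(lam, sizes):
    """Block-diagonal matrix of Jordan blocks with the given sizes."""
    n = sum(sizes)
    J = np.zeros((n, n))
    pos = 0
    for s in sizes:
        J[pos:pos + s, pos:pos + s] = lam * np.eye(s) + np.eye(s, k=1)
        pos += s
    return J

A = jordan_form(2.0, [2, 1, 1, 1])   # blocks 2+1+1+1
B = jordan_form(2.0, [2, 2, 1])      # blocks 2+2+1

def min_poly_degree(J, lam):
    """Smallest r with (J - lam*I)^r = 0, i.e. the largest block size."""
    N = J - lam * np.eye(len(J))
    r = 1
    while not np.allclose(np.linalg.matrix_power(N, r), 0):
        r += 1
    return r

def geometric_multiplicity(J, lam):
    """dim ker(J - lam*I) = n - rank = number of Jordan blocks."""
    N = J - lam * np.eye(len(J))
    return len(J) - np.linalg.matrix_rank(N)

print(min_poly_degree(A, 2.0), min_poly_degree(B, 2.0))                # 2 2
print(geometric_multiplicity(A, 2.0), geometric_multiplicity(B, 2.0))  # 4 3
```

Both matrices have characteristic polynomial (x − 2)⁵ and minimal polynomial (x − 2)², yet their geometric multiplicities differ, so the two polynomials alone cannot determine the Jordan form.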
So apparently these two share the same minimal polynomial and the same characteristic polynomial — and therefore the same algebraic multiplicity — but they do not share the same geometric multiplicity. Again, I will not answer this question, but I urge you to think about it: if I now fix the characteristic polynomial, the minimal polynomial, and the geometric multiplicity all to be the same, must two matrices with the same eigenvalues have the same Jordan form? Permutations are allowed — up to a permutation we consider it the same Jordan form. Someone puts a 3 × 3 block here and a 5 × 5 block there; someone else decides to put the 5 × 5 block first and then the 3 × 3. That is just a reordering of the basis: something you call v_1, your friend calls v_2, and that is permitted. So, up to a permutation, are those Jordan forms identical? Just think about it; I will not answer, but it is food for thought. Now, with all of that in place, I suppose you understand that the Jordan form does give us some very nice properties and structure. It is worth our while, therefore, to dive into a proof of it, which is what we are going to do in the next module.