All right. So we will leave the setup here as it is, and we shall see why this is going to be true. No, it actually has to be the algebraic multiplicity, because ultimately the size of the matrix is n and the algebraic multiplicities sum to n; otherwise you would not have an operator of the appropriate size. So it has to be n x n, but we are breaking it down into its smallest possible constituents. Yes, we will see all of that; those are the things we are trying to get you to see through this analysis. This is going to be a rather short module: we will see this proof, then one key consequence, and maybe delegate its proof to the next module.

So why is this going to be true? It is actually quite simple. This expression here holds the key: we have split the monic minimal polynomial into coprime factors, so there exist polynomials a and b with a(x) p(x) + b(x) q(x) = 1. So this is going to be the proof. Consider any v belonging to V. If I act on v using the left-hand side of this identity evaluated at A, what do I get? a(A) p(A) v + b(A) q(A) v = v. What can you say about the first object, and what about the second? Here is the claim: the first belongs to the kernel of q(A), does it not? If you hit it with q(A), then because the order of multiplication of polynomials in A can be swapped, q(A) a(A) p(A) equals a(A) q(A) p(A); but q(A) p(A) is mu_A, and mu_A(A) is 0. So this pulverizes that term, and by the same token the second term belongs to the kernel of p(A). Is that clear? Please ask if it is not. Let me write it down: look at q(A)'s action on a(A) p(A) v, which equals q(A) a(A) p(A) v.
Now I am going to flip the order of q(A) and a(A): for univariate polynomials in the same matrix, the order of multiplication does not matter, they commute. So it is a(A) q(A) p(A) acting on v; but q(A) p(A) is mu_A, and mu_A(A) acting on v is 0. So this term is 0. By the same token, if you hit the second object with p(A) and flip the order of multiplication between b(A) and p(A), you obtain the conclusion that it belongs to ker p(A). So we have shown that V is at least the sum of these two subspaces.

To show it is a direct sum, we must show that anything belonging to both kernels can be nothing other than 0. So suppose v_1 belongs to ker p(A) intersected with ker q(A); this means p(A) v_1 = 0 and q(A) v_1 = 0. Now, there is a reason I did not erase the identity from the left-hand side: it still holds here, hence v_1 = a(A) p(A) v_1 + b(A) q(A) v_1. But each term is individually 0, because v_1 belongs to the kernels of both p(A) and q(A). Therefore v_1 must be 0, and only the zero vector can live inside the intersection of these two subspaces. So on this side I have shown that V is the sum of ker p(A) and ker q(A); combined with the fact that the intersection contains nothing but the zero vector, I can now conclude that V is indeed the direct sum of ker p(A) and ker q(A). So whatever I said very sketchily towards the end of the previous module to motivate this quest, let us now be a little more formal and write it down. Now that we are convinced this is true, suppose A is an operator from V to V itself; let us look at it as a matrix again, because for a finite-dimensional vector space it is the same thing.
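The whole argument above can be checked concretely. Here is a minimal sketch in Python with SymPy; the factors p(x) = x - 1, q(x) = x - 2 and the matrix A are my own illustrative choices, not taken from the lecture:

```python
import sympy as sp

x = sp.symbols('x')
p, q = x - 1, x - 2               # coprime factors of the minimal polynomial
a, b, g = sp.gcdex(p, q, x)       # Bezout: a*p + b*q = gcd(p, q)
assert g == 1                     # coprimality

# A matrix whose minimal polynomial is mu(x) = p(x) * q(x)
A = sp.Matrix([[1, 1],
               [0, 2]])
I = sp.eye(2)

def ev(poly):
    """Evaluate a univariate polynomial at the matrix A (Horner scheme)."""
    M = sp.zeros(2)
    for c in sp.Poly(poly, x).all_coeffs():
        M = M * A + c * I
    return M

pA, qA = ev(p), ev(q)
assert qA * pA == sp.zeros(2)     # mu(A) = q(A) p(A) = 0

v = sp.Matrix([3, 5])             # an arbitrary vector
v1 = ev(a) * pA * v               # claim: v1 lies in ker q(A)
v2 = ev(b) * qA * v               # claim: v2 lies in ker p(A)
assert qA * v1 == sp.zeros(2, 1)  # q(A) pulverizes the first term
assert pA * v2 == sp.zeros(2, 1)  # p(A) pulverizes the second
assert v1 + v2 == v               # v = a(A)p(A)v + b(A)q(A)v
```

The commutativity used in the lecture is what makes the two kernel claims go through: q(A) a(A) p(A) = a(A) q(A) p(A) = a(A) mu_A(A) = 0.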
So let us say B_1 is a basis for ker p(A) and B_2 is a basis for ker q(A). Then B_V = B_1 union B_2; we proved this, right? You are convinced of this: if it is a direct sum, you just cook up individual bases for the subspaces, stack them together, and you get a basis for the entire vector space. So this is true. Now look at A represented in this basis: what do you think it is going to look like? You have got exactly what you dreamt of. We saw that if one of the subspaces is A-invariant and the complementary part is not, you only get block triangularization, with blocks A_11, A_12, 0, A_22; but if both are A-invariant, you get exactly block diagonalization. So this would look like a block-diagonal matrix with A_11 and A_22 on the diagonal; let us not even talk about sizes at the moment. What are these objects? A_11 is A restricted to ker p(A), represented in terms of B_1. Does this notation make sense? Look, I cannot just consider A acting on the entire vector space: for the matrix representation I would have to look at the action of A on every element of the basis and give its coordinate representation, and that coordinate representation would be an n-tuple; but the columns of A_11 are not n-tuples. So I am restricting myself to live inside ker p(A). Yes, exactly: the bar denotes the restriction. So, as I said in words, this is the action of A restricted to ker p(A). Therefore you need no more than the basis elements in B_1, and you are also going to represent it in terms of B_1 alone, not in terms of B_V; if you did, you would have to consider this entire matrix. You understand? That is why the sizes match up.
Similarly, A_22 is A restricted to ker q(A), represented in terms of the basis B_2. Any doubts about this? We are letting A act on only that subspace: not arbitrary vectors, but only vectors inside ker p(A). Because the subspace is A-invariant, the result also lives inside ker p(A), so we represent it in terms of just the basis vectors in B_1, not B_V. Again, if you used B_V you would have to look at the entire matrix with zeros padded in, but we are not interested in that; we just want the diagonal blocks and how to evaluate them.

So here is the interesting question: what do you think the minimal polynomials of A_11 and A_22 are going to be? p and q are definitely annihilating, but how do you know they are of the smallest degree? Right: if there were polynomials of lesser degree, they would be factors of the claimed factors, and their product would have ended up pulverizing the overall A, so the polynomial we are claiming as the minimal polynomial of A would not be minimal. I will just outline the proof your friend has suggested; we will not do it in detail today, but we will follow exactly that reasoning. Look at p(A) acting on this block-diagonal matrix. The critical observation is that raising a block-diagonal matrix to higher and higher powers decouples the blocks completely: if you square it, you just get A_11 squared and A_22 squared on the diagonal. So if you let p act on the entire A, it is like p acting on A_11 and p acting on A_22. But what is p acting on A_11? What is the basis underlying A_11 constituted of? You are looking at vectors inside ker p(A). Yes, the columns will have to be identically 0.
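To see the block diagonalization concretely, here is a small sketch in Python with SymPy. The 3 x 3 operator, with minimal polynomial (x - 1)(x - 2)^2, and the scrambling matrix S0 are my own illustrative assumptions:

```python
import sympy as sp

# An operator with minimal polynomial mu(x) = (x - 1) * (x - 2)**2,
# written first in a nice basis, then scrambled by a change of basis S0
J = sp.Matrix([[1, 0, 0],
               [0, 2, 1],
               [0, 0, 2]])
S0 = sp.Matrix([[1, 1, 0],
                [0, 1, 1],
                [0, 0, 1]])
A = S0 * J * S0.inv()             # same operator, messy-looking matrix

I = sp.eye(3)
pA = A - I                        # p(x) = x - 1
qA = (A - 2*I)**2                 # q(x) = (x - 2)**2

B1 = pA.nullspace()               # basis of ker p(A)
B2 = qA.nullspace()               # basis of ker q(A)
assert len(B1) + len(B2) == 3     # the direct sum fills the whole space

S = sp.Matrix.hstack(*B1, *B2)    # stack B_1 and B_2 as columns
Ablk = S.inv() * A * S            # A represented in the basis B_1 union B_2

# Both kernels are A-invariant, so the off-diagonal blocks vanish
assert Ablk[0, 1] == 0 and Ablk[0, 2] == 0
assert Ablk[1, 0] == 0 and Ablk[2, 0] == 0
assert Ablk[0, 0] == 1            # A_11 = A restricted to ker p(A)
```

The 1 x 1 block is A restricted to ker p(A) in the basis B_1, and the lower-right 2 x 2 block is A restricted to ker q(A) in the basis B_2, exactly as described above.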
So this block will end up being 0 and this block will end up being p(A_22); similarly, if you hit it with q, you will have q(A_11) here and 0 there. So p definitely lives inside the annihilating ideal of A_11, and q lives inside the annihilating ideal of A_22. Is that clear? What I am saying is, the observation stems from this: raising A to higher and higher powers just leads to the blocks individually being raised to the same power, and adding up the terms of the polynomial, you eventually end up getting p(A_11) in one block and p(A_22) in the other. But p(A_11) cannot be anything but 0. Why? Because, after all, if you look at the operator represented in terms of the basis, this is the action of p(A) on the vectors in the basis of ker p(A), so they must vanish. If you call this size k and this n minus k, the first k columns must vanish when hit by p(A). I promise I will write this formally next time; if you grasp this now, it will be easier to follow then, and if not, don't worry. So let me take just two or three minutes to explain the idea behind the proof. If you raise this to higher and higher powers, this block becomes A_11 to the power r and this one becomes A_22 to the power r. So, by the same token, when hit by the polynomial p, this block becomes p(A_11) and this one p(A_22). But what is p(A_11), after all? It is the first k columns of p(A) acting on the elements of the basis set B_1. And what are the elements of B_1? They are exactly vectors in ker p(A). So when p(A) acts on vectors in its kernel, it takes them to 0; the first k columns must be identically 0, and therefore the first k by k block, which is just the restricted version, is also 0. Similarly, when q(A) hits this, the second block vanishes and the first becomes q(A_11).
So the point I am making is that p is definitely an annihilating polynomial for A_11 and q is definitely an annihilating polynomial for A_22, but that does not immediately make them the minimal polynomials of A_11 and A_22, because, after all, not every annihilating polynomial is the minimal polynomial: the minimal polynomial is the smallest-degree monic annihilating polynomial. So now we have to show that there can be no other claimant to this throne. Suppose some polynomial mu_1 says, I am the minimal polynomial of A_11, not p, and its degree is less than the degree of p. Since the minimal polynomial divides every annihilating polynomial, p can be generated by it: p equals some g times mu_1. If that is so, we arrive at a contradiction. What is it? Hit the block-diagonal matrix with mu_1: the first block becomes 0 and the second, in general, non-zero. Similarly, hit it with the claimant mu_2 for A_22: that has the complementary structure, non-zero in the first block and 0 in the second. Now, if you multiply two 2-by-2 block-diagonal matrices, in one of which the second diagonal block is 0 and in the other the first diagonal block is 0, what is the product? 0. So you would have obtained the zero operator through the action of two polynomials whose cumulative degree is lower than the degree of the minimal polynomial of A, because the degree of p plus the degree of q equals the degree of the overall minimal polynomial. In other words, you have polynomials mu_1 and mu_2 whose individual degrees are less than those of p and q, but whose product, by the very structure with which they annihilate this operator, becomes an annihilating polynomial for A. So you have a new, lower-degree claimant to the minimal polynomial of A itself, which is definitely a contradiction. Therefore p and q are indeed the minimal polynomials of A_11 and A_22.
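As a quick sanity check of the minimality claim, here is a sketch in Python with SymPy; the blocks, with p(x) = x - 1 and q(x) = (x - 2)^2 as the coprime factors, are an illustrative example of mine:

```python
import sympy as sp

# Blocks of an assumed decomposition with p(x) = x - 1, q(x) = (x - 2)**2
A11 = sp.Matrix([[1]])
A22 = sp.Matrix([[2, 1],
                 [0, 2]])

# p annihilates A11; no nonzero constant can, so p is its minimal polynomial
assert A11 - sp.eye(1) == sp.zeros(1)

# q annihilates A22 ...
assert (A22 - 2*sp.eye(2))**2 == sp.zeros(2)
# ... but its only proper monic divisor of positive degree, x - 2, does not,
# so q is exactly the minimal polynomial of A22
assert (A22 - 2*sp.eye(2)) != sp.zeros(2)

# The degree count behind the contradiction: p(A) * q(A) annihilates the
# block-diagonal A, and deg p + deg q = deg mu_A = 3; any lower-degree pair
# (mu_1, mu_2) would annihilate A the same way with smaller total degree
A = sp.diag(A11, A22)
muA = (A - sp.eye(3)) * (A - 2*sp.eye(3))**2
assert muA == sp.zeros(3)
```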
So this is a beautiful factorization, because from here on, once we prove this in the next lecture, we shall just focus on these individual blocks and see how much better we can massage them. What is the situation now? You have a minimal polynomial factorized into factors (x - lambda_i)^(k_i); those are the smallest possible factorizations into coprime, irreducible-power polynomials. And each such factor then becomes the minimal polynomial of its block. So we can now say: first stage, get the minimal polynomial, split it into its factors, and you arrive at blocks A_11, A_22, up to A_ll, where l is the number of distinct eigenvalues. Once you have that, if you want to break things down further, you had better look at these blocks. So our next step will be to restrict ourselves to matrices whose minimal polynomials are of the form (x - lambda)^k. From there we will see that such matrices can always be split into a scalar matrix plus a nilpotent matrix, which will allow us to get to the Jordan canonical form. So in the next module we shall first establish formally what we have said in words, that the individual minimal polynomials of these blocks are nothing but p and q, and then push ahead with matrices whose minimal polynomials are given by (x - lambda)^k. Thank you. No, not necessarily diagonalizable at all; what we want to do now is the best possible with this, and that is the next goal. Once we have them split up, every distinct eigenvalue will correspond to one block, but the blocks themselves do not necessarily have a nice structure; they can be arbitrary. So we want to massage them too into a very nice structure.
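The scalar-plus-nilpotent split announced for the next module can already be previewed. A minimal sketch, with lambda = 2, k = 3 and the matrix chosen by me for illustration:

```python
import sympy as sp

lam, k = 2, 3
# An illustrative matrix whose minimal polynomial is (x - lam)**k
A = sp.Matrix([[2, 1, 0],
               [0, 2, 1],
               [0, 0, 2]])

N = A - lam * sp.eye(3)           # split off the scalar part
assert A == lam * sp.eye(3) + N   # A = lam*I + N, the promised decomposition
assert N**k == sp.zeros(3)        # N is nilpotent ...
assert N**(k - 1) != sp.zeros(3)  # ... of index exactly k
```

This is the structure that leads to the Jordan canonical form: within each block, everything is a scalar matrix plus a nilpotent part.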
So as to get the best possible decoupling: see, at the end of the day it is always about the best possible decoupling we can get, right? We are not even happy with this; we want to zoom in further and get to even smaller decouplings, breaking the operator down into its smallest possible ingredients. Thank you.