Third, the image of A is A-invariant. Just check it. Of course, I am moving from the most trivial examples to slightly more interesting ones; I did not say this was a very interesting example. What does the image of A mean? Any object y in the image of A satisfies y = Av for some v. Now Ay = A(Av), and anything of the form A(something) belongs, by its very nature, to the image of A. So Ay is in the image of A, and the image is A-invariant.

A little more interesting: the kernel of A is A-invariant. How do we check that? Let v belong to ker A, so Av = 0. What happens when A acts on Av? We get A(Av) = A·0 = 0, so Av belongs to ker A. Slightly more interesting, as I said; I never said very interesting.

Let us make it slightly more interesting still. Suppose v is an eigenvector of A. Then the span of v is A-invariant; this is not difficult to see. We have Av = λv, and any object in span{v} is cv for some scalar c. Then A(cv) = λc·v, which again belongs to span{v}.

Next, remember those W_i's we have been speaking about thus far, W_i = ker(A − λ_i I). Why is W_i A-invariant? If v belongs to ker(A − λ_i I), then (A − λ_i I)v = 0. Now look at (A − λ_i I)(Av). Push the A inside: this equals (A² − λ_i A)v; then pop the A out on the left: A(A − λ_i I)v. This is what I call the push-pop technique, which we often use with A's, and we will see more of it in the next few examples: you push the A in from the right and pop it out on the left. This works because A commutes with itself, and not just with itself; later we will use the fact that any power of A, multiplied by A from either side, gives the same thing. So (A − λ_i I)(Av) = A(A − λ_i I)v = A·0 = 0, and W_i is A-invariant.

Now we get to more interesting examples: the kernel of f(A), where f(x) is a polynomial over the field. Note that f(A) is still an operator, being a polynomial in the operator A. Let v belong to ker f(A), so f(A)v = 0. Again we use the same push-pop technique, because after all the polynomial f(A) contains only powers of A, and, as I just said, powers of A commute with each other. So f(A)(Av) = A f(A)v = A·0 = 0, and Av belongs to ker f(A). That is another example of an A-invariant subspace.

In fact, the image of any such polynomial is very similar; why am I even raising it? It is again very straightforward. Let v belong to im f(A), so v = f(A)y for some y. Then Av = A f(A)y = f(A)(Ay); call Ay some v̄. So Av = f(A)v̄, which also belongs to im f(A).

All of the subspaces we have just spoken about are A-invariant, and you might notice a recurrent pattern here.
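Since each of these claims is a finite check, here is a minimal numerical sketch in Python with NumPy; the matrix, the tolerance, and the helper names null_space and is_invariant are my own illustrative choices, not anything fixed by the lecture.

```python
import numpy as np

def null_space(M, tol=1e-10):
    # Basis for ker(M) via the SVD: right singular vectors whose
    # singular values are numerically zero span the kernel.
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T          # columns span ker(M)

def is_invariant(A, B, tol=1e-10):
    # Columns of B span a subspace W; W is A-invariant iff every
    # column of A @ B lies back in the column space of B.
    coeffs, *_ = np.linalg.lstsq(B, A @ B, rcond=None)
    return np.linalg.norm(A @ B - B @ coeffs) < tol

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
I = np.eye(3)

# W_i = ker(A - 2I), the eigenspace for lambda_i = 2:
print(is_invariant(A, null_space(A - 2 * I)))              # True

# ker f(A) for the polynomial f(x) = (x - 2)^2:
print(is_invariant(A, null_space((A - 2*I) @ (A - 2*I))))  # True
```

The same check applies verbatim to im f(A): hand is_invariant the columns of f(A) itself, since they span the image.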
It is these polynomials that are of great interest to us, which is why we are going to invest, if we have time then today itself, or maybe from the next lecture onwards, a little heavily in understanding polynomials and their nature. It is these polynomials that will actually land us with these invariant subspaces. But again, what is the connection between such invariant subspaces and block diagonalizability? That is the question.

Yes, you had a question? Sorry, an isomorphism from one vector space to another? I do not quite follow what you mean by one being invariant and the other not. These are operators, right? When you talk about isomorphism, it is vector spaces that are isomorphic to one another. Are you asking about operators seen through different bases? It does not matter whether you look at the operator on the abstract vector space or through some basis; the choice of basis is what gives it the structure of a matrix acting on a Euclidean space of n-tuples. Subject to a certain choice of basis, and we will see this is exactly the topic of our next study, these A-invariant subspaces are going to help us push towards block diagonalizability. So isomorphism is only between vector spaces; for operators, you say one operator is similar to another, subject to a change of basis. If you have an A-invariant subspace for an A represented with respect to one basis, then corresponding to a different basis you will also have an A-invariant subspace, but of course its basis will look different. You will always be able to find some A-invariant subspaces; as I said, some of them definitely exist, but if you change your basis, the representations will be different. Does that answer your question, or was it something else?

Oh, that was about the quotient space, was it? You mean the first isomorphism theorem? But then you have to restrict the map accordingly; you cannot talk about the whole map. The whole map goes from the vector space V to itself; when you restrict it, you are looking at V quotiented by the kernel, and then it is a different map, the induced map. The picture itself will be completely different.

Sorry? Yes. So if v belongs to the kernel, we are checking whether Av also belongs to the kernel. Suppose we start with a v that belongs to this kernel, that is, v belongs to W_i; we want to ensure that Av also belongs to it. So we try out what happens when we hit Av with (A − λ_i I). The only thing we know is that v belongs to this kernel; we do not yet know whether Av does. But the moment we expand, we can push the A inside: (A − λ_i I)(Av) = (A² − λ_i A)v, and then pull the A out on the left: A(A − λ_i I)v. That part I know to be 0. That is the deal.

So what if we actually get an A-invariant subspace? What is so great about it; how does it help us in our endeavours? Suppose W contained inside V is A-invariant, where V of course has dimension n.
So let w_1, w_2, and so on till w_k be a basis for W; not just a basis, let us say an ordered basis. Extend B_W to an ordered basis B_V = (w_1, w_2, ..., w_k, v_1, v_2, ..., v_{n−k}) for V. Nothing out of the ordinary so far; we can always do this. The only special thing is that W is an A-invariant subspace, and that should give us some special properties.

Now look at the operator's representation in terms of such a basis. Its columns are quite simply [Aw_1]_{B_V}, [Aw_2]_{B_V}, ..., [Aw_k]_{B_V}, and then [Av_1]_{B_V}, ..., [Av_{n−k}]_{B_V}. Again, nothing out of the ordinary so far: the action of a linear operator is captured by its action on each element of the basis, and we give each image its coordinate representation to get the corresponding matrix representation of the operator.

What is this going to look like? Where does each object reside? Because of the A-invariance, Aw_1 resides inside W, and so do Aw_2 and all of the first k fellows. So the first k columns, as vectors in the abstract vector space, live inside W. If something lives inside W, what objects do you require to represent it? Just the basis of W alone. So each of these, I put it to you, requires for its representation only the first k members of B_V; they carry no weight on v_1, v_2, ..., v_{n−k}.

So the matrix looks like the following. The first column is α_11, α_12, ..., α_1k, which means Aw_1 = α_11 w_1 + α_12 w_2 + ... + α_1k w_k, with zeros thereafter, because the v's are not required: Aw_1 lives inside W. Incidentally, what can you say about the span of v_1, v_2, ..., v_{n−k}? You already get an idea of what is coming; call it W^c. Can any object belong simultaneously to W^c and W? If it did, it would have a representation as a linear combination of the w's and also as a linear combination of the v's, and equating the two would violate the linear independence of B_V. Therefore, apart from 0, W and W^c have nothing in common, and I might write V = W ⊕ W^c; the fact that the dimensions match can be readily checked. So W^c is a complement of W.

I stopped at the first column; let us see the next fellow. The second column is α_21, α_22, ..., α_2k, and it is the same with every one of them, until α_k1, α_k2, ..., α_kk, with zeros below. So the matrix splits into blocks:

    [ A_11  A_12 ]
    [  0    A_22 ]

where A_11 (the α's) is of size k × k, A_12 is k × (n−k), the zero block is (n−k) × k, and A_22 is (n−k) × (n−k). So what happened? Because of my A-invariant subspace, I have not got a block diagonal, but I definitely have a block triangular structure, is it not?
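To see this concretely, a short NumPy sketch in the same illustrative vein (the matrix and the subspace are hand-picked assumptions): adapting the basis to an A-invariant W forces the zero block in the bottom-left of P⁻¹AP.

```python
import numpy as np

A = np.array([[2.0, 1.0, 5.0],
              [1.0, 2.0, 4.0],
              [0.0, 0.0, 3.0]])

# W = span{e1, e2} is A-invariant here: the first two columns of A
# have zeros in the third row. Extend a basis of W to a basis of R^3.
P = np.column_stack([
    [1.0, 0.0, 0.0],    # w1
    [1.0, 1.0, 0.0],    # w2 (still inside W)
    [1.0, 1.0, 1.0],    # v1, completing the ordered basis
])

B = np.linalg.inv(P) @ A @ P     # A represented in the adapted basis
print(np.round(B, 10))
print(np.allclose(B[2, :2], 0))  # True: the bottom-left block is zero
```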
You see, the v's, when acted upon by A, can live anywhere inside V, so they might require all of the basis elements to represent themselves; their coordinate representations can possibly involve all n entries. But the first k fellows do not require the last n−k, because W is an A-invariant subspace.

By extension, and this needs no great deal of imagination, very John Lennon-esque, let us imagine that W^c, the complementary subspace, is also A-invariant. What if that were true? I posit that if both W and W^c are A-invariant, then A represented in such a basis would look like

    [ A_11   0  ]
    [  0   A_22 ]

Do you see why? If the objects inside W^c require nothing other than the basis elements of W^c itself, then they do not require the first k fellows: the action of A on an object in W^c leaves it a vector still living inside W^c, so it can be represented by the last n−k basis vectors alone. Each corresponding column has non-zero elements possibly only in the last n−k entries, and the first k entries must be 0, because these images do not live outside of W^c.

So this gives us a hint of a recipe towards block diagonalization: what if we can cook up A-invariant subspaces whose direct sum equals the entire vector space? In fact, when you diagonalized, you did exactly that. What were the A-invariant subspaces there? The kernels ker(A − λ_i I): those W_i's, which we have already seen are A-invariant, and whose direct sum turned out to be the entire vector space V. We are just mimicking that. It just so happened there that inside each individual W_i there was also a beautiful diagonal structure; we might not be able to get that, but short of that, we should at least be able to block diagonalize.

So now we are getting a little ambitious, and this is our wish list; let me write it down. We wish to find the smallest possible A-invariant subspaces, where by smallest I mean in the dimension of the subspace; let us not reuse W, call them U_i, such that the direct sum of the U_i's equals V. Then what is the largest diagonal block you are left with? The dimension of the largest U_i among these corresponds to the largest diagonal block you have. In the case of diagonalizability, it was those W_i's: their dimensions accounted for all of n, and each W_i decomposed further into diagonal form even when its dimension was greater than 1. That was actually good: when the algebraic multiplicity was greater than 1, we wanted the dimension of W_i to be exactly equal to the algebraic multiplicity; if it was less, we would not have been able to diagonalize. So that is what we want: this sort of block diagonalization, short of actual diagonalization, but still the best that it gets; we want to break V down into the smallest possible chunks, these U_i's.

How do we get these U_i's? If you have been a little observant, you will have noticed that the most interesting A-invariant subspaces resulted from polynomials in A: kernels of polynomials, images of polynomials. More specifically, we shall focus on kernels of polynomials.
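Here is this whole recipe run numerically on another illustrative, hand-picked matrix: two A-invariant subspaces, each the kernel of a polynomial in A, whose direct sum is all of R³, yielding a block diagonal representation. A sketch under those assumptions, not a general algorithm.

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])
I = np.eye(3)

# Two A-invariant subspaces (kernels of polynomials in A!) whose
# direct sum is R^3; the bases are read off by hand here, but in
# practice would be computed as in the first sketch:
U1 = np.column_stack([[1.0, 0.0, 0.0],   # ker (A - 2I)^2 = span{e1, e2}
                      [0.0, 1.0, 0.0]])
U2 = np.column_stack([[2.0, 1.0, 1.0]])  # ker (A - 3I)

P = np.hstack([U1, U2])
B = np.linalg.inv(P) @ A @ P
print(np.round(B, 10))
# [[2. 1. 0.]
#  [0. 2. 0.]
#  [0. 0. 3.]]
# Block diagonal: a 2x2 block for lambda = 2 and a 1x1 block for
# lambda = 3. The 2x2 block cannot be diagonalized further, since
# ker(A - 2I) is only 1-dimensional: block diagonal is the best we get.
```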
So this same question, which is a question in the language of linear algebra, or matrix theory, or operators, whatever you call it, translates into a question in terms of polynomials. It boils down, equivalently, to: how can we efficiently choose polynomials f(x) that lead to A-invariant subspaces? Our wish list now consists of seeking polynomials whose kernels possibly allow us to break down the entire vector space V into the smallest possible chunks, which is what will motivate us to look at polynomials and their nature in the next lecture. If you have time, please revise the ideas we discussed many lectures back, right at the beginning of this course, about rings, integral domains, and fields, because in the next lecture, starting with some very basic dallying with polynomials, we will see that polynomials actually constitute a commutative ring with identity, which is just one property short of a field: a polynomial does not necessarily have a multiplicative inverse. So in the next lecture we shall start to look at polynomials with this question in mind; we will go a little deeper, but let us not lose sight of this question, of which a small computational sketch follows below. Thank you.
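One parting sketch to carry into the next lecture, with the same illustrative matrix as in the previous example: a polynomial that annihilates A, whose coprime factors, evaluated at A, have kernels of exactly the right dimensions to carve up V. The polynomial f(x) = (x − 2)²(x − 3) is specific to this hand-picked matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])
I = np.eye(3)

# f(x) = (x - 2)^2 (x - 3) annihilates A ...
f_of_A = (A - 2*I) @ (A - 2*I) @ (A - 3*I)
print(np.allclose(f_of_A, 0))           # True: ker f(A) is all of V

# ... and the kernels of its coprime factors, evaluated at A, are the
# A-invariant chunks U_i from before, with dimensions summing to n:
def nullity(M, tol=1e-10):
    return int(np.sum(np.linalg.svd(M, compute_uv=False) < tol))

print(nullity((A - 2*I) @ (A - 2*I)))   # 2 = dim ker (A - 2I)^2
print(nullity(A - 3*I))                 # 1 = dim ker (A - 3I)
```

Which polynomials do this job in general, and how small their factors can be made, is precisely the question about polynomials we take up next.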