So, in the last part of this lecture today, we shall look at something a little different. We shall now go back to looking at these linear transformations in terms of matrices. Earlier, we have seen that any vector in a finite dimensional vector space can be represented as an n-tuple through this assignment of coordinates. Now we are going to see that the same sort of idea, the idea of an ordered basis, precisely allows us to do something similar for these linear operators too. So, what is it that allows us to do this? And again, pictures here; these are not like the pictures of sets and all. But this picture here, if you just visualize it in your minds, often you can get away just fine with it. So, what is the picture? The picture is you have this vector space V, of course a finite dimensional vector space, and you have this phi which acts on objects in the vector space V and maps them to objects in the vector space W. Let us say the dimension of V is n, as we always do, and the dimension of W is m. And suppose you have an ordered basis for V given by v1, v2 till vn, and an ordered basis for W given by w1, w2 till wm. So, this is an ordered basis and this is also an ordered basis. The moment you have an ordered basis, what I can simply do is go, through an isomorphism, we proved it is an isomorphism, right, from V to F^n, where F is of course the field over which both V and W are defined. So, when we go from here to here, the mapping that we have is, remember the dictionary? Yeah, that is what we have. We take each of the vectors in this basis and represent them in terms of coordinates, and we know that this is the coordinate representation. In fact, this v1 maps to what exactly? (1, 0, ..., 0). And v2 maps to (0, 1, 0, ..., 0), like so, right? Does everybody understand what is going on here?
See, I am saying the coordinate assignment for v1 would be this, and the coordinate assignment for v2 would look something like this, because what is v1 after all? It is just 1 times v1 plus 0 times v2 plus 0 times v3 plus 0 times v4 and so on. So, the coordinate representation of v1 is (1, 0, ..., 0). By the same token, the coordinate representation of v2 is (0, 1, 0, ..., 0), and the coordinate representation of vn is going to be (0, ..., 0, 1). So, when I am writing this map here, that is exactly what I am saying. That is what I am talking about, yeah? Clear so far, right? So, I can always do this coordinate assignment here. Exactly in the same manner, I can also take objects in this vector space W, through some coordinate assignment with respect to this basis, to m-tuples. So, here is the important thing. The question we are asking is about this object phi which maps vectors in the abstract vector space V; this abstract vector space could be the vector space of functions, the vector space of polynomials, the vector space of what have you, not necessarily Euclidean spaces, right? So, this is an abstract vector space, this is another abstract vector space, and this is a linear transformation between two abstract vector spaces. What I have done is, I have stripped them of their fancy names and got them down to their coordinates; because they are finite dimensional, I am able to do that and assign them certain n-tuples and m-tuples. And now, what I am claiming is that there will be a matrix A of size m cross n which will capture exactly the action of phi. What do I mean by capture exactly the action of phi? It means that instead of looking at this phi in a standalone fashion, I take any vector whose effect I want to study under the action of phi, right? How phi affects that vector?
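As a small concrete sketch of this coordinate assignment, here is what the isomorphism V to F^n looks like in NumPy for V = R^3. The particular basis vectors b1, b2, b3 below are made up purely for illustration; they are not from the lecture. Finding the coordinates of v just means solving a linear system.

```python
import numpy as np

# A hypothetical ordered basis for R^3 (chosen only for illustration),
# stored as the columns of B.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([1.0, 1.0, 0.0])
b3 = np.array([1.0, 1.0, 1.0])
B = np.column_stack([b1, b2, b3])

# Coordinate assignment: the n-tuple x with B @ x = v, i.e. the
# coefficients expressing v in the ordered basis (b1, b2, b3).
v = np.array([3.0, 2.0, 1.0])
x = np.linalg.solve(B, v)          # here v = 1*b1 + 1*b2 + 1*b3

# As said above, each basis vector itself maps to a standard tuple:
assert np.allclose(np.linalg.solve(B, b1), [1.0, 0.0, 0.0])
assert np.allclose(np.linalg.solve(B, b2), [0.0, 1.0, 0.0])
```

Notice the map depends on the *order* of the basis: permuting b1, b2, b3 permutes the entries of the tuple, which is why we insist on an ordered basis.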
What I can do, instead of just passing it as an argument of phi, is as follows: I can simply take that vector, get it down to its representation in terms of this ordered basis, let a matrix act on that n-tuple, which will lead me to an m-tuple, and then take this inverse mapping. Remember this is an isomorphism, so the inverse will exist. So, when I am going against the direction of the arrow here, it is of course the inverse mapping. So, by bypassing the direct evaluation through phi, I can simply reduce it down to just computations with matrices and n-tuples and m-tuples. Which one, sorry? Ah yes, m cross n, thank you, right. So, what I am saying is that instead of looking at the direct action of phi on objects in V, I can simply go via this route and it is going to be exactly the same. But of course, this needs to be established; through this diagram it is pretty apparent that each of these actions is legitimate, but who is to guarantee that I will be able to find such an A? You see, that is what we are now going to try and see. So, you have this basis here, and we know that in order to find out what a linear transformation does to any object in a vector space, it suffices to just investigate what it does to objects in any basis, right. So, let us look at the action of phi on vi. The moment I have phi of vi, where does it reside? Remember, vi resides in V, but after I have allowed phi to act on this vi, it now resides inside W. And if it resides inside W, it can jolly well be represented as a linear combination of objects in this set, because this is the basis for W, agreed. So, now I can go ahead and write this as a summation, alpha i wi, there it is. But look at this fellow vi; this is just some special vector, right. So, if I remove this subscript i here, this is still legitimate, right. This is true of any v, because once I let phi act on this v, it has already gone to W, and then it can be represented in this manner, right.
So, now suppose I had gone via the direct route here. I have gone from v to phi v, and then I represent phi v in terms of B_W; that would be equal to (alpha 1, alpha 2, ..., alpha m), agreed, right. But now notice, what about this object v here? What can I write v as? v is residing inside the vector space V, and therefore can be written as a linear combination of objects inside this set B_V, which is a basis for V, right. So, this is equal to a summation; let us give it some other name, beta j vj, of course with j going from 1 through n. Now, what is phi of v then? I can pull this summation out because of the linearity of the operation, you see. So, I can just write this as the summation of beta j phi vj, with j going from 1 through n. But now each and every object such as phi vj can be written as what? This sort of a linear combination. I did not want to deal with a double subscript here, but it seems I will eventually have to use it, because I will be using a double summation here. So, what is that phi vj? This phi vj is again a linear combination of objects inside W. So, this is a summation over j of beta j, j going from 1 through n, times a summation over i of a_ij wi, with i going from 1 through m. Is that right? Yeah, because the j is fixed here, I am summing over i. So, the moment I have fixed this j, I sum over i from 1 through m, and then you run this j through 1 through n. So, I can probably erase this part now; we will revisit this figure. So, what is this? Okay, let me just write that out, so that it is easier for you to see. So, phi of v then turns out to be, let us just write the first term, shall we? So, we have beta 1 times
a_11 w1 plus a_21 w2 plus dot, dot, dot till a_m1 wm; plus beta 2 times a_12 w1 plus a_22 w2 plus dot, dot, dot till a_m2 wm; and I can carry on until I have beta n times a_1n w1 plus a_2n w2 plus dot, dot, dot till a_mn wm, yeah. Again, you see something very familiar; you will begin to see the pattern after a while. So, now I am going to collect together terms related to w1, because what I am interested in is a coordinate representation of phi v, yeah. That is what I am interested in, right. I am going to try and evaluate these alphas and try and understand what they are constituted of. So, if I collect together the terms, what is the coefficient of w1? It is a_11 beta 1 plus a_12 beta 2 plus, dot, dot, dot, till a_1n beta n. The coefficient of w2 is a_21 beta 1 plus a_22 beta 2 plus, these three dots, the mark of ellipsis, till a_2n beta n. And so on, until the coefficient of wm, which is a_m1 beta 1 plus a_m2 beta 2 plus, until a_mn beta n. So, now, if you try and equate each of these alphas with terms like the ones sitting here, what do we get? What is it that this reminds you of? Yeah, matrix multiplication, because it is a linear transformation, you see; at the end of the day, the change of basis was also a special case of a linear transformation. That is all there is. But now we are seeing that any linear transformation manifests itself through some matrix multiplication at the end of the day. I will just erase this, because I want you to be able to compare those alphas with what we have now gotten here. So, through this route, we already know that this is what the representation would be. We wanted to find out what these alphas are, and now we have exactly found the answer.
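For reference, the board computation just carried out can be restated compactly (this is a summary of the same steps, nothing new):

```latex
\varphi(v)
  \;=\; \varphi\!\Big(\sum_{j=1}^{n}\beta_j v_j\Big)
  \;=\; \sum_{j=1}^{n}\beta_j\,\varphi(v_j)
  \;=\; \sum_{j=1}^{n}\beta_j \sum_{i=1}^{m} a_{ij}\, w_i
  \;=\; \sum_{i=1}^{m}\Big(\sum_{j=1}^{n} a_{ij}\,\beta_j\Big) w_i .
```

Comparing with the direct representation \(\varphi(v) = \sum_{i=1}^{m} \alpha_i w_i\) gives \(\alpha_i = \sum_{j=1}^{n} a_{ij}\beta_j\), which is exactly the \(i\)-th entry of the matrix-vector product \(A\beta\).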
So, what is this? From all of this work that we have done here, in terms of B_W, the representation turns out to be: a matrix with rows a_11, a_12, ..., a_1n; a_21, a_22, ..., a_2n; down to a_m1, a_m2, ..., a_mn; multiplying the column of beta 1, beta 2 till beta n. What are these betas representing exactly? If you look at how we started with the betas, what were they? Is this not a basis representation for the vector v in the vector space V? And what are these columns exactly? If you look at this, what is this first column? Yeah, this is a representation of phi v1, right? The action of phi on the first vector in the ordered basis for V, that is phi v1, represented in terms of B_W. Let me just mark that. This column here is phi v1 represented in terms of B_W. The next, by the same token, is phi v2 represented in terms of B_W, and so on, until the last column, which is nothing but phi vn represented in terms of B_W, yeah? And this column of betas is nothing but the representation of v in terms of objects in B_V. There is nothing really to remember; it is very obvious. You see, v is residing inside the vector space V, so the only way to represent it through coordinates is through an ordered basis of V. That is what you have done. And if you want to study the action of this linear transformation phi on an object in the vector space V, in the abstract sense we know we need to study its effect on the fellows in the basis. So, in the coordinate sense, what we are doing is exactly that, because these m-tuples of numbers are exactly capturing the coordinate representation of the effect of phi on every member in the basis set of V. So, you study phi v1 and look at its representation in terms of B_W; phi v2, look at its representation in terms of B_W; up to phi vn, look at its representation in terms of B_W. That is what your equivalent A matrix is.
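To make this concrete for a genuinely abstract vector space, here is a small NumPy sketch using an example of my own choosing (not from the lecture): V = polynomials of degree at most 3, W = polynomials of degree at most 2, and phi = differentiation. The columns of A are the B_W-representations of phi applied to each basis vector of B_V, exactly as described above.

```python
import numpy as np

# V = polynomials of degree <= 3, W = polynomials of degree <= 2,
# phi = d/dt, with ordered bases B_V = (1, t, t^2, t^3) and
# B_W = (1, t, t^2).  Column j of A is [phi(v_j)] in terms of B_W:
#   phi(1) = 0,  phi(t) = 1,  phi(t^2) = 2t,  phi(t^3) = 3t^2.
A = np.column_stack([
    [0, 0, 0],    # [phi(1)]_{B_W}
    [1, 0, 0],    # [phi(t)]_{B_W}
    [0, 2, 0],    # [phi(t^2)]_{B_W}
    [0, 0, 3],    # [phi(t^3)]_{B_W}
]).astype(float)                  # A is m cross n = 3 x 4

# Coordinates of p(t) = 5 + 4t + 3t^2 + 2t^3 in B_V:
beta = np.array([5.0, 4.0, 3.0, 2.0])

# A acting on the n-tuple gives the coordinates of phi(p) in B_W,
# i.e. of p'(t) = 4 + 6t + 6t^2:
alpha = A @ beta
```

So differentiating a polynomial, an operation on an abstract space, has been reduced to one matrix-vector multiplication on tuples, which is precisely the claim of the U-shaped diagram.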
So, that figure which we have now erased for want of space, that A matrix which we had written probably in pink, where we were asking what matrix captures the action of phi, is nothing but this. In other words, in a very shorthand kind of notation, we can just say in summary that this phi v, if you want to represent it in terms of coordinates with respect to the basis in W, because phi v is an object residing in W, is given by this A matrix acting on the coordinate representation of v. And this A matrix, let me erase this part now, is defined in the following manner: it is nothing but phi v1 represented in terms of B_W, phi v2 represented in terms of B_W, until phi vn represented in terms of B_W, right? That is exactly the matrix. So, whether you directly go ahead and evaluate the action of phi on a vector in V, or you go via the somewhat longer route, first assigning coordinates to go to F^n, then hitting the fellow you have obtained in F^n with this matrix to go to F^m, and then mapping back from F^m; since there is a one to one mapping, given a coordinate representation in F^m, you can always map it back to the fellow in W. And that is the same action, you see. So, that U-shaped path going down, across and up, and the direct forward path, they constitute essentially the same action, right? So, now we will make a very important observation. Let us see how much time we have got; okay good, we have about 5 minutes. So, now I am going to draw that figure again and ask you to use your imagination a bit, not by much, but just a bit. Just look at this V and W. Suppose you have represented V in terms of some basis v1, v2 till vn; let us call this choice B_V1, and you have gone to F^n. And you have also chosen a B_W1 and you have gone to F^m.
So, the underlying linear transformation that you have in mind is phi. And then you have found some A matrix corresponding to your choices of basis in V as well as W, right? Your friend, on the other hand, decides not to choose the same set of bases. So, your friend chooses another basis, B_V2, and, having decided not to choose the same as this other fellow, some other basis B_W2. So, of course, your friend is going to land up with a different A matrix, maybe an A tilde. But you see, at the end of the day, as I have said, it is the idea that counts. The choice of basis is completely arbitrary, at your will, right? You have chosen something, your friend has chosen something; that should not really matter. So, for the moment I am going to overlook this part and just look at it as a story of matrices, yeah? So, suppose this is my choice. I know A, I know the basis I have chosen, and I know the basis my friend has chosen. The question is, what is the corresponding matrix that my friend has obtained? Try and understand the question; it is very important to understand the question. Once you have understood the question, the answer is very obvious, it is staring at us here. What I am saying is the following. This is something that I have done, the lower part of this: in going from V to W, I have chosen a basis in V, a basis in W, and accordingly I have found this pink colored A, right? Suppose I know that my friend has chosen a different basis, and I also know the basis in V and the basis in W that he or she has chosen, right? The question is, how am I going to obtain this A tilde based on this knowledge? Suppose I do not even know what V and W are; suppose I am told to just forget about them. Then how do you go about this? I just know the matrices in question.
I am not even going to delve into what this phi is, because if you can go into phi, then you can of course complete the route here, go back to this W, and do all sorts of things. But I am not going to do that; what am I going to do? Forget about this blue part that exists here. So, now if you look at this picture, it should be very obvious what I have to do. I have to find out this A tilde, probably I will run out of colors before long, which is equivalent to this entire action. And now, just by looking at this figure, that is why this figure is so powerful, what is it that I can say? You see, this A tilde of my friend is nothing but a composition. What has my friend first done? Gone from here to here. But that is just the inverse of this coordinate assignment. So, the first action is the inversion of this coordinate allocation, B_V2 inverse. That takes me back to V. Then I apply B_V1, which takes me to my coordinates. Then I use the A that I have cooked up. And then what? Again an inversion, this time of B_W1, so B_W1 inverse. And then finally, B_W2. So, A tilde is B_W2 composed with B_W1 inverse, composed with A, composed with B_V1, composed with B_V2 inverse. That is exactly how this A tilde matrix can be obtained. If you tried to derive this using all those enormous matrices and those large summations, you would see that it is a pretty cumbersome thing. But once you understand what is going on and you follow the path here, you land up with the requisite thing. Just a final observation before we conclude today's lecture; we will revisit this in some detail in the next lecture, not for much longer. Now, if you think of this V and W to be the same vector space, and B_V1 to be the same as B_W1, and B_V2 to be the same as B_W2, just observe what happens.
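The composition just read off the figure can be checked numerically. In the sketch below, everything is a made-up example: V = W = R^2, the basis matrices P1, Q1 (my choices) and P2, Q2 (the friend's choices) hold basis vectors as columns, so the coordinate-assignment map "B_V1" corresponds to multiplying by P1 inverse, and so on.

```python
import numpy as np

# Basis vectors as columns; coordinate assignment is x -> P^{-1} x.
P1 = np.array([[1.0, 1.0], [0.0, 1.0]])   # my basis in V
Q1 = np.array([[2.0, 0.0], [1.0, 1.0]])   # my basis in W
P2 = np.array([[1.0, 0.0], [1.0, 1.0]])   # friend's basis in V
Q2 = np.array([[1.0, 2.0], [0.0, 1.0]])   # friend's basis in W

# My matrix A for some underlying phi (any 2x2 example will do).
A = np.array([[1.0, 2.0], [3.0, 4.0]])

# Follow the path in the figure: friend's V-coordinates -> vector in V
# (B_V2 inverse) -> my V-coordinates (B_V1) -> A -> vector in W
# (B_W1 inverse) -> friend's W-coordinates (B_W2):
A_tilde = np.linalg.inv(Q2) @ Q1 @ A @ np.linalg.inv(P1) @ P2

# Check: for any v, both routes give the same friend's coordinates
# of phi(v).
v = np.array([0.7, -1.3])
phi_v = Q1 @ A @ np.linalg.solve(P1, v)       # direct action of phi
lhs = np.linalg.solve(Q2, phi_v)              # [phi v] in B_W2
rhs = A_tilde @ np.linalg.solve(P2, v)        # A_tilde [v] in B_V2
assert np.allclose(lhs, rhs)
```

The design point is that A tilde is computed from A and the four basis choices alone; the underlying phi (the blue part of the figure) is never consulted.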
Sorry, B_V1 and B_W1 are the same, and B_V2 and B_W2 are the same, right. So, I am not saying that one and two are the same; those are still different choices, one is your choice, two is your friend's choice. So, those two do not cancel. But what happens to the sizes in the first place? The sizes are all n cross n. Remember, the first observation is that these are all square invertible matrices. And then what happens when you look at this? See, what am I saying? B_V1 is equal to B_W1, and B_V2 is equal to B_W2. Don't you think these fellows are inverses of one another? So, B_W2 is B_V2 and B_W1 is B_V1, which means that I have chosen the same basis in the domain and the codomain, and your friend has also chosen the same basis in the domain and codomain. So, this is like T inverse A T. A similarity transformation is what we call this. And that is the reason why we studied, very briefly when we were talking about fields, whether a matrix is diagonalizable through a similarity transformation. Because then we are saying that essentially there exists some non-singular transformation, just a change of basis, that will capture the same operation. And when you have a diagonal operator sitting here, it has very nice properties, because it is in some sense decoupled. We will revisit this point in much greater detail when we study eigenvalues and eigenvectors. But for now, at least please remember this and try to draw up an analogous figure when V and W are the same vector space and B_V1 is equal to B_W1 and B_V2 is equal to B_W2. That is all for today. Thank you.
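A short sketch of this special case, using an example matrix of my own (not from the lecture): when V = W and each person uses one basis for both domain and codomain, the two representations are related by a similarity transformation T inverse A T. If the friend's basis happens to consist of eigenvectors of A, the similar matrix is diagonal, which is the "decoupled" situation alluded to above.

```python
import numpy as np

# My representation of the operator (an arbitrary example with real,
# distinct eigenvalues 5 and 2).
A = np.array([[4.0, 1.0], [2.0, 3.0]])

# Suppose the friend's basis consists of eigenvectors of A; T holds
# them as columns.  Then T^{-1} A T is the friend's representation,
# and it comes out diagonal.
eigvals, T = np.linalg.eig(A)
D = np.linalg.inv(T) @ A @ T
assert np.allclose(D, np.diag(eigvals))
```

Same operator, two bases, two matrices A and D: similar matrices describe one and the same linear transformation, which is exactly the point to carry into the lectures on eigenvalues and eigenvectors.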