So, now look: at the end of the day, just like in a language, it is the idea that counts; language is just a medium for the idea. Whether you are writing a paper in Latin or in any Indian language, if your idea is strong it goes through, right? During the Cold War a lot of papers got written in Russian, and of course, because it was a closed society, people did not come across those papers; but the ideas were so strong, especially in our control theory, that we find a lot of work done by the Russians in the 60s and 70s, published in their own language, and when it made it through into English there was immediately a rush for translation. So, it is the idea that counts is what I am saying. By the same token, what is the idea here? The idea is a vector residing in the vector space, communicated in one language, that is, in one basis, as some n-tuple, and in another basis, another language, as another n-tuple. Those are just two translations of the one same fundamental idea. So, let us start with that fundamental idea, which is the vector v. In terms of the first basis {v_1, ..., v_n}, let us say it is written as (alpha_1, alpha_2, ..., alpha_n), and in terms of the second basis {u_1, ..., u_n}, it is written as (beta_1, beta_2, ..., beta_n). We are interested in investigating the relation between these two. What this means essentially is that

v = alpha_1 v_1 + alpha_2 v_2 + ... + alpha_n v_n

and also that

v = beta_1 u_1 + beta_2 u_2 + ... + beta_n u_n.

At the end of the day the connecting theme is this v, which is the same in both, you see. So, therefore, we choose to write the following:

alpha_1 v_1 + alpha_2 v_2 + ... + alpha_n v_n = beta_1 u_1 + beta_2 u_2 + ... + beta_n u_n.

But now you choose which one is your familiar language. So, let us say {v_1, ..., v_n} is the basis that you are familiar with. So, what do you do then?
So, the v_i are the fellows that you are familiar with, and {u_1, ..., u_n} is the basis that is new for you. Let us say we want to write this v_1 in terms of the fellows inside the new basis. So, keep the earlier equation marked with an asterisk (*), and now write

v_1 = p_11 u_1 + p_21 u_2 + ... + p_n1 u_n.

Why can I do this? Because, of course, both are legitimate bases and v_1 is a legitimate member of the vector space V, and any vector in a vector space can be written as a linear combination of the fellows in a basis. So, let us write the first member of basis 1 as a linear combination of the members of basis 2. We can do the same with the second member of basis 1, because basis 1 is what we know, sort of: in order to express a word in my known language, I look up how to combine words in the unknown language. This is like reading up from the dictionary, ok. So,

v_2 = p_12 u_1 + p_22 u_2 + ... + p_n2 u_n,

and likewise I can write

v_n = p_1n u_1 + p_2n u_2 + ... + p_nn u_n.

This is legit. Let us call this set the double-asterisked (**) equations, and plug them back into the single-starred equation, which is to say I substitute for each v_i there. (I am going to erase that part of the board; I hope that is ok.) So, what do I have then?

alpha_1 (p_11 u_1 + p_21 u_2 + ... + p_n1 u_n) + alpha_2 (p_12 u_1 + p_22 u_2 + ... + p_n2 u_n) + ... + alpha_n (p_1n u_1 + p_2n u_2 + ... + p_nn u_n) = beta_1 u_1 + beta_2 u_2 + ... + beta_n u_n.
Can you guess what property I am now going to invoke? Something that I proved the other day, which says that in terms of a given basis, every vector can be represented uniquely. Here what I have is a vector represented in terms of the u's on one side, and on the other side I also have a vector represented in terms of the same u's. Therefore, the coefficient of each of these u's must agree on the two sides. But what does that entail? Please fill out those gaps for me; just pick out the coefficients:

beta_1 = p_11 alpha_1 + p_12 alpha_2 + ... + p_1n alpha_n
beta_2 = p_21 alpha_1 + p_22 alpha_2 + ... + p_2n alpha_n
...
beta_n = p_n1 alpha_1 + p_n2 alpha_2 + ... + p_nn alpha_n.

What does this whole thing that I have written here remind us of? Can it be written in a more compact fashion? Yes, in the form of a matrix, which means I may erase this part, I hope; the star and the double star have done their stellar job. So, what I have then is

[beta_1]   [p_11 p_12 ... p_1n] [alpha_1]
[beta_2] = [p_21 p_22 ... p_2n] [alpha_2]
[ ...  ]   [ ...          ...  ] [  ... ]
[beta_n]   [p_n1 p_n2 ... p_nn] [alpha_n]

which, for the sake of brevity, I might also write as beta = P alpha. Remember what this is? beta is my sentence in French, alpha is my sentence in English, and P is my English-to-French dictionary. How? Go back to the derivation and see how each element of basis 1 got represented in terms of the elements of basis 2. Look at each column here: is it not the coordinate representation of some element? Of the elements of basis 1, represented in terms of the elements of basis 2. The elements of basis 1 were v_1 through v_n, if I am not mistaken, right?
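To see the matrix equation beta = P alpha in action, here is a small numeric sketch (the two bases and the numbers are hypothetical choices for illustration, not from the lecture). Basis 2 is taken to be the standard basis of R^2, so each column of P is simply v_j itself:

```python
# Hypothetical 2-D example: basis 1 is {v1, v2}, basis 2 is the
# standard basis {u1, u2}, so column j of P is just v_j itself.
v1, v2 = (1.0, 1.0), (1.0, -1.0)    # basis-1 vectors in basis-2 coordinates
P = [[v1[0], v2[0]],
     [v1[1], v2[1]]]                # column j = v_j written in basis 2

alpha = (2.0, 3.0)                  # coordinates of some vector v in basis 1

# beta = P alpha, i.e. beta_i = sum_j p_ij * alpha_j
beta = tuple(sum(P[i][j] * alpha[j] for j in range(2)) for i in range(2))

# The same vector computed directly as alpha_1 v_1 + alpha_2 v_2:
direct = tuple(alpha[0] * v1[k] + alpha[1] * v2[k] for k in range(2))
print(beta, direct)   # both come out as (5.0, -1.0): one idea, two languages
```

The two computations agree, which is exactly the "same sentence, two languages" picture: P translates the basis-1 coordinates into basis-2 coordinates of the one underlying vector.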
So, the first column is basically v_1 represented in terms of basis 2, the second is v_2 represented in terms of basis 2, and the last is v_n represented in terms of basis 2. That is all you need to know: each English phrase translated to French, and you finally have what you wanted in French. The English dictionary transformed to French; that is the way I sort of see it, and you can have other analogies too. So, this is how you get from one basis representation of a vector to another basis representation of the same vector: all that you need to know is this matrix P. And now I am going to make a very crucial claim. P is a square matrix, and I am going to say it is always non-singular. Can you argue why? What is the equivalence we have seen for a matrix being non-singular, forgetting about determinants and stuff? Whatever we have seen in this course, it is equivalent to saying that the kernel of the matrix is only the zero subspace. So, that means if I want P alpha to be 0, I have to have alpha equal to 0. But alpha is the basis representation of a vector, and there is only one way you can get the zero vector by combining the basis fellows, which is to take every coefficient zero, no matter which basis you choose. So, if you are asking this vector to map to the zero tuple under the change of basis, both representations have to be the zero representation, because ultimately the underlying vector has to be the zero vector; only then does it give 0.
So, the vector that is 0 is represented by 0 under any other basis also. Nothing sophisticated, no separate linear-independence machinery; just a simple argument based on what we understand about matrices. In the first 2, 3 lectures we discussed this, right: the equivalence between non-singularity of a square matrix and the fact that its kernel, the set of vectors x that it takes to 0 via A x = 0, can only contain the zero vector and nothing else. It is linear independence underneath, of course, but we now have a more sophisticated way of seeing it. So, that is a very crucial connection that we have now seen between the representations of a vector in two different bases. Based on this, we will extend our understanding of vector spaces by investigating certain properties of matrices a little deeper, because after the first 2, 3 lectures we kind of said that matrices and n-tuples are somewhat trivial vector spaces and went off to look at more abstract ones. But now we have come full circle and shown you that, after all, at the end of the day, if you are dealing with finite-dimensional vector spaces, then you can infer everything you want by just looking at n-tuples of numbers over the appropriately chosen field, namely the same field over which the vector space is defined. Now we should try to investigate certain properties of matrices, because understanding them will be pivotal to understanding a more general class of objects called linear transformations. We will later see that, over finite-dimensional vector spaces, any linear transformation can be seen as some matrix operating on n-tuples under this coordinate representation. So, we will try to motivate that discussion in whatever time is left today with a closer investigation of matrices and their properties.
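The non-singularity claim can also be checked numerically on the same hypothetical 2x2 change-of-basis matrix used above (again, the numbers are illustrative, not from the lecture): the determinant is non-zero, and solving P alpha = 0 returns only the zero tuple, which is the kernel condition we invoked.

```python
# The hypothetical 2x2 change-of-basis matrix from before.
P = [[1.0, 1.0],
     [1.0, -1.0]]
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]   # -2.0, non-zero
assert det != 0                                # P is non-singular

# Solve P @ alpha = beta by Cramer's rule, for beta = 0:
beta = (0.0, 0.0)
alpha = ((beta[0] * P[1][1] - P[0][1] * beta[1]) / det,
         (P[0][0] * beta[1] - beta[0] * P[1][0]) / det)
assert alpha == (0.0, 0.0)   # the only pre-image of 0 is the zero tuple
```

This is just the kernel-is-trivial equivalence made concrete: the zero vector has the zero representation in every basis, so P can take no non-zero tuple to 0.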
We will take off from where we left off with A x = b. Remember, the last thing we discussed was characterizing the solutions of A x = 0. We said: take A to its row reduced echelon form; you had those pivots and those free variables, and each free variable corresponded to a single solution vector. We took, I think, if I am not mistaken, an example with three equations in ten unknowns, which had seven free variables. Now, I am going to make a claim about the seven vectors corresponding to those seven free variables: if you look at them, not only are they linearly independent, they are actually a basis for a very important subspace, which is the kernel of the matrix. And in addition, after you have transformed a matrix to its row reduced echelon form, the non-zero rows that you are left with form a basis for the row span of the original matrix, ok. So, let us at least see the second proof; the first one might take a little more writing up. In whatever time is left, let us try and see why the row spans of a matrix and of its row reduced echelon form are identical, and why one choice of basis for that span is the set of non-zero rows of the RREF. So, look at this matrix A, which is m x n, and suppose that through a series of elementary row operations, which are all clubbed together into one single matrix M, we get M A = R, the row reduced echelon form of A. What do we know about M? It is invertible, it is non-singular; obviously M is invertible, good. Now what we are going to claim is that the row span of A is equal to the row span of M A. What is the row span, by the way? Just like we have the column span, the image: if you take A transpose, then the column span of A transpose is the row span of A.
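Before the proof, the first claim, that the free-variable vectors form a basis of the kernel, can be sketched concretely. Instead of the lecture's 3-equations-in-10-unknowns example, here is a hypothetical 2x4 RREF stand-in, with pivots in columns 0 and 2 and free variables in columns 1 and 3:

```python
# Hypothetical small RREF: pivots in columns 0 and 2, free variables
# in columns 1 and 3 (a 2x4 stand-in for the lecture's 3x10 example).
R = [[1.0, 2.0, 0.0, 3.0],
     [0.0, 0.0, 1.0, 4.0]]
pivots, free = [0, 2], [1, 3]

# One kernel basis vector per free variable: set that free variable
# to 1, the other free variables to 0, and read the pivot variables
# off the equations x_pivot = -R[row][free_column].
basis = []
for f in free:
    x = [0.0] * 4
    x[f] = 1.0
    for row, p in enumerate(pivots):
        x[p] = -R[row][f]
    basis.append(x)

# Each constructed vector really solves R x = 0:
for x in basis:
    print([sum(R[i][j] * x[j] for j in range(4)) for i in range(2)])
    # [0.0, 0.0] each time
```

With two free variables we get two solution vectors, one per free variable, exactly as in the lecture's seven-free-variable example; the linear independence part is argued below via the pivot columns.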
So, a member of the row span is going to look like a wide vector, a row vector, but structurally it is the same thing: one in a tall form, one in a fat form. So, it is a row span. Now, these are both subspaces: by the same argument with which we have shown that the image is a subspace, the row span is also a subspace. In fact, the word "span" should give you a hint that it is a subspace, spanned by the rows, right. If we want to show the two row spans are equal, we have to pick an arbitrary object in one and show that it belongs to the other, and vice versa. So, suppose x belongs to the row span of A, which basically means that

x^T = alpha^T A for some alpha.

What is the size of alpha? Since A has m rows, alpha is an m-tuple. And what is the size of x? There are n columns, so each of those wide objects is a string of length n; therefore x is an n-tuple, and I have written the transpose because I want it as a wide-ish element rather than a tall-ish element. This is exactly like the image, as I said: you just take the transpose on both sides; anything that belongs to the image of A transpose is in the row span of A. That is clear, right? Then what does this mean? We have to show now that this x belongs to the row span of M A. We have taken an arbitrary object in the row span of A, and we want to show that this object belongs to the row span of M A; then we will be convinced that the row span of A is definitely contained in the row span of M A. And then we will do the opposite: we will take something that belongs to the row span of M A and show that it belongs to the row span of A. Once both inclusions are proved, the equality of those two subspaces is proved. So, the first step of the strategy is to take something that is in the row span of A.
If x belongs to the row span of A, it belongs to the column span of A transpose, which is the image of A transpose. You see, if you take the transpose on both sides, this is nothing but saying that x = A^T alpha: there exists some alpha such that A^T alpha = x. I have just taken the transpose because it is the row picture I am interested in now, nothing fancy. Just like elementary row operations on A transpose are like elementary column operations on A; nothing different from that. Now, alpha^T A is a row vector, so it is basically a combination of the rows of A, and because M is invertible I can write

x^T = alpha^T A = alpha^T M^{-1} M A = alpha_hat^T (M A), where alpha_hat = (M^{-1})^T alpha.

After all, alpha^T M^{-1} is also just some vector, nothing more and nothing less; I can do away with the bracketing, it really matters not. But what does this mean? It means that the same row vector x^T is nothing but a linear combination of the rows of M A, which means that x belongs to the row span of M A. I started with an x that belongs to the row span of A, and now I have shown that it must belong to the row span of M A. Now let us do the other way round, and I am sure you are already able to guess how we are going to go about it: the same old trick. Suppose y belongs to the row span of M A. It means that

y^T = beta^T (M A) = beta_hat^T A, where beta_hat = M^T beta.
So, the observation that beta^T M, taken together, is also nothing but a row vector sees the argument through. Therefore, we have the conclusion that y belongs to the row span of A. So, if something belongs to the row span of M A, it must belong to the row span of A, and if something belongs to the row span of A, it must belong to the row span of M A; combining these two parts, we have, of course, row span of A equals row span of M A, as required. But that also means that, if they are the same subspace, the same object after all, then their dimensions must be equal. What is more crucial is that we now go one step further and claim not just that these subspaces are the same, but that you explicitly have at least one ready-made choice of basis for this subspace: if you have gone all the way across and found the row reduced echelon form, then those non-zero rows are exactly the elements of the basis you are looking for, and therefore the dimension of the row span is exactly equal to the rank. What do we need in order to prove that? We have to show, first, that the bunch of non-zero rows we are left with at the end of the row reduction do indeed generate the row span, and second, that they are linearly independent. How do we show that they generate? The seeds are already laid here: one of these choices of M definitely takes you to R, the RREF, so if that M A happens to be your R, then after all, is there anything left to prove about the spanning part, the generating part?
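The two substitution steps of the proof can be traced numerically on a small hypothetical example (a 2x3 matrix A and a 2x2 invertible M, chosen purely for illustration): a row combination of A is rewritten, via alpha_hat = (M^{-1})^T alpha, as a row combination of M A.

```python
# Hypothetical A (2x3) and invertible row-operation matrix M (2x2),
# checking x^T = alpha^T A = alpha_hat^T (M A), alpha_hat = (M^{-1})^T alpha.
A = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 3.0]]
M = [[1.0, 1.0],      # add row 2 to row 1, and
     [0.0, 2.0]]      # scale row 2 by 2; det = 2, so M is invertible

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

alpha = [[2.0, -1.0]]                 # alpha^T as a 1x2 row vector
x_t = matmul(alpha, A)                # x^T = alpha^T A, a row of length 3

det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
M_inv = [[ M[1][1] / det, -M[0][1] / det],
         [-M[1][0] / det,  M[0][0] / det]]
alpha_hat = matmul(alpha, M_inv)      # alpha_hat^T = alpha^T M^{-1}
print(matmul(alpha_hat, matmul(M, A)) == x_t)   # True: same row vector
```

The same row vector x^T is thus exhibited as a combination of the rows of M A, which is the one-line heart of the inclusion argument; the reverse inclusion uses beta_hat = M^T beta in the same way.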
Because what you are dealing with are the rows of A, which in turn are the rows of M A, and M A can very well be the row reduced echelon form; therefore, the row span of A is also equal to the row span of R. And of course, if you take only the non-zero rows, the zero rows would not matter; they do not add anything substantial. So the non-zero rows definitely generate: the non-zero rows of R = RREF(A) generate the row span of A. Is there anything left to prove in this? Really nothing. Any doubts? So, the only thing left to prove is that this is a linearly independent set, and that is very interesting because of the way we constructed the row reduced echelon form. Remember the pivot columns K_1, K_2, K_3, and so on: did any other row have a non-zero entry in the pivot column of a given row? No. What it means is that if you combine those non-zero rows, say

alpha_1 (row 1) + alpha_2 (row 2) + ... + alpha_L (row L),

in the transpose picture if you want, what is it going to look like? (These are the non-zero rows of the row reduced echelon form; and I would have liked to call the rank R, but R is already taken for the RREF, bad notation, sorry about that, so let me use L: suppose the rank is L.) You see, in the K_1-th position you will have alpha_1, in the K_2-th position you will have alpha_2, and likewise in the K_L-th position you will have alpha_L. I do not care about the other positions; but if I am checking for linear independence, this combination has to be 0 only when the alphas are all 0, and that is exactly what happens, because the positions K_1, K_2, ..., K_L contain only those scalars and nothing else.
So, if this entire tuple has to be 0, each of these alphas has to be 0. And if each alpha must be 0 for the combination to result in 0, it can only mean one thing: the rows that are left behind as non-zero rows after the row reduction must be linearly independent. So, what is the big picture, the big story? We will revisit this and go through it in some more detail, because there is a lot of the story left. The summary that we have seen thus far with the row reduced echelon form: if you carry out elementary row operations, the very important point is that the row span, the subspace, does not change. The row span remains invariant under elementary row operations, and so, of course, the dimension of the row span also remains invariant. What is more, if you go all the way to the row reduced echelon form of the matrix, then the non-zero rows left behind at the end of the day exactly form a basis for that row span. You can go ahead and extend the analogous picture, which I will not derive, to elementary column operations, which I have not defined but which can be very similarly defined by operating on A transpose: elementary column operations leave the image, the column span, invariant, and in the column reduced echelon form that you might obtain, whatever is left behind as non-zero columns is precisely a basis, or one basis, for the image, the column span, of the matrix. Next we will see the effect of elementary row operations on the column span; that is going to be interesting, because it does change the column span. However, it does not change the dimension of the column span, and that is going to be very crucial. So, we will derive all of that and see the complete picture in the next lecture. Thank you.
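The pivot-column argument can be watched happening on the same hypothetical RREF used earlier (illustrative numbers only): whatever scalars you pick, they reappear verbatim in the pivot positions of the combination.

```python
# Pivot-column argument on a hypothetical RREF: pivots sit in columns
# 0 and 2, and no other row has a non-zero entry in a pivot column.
R = [[1.0, 2.0, 0.0, 3.0],
     [0.0, 0.0, 1.0, 4.0]]
pivots = [0, 2]
alphas = [5.0, -7.0]      # arbitrary scalars for the combination

# combo = alpha_1 * row_1 + alpha_2 * row_2
combo = [alphas[0] * R[0][j] + alphas[1] * R[1][j] for j in range(4)]
print([combo[k] for k in pivots])   # [5.0, -7.0]: the alphas reappear,
# so combo can be the zero tuple only if every alpha is zero
```

Since the entry in pivot column K_i of the combination is exactly alpha_i, forcing the combination to be the zero tuple forces every alpha to 0, which is precisely linear independence of the non-zero rows.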