In the previous lecture we saw the idea of coordinates, and we saw that it is a very powerful concept in the sense that any finite-dimensional vector space can now be viewed as exactly the sort of thing we are used to, namely n-tuples of numbers over a certain field, the field being precisely the one over which the vector space in question is defined. Along the way we saw that it then makes sense to look closely at matrices, because the matrices, after all, capture whatever is being done to vectors once those vectors are described as n-tuples. Therefore, if we understand matrices better, we will probably understand what happens to vectors in arbitrary vector spaces as well. With that in mind, we examined the effect of elementary row operations on certain special subspaces associated with a matrix. To begin with, we looked at the row span of a matrix, and we saw that if you perform elementary row operations on a matrix, the row span remains invariant, and therefore of course the dimension of the row span also remains invariant. Not just that: we also saw that if you carry on with elementary row operations until you reach the row reduced echelon form of the matrix, the non-zero rows left behind in that form constitute a basis for the row span. That is where we stopped in the previous lecture. We now carry on from there and study the effect of elementary row operations on another important subspace of a matrix, the column span, or image, of the matrix. But before that, there were a couple of questions pending from a couple of lectures back, posed to me at the end of yesterday's lecture, which need a bit of clarification.
One of the things was the symbol ⊕. We wrote sums of subspaces as W1 + W2, but in certain places you will also see the notation W1 ⊕ W2, and this is different from the plain sum in the sense that it is what we call a direct sum. We have actually seen this object before, even if we have not used the symbol explicitly. A sum of two subspaces is a direct sum precisely when not only is every element of W1 + W2 representable as w1 + w2, with w1 coming from W1 and w2 coming from W2, but the representation is also unique. We have of course studied this in some detail and seen that the representation is unique precisely when the intersection of the two subspaces W1 and W2 is exactly the zero subspace {0}. So you can conclude that a sum is a direct sum if and only if the intersection of the two subspaces is just {0} and nothing but {0}; that is an alternate way of viewing what a direct sum is. Whenever you come across this sign in the context of subspaces, you should read it as a direct sum instead of merely a sum. A direct sum is of course a sum, but with the special property that every element therein is uniquely representable as a sum of two vectors, one coming from W1 and one from W2. I hope that clears the air on what this symbol means; you might come across it in the problem sheet, and it will be part of your syllabus for the quiz, even though I am explaining the symbol only now, because you have come across the object itself multiple times before. With that cleared up, there was another question, or not precisely a question, but something that would help in viewing what we saw yesterday, which is basis change.
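To make the criterion concrete, here is a small numerical sketch, my own and not from the lecture, using NumPy: for subspaces of R^n given by spanning matrices, the sum W1 + W2 is direct exactly when dim(W1) + dim(W2) = dim(W1 + W2), which is equivalent to W1 ∩ W2 = {0}.

```python
import numpy as np

def is_direct_sum(B1, B2):
    """B1, B2: matrices whose columns span the subspaces W1 and W2.
    The sum is direct iff dim(W1) + dim(W2) = dim(W1 + W2)."""
    r1 = np.linalg.matrix_rank(B1)
    r2 = np.linalg.matrix_rank(B2)
    r12 = np.linalg.matrix_rank(np.hstack([B1, B2]))  # spans W1 + W2
    return r12 == r1 + r2

W1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # the xy-plane in R^3
W2 = np.array([[0.0], [0.0], [1.0]])                 # the z-axis
W3 = np.array([[1.0], [1.0], [0.0]])                 # a line inside the xy-plane

print(is_direct_sum(W1, W2))  # True:  xy-plane ∩ z-axis = {0}
print(is_direct_sum(W1, W3))  # False: the line already lies in the plane
```

The rank test is just the dimension formula in disguise: dim(W1 + W2) = dim(W1) + dim(W2) - dim(W1 ∩ W2), so the ranks add exactly when the intersection is trivial.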
Of course you know what a basis change does: you have these matrices, you operate with them and you get the transformation. But when we talk about basis change in the case of arbitrary vector spaces, what I am going to show you now is that a basis change is nothing but a sort of substitution. For instance, let us take the polynomials of degree two or less whose coefficients are real. One basis is B1 = (1, x, x^2); we have already seen that this is a linearly independent set, we proved it in an earlier lecture, and we also know that it is a generating set for the polynomials of degree two or less over the field of real numbers. Think of B1 as your English; your French, the new basis B2, is given by (1, t + x, (t + x)^2), where t is some arbitrary real number. Now suppose I have a polynomial f, with f(x) coming from this space, given by c0 + c1 x + c2 x^2. If I want to give it a coordinate representation, remember that these have to be ordered bases in order for them to be suitable for coordinate representation. With respect to the first basis, f is c0 times 1 plus c1 times x plus c2 times x^2, so its representation is just (c0, c1, c2). The question posed is: what is f represented in terms of B2? You can give a very direct answer, because this is something you are familiar with; it is just substitution. On the other hand, let us use the sophisticated language of basis transformation that we learned the other day. What did we see there? If you have something you want to represent in terms of a new basis, you need to find the transformation matrix that takes you there from the old basis, the one you understand. The special property of this transformation matrix is that it is the dictionary.
Each column of this matrix T is an object of the old basis represented in terms of the new one. So I might as well write: the first column is 1 represented in terms of B2, the second is x in terms of B2, and the third is x^2 in terms of B2. This is exactly the application of what we derived in the previous lecture, only that I am now choosing a very specific vector space to illustrate the point; I am just solving an example to make the concept very clear, which is something some of you had asked for, if I am not mistaken. So what is 1 in terms of the members of B2? 1 can be written as just 1 times the first member, so 1 in terms of B2 is simply (1, 0, 0). What is x in terms of the members of B2? Let us write it down: x is nothing but (t + x) taken one time, minus t times 1. Notice that t + x is the second element of the second basis and 1 is the first element. So when I write x in terms of the second basis, its representation is going to be (-t, 1, 0); it is just (t + x) - t. Both 1 and t + x are members of the basis B2; I take the difference of these two basis vectors, with the first one scaled by t, and I get back x. What I need next is x^2. Let us write that down: for x^2 I of course need (t + x)^2, but then I need to get rid of the cross term 2tx. So I definitely need -2t, but I cannot have x on its own, so it has to multiply (t + x). However, in getting rid of 2tx I have also added -2t^2, while (t + x)^2 itself contributed a t^2 which I had to get rid of anyway. So what remains to be cancelled is t^2 - 2t^2, which is -t^2, and I must therefore add +t^2 times 1. Altogether, x^2 = (t + x)^2 - 2t(t + x) + t^2. See the operation being carried out here; I hope there are no mistakes.
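At this point all three coordinate columns are in hand: (1, 0, 0) for 1, (-t, 1, 0) for x, and, from the identity x^2 = (t + x)^2 - 2t(t + x) + t^2, the column (t^2, -2t, 1) for x^2. As a quick sanity check, here is a numerical sketch of my own (not part of the lecture) that assembles these columns into the matrix T and verifies that hitting a coordinate vector with T really does perform the substitution:

```python
import numpy as np

t = 2.0
# Columns of T are the B2-coordinates of the old basis vectors 1, x, x^2,
# where B2 = (1, t+x, (t+x)^2).
T = np.array([[1.0,  -t,  t**2],
              [0.0, 1.0, -2*t],
              [0.0, 0.0,  1.0]])

c = np.array([1.0, 3.0, 5.0])   # f(x) = 1 + 3x + 5x^2 in basis B1 = (1, x, x^2)
d = T @ c                       # claimed coordinates of the same f in basis B2

# The two representations must agree as functions of x:
for x in (-1.0, 0.0, 2.5):
    f_old = c[0] + c[1]*x + c[2]*x**2
    f_new = d[0] + d[1]*(t + x) + d[2]*(t + x)**2
    assert abs(f_old - f_new) < 1e-9
```

For t = 2 and f(x) = 1 + 3x + 5x^2 this gives d = (15, -17, 5), i.e. f(x) = 15 - 17(x + 2) + 5(x + 2)^2, exactly what direct substitution would produce.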
So x^2 in terms of this basis gets the following representation: (t^2, -2t, 1). That tells me, by stitching together all those parts as columns, that the matrix T is going to be

    T = [ 1   -t    t^2 ]
        [ 0    1   -2t  ]
        [ 0    0    1   ]

This is the formal language we have learned, but in other words what we are essentially doing is just substitution: instead of representing everything in terms of 1, x and x^2, you are now representing it in terms of 1, x + t and (x + t)^2; it is a shift you are giving to x, and the constant 1 of course does not matter. Now if you take this T, take any arbitrary (c0, c1, c2) and hit it with this matrix, the coefficients in the new representation show up as the result of that matrix multiplication. So this is an example that elucidates what is being carried out, essentially a substitution, and we give it the name change of basis. Is this clear? Okay, good.

Now, having cleared up the backlog of queries posed in the previous lectures, we move on to the new material we plan to cover today. To quickly summarize the last few points we observed yesterday: elementary row operations leave the row span invariant; in other words, the row span of a matrix does not change through elementary row operations. It is then a straightforward thing to see that if the subspaces themselves are the same, the dimensions are also the same; it is a no-brainer. By the same token, elementary row operations and elementary column operations are duals of one another: as I said, elementary row operations on A are the same as elementary column operations on A transpose.
So whatever property holds for elementary row operations vis-a-vis the row span holds in exactly the same manner for elementary column operations vis-a-vis the column span, or what is better called the image; but let us call it the column span, since we are dealing with matrices, and reserve "image" for something more general. Given the two things I have already shown you, I am not going to prove this; there is really nothing to prove: elementary column operations leave the column span invariant, and of course they leave the dimension of the column span invariant as well. So far so good. The problem, however, is this: when we perform elementary row operations, what happens to the column span? Is it preserved? Let us take a very simple example. Consider the matrix A = (2, 3, 4)^T, a 3 x 1 matrix; I can always choose an A like this. If you perform elementary row operations on it until you get its row reduced echelon form, you end up with R = (1, 0, 0)^T. Now, are the column spans of these two by any stretch of imagination the same? What is in the column span of R? Any vector of the form (alpha, 0, 0): the column span here is just everything that is a scaled version of a single vector. And the column span of A is everything that is (2, 3, 4)^T scaled by some real number. Clearly these are not the same; I do not even need to prove it. So the column span of A is certainly not equal to the column span of R. What do we do, do we give up? Is there no connection, is nothing preserved? It turns out, curiously, that something is preserved: the dimension of the column span is preserved by elementary row operations. That seems quite surprising, but it is true, and that is exactly what we shall now prove and then subsequently use to prove another very important result.
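The example above can be checked numerically; the following is a sketch of my own, not from the lecture, showing that the column span changes under row reduction while its dimension does not.

```python
import numpy as np

# A = (2, 3, 4)^T row-reduces to R = (1, 0, 0)^T.
A = np.array([[2.0], [3.0], [4.0]])
R = np.array([[1.0], [0.0], [0.0]])   # row reduced echelon form of A

# R's column span is spanned by e1, but e1 is not a multiple of (2, 3, 4):
e1 = R[:, 0]
# solve alpha * A[:, 0] = e1 in the least-squares sense; a nonzero residual
# means e1 does NOT lie in the column span of A
alpha, residual, _, _ = np.linalg.lstsq(A, e1, rcond=None)
print(residual)  # strictly positive, so the column spans differ

# ...yet the dimension of the column span (the rank) is preserved:
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(R) == 1
```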
So what is the claim? Here is the proposition; I am not going to put it in exactly the words I just used, but I will write it down in the following manner. Suppose v1, v2, ..., vk is a basis for the column span of A, some m x n matrix, which I can choose over any field. If M corresponds to elementary row operations, then Mv1, Mv2, ..., Mvk is a basis for the column span of MA. So not only have I said the same thing as what I stated in words a while back, I have said something more. This means that if the column span of A has dimension k, then the column span of MA also has dimension k, which is what I had said; but beyond that, I have also told you explicitly that if you know a basis, or at least one basis, for the column span of A, then you also know a basis for the column span of MA. Here M corresponds to elementary row operations, that is, a matrix that captures elementary row operations, which means that M is invertible; that is all M means. When I say M corresponds to elementary row operations, each of those elementary row operations is captured by some invertible matrix.

So we are going to try to prove this. Any questions up to this point? Is the claim at least clear? What do we need to show in order to establish it? Again, everything boils down to understanding the definitions. If it is a basis, it must be a generating set, meaning anything that belongs to the column span of MA should be representable as a linear combination of the vectors Mvi; and the vectors Mvi must also be linearly independent.

Let us take the first part. Suppose v belongs to the column span of MA; I am choosing a typical vector, nothing special, an arbitrary member of the column span of MA, and I will try to see whether it can be represented as a linear combination of the Mvi. This membership means v = MAx for some x in F^n. Clear so far? Now what do I know about the column span of A? It is spanned by the vectors v1, ..., vk. Look at the object Ax: it is definitely an object in the column span of A, and therefore I can write v = M (sum over i from 1 to k of alpha_i v_i). But each alpha_i is a scalar, and in multiplications of matrices with n-tuples and m-tuples and vectors, a scalar can always be pulled out; so this is the same as the sum over i from 1 to k of alpha_i (M v_i). I am pushing the matrix M inside, which I can always do, since alpha_i is just a scalar, v_i is a vector and M is a matrix. This sum definitely belongs to the span of Mv1, ..., Mvk. In other words, the set is indeed a generating set for the column span of MA.

All that remains is the linear independence of this set. What do we do to prove linear independence? We assume that some linear combination of these vectors is 0, and show that the only way that can happen is when each of the scalar coefficients is 0, with no other option left. So suppose the sum over i from 1 to k of beta_i M v_i equals 0. Pulling M out, this means M times (the sum over i from 1 to k of beta_i v_i) equals 0. Can you tell me what the next step is? Since M is invertible, its kernel must be {0}; alternatively, you can hit the equation with M inverse. In either case, this implies that the sum of beta_i v_i must be 0. But the v_i already form a basis, by the hypothesis of the proposition, and therefore the only way this sum can vanish is when beta_i = 0 for all i. That establishes the linear independence of the set. So it is a linearly independent set of vectors and also a generating set for the column span of MA, and therefore it is indeed a basis for the column span of MA. From there I can deduce that elementary row operations may not leave the column span invariant, but they certainly leave the dimension of the column span invariant.

Let me note those points down; in fact, I think I can do better and write a table of invariance. We have two kinds of operations we are interested in exploring, elementary row operations and elementary column operations, and four objects under the scanner: the row span, the dimension of the row span, the column span and the dimension of the column span.

                               row span   dim(row span)   column span   dim(column span)
    elementary row ops           yes           yes            no              yes
    elementary column ops        no            yes            yes             yes

Elementary row operations leave the row span invariant, so that checks out, and of course its dimension too; they do not leave the column span invariant, but they do leave the dimension of the column span invariant, as we just proved. I do not need to prove anything for the column operations, because it is entirely analogous: the row span is not left invariant, but its dimension is, and the column span is of course left invariant, as is its dimension. What does this tell us? For one of the first non-intuitive results in linear algebra, at least in so far as matrices are concerned, we shall now see that, armed with just this table of invariance, we will be able to infer something remarkable. Let me outline my argument first: if you are interested in just the
dimensions and not the subspaces per se, then it should really not matter whether you carry out elementary row operations and elementary column operations in any arbitrary order, because neither kind of operation changes the dimensions; those are, as it were, sacrosanct, with ticks in both rows of the table. Therefore, if I can perform a sequence of elementary row and column operations to get to a nice form from which reading off the dimensions of those subspaces is easy, I should go ahead and do that, and it will tell me a story about the original matrix.

We have already seen that with elementary row operations we can take a matrix A to its row reduced echelon form. I am not going to give you a completely formal proof, but I will at least give you a feel for what the proof entails. Look at the structure of the row reduced echelon form; think of the picture, because you can visualize it now. You may have some zero columns until you come across the first leading one; in the next row the leading one has to be strictly to the right; above and below each leading one you must have zeros, and so on; that is the property of the leading ones. In between, you can have non-zero entries wherever the structure allows. So typically, after you are done with all your elementary row operations, you end up with this row reduced echelon form.

Suppose at this point you commence a sequence of column operations. To do what exactly? Look at the structure: after the row operations there is already a beauty in it, in the sense that every column containing a leading one has nothing else above or below that one. That is the crucial observation. So if I now take such a column, suitably scale it and subtract it from the other columns whose entries in the row of that leading one are non-zero, I can zero them out. Do you follow what I am saying? Do a sequence of row operations to get to this form; you are done with row operations; now begin a sequence of column operations using the leading-one columns. It is like chess: when a pawn reaches the other end, you use it to get a new piece. Using the column containing a leading one, eliminate every non-zero entry in the row corresponding to that leading one. The other rows will not be affected at all, because the rest of that column is zero: no matter what factor you scale the column by, only the entries of that one row are touched, because of this structure. You do the same with the second leading one, and so on. Columns to the left do not really matter, because their entries in that row are anyway zero; you only have to bother with some of the columns to the right, precisely the ones that do not contain any leading one; just zero them out.

If you carry out these elementary column operations, you end up with r ones at locations corresponding to the columns k1, k2, ..., kr: a one here with everything else in its row and column zero, a one there with everything else zero, and so on. From here you carry out the third kind of elementary column operation, a permutation; just as there are three kinds of elementary row operations, there are three kinds of elementary column operations too. With permutations you can move these columns to become the first column, the second column, the third column, and so on, until you get a structure that looks like an identity matrix of size r x r in the top-left corner, with everything else zero:

    [ I_r  0 ]
    [ 0    0 ]

In going from the original matrix A to this structure, all you have done is elementary row operations and column operations, and you have landed at a matrix of this form. How many linearly independent rows are there, and how many linearly independent columns? Both are equal to r. Therefore the row rank and the column rank are the same. In other words, when I talk about the rank of a matrix, row rank of A = column rank of A = r in this case, so I can just call it rank(A) without any ambiguity, which says that rank(A) = rank(A^T). That sounds pretty strange to begin with: why should the row rank and the column rank be the same? The missing piece of the puzzle was the result we proved today, where we saw that if you carry out row operations, the column span may change, but the dimension of the column span does not; and that is exactly why this result is true. In the next module we shall use this to gain further insights about certain other subspaces and their bases, and then see a very important result, at least in so far as matrices are concerned: the rank-nullity theorem.
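As a closing numerical illustration (my own sketch, not from the lecture), NumPy's rank computation confirms that rank(A) = rank(A^T) for a matrix of known rank:

```python
import numpy as np

# Build a 5x7 matrix of rank (almost surely) exactly 3 as a product of
# random full-rank factors, then compare row rank and column rank.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))

col_rank = np.linalg.matrix_rank(A)     # dimension of the column span of A
row_rank = np.linalg.matrix_rank(A.T)   # rank of A^T = dimension of the row span of A

assert row_rank == col_rank == 3        # row rank and column rank coincide
```

The factorization trick pins the rank at 3 by construction, which is exactly the r that the reduction to the [I_r, 0; 0, 0] form would reveal.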