We have seen what a basis for the row span is: the non-zero rows of the row reduced echelon form. A basis for the column span would similarly be the non-zero columns of the column reduced echelon form, if you cook that up analogously. But there are other important subspaces as well, and the most important one that comes to mind apart from these two is the kernel of A.

So again we start with the same situation: A is an m x n matrix over a field F, and the kernel of A is our object of study. What is the kernel of A? By definition it is the set of all x in F^n such that Ax = 0. But we have already studied this while studying systems of equations, and we have seen that elementary row operations do not change the solution set. So I might just as well study the question for Rx = 0, where R is the row reduced echelon form of A, and look at the solution set of that, which we already know how to characterize. Note that this is not a very formal proof; it is meant to give you an intuitive feel for why the result is true. We will do it more formally for more general objects, namely linear transformations, of which matrices are a special example. So do not worry, a more formal proof will come; at the moment it is important to understand, in the context of matrices, why the result is true.

So what R looks like is this: a bunch of 0s until a leading 1 appears, then heaven knows what else, then another leading 1 further along, more arbitrary entries, and possibly some all-zero rows at the bottom after the last non-zero row. We multiply this by x = (x1, x2, ..., xn), set it equal to 0, and we know how to solve it. The variables corresponding to the leading 1s are the pivot variables, and the others are the non-pivot variables; we have already given them a name, the free variables. The solution (x1, x2, ..., xn) is then written in terms of the free variables, which we rename u1, u2, and so on. Suppose there are r pivot variables, which is just another way of saying that the rank of A is r; this also means there are n - r free variables. What does the solution look like? To evaluate each pivot variable you put every other term on the right-hand side. And where do the free variables themselves appear? Each appears as a 1 in a unique position. For instance, if u1 is the first free variable, say it stands for x1, then the corresponding solution vector has a 1 in the first position, and since x1 appears nowhere else, the other free-variable positions are 0.
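If you want to see this happening on an actual matrix, sympy will do the row reduction for you. Here is a minimal sketch, assuming a small made-up 3 x 5 matrix, not the one on the board: rref() returns the row reduced echelon form together with the indices of the pivot columns, and whatever columns are left over correspond to the free variables.

```python
from sympy import Matrix

# Hypothetical 3 x 5 example, chosen so that x2 and x3 come out as pivot
# variables and x1, x4, x5 come out as free variables.
A = Matrix([[0, 1, 0, 2, 1],
            [0, 0, 1, 3, 2],
            [0, 1, 1, 5, 3]])

R, pivot_cols = A.rref()   # row reduced echelon form and pivot column indices
print(pivot_cols)          # (1, 2): x2 and x3 are the pivot variables, so r = 2
free_cols = [j for j in range(A.cols) if j not in pivot_cols]
print(free_cols)           # [0, 3, 4]: x1, x4, x5 are the n - r = 3 free variables
```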
Let us say u2 is the next free variable; say it stands for x4, in a situation where x2 and x3 are both pivot variables, so the free variables are x1, x4, and so on. What happens in the solution vector corresponding to u2? In the fourth position you have a 1. In the first position, and this is a very crucial detail, you have a 0, because x1 is itself a free variable. In the second and third positions some constants can appear: if the corresponding entry of R is non-zero, then that entry appears here with a minus sign; I do not really care what those values are. Carry on like this until the last free variable, the (n - r)-th one.

The very important observation is, first, that this characterizes all possible solutions of Ax = 0, or equivalently of Rx = 0. Remember that u1, u2, ..., up to the (n - r)-th are arbitrary scalars, so in the language of linear combinations every solution is a linear combination of vectors of this shape. The stars, the entries I have not pinned down, simply mean I do not care about their values. The important thing is that these vectors generate every member of the kernel: the vector with a 1 in the first free position and 0s in the other free positions, the next one with a 1 in the second free position, and so on up to the last one, together form a generating set, a spanning set, for ker A. Please ask if there is any doubt about this, because that is exactly what the solution formula says: every solution of Ax = 0 is expressible as a linear combination of these vectors alone, nothing else is needed, so these vectors suffice.

Generation is like the sufficient condition and linear independence is like the necessary one: a generating set might contain superfluous members, while a linearly independent set might not have enough, which is why you can extend any linearly independent set to a basis, as we have seen earlier. So all that remains is to check the linear independence of this set.
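Continuing with the same hypothetical matrix, sympy's nullspace() produces exactly these special solutions, one per free variable, and you can see the 1 sitting in each free-variable position:

```python
from sympy import Matrix

A = Matrix([[0, 1, 0, 2, 1],
            [0, 0, 1, 3, 2],
            [0, 1, 1, 5, 3]])

# One special solution per free variable: that free variable is set to 1,
# the other free variables to 0, and the pivot variables are read off from R.
for v in A.nullspace():
    print(v.T)
# [1,  0,  0, 0, 0]   u1 = x1: a 1 in position 1, zeros in the other free positions
# [0, -2, -3, 1, 0]   u2 = x4: a 1 in position 4, entries of R with a minus sign above it
# [0, -1, -2, 0, 1]   u3 = x5: a 1 in position 5

for v in A.nullspace():
    assert A * v == Matrix.zeros(3, 1)   # each one really does solve Ax = 0
```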
Even a very unsophisticated look at these vectors tells you why: take a linear combination, alpha_1 times the first plus alpha_2 times the second and so on up to the last; at the unique positions where the 1s sit, nothing else gets added. So if you want that combination to equal 0, then all your alphas have to be 0. Just as an illustration, take 5-tuples: one vector with a 1 in the first position and 0s in the other special positions, a second with a 1 in the third position, and a third with a 1 in the fourth position; the remaining entries I do not care about. If you take alpha_1 times the first plus alpha_2 times the second plus alpha_3 times the third, then in the first position of the sum you can have nothing but alpha_1, in the third position nothing but alpha_2, and in the fourth position nothing but alpha_3, purely because of the inherent structure of the vectors constituting this spanning set, this generating set, for the kernel. So if the sum has to equal 0, then obviously alpha_1, alpha_2 and alpha_3 must each be 0. That means this set is not just a spanning set or a generating set for the kernel; it is also a linearly independent set, and hence a basis for the kernel. If anyone has any doubt about why it is linearly independent, please feel free to ask; it is very important.

These two facts together tell us that this is a basis for ker A, and immediately we can read off the dimension of the kernel: it is n - r. But that gives us one of the most fundamental results, at least as far as matrices are concerned: n equals the rank of A plus the dimension of the kernel of A. The dimension of the kernel is sometimes called the nullity of A, and the kernel is also called the null space. So for matrices we have seen the rank-nullity theorem, and along the way we have also seen why the rank of A must equal the rank of A transpose, that is, why the column rank and the row rank are always the same. Putting everything together, this is the result we had earlier stated in the language of degrees of freedom: the nullity is the number of degrees of freedom, and the rank is the effective number of constraints, the actual number of original constraints, not superfluous ones you cook up by combining the original ones.
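The rank-nullity statement is easy to sanity-check on the same made-up matrix; a minimal sketch:

```python
from sympy import Matrix

A = Matrix([[0, 1, 0, 2, 1],
            [0, 0, 1, 3, 2],
            [0, 1, 1, 5, 3]])

n = A.cols                      # number of variables
r = A.rank()                    # number of pivot variables
nullity = len(A.nullspace())    # dimension of the kernel = number of free variables
assert r + nullity == n         # rank-nullity: 2 + 3 == 5
assert A.rank() == A.T.rank()   # row rank equals column rank
```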
So, having seen this, and since we discussed the direct sum at the beginning of today's lecture, we will now see something interesting, not over general fields but over the real field. Here is the result: suppose A is a matrix of size m x n over the real field; then R^n is equal to the kernel of A direct sum with the row span of A.

First of all, let us look at what sort of object can belong to the kernel as well as to the row span. Try to prove this yourself. If v belongs to ker A, then Av = 0. If v belongs to the row span of A, then v^T = p^T A for some p in R^m, just from the definition; no doubts about this, I hope. Now take something from the intersection: if v belongs to ker A intersected with the row span of A, what is the consequence? Look at v^T v. Plugging in v^T = p^T A, this is p^T A v. But we know Av = 0, so v^T v must be 0. And what do we know about v^T v when v is an n-tuple of real numbers? It is the sum of the vi squared. Over the real field, if a sum of squares equals 0 then each term must individually be 0, which means vi = 0 for all i. So any v in the intersection of the kernel and the row span must be nothing but the 0 vector.

Why is this interesting, and why did I choose the real numbers? Consider the field Z2 and the vector (1, 1). With the modulo 2 addition and multiplication, what is v^T v over the binary field? It is 0, but v is certainly not 0. You see the point: do not take for granted that anything belonging to both the kernel and the row span can only be the 0 vector; that argument works over the real field. Keeping the field in mind is important, because over the binary field you cannot infer such a thing. Just a caveat, and that is the reason I specifically chose the real field here.
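The caveat about the field is easy to see concretely; a small sketch in plain arithmetic, no matrices needed:

```python
# Over the reals, v . v is a sum of squares, so it vanishes only when v itself is 0.
v = [3.0, -1.0, 2.0]
print(sum(x * x for x in v))        # 14.0: non-zero because v is non-zero

# Over the binary field Z2, take v = (1, 1) and do the same arithmetic modulo 2.
w = [1, 1]
print(sum(x * x for x in w) % 2)    # 0, even though w is certainly not the zero vector
```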
Anyway, let us get past this apparent digression and carry on. What we have shown definitely means that these two subspaces have nothing but the 0 vector in common, which means that their sum is actually a direct sum: ker A + row span of A is the same as ker A direct sum row span of A. This is just by definition: we have seen that if two subspaces have nothing but 0 in common, then any object in their sum is uniquely representable as a sum of one vector from each.

So far so good. What else do we know? Be careful: this tells you the sum is a direct sum, but the fact that this direct sum must also equal R^n is not obvious yet, so the proof still remains to be done. What is the dimension of this sum? It is the dimension of ker A plus the dimension of the row span of A minus the dimension of their intersection. The first two add up to n by the rank-nullity theorem, which I am just invoking. And the dimension of the intersection is 0. How do you argue that the subspace containing only the 0 vector has dimension 0? There are multiple ways; one straightforward way is to ask what a maximal linearly independent set in it looks like. In a subspace that contains only the 0 vector you cannot cook up any linearly independent set at all, so a basis is the empty set, its cardinality is 0, and therefore the dimension of the zero subspace is 0, just from fundamental principles, from the definition. You can also think about a generating set; that is also going to be the empty set. So the dimension of the sum is n - 0 = n.

But here is the deal: this sum is a subspace sitting inside what? Sums of subspaces are themselves subspaces, and ker A + row span of A is contained inside R^n. So I have the peculiar situation of a subspace whose dimension is exactly equal to that of the vector space inside which it is lying. Can it be anything other than the whole vector space? Do I still need to show the other inclusion? Showing both inclusions is another way of proving it, but now that you are armed with the idea of dimension, a one-sided inclusion plus matching dimensions is enough. I could put this as a quiz or assignment question: prove that if one subspace is contained inside another and both have the same dimension, then they are the same subspace, for finite dimensional vector spaces of course. It is probably too easy for a quiz. Argue by contradiction: suppose the containment were strict, so that there is a vector in the bigger space that cannot be spanned by the smaller one. But a basis of the smaller subspace is a linearly independent set with exactly as many members as the dimension, which here equals the dimension of the bigger space, so it is also a basis for the bigger space. So you would be saying that a basis for R^n fails to span, fails to generate, some object in R^n, which is obviously absurd. Makes sense, right? It is just a bit of an argument; please practice writing it down and convince yourself. I am not writing it down here, but I am going to use it: the fact that this sum sits inside R^n and has dimension exactly n guarantees that it must be equal to R^n.
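On the same hypothetical matrix you can watch the dimension count come out right: a kernel basis and a row-span basis, put side by side, give n linearly independent vectors, so together they span R^n.

```python
from sympy import Matrix

A = Matrix([[0, 1, 0, 2, 1],
            [0, 0, 1, 3, 2],
            [0, 1, 1, 5, 3]])

ker_basis = A.nullspace()                     # 3 vectors spanning the kernel
row_basis = [r.T for r in A.rowspace()]       # 2 vectors spanning the row span, as columns
B = Matrix.hstack(*(ker_basis + row_basis))   # 5 x 5 matrix whose columns are all 5 vectors
assert B.rank() == A.cols                     # rank 5: together they form a basis of R^5
```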
So over the real field we have a very interesting way of looking at a matrix. What does a matrix do? A matrix splits up R^n in a beautiful way: given the matrix, there is a unique way of writing any vector in R^n as a sum of two objects, one of which comes from the kernel, the null space, of the matrix, and the other from the row span of that matrix. You can now capture the action of a matrix on a vector in R^n in this manner; that is another new way of viewing the action of a matrix. So we will end this module here. What I am saying is that because the sum is a direct sum, and because by this proposition the direct sum equals R^n, you can throw any matrix at me, and immediately I can say: you are giving me a matrix that splits R^n into two parts, such that every vector in R^n can be written as a combination of two pieces, one coming from the kernel of the matrix you have thrown at me and the other from its row span, and that representation is unique; that splitting is determined by the matrix you have thrown at me.
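Here is what that splitting looks like computationally, still on the same made-up matrix. This is a sketch: over the reals, pinv(A) * A happens to project onto the row span, and whatever is left over lands in the kernel, which is exactly the unique decomposition the direct sum guarantees.

```python
from sympy import Matrix

A = Matrix([[0, 1, 0, 2, 1],
            [0, 0, 1, 3, 2],
            [0, 1, 1, 5, 3]])

v = Matrix([1, 2, 3, 4, 5])     # any vector in R^5
v_row = A.pinv() * A * v        # the piece coming from the row span of A
v_ker = v - v_row               # the piece coming from the kernel of A

assert A * v_ker == Matrix.zeros(3, 1)               # the kernel piece is killed by A
assert Matrix.hstack(A.T, v_row).rank() == A.rank()  # the other piece lies in the row span
```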