So, in the previous lecture we started to introduce the concept of linear transformations between vector spaces. You have a vector space V and a mapping phi that takes an element of V to an element of another vector space W. If this mapping is linear, we call it a linear transformation, and that is what we are going to study. Now, recall how this object came into being, at least in our discussions: the motivation came from matrices. When we tried to capture the abstract notion of what matrices encapsulate, we arrived at this object, the linear transformation. For matrices we have already proved a very important result, or at least given you a fairly good idea of why it is true, namely the rank-nullity theorem. We should endeavor to do the same for linear transformations in general when they act on finite-dimensional vector spaces. But before that, we introduced the concept of one-to-one and onto mappings, the injections and the surjections. So let us try to understand what injective maps entail and what surjective maps entail: what is an equivalent condition, in the language we have defined so far, to check whether a given linear transformation is injective? The first claim we are going to make in this lecture is this: phi, a mapping from a vector space V to W (V finite-dimensional, although the result does not actually need that assumption; we state it this way only because we will mostly deal with finite-dimensional spaces), is injective. This is equivalent to the condition that the kernel of phi contains just the zero vector 0_V of the vector space V and nothing else. So, this is the first claim.
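For concreteness, here is a small sketch of what such a map looks like computationally. The particular map phi(x, y) = (x + y, x - y, 2x) from F^2 to F^3 is my own illustrative choice, not one from the lecture:

```python
# A sample linear map phi: F^2 -> F^3, chosen arbitrarily for illustration.
# phi(x, y) = (x + y, x - y, 2x)
def phi(v):
    x, y = v
    return (x + y, x - y, 2 * x)

def add(u, w):
    return tuple(a + b for a, b in zip(u, w))

def scale(alpha, u):
    return tuple(alpha * a for a in u)

# The basic linearity check: phi(alpha*v1 + v2) == alpha*phi(v1) + phi(v2)
v1, v2, alpha = (1, 2), (3, -1), 5
lhs = phi(add(scale(alpha, v1), v2))
rhs = add(scale(alpha, phi(v1)), phi(v2))
print(lhs == rhs)  # True, as it must be for a linear map
```

A few sample vectors do not prove linearity, of course; they only fail to falsify it. The proof that this particular phi is linear is the two-line algebraic check done symbolically.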
So, if you are given a linear transformation between two vector spaces and you want to figure out whether it is one-to-one, then you should investigate what the kernel of that linear transformation contains. If it contains just the zero vector, you are done: that establishes that the mapping is injective. Of course, you first have to check that it is linear; that goes without saying. So the first check is: take objects v1 and v2 from the vector space V and test whether phi acting on (alpha v1 + v2) equals alpha phi(v1) + phi(v2). That is the basic linearity check. Once you have assured yourself that it is a linear transformation, the kernel test is what you need. So, how do we establish that this claim is indeed true? Let us do a quick check, relying solely on the definitions, the way we have defined injective or one-to-one maps. Suppose the kernel of phi is just {0_V}, and suppose that despite this, the map fails to be one-to-one. What does that mean? There exist v1 not equal to v2, both living inside the vector space V, and yet phi(v1) = phi(v2). If this were true, it would violate the very definition of the mapping being one-to-one, so we have to show it is absurd; that is what we will do now. If phi(v1) = phi(v2), what can we say? We can say that phi(v1) - phi(v2) = 0_W. Note that V and W are not necessarily the same vector space, so you have to be sure about which vector space an object belongs to. Once you have mapped an object through phi, it no longer belongs to V; it belongs to W, and therefore this is the zero vector of W. Later on, when things are obvious from the context, we will not carry on this habit; we will just omit the subscript.
But for now, since we are just getting the hang of this topic, I am retaining the subscript. Anyway, what does this mean? What do we know about phi? At least that it is linear. So can I not write phi(v1) - phi(v2) as phi(v1) + (-1) phi(v2)? Remember, the field is still the same, and -1 is just a scalar, so because of linearity I can push it inside. There will be two steps of linearity invoked. First: phi(v1) + phi(-v2), because (-1) times v2 is -v2; we have already seen that this holds for objects in vector spaces. So phi(v1) + phi(-v2) = 0_W. What is the next step? Again linearity. Please fill in the reasons; I am just saying them in words, but you should make it a habit to write down the reason in brackets at every step: why is this true? So in this step we again invoke linearity: because phi is linear, phi(a) + phi(b) = phi(a + b), which gives phi(v1 - v2) = 0_W. But what does that tell us? Remember, the one thing we have assumed is that the kernel is just {0_V}. Now we see that the object v1 - v2 is taken to the zero of W, so it must belong to the kernel of phi. This in turn implies that v1 - v2 = 0_V, which means v1 = v2, which is a contradiction. So one side of the proof is definitely established: if the kernel is nothing but the zero vector, then the map is definitely one-to-one. But we have claimed more; we have claimed a both-ways implication, so we should investigate the other side too. So now let us assume the other direction: suppose phi, a mapping from V to W, is one-to-one. For any linear map, what can I say about phi(0_V)?
It has to be the zero of W, right? Because you know that any object, when multiplied by the scalar 0, is taken to 0. Take any vector v in the vector space V, multiply it by the scalar 0; that scalar can be pulled outside because of linearity, so phi(0_V) = phi(0 v) = 0 phi(v) = 0_W. This is true of any linear map; it has nothing to do with being one-to-one. Now suppose some v, not equal to 0_V but of course belonging to V, also belongs to ker(phi). Is that possible? We have already found one object, namely 0_V, that is taken to 0_W. Because of the injective nature of phi, can there be any object other than this which is also taken to 0_W? It cannot, because that would violate the one-to-one condition. Due to injectivity, v must equal 0_V, which is again a contradiction. So it is indeed an if-and-only-if condition; it goes both ways. This is why it is useful: how many objects would you otherwise have to check? And as I said earlier, the finite-dimensional assumption never really entered the picture. I wrote V as finite-dimensional only because we will mostly deal with such spaces, but the result is true in general; nowhere have we invoked any restriction on the dimensionality of V, so you can get rid of that assumption if you do not like it. So this is an equivalent condition for a one-to-one map: the kernel must be nothing but the trivial subspace. Next claim, again about this phi mapping V to W: phi is surjective. This holds if and only if... well, this is quite straightforward; there is really nothing to prove here, it is just the definition. Can you guess what the condition is? The image of phi should be W, right? That is the very definition. What is surjectivity?
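The chain of implications just argued, with the reasons written in brackets as the lecture suggests you do, can be set out compactly like this:

```latex
\begin{align*}
(\ker\varphi = \{0_V\} \Rightarrow \text{injective:})\quad
  \varphi(v_1) = \varphi(v_2)
  &\Rightarrow \varphi(v_1) - \varphi(v_2) = 0_W \\
  &\Rightarrow \varphi(v_1) + \varphi(-v_2) = 0_W
     && [\text{linearity: } (-1)\varphi(v_2) = \varphi(-v_2)] \\
  &\Rightarrow \varphi(v_1 - v_2) = 0_W
     && [\text{linearity: } \varphi(a) + \varphi(b) = \varphi(a+b)] \\
  &\Rightarrow v_1 - v_2 \in \ker\varphi = \{0_V\}
   \Rightarrow v_1 = v_2. \\[1ex]
(\text{injective} \Rightarrow \ker\varphi = \{0_V\}:)\quad
  \varphi(0_V) &= \varphi(0 \cdot v) = 0 \cdot \varphi(v) = 0_W, \\
  v \in \ker\varphi
  &\Rightarrow \varphi(v) = 0_W = \varphi(0_V)
   \Rightarrow v = 0_V
     && [\text{injectivity}].
\end{align*}
```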
What is this onto nature? If you pick out anything in W, it must have a pre-image in V; that is, it must arise as the mapping of some object in V through phi. So every w that you pick out is related to some v through that linear mapping, and therefore it definitely belongs to the image. And by default, of course, the image of phi is contained inside W whether or not phi is surjective: you are mapping from V into W, so whatever is in the image of phi is certainly contained in W. The other way round, I have to show that W is also contained in the image of phi, but that follows from the very definition of an onto mapping: anything I pick out from W has a pre-image, and if I pass that pre-image through phi I get the object back, so it is contained within the image of phi. There is both-ways containment, and therefore the equality im(phi) = W holds. So I am not really going into the proof of this; if you want to satisfy yourself, you can just write a couple of lines. There are two steps: one is the obvious observation, from the definition, that im(phi) is contained in W, and the second, from the definition of surjection, is that W is contained in im(phi). So, in other words: if you want to check whether a map is one-to-one, try to figure out whether ker(phi) = {0_V}; if you want to check whether a map is onto, check whether im(phi) = W. Now we will see something very interesting, particularly for finite-dimensional vector spaces. But before that, we want something more sophisticated, and this time I am not going to give you a hand-waving or sketchy proof, but rather a concrete proof of the rank-nullity theorem, and we will see that it is true not just of matrices but of this general class of objects we have now defined as linear transformations.
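For a matrix map phi(x) = A x from F^n to F^m, both checks reduce to a rank computation: the kernel is trivial exactly when rank(A) = n (independent columns), and the image is all of F^m exactly when rank(A) = m. A minimal sketch over the rationals; the Gaussian-elimination helper below is my own, not something from the lecture:

```python
from fractions import Fraction

def rank(A):
    """Rank of a matrix (list of rows) via Gaussian elimination over exact rationals."""
    M = [[Fraction(x) for x in row] for row in A]
    r = 0  # number of pivots found so far
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def is_injective(A):   # trivial kernel  <=>  rank = number of columns
    return rank(A) == len(A[0])

def is_surjective(A):  # image is all of F^m  <=>  rank = number of rows
    return rank(A) == len(A)

# 3x2 example: independent columns, so injective; cannot cover F^3, so not surjective
A = [[1, 1], [1, -1], [2, 0]]
print(is_injective(A), is_surjective(A))  # True False
```

Note that a map from F^n to F^m with n < m can never be surjective, and with n > m can never be injective, which the rank conditions make immediate.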
Matrices just happen to be one such class of linear transformations, right? Any questions? Is this clear? Okay. So, here goes the rank-nullity theorem. Let phi map V to W, where V is a finite-dimensional vector space. Then: dimension of V equals dimension of the kernel of phi plus dimension of the image of phi. Quick remark: just go ahead and think about matrices. What do you substitute? Phi becomes an m cross n matrix, V is F^n, the n-tuples, and W is F^m, the m-tuples. So dim(V) is n. The dimension of the kernel is the dimension of the null space of the matrix, which we have just seen; that is the nullity of the matrix. And the dimension of the image is the rank: it is the column rank, but we have seen that the column rank and the row rank must be the same, so it is indeed what we call the rank of the matrix. And n is the dimension of the space from which you pick out objects and pass them through the matrix; you are allowing the matrix to operate on objects in F^n, which is exactly what phi does to objects in V. So we are now going to try and prove this. One of the most common tricks in such proofs, not just this one, is this: start with the smallest possible subspace you can think of, consider its basis, and then, because of the beautiful result that a linearly independent set can always be extended to a basis, expand that basis until it becomes a basis for the larger vector space within which the initial subspace was contained. So here we do the same thing. Consider B1 = {v1, v2, ..., vk} to be a basis for... which subspace do you think we should start with, the image or the kernel? See, I have given you a hint already: I have picked the vectors out of V.
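Before following the abstract proof, the matrix version of the count can be checked on a concrete example. The 2x3 matrix below is an arbitrary illustrative choice, and its null-space basis is read off by hand from the single independent equation x + 2y + 3z = 0:

```python
from fractions import Fraction

def matvec(A, v):
    return [sum(Fraction(a) * x for a, x in zip(row, v)) for row in A]

def rank(A):
    # Gaussian elimination over exact rationals
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# A maps F^3 -> F^2; the second row is twice the first, so the rank is 1
A = [[1, 2, 3], [2, 4, 6]]
n = len(A[0])

# Null-space basis read off from x + 2y + 3z = 0, taking y and z as free variables
null_basis = [(-2, 1, 0), (-3, 0, 1)]
assert all(matvec(A, v) == [0, 0] for v in null_basis)  # both really lie in ker

nullity = len(null_basis)
print(n, rank(A) + nullity)  # 3 3 -- dim V = rank + nullity
```

The two null-space vectors are independent by construction (look at their last two coordinates), so the nullity really is 2, and 1 + 2 = 3 as the theorem demands.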
These are objects from V, so which of these two is a subspace of V? The kernel, right? So immediately I have told you: B1 is a linearly independent set and the dimension of the kernel is k. Begin to see the similarity with the matrix picture as we proceed. And we extend B1 to a basis for what? For V. So B1 hat = {v1, v2, ..., vk, v-hat_{k+1}, ..., v-hat_n}, where n is the dimension of V; this is the required basis. So we have the numbers, you see, by our very definition. Just think of this like some problem you are given; do not think of it as some major theorem in linear algebra, just treat it as a problem. We already have the number k here, and we have the number n here. What we need to show is that the number n - k corresponds to the dimension of the image of phi, and we shall be done. So what pops out at you from this board as a potential candidate for a basis of im(phi)? First of all, how many objects need to be in it? n - k. So what sort of objects make a strong claim as good candidates for a basis of im(phi)? The vectors v-hat_{k+1}, ..., v-hat_n themselves are objects in V, but im(phi) lives where? In W. So in and of themselves these objects certainly cannot be elements of a basis of im(phi). What kind of objects could be? Objects of the form phi of something, right? So it immediately suggests that a good guess would be to map these last n - k objects through phi and try to show that the result is a candidate basis; maybe I will have some luck. So you are convinced that once I have shown that, there is nothing really left to prove in the rank-nullity theorem. I am going to bring the theorem down to another claim.
So the claim is this: B, defined by B = {phi(v-hat_{k+1}), phi(v-hat_{k+2}), ..., phi(v-hat_n)}, is a basis for im(phi). If I am able to prove this claim, I will have shown you the proof of the rank-nullity theorem. To show that it is true I need to show two things: one, that this is a generating set for im(phi), and two, that it is linearly independent. So consider w belonging to im(phi); of course im(phi) sits inside the vector space W, as a subspace of W. This means there exists v such that phi(v) = w. And this v can be written as what, exactly? As a linear combination of the basis, so w = phi( sum_{i=1..k} alpha_i v_i + sum_{i=k+1..n} alpha_i v-hat_i ). Any object in the vector space V is a linear combination of the objects in its basis, and the basis for V has been shown to be that set B1 hat as I have defined it there. So this is true. Now, if I use the linearity of phi, what happens to the first k terms? Those v_i are all objects in the kernel, so they vanish, do they not? So this implies w = sum_{i=k+1..n} alpha_i phi(v-hat_i), which is contained in the span of the set I was looking for; let us just use the notation we have introduced. So B is a generating set, no doubt about that. And now we need to show linear independence. So consider sum_{i=k+1..n} beta_i phi(v-hat_i) = 0. What I need to show is that all these beta_i must identically vanish; that is the only way this is possible. If this set is to be linearly independent, then a linear combination of it can be zero only if it is the trivial linear combination. So what does that mean?
That means, by the linearity of the map, phi( sum_{i=k+1..n} beta_i v-hat_i ) = 0. See, I have already dropped the subscript here; this is of course the zero of W, and if need be you can just put that W back in. But what does this mean? It means that sum_{i=k+1..n} beta_i v-hat_i belongs to ker(phi). So what can we say about this object? Let me box it in color: this object belongs to the kernel. But look here: if this object belongs to the kernel, then it can be expressed as a linear combination of the first k fellows, can it not? Because the first k vectors in that set B1 hat, the set B1, constitute exactly a basis for the kernel. So if this object belongs to the kernel, it can be written as sum_{i=1..k} gamma_i v_i. Bringing that to the other side, sum_{i=k+1..n} beta_i v-hat_i minus sum_{i=1..k} gamma_i v_i must vanish; of course this is the zero of V. But what does that mean? This is a linear combination of a linearly independent set of vectors, the basis B1 hat, and it vanishes, so each of the coefficients, the gamma_i and the beta_i, must vanish. I do not really care about the gamma_i, but what matters is that the beta_i must identically vanish. So let me write that down: beta_i = 0 for all i = k+1, k+2, ..., n. Which means that not only is the set B a generating set for im(phi), it is also linearly independent, which essentially means it is a legitimate basis for im(phi). And if it is a legitimate basis for im(phi), then the dimension of im(phi) is n - k.
So dim(im(phi)) is n - k, dim(ker(phi)) is k by our choice, and the sum thereof is n, which is dim(V), again by our choice and our definition. So this is the rank-nullity theorem and the proof of the rank-nullity theorem. (In response to a question:) Yes, this object belongs to the kernel because when phi acts on it, it is taken to 0; that is the classic definition of the kernel. Any object which, when acted upon by the map, is taken to 0 definitely belongs to the kernel of that map. So by the definition of the kernel, this object must belong to the kernel, and if it belongs to the kernel, it must be representable in terms of the basis for the kernel. Therefore this object must equal that linear combination; I brought it to the other side so that the difference of the two becomes a single linear combination. If you do not like this notation, you can just substitute gamma_i with minus gamma_i and call it gamma-hat_i; it is the same thing. The point is that all these beta_i and gamma_i must vanish. The gamma_i do not matter for my proof; the beta_i must vanish, and if the beta_i vanish, the set is linearly independent. A linearly independent generating set has got the makings of a basis. So that is the proof of the rank-nullity theorem, okay.
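The construction in the proof can be traced on a concrete matrix map. Below, phi: F^3 -> F^2 is given by an arbitrary 2x3 matrix of my choosing; the kernel basis and its extension are written out by hand, and the images of the added vectors are checked to be independent, mirroring the three steps of the proof:

```python
def matvec(A, v):
    return tuple(sum(a * x for a, x in zip(row, v)) for row in A)

def det2(u, w):
    # 2x2 determinant of the matrix with columns u and w
    return u[0] * w[1] - u[1] * w[0]

# phi: F^3 -> F^2 given by an illustrative 2x3 matrix
A = [[1, 0, 1], [0, 1, 1]]

# Step 1: a basis B1 for ker(phi), found by hand from x + z = 0, y + z = 0
v1 = (1, 1, -1)
assert matvec(A, v1) == (0, 0)   # v1 really lies in the kernel
k = 1

# Step 2: extend B1 to a basis B1_hat of F^3
# (the three vectors v1, v_hat2, v_hat3 are checked independent by hand)
v_hat2, v_hat3 = (1, 0, 0), (0, 1, 0)

# Step 3: the images of the added vectors should form a basis of im(phi)
w2, w3 = matvec(A, v_hat2), matvec(A, v_hat3)
assert det2(w2, w3) != 0         # phi(v_hat2), phi(v_hat3) are independent

n, dim_image = 3, 2
print(dim_image == n - k)        # True: the rank-nullity count checks out
```

Here k = 1, the two images phi(v_hat2) = (1, 0) and phi(v_hat3) = (0, 1) span F^2, and 3 = 1 + 2, exactly the bookkeeping the proof formalizes.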