In this video, I want to talk about how to determine whether a linear transformation is one-to-one. This is not the first time we've approached this problem in this lecture series, but it turns out that using the matrix representation, the problem becomes fundamentally easier. So first, recall what it means for a linear transformation to be one-to-one. We say that a linear transformation T is one-to-one if whenever T(x) = T(y), it must be that x = y. In a nutshell, one-to-one means the only time the outputs are the same is when the inputs were the same: different vectors must go to different locations under the map. Now, we've seen that for linear transformations, being one-to-one is equivalent to the kernel being trivial. Remember, the kernel is the set of all vectors x such that T(x) = 0; it is all of the vectors which map to the zero vector. The zero vector always goes to the zero vector because the map is linear, so if T is one-to-one, nothing else can map to zero. A trivial kernel is therefore a necessary condition for a one-to-one map. But because linear transformations preserve vector addition, it is also sufficient: if T(x) = T(y), then T(x − y) = 0, so x − y is in the kernel, and a trivial kernel forces x − y = 0, that is, x = y. So to show a transformation is one-to-one, we usually show that its kernel is trivial. But how do you find the kernel of a transformation? It turns out that when you look at the matrix representation of the linear transformation, call it A, the kernel of T coincides with the null space of A.
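The kernel-is-the-null-space criterion can be checked mechanically. Here is a minimal sketch using SymPy (the library choice and the two sample matrices are mine, not part of the lecture): a matrix with independent columns has an empty null-space basis, while one with dependent columns does not.

```python
from sympy import Matrix

# Kernel of T(x) = A x is the null space of A.
# T is one-to-one exactly when that null space is trivial (empty basis).
A = Matrix([[1, 0],
            [0, 1],
            [1, 1]])   # made-up matrix with independent columns

print(A.nullspace())   # [] -- trivial kernel, so this T is one-to-one

B = Matrix([[1, 2],
            [2, 4]])   # made-up matrix with dependent columns
print(B.nullspace())   # [Matrix([[-2], [1]])] -- nontrivial kernel
```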
The kernel of T is the null space of A, its matrix representation. Why? What is the null space, after all? The null space is the set of all vectors x such that Ax = 0, and the matrix transformation has the property that T(x) is just A times x. So the kernel, the set of all vectors which go to zero, is exactly the null space. The thing to remember here is that the kernel of a transformation equals the null space of its matrix representation. And we have practice computing null spaces, so if we can compute a null space, we can compute a kernel.

Consider the following linear transformation: T(x, y) = (x + y, 0, 2x + 3y), so x + y for the first coordinate, zero for the second coordinate, and 2x + 3y for the third coordinate. When it comes to computing the matrix representation, it's a little easier if you write it as a column vector, (x + y, 0, 2x + 3y), and line up the variables as if they were columns of the matrix. Then you can read off the column vectors: the first column is (1, 0, 2) and the second column is (1, 0, 3). So we can very quickly see the matrix representation of the transformation. Let's now row reduce it. I want a pivot in the (1, 1) position, and the row of zeros has to go to the bottom, as echelon form requires, so swap the second and third rows. We still have our pivot in the (1, 1) spot, and I want to get rid of the 2 below it, so take row two minus two times row one: (2, 3) − 2(1, 1) = (0, 1). That row reduces the matrix to the following.
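The row reduction of this first example can be reproduced directly; a quick sketch in SymPy (the variable names are mine):

```python
from sympy import Matrix

# Matrix representation of T(x, y) = (x + y, 0, 2x + 3y):
# each column is the image of a standard basis vector.
A = Matrix([[1, 1],
            [0, 0],
            [2, 3]])

R, pivots = A.rref()        # reduced row echelon form and pivot columns
null_basis = A.nullspace()  # basis of the null space of A

print(R)           # Matrix([[1, 0], [0, 1], [0, 0]])
print(pivots)      # (0, 1) -- a pivot in every column
print(null_basis)  # []     -- trivial null space, so T is one-to-one
```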
We then look at our next pivot position. Admittedly, we already have enough information to answer our question, but just for the sake of example, let's put this in reduced row echelon form: take row one minus row two to clear the 1 above the second pivot, giving (1, 0) in the first row. And now we have our reduced row echelon form. So how do we determine the kernel from this? Notice that there is a pivot in each of the columns. This tells us that the columns of the matrix A are linearly independent. And what does that tell us about the null space? If the column vectors of A are independent, then the null space of A is trivial: the only thing in it is the zero vector, because there are no free variables from which to build non-trivial solutions to the homogeneous system. But the null space of A is the same thing as the kernel of T. So the kernel of T is trivial, and therefore T is in fact one-to-one. We were able to settle the one-to-one question by looking at the matrix representation of the transformation: put it into coordinates and solve it in coordinates.

Let's look at another example. This time take the linear transformation S from R3 to R2 given by the rule S(x, y, z) = (x + y − 2z, −y + z): x + y − 2z for the first coordinate and −y + z for the second coordinate. Write it as a column vector with these two entries, keeping the variables in a consistent order. In principle we compute S(e1), S(e2), and S(e3) to find the matrix representation, but we can read those off from the columns, right?
Looking at the x variable, you get the coefficients 1 and 0; for y, you get 1 and −1; and for z, you get −2 and 1. If you write the formula for the linear transformation as a column vector and put the variables in order, it's basically the same process as translating a linear system into an augmented matrix, and you can read off the matrix representation right there. All right, now let's row reduce this thing to see whether S is one-to-one. Our first pivot position is already a 1 with a zero below it, so we move on to the next entry. The next pivot is in the (2, 2) spot; I want it to be a 1, so multiply row two by −1. That switches the signs, giving (0, 1, −1). Now I want to get rid of the 1 above it, so take row one and subtract row two from it: the 1 cancels, and −2 − (−1) = −1. That gives our reduced row echelon form: the first row is (1, 0, −1) and the second row is (0, 1, −1). Okay, so what does this tell us about whether the transformation is one-to-one? Let's analyze it. Notice that there is no pivot in the third column. This indicates that the homogeneous system of equations has a free variable. How do we interpret that? The columns of our matrix representation A are linearly dependent, and this tells us that the null space of A is non-trivial. It's not the zero space; there's something in it other than zero. We've seen how this works before, so we can construct a general vector of the null space.
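The reduction of this second matrix can be checked the same way; a minimal SymPy sketch (again, the library choice is mine):

```python
from sympy import Matrix

# Matrix representation of S(x, y, z) = (x + y - 2z, -y + z).
A = Matrix([[1, 1, -2],
            [0, -1, 1]])

R, pivots = A.rref()
print(R)       # Matrix([[1, 0, -1], [0, 1, -1]])
print(pivots)  # (0, 1) -- no pivot in the third column, so z is free
```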
Put a 1 in the free-variable position, and then, reading the entries of the non-pivot column and switching their signs, we get 1 for each of the pivot variables as well. So I claim the vector (1, 1, 1) is in the null space of A. Let's try it out. Take the vector x = (1, 1, 1) and multiply it by the matrix A, which, recall, has rows (1, 1, −2) and (0, −1, 1). The first row times our vector gives 1 + 1 − 2, which is 0. The second row times our vector gives 0 − 1 + 1, likewise 0. So Ax is in fact the zero vector, even though our vector x was not the zero vector to begin with. Indeed, the null space of A is the span of this vector (1, 1, 1). But the thing to remember is that this null space equals the kernel of the transformation; the transformation in this example is called S, and the matrix A is its representation. Since the null space is non-trivial, the kernel of S is also non-trivial, and if the kernel is non-trivial, we conclude that S is not one-to-one. And in fact, we now have concrete evidence of why it's not one-to-one. Look: if I take S of the vector (1, 1, 1), what happens? Remember the formula for S: the first coordinate is x + y − 2z and the second coordinate is −y + z. Plugging in, you get 1 + 1 − 2(1) = 0 for the first coordinate and −1 + 1 = 0 for the second, so this adds up to (0, 0). Notice that this is the same computation as before; that's because the matrix representation does the same thing as the transformation.
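The claim that (1, 1, 1) spans the null space is easy to verify in code; a short SymPy check (my own sketch, not from the lecture):

```python
from sympy import Matrix

A = Matrix([[1, 1, -2],
            [0, -1, 1]])
x = Matrix([1, 1, 1])

print(A * x)          # Matrix([[0], [0]]) -- x is in the null space
print(A.nullspace())  # [Matrix([[1], [1], [1]])] -- span of (1, 1, 1)
```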
So we see that the zero vector (0, 0, 0) is not the same as the vector (1, 1, 1), yet S(0, 0, 0) = S(1, 1, 1). The images are the same even though the original vectors were not, so S is not one-to-one. And we're not just playing around with zero; we can do this for anything. Take any vector in the codomain, say (1, 2). Suppose we can find some vector y that solves S(y) = (1, 2). Then I claim that S(y + (1, 1, 1)) will also equal (1, 2). How do I know that? Because S is a linear transformation, S(y + (1, 1, 1)) = S(y) + S(1, 1, 1). Now S(y) = (1, 2), as we supposed, and S(1, 1, 1) is the zero vector, so the sum is again (1, 2). So we can always find a second input with the same output, and that is exactly what it means to fail to be one-to-one. And it's not just (1, 1, 1): we can take anything in the span of (1, 1, 1), any vector (c, c, c), and the same argument works. So in these examples we saw how to show when a linear transformation is one-to-one and when it is not, and the key is to use its matrix representation and compute its null space. Computing the null space, of course, is equivalent to solving the homogeneous system Ax = 0. When it has a non-trivial solution, the kernel of the transformation is non-trivial, so the transformation is not one-to-one. And a non-trivial solution exists exactly when there is a non-pivot column in the matrix A. Remember that the nullity is the dimension of the null space, which is also the number of non-pivot columns of A.
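The "shift any solution by a kernel vector" argument above can be demonstrated concretely. In this sketch (the specific input y0 is my own choice), adding any multiple of (1, 1, 1) to an input leaves the image of S unchanged:

```python
from sympy import Matrix

def S(v):
    # The formula for S from the example, applied coordinate-wise.
    x, y, z = v
    return Matrix([x + y - 2*z, -y + z])

y0 = Matrix([1, 0, 0])   # an arbitrary input (my choice)
k = Matrix([1, 1, 1])    # the kernel vector from the example

print(S(y0))         # Matrix([[1], [0]])
print(S(y0 + k))     # same output: two different inputs, same image
print(S(y0 + 5*k))   # still the same -- any multiple of (1, 1, 1) works
```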
So if A is an m-by-n matrix with m less than n, then A has more columns than rows, and A necessarily has non-pivot columns. If you have fewer rows than columns, as in the previous example, then the nullity has to be greater than or equal to one, so the null space of A is non-trivial, and the corresponding linear transformation cannot be one-to-one. From the very beginning, notice, we had this map S from R3 to R2, which tells us that the matrix associated to S is 2-by-3. Because the dimension of the domain is bigger than the dimension of the codomain, this function cannot be one-to-one. We actually knew that without doing any calculation: if the domain is too big, you have too many vectors, and some of them have to collide, making the map not one-to-one.
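This rank/shape criterion fits in a one-line test; a minimal sketch in SymPy (the helper name `is_one_to_one` is my own):

```python
from sympy import Matrix

def is_one_to_one(A):
    # A linear map is one-to-one iff its matrix has a pivot in every
    # column, i.e. its rank equals the number of columns.
    return A.rank() == A.cols

# Any 2x3 matrix has rank at most 2 < 3 columns, so the map
# R^3 -> R^2 it represents cannot be one-to-one.
A = Matrix([[1, 1, -2],
            [0, -1, 1]])
print(is_one_to_one(A))  # False
```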