In the previous video we learned how to find a basis for the column space of a matrix. Now in this video I want to talk about finding a basis for the null space of the matrix. This one's a little bit more involved. Remember that the null space of a matrix A is the solution set of the homogeneous system Ax = 0. So we basically have to solve the homogeneous system in order to find the solution set. And it turns out that in the process of solving for the solution set, we can actually pull out a basis. We've been doing it already; we just didn't know it. Let's explain why it works. Now, the thing to remember is that if two matrices are row equivalent, it means you can transform one matrix into the other by some sequence of row operations: replacement, interchange, or scaling. And if you have a system of equations like Ax = 0, replacing the matrix with a row equivalent matrix doesn't change the solution set whatsoever. As a consequence, since row equivalence doesn't change the solution set of a linear system, and the null space is the solution set of the homogeneous system, if A and B are row equivalent, then the two matrices have the same null space: the null space of A equals the null space of B. This is particularly useful if B, for example, is the RREF of the original matrix A. So our strategy for finding a basis for the null space is the following. We're going to use the same technique we've used before to solve the homogeneous system, like we did, for example, in section 2.6. Then we're going to focus on the non-pivot columns of the matrix. These non-pivot columns correspond to the free variables, and the free variables are what produce the nontrivial solutions to the homogeneous system. So that's how we're going to proceed. I'm going to use an example to explain what's going on here.
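As a quick sanity check of that row-equivalence claim, here is a minimal SymPy sketch. The 2×3 matrix below is made up for illustration (it is not the lecture's example): applying a replacement row operation leaves the null space unchanged.

```python
from sympy import Matrix

# Made-up 2x3 matrix, purely for illustration.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])

# Apply a replacement row operation: R2 <- R2 - 2*R1.
B = A.copy()
B[1, :] = B[1, :] - 2 * B[0, :]

# Row-equivalent matrices have the same null space.
print(A.nullspace() == B.nullspace())  # True
```

The same check works for interchange and scaling operations, since each is reversible and so preserves the solution set of Ax = 0.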
Consider the following three by five matrix A. We're going to construct a basis for the null space of this thing. Now, if you're paying attention, this matrix might seem familiar: it turns out we played with this exact matrix in the previous video of this series. We showed that if you take this matrix A and perform some row operations, you get the following echelon form of the matrix. Now, that matrix is not yet in reduced row echelon form. Although you can get away with solving for the null space from echelon form, you probably want reduced row echelon form. So let's do one more step to get rid of the three right here: take row one minus three times row two, and we get the following matrix. This matrix right here is the RREF of A. Now, we are solving a homogeneous system, so be aware that I did augment with the zero column here. But the fact that you have a column of zeros means that when you do row operations, it never changes. When we found the basis for the column space, I didn't put this column of zeros on, and that was okay: we're doing the same row operations, and they don't affect the zero column, so I can slap it on without any big deal whatsoever. The reason I'm including it this time is to emphasize that this augmented matrix represents a system of equations. So when we take the RREF of this augmented matrix, the first row gives us x1 + 3x3 + 5x4 + x5 = 0. The second row gives us the equation x2 − x4 + x5 = 0. The third row is just the equation 0 = 0, which offers no information to the system whatsoever, so I omitted it here. Now, looking at these equations, we see that because of the pivots, x1 and x2 are going to be dependent variables.
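If you want to double-check the row reduction, here is a small SymPy sketch. The matrix below is the RREF as stated in the lecture (zero augmented column omitted, since row operations never change it); SymPy confirms it is already fully reduced, with pivots in the first two columns.

```python
from sympy import Matrix

# The RREF of A as derived in the lecture.
R = Matrix([[1, 0, 3, 5, 1],
            [0, 1, 0, -1, 1],
            [0, 0, 0, 0, 0]])

rref, pivots = R.rref()
print(pivots)      # (0, 1) -> pivots in the first two columns
print(rref == R)   # True   -> R is already in reduced row echelon form
```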
And so, like we saw before, the first and second columns of A give us a basis for the column space. But for the null space, we're going to focus on the non-pivot columns, because these are what give us the free variables. The size of the solution to a homogeneous system depends on how many free variables you have; that's why the nullity is going to be three here, counting the free variables. So if we solve the dependent variables in terms of the free variables, we get x1 = −3x3 − 5x4 − x5. Notice that the coefficients of x3, x4, and x5 changed their signs as they moved to the other side of the equation. We also see that x2 = x4 − x5. Again, the coefficients of x4 and x5 flipped from positive to negative, or vice versa, when we moved them to the other side of the equation. That's an important detail that will be helpful in a little bit, so just kind of notice it. So we've solved the system, and we have these free variables in play here. If we look at the general solution to this homogeneous system, it's the vector x, which has five components: x1, x2, x3, x4, and x5. Now, what we've learned here is the following: x3 is a free variable, so it can be whatever it wants, and the same goes for x4 and x5; they can be anything they want to be. On the other hand, x1 and x2 are dependent variables, so we have to use the assignments from above: x1 = −3x3 − 5x4 − x5 and x2 = x4 − x5. So this is the general solution. But this general solution I can decompose into smaller vectors. That is, I can kind of rip it apart into the three vectors corresponding to the three, the three free variables. That sounds like a Dr. Seuss thing right there.
Three, three free... I can't even say it: three free variables. I need to go read Fox in Socks to my kids tonight, I think, to practice for the next lecture. So take the first vector, associated to x3: look at the terms in the general solution that involve an x3, like so. We get the vector (−3x3, 0, x3, 0, 0); the second component didn't have any x3 in it, and neither did the last two. Next, we'll have a vector for x4; there are a few x4 terms inside the vector right here. That one looks like (−5x4, x4, 0, x4, 0). And lastly, we'll do a third vector for x5, grabbing every x5 term, like so. Make sure you grab all of those. This gives us (−x5, −x5, 0, 0, x5). So we separate our general solution into a sum of three vectors depending on the three free variables. Oh, see, I practiced that one. And now, for each of the free variable vectors, factor out the free variable. For the first one, you can take out an x3, and this leaves (−3, 0, 1, 0, 0), like so. For the next one, you take out the x4, and that leaves behind (−5, 1, 0, 1, 0). And for the last one, take out the x5, and that gives you (−1, −1, 0, 0, 1). For clarification, or maybe just simplification, let's call the first one u, the second one v, and the third one w. I think that's the alphabet. And what we now see is that x is just the combination x3 times u plus x4 times v plus x5 times w.
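SymPy's `nullspace()` carries out exactly this decomposition, producing one basis vector per free variable. A sketch, reusing the lecture's RREF (remember the null space of the RREF equals the null space of A):

```python
from sympy import Matrix

# The lecture's RREF of A.
R = Matrix([[1, 0, 3, 5, 1],
            [0, 1, 0, -1, 1],
            [0, 0, 0, 0, 0]])

# One basis vector per free variable (x3, x4, x5), obtained by
# setting that free variable to 1 and the other free variables to 0.
u, v, w = R.nullspace()
print(u.T)  # Matrix([[-3, 0, 1, 0, 0]])
print(v.T)  # Matrix([[-5, 1, 0, 1, 0]])
print(w.T)  # Matrix([[-1, -1, 0, 0, 1]])
```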
And since x3, x4, and x5 are free variables, we can choose them to be whatever we want, so x will be a linear combination of these three vectors right here. And since x was the general solution, this gives us that the null space of A is equal to the span of these three vectors u, v, and w. That agrees with what we said earlier: the nullity of the matrix is the number of free variables in the system. We counted three, and therefore we should be able to span the null space using three independent vectors. And we do, in fact, have our three vectors. But how do we know they're independent? Look at these three vectors, and I'm going to make the following kind of argument. If we look at just the third position right here, you'll notice that every other vector has a zero there. So if some combination of these is going to add up to the zero vector, the coefficient on u has to be zero, because no other vector contributes to the third component. There's no way this thing could add up to zero without that coefficient being zero. Not going to happen. And same thing for the other ones, right? If we focus on the fourth entry, notice that in the fourth component, v has a one while u and w have a zero. If the combination were to add up to the zero vector, that would force the coefficient on v to be zero. And again, same thing for the fifth component: the fifth component of w is a one, while for u and v it's zero. The only way you could get a zero in the last component is if the coefficient on w is zero. So these vectors are independent. Basically, if you put these together as a matrix and kind of ignore the first two rows, what's left is a matrix in reduced row echelon form, and its columns are going to be independent.
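The independence argument can be checked mechanically as well. A sketch: stack u, v, and w as the columns of a 5×3 matrix and confirm the rank is three; the bottom three rows are the 3×3 identity, which is exactly the "ones and zeros in the free slots" pattern from the argument above.

```python
from sympy import Matrix

u = Matrix([-3, 0, 1, 0, 0])
v = Matrix([-5, 1, 0, 1, 0])
w = Matrix([-1, -1, 0, 0, 1])

# Columns are independent exactly when the rank equals the number of columns.
B = Matrix.hstack(u, v, w)
print(B.rank())   # 3 -> u, v, w are linearly independent
print(B[2:, :])   # the bottom three rows form the 3x3 identity
```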
So we do, in fact, have an independent set of vectors which span the null space, and this right here gives us a basis for the null space. And that's really great. This is actually the technique we saw before, right? In the process of solving homogeneous systems, we were actually finding bases for the null space all along. But it turns out we can dramatically simplify this process. How are we going to do that? Well, I'm going to come back up to the original matrix right here. Now, one thing I should mention, in contrast with the column space: for the column space, the basis consists of vectors taken from the original columns of A. When it comes to the basis for the null space, these vectors do not coincide with columns or rows of A, and they do not coincide with rows or columns of the RREF either. They come about from solving the system of equations. But what I want to mention to you is that this process of finding the RREF, pulling out a general solution, and ripping it apart can be dramatically simplified in the following manner. When you have your RREF and you want to find the null space, what you need to do is identify the free variables, which we saw before, and create a vector for each of them. So there is a vector associated to x3. What you're going to do is put a little asterisk in those positions corresponding to the pivot positions. Then, for the free variables, you put a one or a zero depending on which free variable this vector belongs to: for x3, you put a one in the third spot and zeros in the fourth and fifth. Then the free variable x4 will produce another vector for the basis: you put a star where the dependent variables go, a one in the fourth position, and zeros in the other free spots. And lastly, there should be a vector associated to x5.
For that one, you put a one in the fifth position, zeros in the indices that correspond to the other free variables, and then asterisks, these little stars, in the one and two spots. The reason we put stars in the one and two spots is that x1 and x2 were the dependent variables, so those entries depend on which free variable we're looking at. So you start off with these ones and zeros. Okay. Now what you're going to do is fill in these stars. Let's look at the first row here. These stars get filled in by reading off the first pivot row. We know x1 is a dependent variable because there's a pivot position in the first column, and associated to that first pivot column there's a pivot row. We're going to read off the numbers in this pivot row. So if we look at the first row: in the three spot you have a three, so I'm going to record a negative three right here. In the fourth column we have a five, so I'm going to write a negative five right here. And in the fifth column we have a one, so I'm going to write a negative one right here. So we're writing down the numbers we see in this first row, excuse me, based on which variable each one is connected to. Now, you might wonder: why a negative three as opposed to a positive three? Remember when I mentioned earlier that when you move the 3x3 to the other side of the equation, it becomes negative? That is what we're incorporating right here. As you go from the matrix to the vector, it's like you're moving the variable to the other side of the equation, so it switches its sign. How about x2? How about x2 here? You look at the second pivot row; you can ignore the pivot columns. In the three spot you get a zero, so we get negative zero, which is still zero. In the fourth spot we have a negative one, so I'm going to record a one.
And then in the fifth spot we have a one, so I'm going to record a negative one. And there, this right here is our basis for the null space of the matrix. And hopefully these vectors look a little bit familiar, right? So the first one was negative three, zero, one, zero, zero. I wonder if I can get them all on one slide here; I'll have to zoom out a little bit. You can then compare, right? Negative three, zero, one, zero, zero: that was the same thing. The next one was negative five, one, zero, one, zero. Same thing. And the last one's negative one, negative one, zero, zero, one. Same thing. So we can actually read off the basis of the null space directly from the RREF, and we can kind of skip all of this middle stuff with systems of equations and general solutions if we want to. Now, if the longer, more drawn-out process makes sense to you, that's fine. But I did want to show you this nice little shortcut. In the next video, I'll do another example of the shortcut method for finding the null space of a matrix.
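The whole shortcut fits in a few lines of code. Here's a sketch (the function name is my own, not from the lecture): for each free column, put a 1 in its own slot, 0 in the other free slots, and the negated pivot-row entries in the dependent slots.

```python
from sympy import Matrix, zeros

def nullspace_from_rref(R):
    """Read a null-space basis straight off an RREF matrix (the shortcut)."""
    rref, pivots = R.rref()  # no-op if R is already fully reduced
    free = [j for j in range(rref.cols) if j not in pivots]
    basis = []
    for f in free:
        vec = zeros(rref.cols, 1)
        vec[f] = 1                      # 1 in this free variable's own slot
        for row, p in enumerate(pivots):
            vec[p] = -rref[row, f]      # negated entry from the pivot row
        basis.append(vec)
    return basis

# The lecture's RREF of A.
R = Matrix([[1, 0, 3, 5, 1],
            [0, 1, 0, -1, 1],
            [0, 0, 0, 0, 0]])
for vec in nullspace_from_rref(R):
    print(vec.T)
```

Run on the lecture's RREF, this reproduces the three basis vectors (−3, 0, 1, 0, 0), (−5, 1, 0, 1, 0), and (−1, −1, 0, 0, 1) found the long way above.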