The null space is an important idea associated with any transformation. So let t be any transformation on some vectors. If t sends a vector v to the zero vector, then we say that v belongs to the null space, or the kernel, of t. The null space of t is usually designated null t or ker t.

So let's think about this as a problem to be solved. Suppose I have a linear transformation. How can I find the null space of that transformation? Remember that we can always describe a linear transformation in matrix form, where the coefficients from the component-wise formulas become the entries in a coefficient matrix. That means I can take any linear transformation and, thinking of it as a map from one vector to another, read the rows of that matrix as the formulas for the components of the output vector. So now the only question is which output vector we want. Since we're trying to find the null space, the vector we're looking for is the zero vector. So we want to find the vectors that map to the zero vector.

To find the null space, we'll set up and solve a system of linear equations. Since we want to produce the zero vector, the output components u1, u2, and so on are all going to be zero. And since we don't know which vectors are going to be in the null space, the components v1 through vn are going to be our unknowns. That gives us a nice system of linear equations with a corresponding augmented coefficient matrix. A useful habit to get into as a mathematician is to constantly ask yourself, where have I seen this before? In this case, it's worth noting that our augmented coefficient matrix is the same as the matrix representing our linear transformation augmented with a column of zeros. And that suggests the following theorem: suppose t is the matrix representing a linear transformation. Then null t, the null space of t, can be found by row reducing t augmented by a column of zeros.
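The theorem can be sketched in code. Here is a minimal check using SymPy with a small made-up 2 by 2 matrix (not one from the lecture): row reducing the matrix augmented with a column of zeros exposes the pivot structure, and SymPy's built-in nullspace method confirms the resulting null space.

```python
from sympy import Matrix

# A hypothetical transformation matrix, chosen so the null space is nontrivial
t = Matrix([[1, 2],
            [2, 4]])

# Augment t with a column of zeros and row reduce, as the theorem suggests
augmented = t.row_join(Matrix.zeros(2, 1))
rref, pivot_cols = augmented.rref()
print(rref)         # Matrix([[1, 2, 0], [0, 0, 0]])

# SymPy can also compute the null space directly for comparison
basis = t.nullspace()
print(basis)        # [Matrix([[-2], [1]])] -- every multiple of (-2, 1) maps to zero
```

From the row reduced form, v1 = -2*v2 with v2 free, which matches the basis vector SymPy reports.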
Now one important observation here is that because elementary row operations won't change the values in a column of zeros, we can actually leave the zeros off until the very end of the problem. You can trust this statement because I'm on the internet, and you can't put anything up online unless it's true. But you might want to think about why that is actually the case.

So let's see what we can do. Suppose I want to find the null space for the transformation shown. It helps to do a little bit of analysis first. Since t is a 3 by 4 matrix, it's going to take vectors with four components and map them to vectors with three components. So to find the null space, I want to find all vectors with four components that become the three-component zero vector. And I can read the rows of the transformation matrix as the formulas that tell me what those three components are going to be. The first row of the transformation matrix tells me that 3v1 plus 1v2 plus 3v3 plus 5v4 is going to be my first component, and since I'm looking for the null space, I want that first component to be zero. Likewise, the second row tells me that 2v1 plus 2v2 plus 3v3 plus 0v4 is going to be the second component of my vector, which I also want to be zero. And finally, the third row tells me that 2v1 plus 0v2 plus 1v3 plus 1v4 is going to give me the third component of the vector, which again I want to be zero.

And so now I have a nice system of linear equations to solve. Since the constants are all zero, I can just record the coefficient matrix alone and row reduce that. Our first-row pivot is 3, so we'll multiply the second and third rows by 3. Then we'll multiply the first row by negative 2 and add it to the second to get a zero below the pivot, and likewise we'll multiply the first row by negative 2 and add it to the third to get a zero below the pivot. Moving on to the second row, the pivot is 4, so we'll multiply the third row by 4. And that gives us a new third row.
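The fraction-free elimination steps just described can be sketched in plain Python. This is only an illustration of the arithmetic; the helper names scale and add are mine, not the lecture's.

```python
A = [
    [3, 1, 3, 5],   # rows of the transformation matrix
    [2, 2, 3, 0],
    [2, 0, 1, 1],
]

def scale(row, k):
    """Multiply every entry of a row by k."""
    return [k * x for x in row]

def add(r1, r2):
    """Entrywise sum of two rows."""
    return [a + b for a, b in zip(r1, r2)]

# Pivot in row one is 3: scale rows two and three by 3, then
# add -2 times row one to each to clear the entries below the pivot
A[1] = add(scale(A[1], 3), scale(A[0], -2))   # [0, 4, 3, -10]
A[2] = add(scale(A[2], 3), scale(A[0], -2))   # [0, -2, -3, -7]

# Pivot in row two is 4: scale row three by 4
A[2] = scale(A[2], 4)                          # [0, -8, -12, -28]
print(A)
```

Because every row is rescaled by an integer before eliminating, the arithmetic never leaves the integers, which is exactly why this style of elimination avoids fractions.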
Now we can multiply the second row by 2 and add it to the third to get a zero below the pivot. And since every entry in the last row has a common factor, we can multiply it by negative 1/6 and simplify it. This gives us our system in row reduced form, and now we can parameterize our solutions. Since v4 never appears as a leading variable, we need to parameterize v4, and to avoid fractions we'll let v4 be 12t. We can then substitute this value into the equation involving v3 and find that v3 is negative 96t. We can substitute the values of v3 and v4 into the second equation, which allows us to solve for v2, and then we can substitute the values of v2, v3, and v4 into the first equation and find a value for v1. Remember that this transformation took vectors in R4 and sent them to vectors in R3, so the vectors in the null space live in R4, and their components are 42t, 102t, negative 96t, and 12t. So our null space consists of all vectors with these components, and we can express that as the set of all scalar multiples of a single basis vector.
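As a sanity check, we can take the vector with t = 1, namely (42, 102, -96, 12), apply the transformation to it, and confirm that it lands on the zero vector in R3. The one-line matrix-vector product here is my own helper, not part of the lecture.

```python
A = [
    [3, 1, 3, 5],
    [2, 2, 3, 0],
    [2, 0, 1, 1],
]

v = [42, 102, -96, 12]   # the basis vector of the null space (t = 1)

# Apply the transformation: each output component is a row of A dotted with v
image = [sum(a * x for a, x in zip(row, v)) for row in A]
print(image)   # [0, 0, 0] -- v maps to the zero vector, so v is in null t
```

The same check works for any scalar multiple of v, since scaling v just scales each zero component.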