So let's talk about matrix spaces. Suppose I have some set of vectors. We can represent a linear combination of them as a matrix product, where we multiply a row of vectors by a column of coefficients. Now this makes sense regardless of how we represent our vectors, but it's convenient to write each vector as a column vector, so that this looks like ordinary matrix multiplication. And this leads naturally to the following two definitions. Let A be any matrix. The column space of A, written col A, is the vector space spanned by the set whose elements are the columns of A. Likewise, we can define the row space of A, where this time we take the (wait for it) rows of A as the vectors in our set.

We can also talk about the null space. The null space of a linear transformation is the set of vectors that the transformation sends to the zero vector. We found it by row reducing the transformation matrix augmented by a column of zeros. Well, what happens if we view the transformation matrix A as a set of column or row vectors? How could we then find the null space? To answer that question, we'll set up a system of equations. The augmented matrix corresponds to a system of equations, and that system is exactly what we need to solve if we are looking for linear combinations of our vectors equal to the zero vector. This leads to the following result. Let A be a matrix whose columns we consider as vectors v1, v2, and so on up to vn. Reducing A augmented by a column of zeros corresponds to finding the linear combinations of v1 through vn equal to the zero vector. Consequently, the null space of A is exactly the set of coefficient vectors whose linear combinations of the columns of A equal the zero vector.

So let's see what this does for us. Remember that if a set of vectors is linearly independent, then the only linear combination equal to the zero vector is the one where all of the coefficients are zero. So if the columns of A are linearly independent, that is, if the vectors forming the columns of A are a linearly independent set, then the null space of our transformation is the zero vector alone.

This allows us to go even further. Remember that the rank of A is the number of non-zero rows in the row echelon form of A, which is also the number of columns with a leading entry. Every column without a leading entry corresponds to a free variable, and since the columns corresponding to the free variables can be written in terms of the columns corresponding to the basic variables, we get the following theorem: the dimension of the column space of A is equal to the rank of A. What about the free variables? Each free variable contributes one vector to a basis for the null space. You should take a few minutes to prove this statement.

If we put together all of our results, namely that the dimension of the column space of A equals the rank of A and that the number of free variables equals the number of vectors in a basis for the null space, we have the following theorem. Suppose A is a matrix with m columns, let the dimension of the column space of A be c, and let the dimension of the null space of A be n. Then m is equal to c plus n.
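To make the first point concrete, here's a minimal NumPy sketch; the vectors and coefficients are made up purely for illustration. Stacking the vectors as the columns of a matrix turns the linear combination into a single matrix-vector product.

```python
import numpy as np

# Illustrative vectors, to be stacked as the columns of a matrix.
v1 = np.array([1, 0, 2])
v2 = np.array([0, 1, 1])
v3 = np.array([1, 1, 3])
A = np.column_stack([v1, v2, v3])

# The column of coefficients for the linear combination.
c = np.array([2, -1, 4])

# Computing the combination term by term ...
by_hand = c[0] * v1 + c[1] * v2 + c[2] * v3

# ... gives the same thing as the matrix-vector product A @ c.
assert np.array_equal(A @ c, by_hand)
print(A @ c)  # [ 6  3 15]
```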
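And here's a SymPy sketch of the results about the column space and null space, on an invented matrix whose third column is the sum of the first two. It checks that the dimension of the column space equals the rank, and that the number of columns splits as c plus n.

```python
from sympy import Matrix

# Invented matrix: the third column is the sum of the first two,
# so the columns are linearly dependent.
A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [2, 1, 3]])

# Row reducing [A | 0] is the same as row reducing A itself, since the
# zero column never changes; rref() also reports the pivot columns.
rref_form, pivot_cols = A.rref()

c = len(A.columnspace())  # dimension of the column space
n = len(A.nullspace())    # dimension of the null space (one per free variable)
m = A.shape[1]            # number of columns

assert c == A.rank() == len(pivot_cols)
assert m == c + n         # m columns = rank plus nullity
print(A.nullspace())      # basis vector (-1, -1, 1): col3 = col1 + col2
```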
What about the row space? Suppose we reduce A to row echelon form and get a row of zeros. Elementary row operations, with the exception of interchanging rows, produce linear combinations of the rows, so a row of zeros corresponds to expressing the zero vector as a linear combination of the preceding rows. Those dependence relations among the rows are exactly the null space of A transpose, and so by a similar set of arguments we have the following theorem. Suppose A is a matrix with m rows, let the dimension of the row space of A be r, and let the dimension of the null space of A transpose be n. Then m is equal to r plus n.
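A quick SymPy check of the row version, again with an invented matrix, this time one whose third row is the sum of the first two. The dependence relations among the rows form the null space of A transpose, and the row count splits as r plus n.

```python
from sympy import Matrix

# Invented 3-row matrix: the third row is the sum of the first two,
# so row reduction produces a row of zeros.
A = Matrix([[1, 2],
            [3, 4],
            [4, 6]])

r = len(A.rowspace())     # dimension of the row space (the rank of A)
n = len(A.T.nullspace())  # dependence relations among the rows
m = A.shape[0]            # number of rows

assert m == r + n         # m rows = dim(row space) plus dim(null space of A^T)
print(A.T.nullspace())    # basis vector (-1, -1, 1): row3 = row1 + row2
```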