So in addition to the row space and column space of a matrix, another thing we can talk about is the null space of the matrix. This goes back to the idea of a set of vectors that spans some vector space. Given a set of vectors V that spans a vector space of interest, first of all, the corresponding vector space consists of all linear combinations of the vectors in V. Now, because we're dealing with a vector space, vector addition is commutative, so there's no order we have to do our additions in. But suppose we view the vectors themselves as having some order: this is the first, this is the second, and so on. Then I can write our linear combinations in the order of those vectors: x1 v1 + x2 v2 + ... + xn vn. And as soon as I've decided on the order of the vectors, I can pick off the coefficients and write an ordered n-tuple of just the coefficients, (x1, x2, ..., xn). This n-tuple I can view as an element of R^n, and we have a special name for it: the coordinates of x relative to V. The individual values are the ordinates.

So, for example, let's take my set of vectors to be V = {5x + 1, x^2 - 1}. And wait a minute, aren't these polynomials? The answer is, sure, and they can also be vectors. All we need to have a vector space is something we can define vector addition and scalar multiplication on. I know how to add two polynomials, and I know how to multiply a polynomial by a real number, so I can use polynomials as vectors, and I can take a couple of polynomials and talk about the vector space spanned by those polynomials.

Now, here I actually want to find the coordinates of something relative to my vectors that span the vector space. That means I'm looking for the coefficients of the linear combination I need to produce that thing. So I have the linear combination x1(5x + 1) + x2(x^2 - 1), with x1 and x2 being real numbers, and I want it to produce x^2 - 15x - 4. I can solve this equation for x1 and x2, and, well, you should pause the video and solve it for yourself. We find that the solution is x1 = -3, x2 = 1: if x1 is -3 and x2 is 1, this linear combination gives me x^2 - 15x - 4.
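As a quick aside, here's a small computational sketch of that coordinate calculation. It's not part of the lecture, just a check, assuming we represent each polynomial by its coefficient tuple (constant, x, x^2) and solve the resulting linear system with NumPy.

```python
import numpy as np

# Represent each polynomial by its coefficients (constant, x, x^2).
v1 = np.array([1.0, 5.0, 0.0])         # 5x + 1
v2 = np.array([-1.0, 0.0, 1.0])        # x^2 - 1
target = np.array([-4.0, -15.0, 1.0])  # x^2 - 15x - 4

# Solve x1*v1 + x2*v2 = target as the linear system M @ [x1, x2] = target,
# where the columns of M are the spanning vectors.
M = np.column_stack([v1, v2])
coords, residual, rank, _ = np.linalg.lstsq(M, target, rcond=None)
print(coords)  # -> [-3.  1.], the coordinates of the target relative to V
```

The system is overdetermined (three equations, two unknowns), so np.linalg.lstsq is a convenient solver here; an exact solution exists precisely when the target lies in the span of V, and a zero residual confirms that it does.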
Now let's take a broader approach here. I have some set of vectors V that spans a vector space of interest, and I'm going to define what we call the null space, written null V: the set of coordinates that correspond to linear combinations producing the zero vector. In other words, if I have some linear combination that produces the zero vector, then the coefficients of the individual vectors form an element of the null space of V. Now, I know there's always one solution to this, namely all of the coefficients equal to zero, so null V is never empty. But there might be non-trivial solutions, and if there are, then we know that our set of vectors V is dependent.

So let's find nothing. Which is to say, let's find a null space. Here's a set of vectors: V = {(1, 0), (0, 1), (2, 1)}. And again, I can treat these as first vector, second vector, third vector. These three span some vector space, and I want to find the set of linear combinations that give me the zero vector. There are one, two, three vectors, so any linear combination is going to have one, two, three coefficients, and so it will have three coordinates, which we'll call x1, x2, and, in a fit of creativity, x3.

So I want to find linear combinations that produce the zero vector, which means solving x1(1, 0) + x2(0, 1) + x3(2, 1) = (0, 0). This is a nice, simple, component-wise pair of equations, x1 + 2x3 = 0 and x2 + x3 = 0, and we can solve it. Go ahead and solve. We find that our solutions are parameterized, and one parameterization is x3 = t, x2 = -t, x1 = -2t. Try to say that ten times fast. I can write this as a triple: my third ordinate is t, my second ordinate is -t, my first ordinate is -2t, so (x1, x2, x3) = (-2t, -t, t). Or, factoring the t out, because it appears in all terms, this is t(-2, -1, 1). Now we pause and note that my null space is actually the set of all scalar multiples of this single vector (-2, -1, 1). It's a one-dimensional null space, which corresponds geometrically to a line, and we'll have other things to say about that.

Now let's take a slight change in viewpoint. If I take the columns of some matrix A as the vectors in our set, then null A, the null space of the matrix A, is going to be the same as the null space of that set of vectors V. So take our original set of vectors (1, 0), (0, 1), (2, 1): if I view these as column vectors and produce a matrix A, then col A is the same as the vector space spanned by these vectors, and the null space of these vectors, spanned by (-2, -1, 1), is the same as the null space of A. So again, the important thing to remember here is that when we talk about the null space of A, what we really mean is the null space of the set of columns of A treated as vectors.

Now there are a few things to notice about that null space, with A being our matrix once again. Col A, our vector space spanned by the columns of A, consists of all linear combinations of the columns when we treat those columns as vectors; those linear combinations look like x1(1, 0) + x2(0, 1) + x3(2, 1). On the other hand, as we found, null A consists of all vectors of the form t(-2, -1, 1): the linear combinations of just a single vector, the scalar multiples of that vector. What we notice here is that the elements of null A are 3-tuples, things that consist of three ordered real numbers, while the elements of col A are all 2-tuples of real numbers. And what that means is that these don't live in the same space. Null A is not a subspace of col A, and in fact there is very little direct connection between col A and null A, except for what we get from the matrix that joins them.
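Before moving on, here's another sketch, again not from the lecture, that checks the null space computation with SymPy; nullspace() returns exactly the basis vector we found by hand, and the shapes make the point about null A and col A living in different spaces.

```python
from sympy import Matrix

# The columns of A are the vectors (1, 0), (0, 1), (2, 1) from the example.
A = Matrix([[1, 0, 2],
            [0, 1, 1]])

# nullspace() returns a basis for the solutions of A x = 0.
basis = A.nullspace()
print(basis)         # -> [Matrix([[-2], [-1], [1]])], so null A = {t(-2, -1, 1)}
print(A * basis[0])  # -> Matrix([[0], [0]]): A really sends it to the zero vector

print(A.shape, basis[0].shape)  # -> (2, 3) (3, 1): the columns of A live in R^2,
                                #    the null space vectors in R^3
```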
Well, there's one more concept we can talk about. Suppose I have some point with coordinates (x1, ..., xn), and suppose it's an element of null A, where the columns of A are some set of vectors; here I've actually drawn the vectors, but remember we want to read these as column vectors. Then, because the point is an element of null A, I know the linear combination of the vi's, with the xi's as the coefficients, equals the zero vector: x1 v1 + ... + xn vn = 0. Somewhat more compactly, I can write that as A x = 0, the matrix A times the vector x equals the zero vector. So again, the v's are treated as column vectors, and the xi's are the entries of the column vector x. We have to align the dimensions appropriately, but this product gives me the zero vector.

Now, if I view this algebraic equation as a geometric transformation, what it's saying is that if I have a point in the null space, the transformation squashes that point down to the origin. So the geometric significance of the null space is that it consists of all points that get squashed down to the origin by the geometric transformation.

And finally, there's one last neat possibility. If I look at this equation, A x = 0, I might try to generalize it. A x = 0 is the same as A x = 0 x, zero times the original vector x. So multiplication by the matrix A acts as if we multiplied our vector x by the scalar zero. And the question is, can we generalize this? Maybe I can find a value, oh, I don't know, call it lambda, not equal to zero, for which A x gives me a scalar multiple of x: A x = lambda x. That is an important problem, known as the eigenproblem, and we'll take a look at it in a little bit.
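To close things out, here's one more sketch, not from the lecture, showing the squashing numerically and previewing the eigenproblem; the matrix B below is just a made-up 2x2 example, since our A isn't square.

```python
import numpy as np

# Every point on the null-space line t(-2, -1, 1) gets squashed to the origin.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
for t in (1.0, -2.5, 7.0):
    x = t * np.array([-2.0, -1.0, 1.0])
    print(A @ x)  # -> [0. 0.] every time

# Eigenproblem teaser: for a square matrix, look for A x = lambda x with
# lambda != 0. np.linalg.eig finds the lambdas and the corresponding x's.
B = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(B)
print(eigenvalues)  # -> eigenvalues 3 and 1 (order may vary); each column v
                    #    of `eigenvectors` satisfies B v = lambda v
```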