Welcome back to our lecture series, Linear Algebra Done Openly. As usual, I'm your professor, Dr. Andrew Misseldine. In Section 2.5, entitled Subspaces, we're going to continue the discussion of affine geometry introduced in Section 2.4. In that section, we talked about lines, planes, hyperplanes, and the general notion of an affine set, or what we call a flat for short. In this section, we're going to focus on those lines, planes, hyperplanes, etc., which pass through the origin, that is, which pass through the zero vector of the vector space in play. It turns out those affine sets are of more importance than the general ones, and this is what we call a subspace. So, more precisely, suppose we have some vector space V. Remember, a vector space is a family of vectors for which we have a well-defined notion of vector addition and scalar multiplication, and which satisfies that long list of axioms: addition of vectors should be commutative, it should be associative, it has an identity, it has inverses; scalar multiplication distributes over vector addition, scalar multiplication associates, and it has an identity, the list we saw previously. Now, you can just pretend the vector space is Fn; we don't really lose much by doing that, but of course we have the broader notion of vector space, and we allow that in this situation. We say that a subspace of the vector space V is any subset W that satisfies the following three conditions. Being a subset means that W itself will consist of vectors which belong to the vector space V, but it might not necessarily be all the vectors inside of V; some are possibly removed. But to be a subspace, we need the following three conditions. First, we require that the zero vector belongs to W. So whatever the zero vector was in V, we want that to belong to the subspace W. Second, whenever two vectors, call them u and v, belong to W, we require that their sum also belongs to W.
And third, whenever we have a vector v that belongs to W and some scalar c, which of course belongs to the field, we want the scalar multiple of v by c to belong to W as well. These three conditions give us what we refer to as a subspace. Notice that the first condition, that W contains the zero vector, guarantees that the subset is non-empty. There's something that belongs to the set; at the bare minimum, it's the zero vector itself. On the other hand, look at conditions two and three. When we state something like condition two, we typically mean that the set is closed under addition. We often phrase it that way: whenever you have two vectors and you add them together, the sum is contained inside the set as well. So for property two, we say the set is closed under addition. Similarly, condition three we often refer to as being closed under scalars; that is, for any vector inside the set, all scalar multiples of it will belong to the set as well. In other words, a subset of a vector space forms a subspace if it's non-empty and is closed under all linear combinations, because if we have sums of vectors and scalar multiples of vectors, we can actually show that any linear combination of the vectors will automatically be in there as well. And the reason why we call this set a subspace is that a subspace is essentially a vector space that lives inside of another vector space. For example, we can think of R2 as a vector space which lives inside of the larger R3: the xy-plane is, in its own right, a vector space, but the xy-plane is just a subset of the larger three-space spanned by the x, y, and z axes. So ignoring the z-axis gives us another vector space inside of the larger vector space.
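The three conditions just stated can be summarized compactly. Here is one standard way to write them down (our notation, matching the lecture's W, V, and underlying field F):

```latex
% The three subspace conditions for a subset W of a vector space V over F:
\begin{aligned}
&\text{(1)}\quad \mathbf{0} \in W, \\
&\text{(2)}\quad \mathbf{u}, \mathbf{v} \in W \;\implies\; \mathbf{u} + \mathbf{v} \in W, \\
&\text{(3)}\quad \mathbf{v} \in W,\ c \in F \;\implies\; c\,\mathbf{v} \in W.
\end{aligned}
```

Condition (1) makes the set non-empty, while (2) and (3) are the closure properties under addition and scalar multiplication.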
When we considered vector spaces before, we saw that a set of things is called a vector space when we can appropriately add and scale all of the objects, and those objects we call vectors. I listed some of those properties earlier. Now, necessary to this definition is that the sum of two vectors and a scaled vector are still vectors, right? Every vector space has these properties: there's an axiom that guarantees a zero vector; there's an operation of vector addition, so when you add two vectors, you get another vector; and there's an operation of scalar multiplication. So every vector space satisfies these conditions. Now, a subspace is a vector space inside of that space. That is, each subspace of V is also a vector space in its own right; that's what I was trying to explain earlier. The sum of two vectors or the scalar multiple of a vector originating from W must remain in W. That's what these closure principles mean: if you start in W and you operate using the vector operations, you should end up still in W. So for example, if you add two vectors in the xy-plane, you stay inside the xy-plane; no sum of vectors will ever leave that plane if you start in the plane. Likewise, if you scale a vector inside the xy-plane, no amount of scaling can ever get you outside of that plane. You don't leave the subspace when you operate; you would need something outside of it in order to do that. Now, in regard to the axioms of a vector space, the subspace inherits the axioms from the ambient vector space. For example, since all vectors commute inside of V, if you take two vectors that are inside of W, they'll commute: u plus v is the same thing as v plus u. So that property of the vector space is inherited by the subspace. This will be true for the associative properties, distributive properties, you name it.
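The xy-plane example can be checked concretely. Here is a minimal sketch in Python with NumPy; the helper name `in_xy_plane` is our illustrative choice, not notation from the lecture:

```python
import numpy as np

# The xy-plane inside R^3 is the set of vectors whose third
# coordinate is zero. This membership test encodes that condition.
def in_xy_plane(v):
    return v[2] == 0

u = np.array([1.0, 2.0, 0.0])
v = np.array([-3.0, 5.0, 0.0])

# Condition (1): the zero vector lies in the plane.
assert in_xy_plane(np.zeros(3))

# Condition (2), closure under addition: the sum stays in the plane.
assert in_xy_plane(u + v)

# Condition (3), closure under scalars: no scaling leaves the plane.
for c in [0.0, 2.0, -7.5]:
    assert in_xy_plane(c * v)

# A vector with a nonzero z-coordinate is outside the subspace.
assert not in_xy_plane(np.array([0.0, 0.0, 1.0]))
```

Each assertion mirrors one of the three subspace conditions; only a vector that already has a nonzero z-component fails the test.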
Basically, the only things that are not guaranteed for a subset are that it has something in it, namely the zero vector, and these closure principles. If we have those, then we can guarantee that the subset forms a vector space in its own right. Now, I should mention that the vector space V itself, in the consideration of subspaces, is actually a subspace of itself; it trivially satisfies these conditions. So every vector space is technically a subspace of itself, although we usually won't consider that situation. On the other extreme, you can take, for example, the space that only contains the zero vector. This is often what we refer to as the zero space. It contains only the zero vector, and I want you to see that this is a natural subspace of any vector space; take Fn for any n right here, and it'll satisfy the conditions. The zero space contains the zero vector; in fact, that's the only thing it contains. Is it closed under addition? If you take two arbitrary vectors from the zero space, will their sum still be in there? Well, there's only one option: zero plus zero, which is equal to zero. So in a trivial way, it satisfies closure under addition. And in terms of scalar multiplication, if you take any number and multiply it by the zero vector, you get back the zero vector, which belongs to the zero space. So the zero space is always a subspace, absolutely. Now, of course, the zero space is not going to be the only subspace. What we want to do in this section is come up with conditions we can check to decide, given a certain subset of a vector space, whether it is a subspace or not. Let me mention one situation in which we always get a subspace. Take our vector space Fn and just take two vectors inside of Fn, V1 and V2. I don't even care what they are.
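The trivial checks for the zero space can be written out directly. A minimal sketch, assuming vectors in R^3 as a concrete stand-in for Fn:

```python
import numpy as np

# The zero space contains exactly one vector: the zero vector.
zero = np.zeros(3)

# Condition (1): it contains the zero vector -- its only element.
# Condition (2): closed under addition, since 0 + 0 = 0.
assert np.array_equal(zero + zero, zero)

# Condition (3): closed under scalars, since c * 0 = 0 for any c.
for c in [0.0, 1.0, -42.0]:
    assert np.array_equal(c * zero, zero)
```

Both operations can only ever produce the zero vector again, which is exactly why the closure conditions hold trivially.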
They could be the zero vector, they could be something else, different vectors or the same vector; it doesn't matter. Just take your two favorite vectors inside of the vector space Fn. We're then going to take the span of those vectors, V1 and V2. This is going to start to show you how we're connected to the affine geometry we talked about before, because every flat is just a vector plus linear combinations of these so-called spanners. Notice that here we're taking as our particular vector x0 the zero vector itself, because the zero vector is always contained inside of every span: you can always take zero times the first vector plus zero times the second vector, and that gives you the zero vector. So the span of a set of vectors is itself an affine set; it's a flat that passes through the origin, just to make the connection there. So let's take W to be the span of two vectors, V1 and V2. We claim that this set W, the span of these two vectors, is a subspace of Fn. To prove this, we have to check the three conditions mentioned on the previous slide. Does W contain the zero vector? As I mentioned a moment ago, the zero vector is just the trivial combination of V1 and V2: set the coefficients to zero, add the terms together, and that gives you the zero vector, irrespective of what V1 and V2 are. So the zero vector will be contained inside of this span. Now, what happens if we add together two arbitrary elements of W? Well, a generic element of W is going to look something like S1 times V1 plus S2 times V2, where S1 and S2 are just two scalars, and V1 and V2 are the two spanning vectors in this situation. This is just a generic linear combination, and this is what a typical element of W looks like. Well, what if we take two typical elements? We have a linear combination of V1 and V2, and we have another linear combination of V1 and V2, whose scalars we'll call T1 and T2 this time.
We don't claim any relationship between S1, S2 and T1, T2; they're just scalars. Now, if we add these two linear combinations together, we can combine like terms. The vector V1 shows up twice, and when we combine those terms, we add together the coefficients, which gives S1 plus T1. Similarly, the V2 terms combine: collecting like terms, we get S2 plus T2 as the coefficient of V2. Now, I want you to notice what we have here. Cover up the coefficients for a moment, and this reads as something times V1 plus something times V2. Remember that elements of W are linear combinations of V1 and V2. So if you want to determine whether a vector belongs to W or not, it really just comes down to: can you write it as a linear combination of V1 and V2? That's what membership in W means. So we're looking for things that look like something times V1 plus something times V2. And if we uncover the coefficients again, notice that's exactly what we have: something times V1 plus something times V2. Since that's the form of a typical element of W, we say that it belongs to W. This tells us that the sum of two linear combinations of V1 and V2 is itself a linear combination of V1 and V2, and therefore belongs to W. So the span of two vectors is closed under addition. What about scalar multiplication? Take a typical element of W, which is just a generic combination of V1 and V2, and scale it by an arbitrary scalar C. What happens? By the distributive property, you can distribute that C, and you're going to get C S1 times V1 plus C S2 times V2. And you'll see that this is again something times V1 plus something times V2. It's just a combination of V1 and V2, and therefore it belongs to W. So the three conditions are satisfied, and therefore W is a subspace of Fn. And guess what?
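As a numerical sanity check, here is a sketch of this closure computation in Python. The specific vectors and scalars are arbitrary choices of ours, not from the lecture:

```python
import numpy as np

# Two spanning vectors for W = span{V1, V2} inside R^3.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, -1.0])

s1, s2 = 2.0, -3.0   # coefficients of the first element of W
t1, t2 = 0.5, 4.0    # coefficients of the second element of W

w1 = s1 * v1 + s2 * v2
w2 = t1 * v1 + t2 * v2

# Combining like terms: the sum is (s1+t1)*v1 + (s2+t2)*v2,
# again "something times V1 plus something times V2".
assert np.allclose(w1 + w2, (s1 + t1) * v1 + (s2 + t2) * v2)

# Distributing a scalar: c*(s1*v1 + s2*v2) = (c*s1)*v1 + (c*s2)*v2,
# which is again a linear combination of V1 and V2.
c = -7.0
assert np.allclose(c * w1, (c * s1) * v1 + (c * s2) * v2)
```

The two assertions are exactly the two closure arguments from the proof, carried out on concrete numbers.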
What was so special about having two vectors, V1 and V2? Nothing, really. We could have done this with V3, with V4, with V5; we could keep on going. We could even do this with one vector, or even no vectors; if you take no vectors, you just get the zero space again. But if you take any span, any span whatsoever, you're going to get a vector space. So this argument is basically giving you the following: if you take some set S to be any subset of your vector space Fn, and you take W to be the span of this set of vectors S, then W is a subspace of Fn. And this happens in general: spans are always subspaces. Let me say that again. Every span of a collection of vectors is always a subspace of the ambient vector space it lives inside of. And it turns out that the process is reversible: every subspace is just the span of some set. We'll talk about that a little bit more in the future. In the meanwhile, this set S, which in the example here was V1, V2, is commonly referred to as the spanning set of your subspace. That is, if you have some subspace in hand and you can write it as the span of some collection of vectors, that collection of vectors we call the spanning set of W, and we say that W is the subspace spanned by V1, V2. And what we're trying to say here is that even as your spanning set becomes larger and larger, the spans still give us subspaces, always. So, in consideration of the lines, planes, hyperplanes, and affine sets that we talked about before, spans are going to be those affine sets, those flats, which pass through the origin, like we mentioned earlier, because a flat is essentially just the span of some vectors translated through space by some particular vector on the flat. Well, if your particular vector is zero and you translate by zero, nothing happens.
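One practical way to test membership in a span, which follows from the definition though the lecture doesn't spell it out, is to solve for the coefficients numerically: stack the spanning vectors as columns of a matrix A and check whether A t = x has a solution. A hedged sketch (the helper name `in_span` and the least-squares approach are our choices, not the course's):

```python
import numpy as np

# Test whether x lies in the span of the vectors in S by solving
# the least-squares problem A t = x, where the columns of A are
# the spanning vectors; x is in the span iff the residual is ~0.
def in_span(S, x, tol=1e-10):
    A = np.column_stack(S)
    t, *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.linalg.norm(A @ t - x) < tol

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, -1.0])

# A linear combination of the spanners is certainly in the span...
assert in_span([v1, v2], 3 * v1 - 2 * v2)
# ...and the zero vector is in every span (trivial combination).
assert in_span([v1, v2], np.zeros(3))
# But a vector outside the plane spanned by v1, v2 is rejected.
assert not in_span([v1, v2], np.array([0.0, 0.0, 1.0]))
```

The same helper works for any number of spanning vectors, matching the point that the two-vector argument generalizes to any spanning set S.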
And so spans are just flats that go through the origin. And since every span is a subspace, subspaces are flats, and they're flats that go through the origin. It turns out that this does in fact characterize all subspaces: they are exactly the flats that go through the origin.