Welcome back to our lecture series, Linear Algebra Done Openly. As usual, I'm your professor, Dr. Andrew Misseldine. In this video, we're going to start section 2.7, entitled Bases, which is the plural form of basis. A basis, as it's defined for a vector space, we see right here in Definition 2.7.1: a basis B of a subspace W of F^n is a linearly independent spanning set of W. A basis is two things: it has to be linearly independent, and it has to be a spanning set. The importance of linear independence will become a little more clear in the next section, 2.8, but in particular, it'll be a spanning set that's independent. The size of a basis actually doesn't depend on which basis you choose. This is one thing that you need to remember: when you talk about a subspace, the basis is not unique. There will actually be multiple bases for any given subspace. But one thing that will be constant is the size of every basis, the number of vectors. If a subspace requires three vectors to form a basis, then every basis will contain three vectors. Because that number is uniquely determined, we call it the dimension of the subspace and denote it dim W. Now, dimension definitely has a geometric connotation to it. Every line through the origin is a one-dimensional subspace, and every plane through the origin is a two-dimensional subspace. In fact, if we were to generalize this notion, a flat through the origin is itself just a subspace, right? If a flat passes through the origin and it takes p many independent vectors to span said flat, then that flat is p-dimensional. We've talked about p-flats before, where p here is just the number of independent spanners you need; that's what dimension's all about. This idea of counting the independent spanners of a set is an algebraic way of capturing this notion of geometric dimension.
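To make this counting idea concrete, here's a small sketch of my own (not from the lecture) using Python with NumPy: the dimension of the span of a list of vectors is the number of independent spanners, which is exactly the rank of the matrix whose rows are those vectors.

```python
import numpy as np

# Three spanners of a subspace W of R^3. The third row is the sum of
# the first two, so it contributes nothing new to the span.
spanners = np.array([
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 2.0],
])

# dim W = number of independent spanners = rank of the matrix.
dim_W = np.linalg.matrix_rank(spanners)
print(dim_W)  # 2 -> W is a plane through the origin
```

Even though we handed it three spanners, the rank tells us only two of them are independent, so W is a two-dimensional subspace, a plane.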
This is useful because when we start going to higher-dimensional spaces, four-dimensional, five-dimensional, 17-dimensional, these are things our mere mortal brains have a hard time comprehending from a geometric point of view. But from an algebraic point of view, nothing is lost whatsoever when we start counting the sizes of these bases. And so that's what dimension's all about. I should also mention the zero-dimensional space. Why do we call it that? Well, the zero-dimensional space is just the set containing the zero vector; there's only one thing in it. Now, I claim that this is the span of nothing, right? And why is that? Well, by the way we define a span, the span always contains the zero vector. So even if there are no spanners in it whatsoever, every span contains the zero vector. So if I span nothing, I get zero, and zero was the only thing in there to begin with, right? And notice, when you look at the zero space, there are only two possible subsets: there's the empty set itself, and then there's the set that contains the zero vector, which is the zero space itself, right? No set that contains the zero vector can be linearly independent. And by technicality, by default, the empty set actually is an independent set. So it's independent and it does span the zero space. So the empty set is going to be a basis for the zero space. And as it contains nothing, this is what we mean by zero-dimensional here. We could do the same thing for a line, right? F^1 is spanned by a single vector. F^2 can be spanned by two vectors. F^3 can be spanned by three vectors. And in more generality, if we take the vector space F^n, we can span F^n with n many independent vectors. And we're going to call this the standard basis, where in this situation e_1 is defined to be the vector which has a one in the first spot and zeros everywhere else.
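As a quick illustration of my own (not part of the lecture), the standard basis vectors e_1 through e_n are exactly the rows of the n-by-n identity matrix, and since there are n of them and they're independent, dim F^n = n:

```python
import numpy as np

n = 4
# Rows of the identity matrix are the standard basis vectors of R^n:
# e_i has a 1 in the i-th spot and zeros everywhere else.
E = np.eye(n)
e1 = E[0]  # array([1., 0., 0., 0.])

# n independent spanners, so the dimension of R^n is n.
dim = np.linalg.matrix_rank(E)
print(dim)  # 4
```

The same construction works for any n; the alphabet never runs out the way i, j, k would.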
e_2 is the vector whose second entry is one, with zeros everywhere else. e_3 is the vector whose third entry is one, with zeros everywhere else. So in general, e_i is the vector which has a one in the i-th spot and zeros everywhere else. And so if you look at, for example, R^3, R^3 is spanned by the vectors (1, 0, 0), (0, 1, 0), and (0, 0, 1). This is the standard basis for R^3. If you were in a multivariable calculus course, or maybe a physics course, these vectors are often called the unit directional vectors i, j, and k, right? We haven't really been using those names in this course, and that's because we transcend three dimensions, and our alphabet might be insufficient for 100-dimensional space, but the idea here is the same. If we take these vectors with a single one and zeros everywhere else, we'll call their collection capital E, and this is what we refer to as the standard basis for F^n. But like I mentioned earlier, that's not the only basis for F^n. There are multiple bases. And so consider the following. Take this basis B, which contains the vector (1, 0, 0), which admittedly is just e_1. Then you have (1, 1, 0), which is actually just e_1 + e_2. And then the third vector is e_1 + e_2 + e_3, which is (1, 1, 1). And so this is not the standard basis; it has vectors in it other than the e's, but it is still a basis. One thing to mention here is that if this set of vectors is made into a matrix, that matrix is already in echelon form, right?
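Here's a sketch of that check in Python (my own illustration, not from the lecture): put the three vectors of B into a matrix and confirm it has full rank. Three independent vectors in R^3, where the dimension is three, is exactly what a basis requires.

```python
import numpy as np

# The basis B from the lecture: e1, e1+e2, e1+e2+e3, one per row.
B = np.array([
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [1.0, 1.0, 1.0],
])

# Rank 3 means the three rows are linearly independent; since the
# count of vectors equals dim R^3 = 3, B is a basis of R^3.
rank_B = np.linalg.matrix_rank(B)
print(rank_B)  # 3
```

With the vectors written as columns instead of rows, the matrix is upper triangular, which is the echelon "staircase" shape the lecture points out; either way the rank is three.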
You can see the pivots have that downward staircase here. And so we can see that this set is in fact linearly independent. One thing I want you to notice here is that if you have an independent set whose size equals the dimension, then it's automatically going to be a basis. In other words, it's a maximal independent set, because we've learned that if you add too many vectors, you automatically lose independence. If I added a fourth vector here, the set would have to be linearly dependent. There's something magical about that number; that's the idea of dimension. A few things I want to mention about a basis. A basis is, of course, a linearly independent spanning set; that's how we define it. But one could also prove the following: a basis is a maximal linearly independent set. That is, it's a set to which no new vector can be added without forcing it to become linearly dependent. Like we've seen in F^3, any set of four vectors has to be dependent. So if you can find a linearly independent set of size three in F^3, then because the dimension is three, it has to be a basis. And this works in general: an independent set of size n in F^n is going to be a basis. And it also goes the other way around: a basis is a minimal spanning set. It turns out that if you want to span every vector in R^3, you need at least three vectors, and if you have more than three, then you could throw some out, right? You could prune it down. And so a basis is all of these things right here.
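To see the "maximal" claim in action, here's one more sketch of my own (the fourth vector below is an arbitrary choice, not from the lecture): stacking any fourth vector onto our basis of R^3 cannot raise the rank past three, so four vectors in R^3 are always dependent.

```python
import numpy as np

# Our basis of R^3 from before, one vector per row.
basis = np.array([
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [1.0, 1.0, 1.0],
])

# Any fourth vector whatsoever (this one chosen arbitrarily).
extra = np.array([[2.0, 5.0, -1.0]])
four = np.vstack([basis, extra])

# The rank of a matrix with 3 columns is at most 3, so four rows
# can never all be independent: 4 vectors, rank only 3 -> dependent.
rank = np.linalg.matrix_rank(four)
print(four.shape[0], rank)  # 4 3
```

Since the rank (3) is strictly less than the number of vectors (4), the enlarged set is linearly dependent, which is exactly why the basis is maximal.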