Welcome back to our lecture series, Linear Algebra Done Openly. As usual, I'll be your professor today, Dr. Andrew Missildine. In this section, 2.8 on coordinates, which is also the last section of our chapter two on the algebra and geometry of vectors, we're going to explore the idea of a basis a little bit more. A basis, remember, was a linearly independent spanning set for a vector space. In this situation, let's call our vector space V. So take a basis, for example, B, and let it be v1, v2, all the way up to vn. So we have n vectors in there. Now, a basis is two things. First of all, it's a spanning set. So take any vector x in V. Since B is a spanning set, there are scalars c1, c2, up to cn belonging to the field, such that x can be written as a linear combination of the vectors in the basis with those scalars. So x equals c1 v1 plus c2 v2, all the way up to cn vn. That's what it means to be a spanning set. And that kind of makes sense, right? Spanning is helpful because it means you can produce every vector in the subspace. But why is it so important that we have linear independence? Well, linear independence means there's only one way to express zero as a linear combination, namely the trivial one. Why would that be helpful? Well, let's see. Suppose, for example, we have another linear combination giving x using the vectors v1, v2, up to vn. So say we have some other coefficients: d1 times v1, plus d2 times v2, all the way up to dn times vn. Suppose that's a combination giving x as well. Well, let's subtract x from itself. If we take x minus x, on one hand it's just going to be zero. But on the other hand, we can express x as a linear combination of the v's in these two different ways, once using the coefficients c and once using the coefficients d.
When we subtract these things, we can combine like terms: add the v1 terms together, the v2 terms together, and so on up through the vn terms. If you combine like terms here, you're going to get (c1 minus d1) times v1, plus (c2 minus d2) times v2, all the way up to (cn minus dn) times vn. So notice what we now have in hand is a linear combination of our vectors v1, v2, up to vn that adds up to zero. Now, since our set is a basis, it is linearly independent, and that means each and every one of these coefficients has to be zero in this combination. But if they're all zero, that would mean ci minus di equals zero for each of the i's. And moving the di to the other side, we see that ci actually equals di. So what this tells us is that there is only one way to express x as a linear combination of the v's: you have to use the ci's, because any other combination that produces x must use the same coefficients. So the linear independence of the vectors gives us that linear combinations are unique. There's only one way to express x, and that one way is what we're going to call the coordinates of the vector. So again, suppose we have a basis v1, v2, up to vn for some vector space V, and let x be a vector in the vector space, so x belongs to V. We say that the coordinates of x relative to this basis B are the scalars c1, c2, up to cn such that x equals c1 v1 plus c2 v2, up to cn vn. In other words, because x belongs to the vector space, there is a unique way to express x as a linear combination of the basis: you have to use the coefficients c1, c2, up to cn. These are unique coefficients; there's only one way to do it. So there's no ambiguity in this description, and we call these the coordinates of x. Now, these coordinates we can arrange into their own vector: we take c1, then c2, all the way down to cn.
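To make the definition concrete, here's a quick sketch in Python with NumPy. The basis here is one I'm making up just for illustration, not one of our examples: v1 = (1, 1) and v2 = (1, -1) in R2. The point is that the coordinate vector is nothing more than the unique solution of a linear system whose columns are the basis vectors.

```python
import numpy as np

# Hypothetical basis B = {v1, v2} of R^2, chosen for illustration:
# v1 = (1, 1), v2 = (1, -1), and a vector x = (3, 1).
B = np.column_stack([[1, 1], [1, -1]])  # columns are v1 and v2
x = np.array([3, 1])

# The coordinate vector [x]_B is the unique solution c of B c = x,
# which exists and is unique because the columns of B form a basis.
c = np.linalg.solve(B, x)
print(c)  # the coordinates c1 and c2
```

Here the solver returns c1 = 2 and c2 = 1, and indeed 2(1, 1) + 1(1, -1) = (3, 1).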
Now, this column vector is a genuine element of F^n, and it's called the coordinate vector of x with respect to B. We'll denote it with brackets, x sub B. The brackets indicate that this thing is a column vector: even if the vectors in play aren't themselves column vectors, the coordinate vector will be a column vector, even if x wasn't. So for example, take the vectors v1 = (1, 0, 1) and v2 = (5, 2, 3), and let x be the vector (3, 2, 1). Let's take the basis B to be {v1, v2}, and we want B to be a basis for a subspace of R3, which we'll call W, namely the span of v1 and v2. Now, notice that v1 and v2 are linearly independent. If you look, for example, at the second component of these vectors, v1 has a zero and v2 has a two, so no scalar multiple of v1 can produce v2. With just two vectors, not being multiples of each other means the set is linearly independent. So we have an independent set, and it spans a subspace, which we're calling W. Therefore B is a basis for W. Now, the first thing we're going to determine is: does x belong to the subspace W? And if it does, how can we compute the coordinate vector of x relative to this basis B? And I want to mention why we put this word "relative" into play here. If you pick a different basis for the same subspace, the coordinate vector will change. It is dependent on the basis in play, which is why the term "relative" is necessary. So if x belongs to W, then there exist coefficients c1 and c2, real numbers, such that c1 v1 plus c2 v2 is equal to x. We have to determine: what is c1, what is c2? Now, be aware that this comes down to solving a system of linear equations: we take an augmented matrix whose coefficient matrix has the vectors of the basis B as its columns, augmented with the vector x, the vector we are trying to see is inside the span.
So if we care whether x is inside the span of the vectors in this basis B, we have to solve this linear system. We've done this kind of problem before. The first column of the coefficient matrix is just the first vector in the basis B. The second column is v2, the second element of the basis. And then we augment it with the vector we're trying to decide is in the span or not. And if you solve this by row reduction, just doing the usual row operations, and I'm not going to worry about the details of that, you'll find the following. We have two pivot positions, you get a row of zeros, which is perfectly hunky-dory, and you get that in this situation c1 equals negative two and c2 equals one. And so we can verify that fact: you can check that (3, 2, 1) is in fact equal to negative two times (1, 0, 1) plus (5, 2, 3). Let's just double-check that to make sure. If you take negative two times the first one, you're going to get (negative two, zero, negative two). You're going to add to that (5, 2, 3), and just double-checking: 5 take away 2 is a 3, 0 plus 2 is a 2, and 3 take away 2 is a 1. So that is in fact correct. So we've then found the coordinates of x with respect to B. The fact that this system was consistent tells us that the answer to the question is yes, x is in the span of B, therefore x is inside W. But in fact, the solution to that linear system, the negative two and the one, then gives you the coordinate vector of x relative to B.
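The row reduction above can be reproduced numerically. This is a sketch using NumPy's least-squares solver, which is one convenient way to handle a tall (3-by-2) coefficient matrix: a solution with zero residual means the system is consistent, i.e. x really does lie in W, and the solution itself is the coordinate vector.

```python
import numpy as np

# The lecture's example: v1 = (1, 0, 1), v2 = (5, 2, 3), x = (3, 2, 1).
A = np.column_stack([[1, 0, 1], [5, 2, 3]])  # columns are v1 and v2
x = np.array([3, 2, 1])

# The coefficient matrix is 3x2, so we use least squares. A (near-)zero
# residual tells us the system A c = x is consistent, so x is in
# W = span{v1, v2}, and c is the coordinate vector [x]_B.
c, residual, rank, _ = np.linalg.lstsq(A, x, rcond=None)
print(c)     # expected: (-2, 1), matching the row reduction
print(rank)  # rank 2 confirms v1 and v2 are linearly independent
```

The returned rank of 2 doubles as the independence check from earlier: two pivot positions, two independent columns.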
So we find this coordinate vector by solving the system, and the consistency of the system tells us that we are in fact in the subspace. If we just take a random vector from R3, it might not be obvious that it's in there, but by solving the system, we were able to figure that out. Now, I want to mention that this vector x lives in R3, but since it also lives in W, we can actually get away with describing it with just the two coordinates, negative two and one. And so I want to give you some geometric motivation for what's going on here. If we have a vector space W that's spanned by two independent vectors, essentially this object is a plane, and we might think of it as a plane that lives in three-dimensional space. It might be tilted; it might not be the xy-plane or something. It might sit at an angle through space, but we have these two vectors that live on this plane: our vectors v1 and v2, just as an example. And these vectors essentially determine a coordinate system. We could extend v1 to make the v1-axis, and we could extend v2 to make the v2-axis. And therefore every point in the plane we can describe as some part of v1 and some part of v2. We get these c1 and c2 values from the equation x equals c1 v1 plus c2 v2. So because the plane lives in R3, it's natural to express x as a vector with three numbers. But because it lives in a plane, why can't we just describe the vector as an element of that plane, for which we don't have to reference all of three-dimensional space? We just have to reference these coordinates: the coordinate along the v1-axis and the coordinate along the v2-axis. And so what I'm trying to say is that in this example, even though the vectors in W are vectors in R3, they're completely determined by their coordinate vectors, and the coordinate vector belongs to R2.
And thus there's a natural identification between the vectors of W and the vectors of R2. That identification is to associate x with its coordinate vector [x]_B. This mapping is in fact a one-to-one mapping, it is an onto mapping, and in fact, it is a linear transformation. So associating a vector to its coordinate vector is a one-to-one, onto linear transformation. Essentially, this means that the two spaces look the same. The space W, which we identify with a plane in three dimensions, is congruent to R2. That is, they're both planes, and we can attach a coordinate geometry to the plane W in much the same way that we talk about coordinates in the traditional R2. So we say that W is congruent to R2, or sometimes we say that they're isomorphic, isomorphic here meaning that they have the same shape. And that's the real benefit of coordinates: we can take any vector space and make it look like a standard vector space like R2, R3, or R4.
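We can spot-check the linearity claim numerically. This sketch builds the coordinate map x ↦ [x]_B for the lecture's basis and verifies that the coordinates of a sum are the sum of the coordinates. The second vector y and its coordinates (4, 0.5) are values I'm choosing just for the demonstration.

```python
import numpy as np

# The coordinate map x -> [x]_B from W to R^2, for the lecture's
# basis v1 = (1, 0, 1), v2 = (5, 2, 3).
A = np.column_stack([[1, 0, 1], [5, 2, 3]])

def coords(x):
    """Coordinate vector of x relative to B (assumes x is in W)."""
    return np.linalg.lstsq(A, x, rcond=None)[0]

# Two vectors in W, built directly from known coordinates:
x = A @ np.array([-2.0, 1.0])   # this is (3, 2, 1) from the example
y = A @ np.array([4.0, 0.5])    # hypothetical second vector in W

# Linearity: the coordinates of a sum equal the sum of the coordinates.
print(coords(x + y))            # should match coords(x) + coords(y)
```

The same check works for scalar multiples, which together with one-to-one and onto is exactly what makes this map an isomorphism between W and R2.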