In the previous video, we essentially found the following procedure for computing the change of basis matrix to B from C, where B and C are two bases for a vector space V. What we're going to do is create an augmented matrix in the following way. We take all the vectors from the B basis and put those on the left-hand side of our augmented matrix, and that's going to be our coefficient matrix, so to speak. Then we take all of the vectors from C and put those on the right-hand side. So we get this augmented matrix, but the right-hand side might have several columns instead of the single column we've often worked with. Then we row reduce, and the row reduction carries B into the standard basis for F^n right here. I should mention that the standard basis looks something like 1, 0, 0, however many zeros, then 0, 1, 0, et cetera, down to 0, 0, 1: ones along the diagonal, zeros everywhere else. That's the standard basis for F^n. B will transition into the standard basis because B is a basis. We've seen this before when checking whether a set of vectors is linearly independent or not: when you row reduce, it turns into the standard basis. There's a little caveat there, because you could get the standard basis sitting over a bunch of rows of zeros. So what I mean is, you could get something like 1, 0, 0, then 0, 1, 0, then 0, 0, 1, with rows of zeros underneath. But to be linearly independent we have to have a pivot in each column, and because of that you can zero out everything above and below each pivot. So you'll get something like this, where the top part is our standard basis E, and then maybe there are some zeros at the bottom.
That is a possibility here. So B will transition into the standard basis, and C is going to transition into this change of basis matrix, again possibly with a bunch of zeros on the bottom, which isn't such a big deal. If B is a basis, you're going to get those zeros on the left-hand side; if you don't get zeros, then B actually wasn't a basis. Now, if you didn't get matching zeros on the right-hand side, that would suggest that either C is not a basis or that B and C don't span the same thing. So any inconsistency really means that B and C are not bases for the same vector space. Since that's not going to happen here, we're going to get those zeros and the system will be consistent, and the change of basis matrix will just be the matrix on top. This further suggests why we denoted the change of basis matrix the way we did, with the C on the right and the B on the left and the arrow transitioning from right to left. I mentioned beforehand that this was a consequence of the matrix equation: the change of basis matrix to B from C multiplies C coordinates on the right and gives you B coordinates on the left. That's part of it, but it's also this formula right here. You take your old basis C and put it on the right; you take your new basis B and put it on the left; you row reduce this matrix, and it transitions into the change of basis matrix. So let's do an example of that. Let's take three vectors from R^4. The first set B will be B1 = (1, 1, -3, 0), B2 = (1, 0, -2, 4), and B3 = (3, 0, 0, -2). We're going to take another basis for that same vector space: C1 = (6, 3, -21, 26), C2 = (15, 5, -23, 12), and C3 = (3, 2, -8, 4).
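The whole recipe described above can be sketched in a few lines of code. Here is one way to do it with SymPy, whose `Matrix.rref` method performs the row reduction in exact arithmetic; the columns below are the B's and C's from the example:

```python
from sympy import Matrix

# Columns are the basis vectors from the example.
B = Matrix([[ 1,  1,  3],
            [ 1,  0,  0],
            [-3, -2,  0],
            [ 0,  4, -2]])
C = Matrix([[  6,  15,  3],
            [  3,   5,  2],
            [-21, -23, -8],
            [ 26,  12,  4]])

# Form the augmented matrix [B | C] and row reduce it.
aug = B.row_join(C)
rref_aug, pivot_cols = aug.rref()

# The left block reduces to the standard basis over a row of zeros,
# and the top rows of the right block give the change of basis matrix.
P = rref_aug[:3, 3:]
print(P)  # Matrix([[3, 5, 2], [6, 4, 1], [-1, 2, 0]])
```

This reproduces the change of basis matrix computed in the video, with the expected row of zeros underneath on both sides.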
It's not too difficult to show that both of these sets of vectors, the B's and the C's, are linearly independent, but it turns out that in the process of computing the change of basis matrix we'll actually show that; I'll point it out along the way. These vectors live in R^4, but since I only have three of them, they can't span all of R^4: each set spans a three-dimensional subspace of R^4. So how does one compute the change of basis matrix? We take the matrix B augmented with C, that is, the three vectors from B augmented with the three vectors from C, and we row reduce. I'm skipping over the steps right here, so let's see what happens. The three columns coming from B row reduce to give pivots in each and every column. This indicates that the set B1, B2, B3 is linearly independent, because we have a pivot in every column. Now, because we have three vectors in R^4, there was no way we were going to get a pivot in every row. So we do end up with this row of zeros, but that doesn't clash with the fact that the vectors are linearly independent. And if we call the subspace in question W, so that W is the span of B1, B2, B3, then by construction B is a spanning set for W. It's a linearly independent spanning set, so B is necessarily a basis for W. That was the case just by construction. Now, what happened as we row reduced? Well, like I said, we have this row of zeros on the left-hand side. That should always give us pause, because a row of zeros leaves open the possibility of inconsistency, right? But on the right-hand side, this row of zeros is matched up with a row of zeros, so there is actually no concern whatsoever.
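The linear independence claims can also be verified with a quick rank computation; this is just one convenient check, sketched with SymPy using the same example vectors:

```python
from sympy import Matrix

B = Matrix([[ 1,  1,  3],
            [ 1,  0,  0],
            [-3, -2,  0],
            [ 0,  4, -2]])
C = Matrix([[  6,  15,  3],
            [  3,   5,  2],
            [-21, -23, -8],
            [ 26,  12,  4]])

# Rank 3 means a pivot in every column, i.e. the columns
# of each matrix are linearly independent.
print(B.rank(), C.rank())    # 3 3

# Putting all six columns side by side doesn't raise the rank,
# so both sets span the same 3-dimensional subspace W of R^4.
print(B.row_join(C).rank())  # 3
```

So B and C really are two bases for one three-dimensional subspace of R^4, which is exactly the setup the procedure requires.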
We can ignore this row of zeros, and so in particular this system turned out to be consistent. Because the system is consistent, the columns on the right-hand side are the coordinate vectors we want. This first column, (3, 6, -1), is the coordinate vector for C1 in B coordinates. You don't put the zero at the bottom, because B only consists of three vectors, B1, B2, B3; we need the coefficients of B1, B2, B3, and the zero has no bearing here. Each coordinate vector contains only three entries. And just to do some more examples: the coordinate vector for C2 in B coordinates is the second column, (5, 4, 2), and the coordinate vector for C3 in B coordinates is (2, 1, 0), like we see in the matrix above. So we can describe each of these coordinate vectors, and when you put them together, ignoring the row of zeros, this right here gives us our change of basis matrix. The fact that this system was consistent showed us that C is in fact a subset of W: each of C1, C2, C3 is in W because of the consistency happening right here. And likewise, if we argued that the set C is linearly independent, then C would also be a basis, and this is of course the change of basis matrix in consideration here. So let's use this change of basis matrix. For example, take a vector X whose C coordinates are (2, -3, 4). Now, X is a vector in R^4, so there should be four coordinates, right? But that's with respect to the standard coordinates of R^4.
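One way to double-check the columns we just read off: multiplying B by the change of basis matrix should rebuild the C vectors, since each column of the product is a C vector reassembled from its B coordinates. A small SymPy sketch with the example data:

```python
from sympy import Matrix

B = Matrix([[ 1,  1,  3],
            [ 1,  0,  0],
            [-3, -2,  0],
            [ 0,  4, -2]])
C = Matrix([[  6,  15,  3],
            [  3,   5,  2],
            [-21, -23, -8],
            [ 26,  12,  4]])
P = Matrix([[ 3, 5, 2],
            [ 6, 4, 1],
            [-1, 2, 0]])  # change of basis matrix, to B from C

# Column j of B*P is C_j rebuilt from its B coordinates,
# so consistency of the system is equivalent to B*P == C.
assert B * P == C
```

For instance, the first column says C1 = 3·B1 + 6·B2 - 1·B3, which is exactly the coordinate vector (3, 6, -1) from the row reduction.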
And since this vector X belongs to this three-dimensional subspace, we only need three numbers to describe it, and if we use the C basis, those numbers will be 2, -3, and 4. How do we convert this into B coordinates? Well, to get B coordinates, we multiply the C coordinate vector of X by the change of basis matrix to B coordinates from C coordinates, which we found on the previous slide. This is just the usual matrix-vector multiplication. In a little more detail: the first row times the vector gives 3 times 2, which is 6, plus 5 times -3, which is -15, plus 2 times 4, which is 8; and 6 plus 8 is 14, take away 15 is -1. Next, the second row times the vector gives 6 times 2, which is 12, plus 4 times -3, which is -12, plus 1 times 4, which is 4; so that comes to 4, and that checks out. And for the last one, the third row times the vector gives -1 times 2, which is -2, plus 2 times -3, which is -6, plus 0 times 4, which is 0; so that gives us -8. Hence the calculation we saw right there. So what we see about our vector X is the following: in B coordinates, X looks like (-1, 4, -8). That means X is the linear combination -B1 + 4B2 - 8B3. But we also have the C coordinates of X, (2, -3, 4), which say that X is the linear combination 2C1 - 3C2 + 4C3. Both of those linear combinations produce X, just with different bases.
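The hand computation above can be reproduced in a couple of lines; this is the same matrix-vector product, done with SymPy:

```python
from sympy import Matrix

P = Matrix([[ 3, 5, 2],
            [ 6, 4, 1],
            [-1, 2, 0]])   # change of basis matrix, to B from C
x_C = Matrix([2, -3, 4])   # [x]_C, the C coordinates of X

x_B = P * x_C              # [x]_B, the B coordinates of X
print(x_B.T)  # Matrix([[-1, 4, -8]])
```

Multiplying C coordinates on the right by the change of basis matrix gives B coordinates, matching the (-1, 4, -8) worked out row by row in the video.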
Now, if we were to insert the original vectors B1, B2, B3 into this linear combination, we could compute that vector X as a vector in R^4. Same thing with the C vectors: we could plug those in and get X as a vector in R^4, and either way it comes out to (-21, -1, -5, 32). But the thing I'm trying to emphasize with these coordinate vectors is that, yes, X belongs to R^4, so we can describe X with four pieces of information. But because X belongs to the subspace, we can actually describe X with only three pieces of information: whether we choose B coordinates or C coordinates, we only need three numbers to describe this vector, even though it lives in four-dimensional space. So that's the issue at hand: do we really need all of that information to describe this vector? This idea of representing a vector with less information than the ambient space would suggest is sort of the basic idea behind data compression. When you zip a file on your computer, or compress and decompress images, it turns out that you might store the data in a more compact way, maybe using some type of coordinate system. But then to use it, you have to decompress it back into the larger vector, in R^4 perhaps. That takes a little bit of time, but you can store the information much more compactly, because you only need three numbers plus a basis. If you're only storing one vector, the basis is a lot of overhead. But if you're storing tens of thousands of vectors, then you actually gain efficiency by storing the basis once and keeping only the coordinate vectors. And that's why it sometimes takes time to load things.
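Both "decompressions" can be checked directly: applying each basis matrix to its coordinate vector should land on the same vector in R^4. A short SymPy check with the example data:

```python
from sympy import Matrix

B = Matrix([[ 1,  1,  3],
            [ 1,  0,  0],
            [-3, -2,  0],
            [ 0,  4, -2]])
C = Matrix([[  6,  15,  3],
            [  3,   5,  2],
            [-21, -23, -8],
            [ 26,  12,  4]])
x_B = Matrix([-1, 4, -8])  # [x]_B
x_C = Matrix([2, -3, 4])   # [x]_C

# Decompress each coordinate vector back into R^4.
assert B * x_B == C * x_C
print((B * x_B).T)  # Matrix([[-21, -1, -5, 32]])
```

Either three-number description, together with its stored basis, recovers the same four-entry vector, which is exactly the compression idea being described.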
Your computer has to recover all the data points by decompressing them, by solving these linear systems of equations. Coding theory is also based on this same basic idea. And so this brings us to the end of chapter two, and this idea of coordinates has very important applications, as I'm alluding to right now at the end of this video. In the next chapter, chapter three, we're going to transition from studying vectors to studying matrices. We've seen a lot of matrices in this chapter; this video itself was talking about the change of basis matrix, which we'll see again in the next chapter. So stay tuned for those videos. You should see links to them right now. Bye everyone.