Welcome back to our lecture series based on the textbook Linear Algebra Done Openly. As always, I am your professor, Dr. Andrew Misteldine. In this lecture we are in Section 6.5, entitled Similarity and Linear Transformations. At the time of recording this video, this is actually the final section of the textbook. By the open nature of the textbook, it is meant to expand, so it's very likely that in the future there will be more sections and videos accompanying them, but for now we've essentially reached the end of our series. What I want to do in this final section is make one more connection between linear transformations, one of the most fundamental concepts of linear algebra, and the idea of similarity that we've developed here in Chapter 6 alongside eigenvalues and eigenvectors.

So, a little bit of notation and a reminder of a couple of things. Imagine we have two vector spaces V and W, and let them be n-dimensional and m-dimensional, respectively. We'll say that the basis of V is the set B, which contains the vectors b1, b2, b3, up to bn, and the basis of W is called C, which contains the m vectors c1, c2, up to cm. Remember, the dimension of a vector space tells us how large a basis is. Now let's define a linear transformation T from the vector space V to the vector space W. V here is the domain, the space we're coming out of, and W is the codomain, the space we're targeting, the space we're moving into via this transformation.

Back in Chapter 2 we talked about coordinate vectors, and we've revisited this idea a couple of times since. We have dealt with the vector space F^n so much that we sometimes forget there are other vector spaces. V could be, say, a flat of some kind that passes through the origin, so still a set of column vectors, but vector spaces can really be anything: spaces of linear equations, spaces of matrices, spaces of functions, spaces of polynomials, just to name a few potential examples. In physics, vectors are arrows, and we might have a set of arrows as a force vector field, things like that. Well, whenever you have an n-dimensional vector space V, the coordinate map gives an isomorphism from V onto the space of column vectors F^n, and that's just because our basis has n elements, so the vectors of V can be put in one-to-one correspondence with column vectors. For example, if we take V to be the set of polynomials of degree at most two, we're describing things like ax^2 + bx + c. We can identify this polynomial with the column vector whose entries are c for the constant term, b for the linear term, and a for the quadratic term. This uses the idea that our vector space has the basis 1, x, x^2: every such polynomial is a linear combination of the monomials 1, x, and x^2, and so we can identify polynomials with vectors in that manner. We've talked about this many times within this series, and that's the perspective we have right now. So we have the coordinate mapping, which connects our vector space V with the space of column vectors F^n, and we do the same thing for W, putting it into C-coordinates.
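If you'd like to see that coordinate map in code, here is a tiny Python sketch. The helper names to_coords and from_coords are just placeholders of mine, not anything from the textbook; the point is simply that ax^2 + bx + c corresponds to the column vector (c, b, a) and back.

```python
def to_coords(poly):
    """B-coordinate map for P2 with basis {1, x, x^2}:
    poly = (a, b, c) meaning a*x^2 + b*x + c  ->  column vector [c, b, a]."""
    a, b, c = poly
    return [c, b, a]

def from_coords(v):
    """Inverse coordinate map: [c, b, a] back to the coefficients (a, b, c)."""
    c, b, a = v
    return (a, b, c)

# Example: 3x^2 + 5x - 2  <->  [-2, 5, 3]
print(to_coords((3, 5, -2)))    # [-2, 5, 3]
print(from_coords([-2, 5, 3]))  # (3, 5, -2)
```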
These coordinate mappings are linear transformations: one puts V into B-coordinates and the other puts W into C-coordinates. They are in fact isomorphisms, that is, invertible linear transformations (in particular they preserve dimension). So now consider the composite of these maps: take the inverse of the B-coordinate map, compose it with the linear transformation T, and compose that with the C-coordinate map. This gives us a map from F^n to F^m. You take a column vector, you turn it back into a vector in V, you apply the map associated to T, and then you put that output, that image, into C-coordinates. This composition gives a map from F^n to F^m. Well, every linear map from F^n to F^m has a matrix representation, which we typically call the standard matrix representation. So this map from F^n to F^m, the composition of these three maps, two of which are coordinate maps, has a standard matrix representation; call that matrix A.

The defining property of A is this: if you take any x in V and map it through T, you get T(x), which lands in W. If you then take the coordinate vector of T(x) in C-coordinates, it equals the matrix product of A with the coordinate vector of x in B-coordinates; that is, [T(x)]_C = A [x]_B. We call this matrix A the matrix representation of T relative to the bases B and C; we need a basis for the domain and a basis for the codomain. That is what we mean by the notation A = [T]_{C,B}, T relative to C and B. Be aware of the order of C and B: it reflects the mapping itself, with B on the right-hand side (the input coordinates) transitioning to C on the left-hand side (the output coordinates).

The following diagram can be very helpful for understanding this process. Think of x as your starting location in V. One possibility is to map from V into W using the transformation T, so that T(x) is a vector in W, and then use the C-coordinate map, which lands you at the coordinate vector [T(x)]_C. That's the end point. But we could have gone in a different direction: starting in V, take the B-coordinate vector first, and once you're in B-coordinates, multiply by the matrix A. That also gives you the coordinate vector [T(x)]_C. What this diagram is telling us is that it doesn't matter which path you take: going right and then down is the same as going down and then right. In algebra we often call this a commutative diagram; the order in which you go doesn't matter.

Now, that's what this matrix does, but how do we find the matrix representation? There is a formula for it: A is the matrix whose columns are the C-coordinate vectors of the images of the original basis B. You take the image T(b1), where b1 is the first basis element, and find its C-coordinates; then the image T(b2) and its C-coordinates; and so on up to T(bn) and its C-coordinates. That is, A = [ [T(b1)]_C  [T(b2)]_C  ...  [T(bn)]_C ], which is an m-by-n matrix.
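To make that column-by-column formula concrete, here is a small Python sketch using an example of my own, not one from the lecture: T is differentiation from P2 (degree at most two) to P1 (degree at most one), with bases B = {1, x, x^2} and C = {1, x}. The columns of A are the C-coordinates of T applied to each basis vector of B.

```python
import numpy as np

def diff_coords(b_coords):
    """Given [x]_B = (c0, c1, c2) for c0 + c1*x + c2*x^2,
    return [T(x)]_C = (c1, 2*c2), the C-coordinates of the derivative."""
    c0, c1, c2 = b_coords
    return np.array([c1, 2 * c2])

# Images of the basis vectors of B, already in C-coordinates, become the columns of A:
columns = [diff_coords(e) for e in np.eye(3)]   # e = [1,0,0], [0,1,0], [0,0,1]
A = np.column_stack(columns)
print(A)            # [[0. 1. 0.]
                    #  [0. 0. 2.]]

# Check the defining property [T(x)]_C = A [x]_B on p(x) = 3x^2 + 5x - 2:
xB = np.array([-2, 5, 3])
print(A @ xB)       # [5. 6.]  ->  the derivative is 5 + 6x, as expected
```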
To solve for each of these coordinate vectors, you row reduce: you take the matrix whose columns are the basis C, augment it with T(b1), and row reduce. Then you have to do the same thing for each of them individually: take C, augment it with T(b2), row reduce that one, and so on. But admittedly, you can handle all of them in tandem. Take your basis C and augment it with the images of all the B vectors, so you get a column for each of T(b1), T(b2), up to T(bn). If you row reduce that, it will reduce to something like the identity augmented with the matrix A you're looking for. I should warn you that there could be rows of zeros, because we're not claiming this is a square matrix; if rows of zeros pop up, they all sit at the bottom, and that's no big deal.

I want to show you an example of how something like this works, and what's kind of the dealio when it comes to this matrix representation. Consider the following linear transformation from R^4 to R^4, a map between four-dimensional spaces, where T takes the vector (x, y, z, w) and computes the various linear combinations of the coordinates shown here. It can very easily be shown that the standard matrix representation of this linear transformation T is the four-by-four matrix given here; take a look at it if you need to. Using the notation presented on the previous slide, the standard matrix we've seen in the past is the matrix representation relative to the standard basis in both slots, the basis we usually write with that calligraphic letter. So this is just the standard matrix of the linear transformation; we've seen this before, and it offers us nothing really new, but we do have this four-by-four matrix.

What I want to consider next is a subspace of R^4. Consider the basis B consisting of the two vectors (1, 2, 3, 4) and (1, 0, -1, 0). This spans a two-dimensional subspace of R^4, which we'll call V. Next take another basis, which we'll call C, containing the three linearly independent vectors (1, -1, 1, -1), (2, 0, 0, 1), and (3, -2, 1, 0). This forms a basis for W, a three-dimensional subspace of R^4. The way we've chosen these, our linear transformation T actually restricts to a linear transformation from V to W; that is, if we plug vectors from V into T, the output always lands in W. So we can ignore a lot of the information, and this same rule becomes a map from the subspace V to the subspace W. To see that, let's look at the images of the elements of B. Call the first one b1 and the second one b2. If we take the images of these two vectors under the map, we get that T(b1) is equal to (19, -6, 3, 8).
And T(b2) is equal to (3, -3, 0, -1). You can double check that if you take the four-by-four matrix and multiply it by each of these two basis vectors, you get exactly the two image vectors you see here. All right, so this gives us the images T(b1) and T(b2).

To compute the matrix representation of this map from V to W with respect to the bases B and C, we have to consider the matrix where we augment C with the images of the B vectors. Maybe I'll scooch it up just a tiny bit so we can fit it all on one screen. The first three columns are the vectors from the basis C: the first is (1, -1, 1, -1), the second is (2, 0, 0, 1), and the third is the same one as before; I'm not going to flip back, I don't want to give anyone whiplash right now. The two columns on the other side of the bar are the images of the original basis B under the linear transformation T. We row reduce this, which sends us from here to here; I'll skip the details. Like I said, since we don't have a square matrix you might get a row of zeros, but that's not a big deal; it just goes on the bottom. We will have three pivots because the codomain is three-dimensional, and those are the three pivots right there. The entries to the right of the bar give us our coordinate vectors. Notice that the C-coordinate vector of T(b1) is exactly the first column to the right of the line, (0, 5, 3), and the C-coordinate vector of T(b2) is (-2, -2, 1); you can ignore the row of zeros.

Putting these vectors together, the matrix representation of T relative to B and C is not a four-by-four matrix; it is just a three-by-two matrix whose columns are those two coordinate vectors. If you multiply it on the right by any vector in B-coordinates, it produces the image of that vector, automatically in C-coordinates. So we are putting our matrix into coordinates, just like we can put vectors into coordinates.
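If you want to replay that tandem row reduction yourself, here is a minimal sympy sketch. The C columns are the basis above; the two image columns are stand-in values of my own, chosen so that they really do lie in the span of C and reproduce the coordinate vectors (0, 5, 3) and (-2, -2, 1) quoted above, so check them against the actual slide rather than taking them as the lecture's exact numbers.

```python
from sympy import Matrix

# Tandem row-reduction recipe: rref of [C | T(b1) T(b2)] gives [I | A] plus zero rows.
C = Matrix([[ 1,  2,  3],
            [-1,  0, -2],
            [ 1,  0,  1],
            [-1,  1,  0]])          # columns are c1, c2, c3

Tb1 = Matrix([19, -6, 3, 5])        # stand-in for T(b1), assumed to lie in span(C)
Tb2 = Matrix([-3,  0, -1, 0])       # stand-in for T(b2), assumed to lie in span(C)

augmented = C.row_join(Tb1).row_join(Tb2)
rref, pivots = augmented.rref()
print(rref)
# Matrix([[1, 0, 0, 0, -2],
#         [0, 1, 0, 5, -2],
#         [0, 0, 1, 3,  1],
#         [0, 0, 0, 0,  0]])

A = rref[:3, 3:]                    # the 3-by-2 matrix representation of T relative to B and C
print(A)                            # Matrix([[0, -2], [5, -2], [3, 1]])
```

The slice rref[:3, 3:] just reads off the columns to the right of the bar, above the row of zeros.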
One last comment I want to make about this example. Notice that this matrix representation is a three-by-two matrix, which amounts to six pieces of information: the six numbers you see here. But the original matrix was four-by-four, which means it carries sixteen pieces of information. If you want the full map from R^4 to R^4 you need all sixteen of those numbers, but if you only want the map from V to W you need just six, so ten of those entries are unnecessary for that purpose, and we can get away with only six. The principle here is that it is the same linear transformation, but we are able, in some regard, to compress the information by using a different matrix representation.

This is actually really important for data compression. Imagine, for example, that a matrix represents a picture, an array of colors, where colors are identified with numbers, maybe by a hexadecimal scheme or something like that. If we think of a picture as an array of numbers, we can potentially compress that photo by using a smaller matrix that represents the same linear transformation; we are identifying the photo with a linear transformation. The same principle works in coding theory as well: we might not need all sixteen pieces of information, we might be able to compress down to six, and that redundancy can be used to help with error correction and error detection. There are some other interesting applications along those lines as well.
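As a quick illustration of the "six numbers suffice" point, here is a continuation of the sketch above. Given the two bases and the three-by-two matrix A (built from the coordinate vectors quoted earlier, so again treat the specific numbers as illustrative), you can reconstruct T(v) for any v in V via T(v) = C (A [v]_B), without ever touching the full four-by-four matrix.

```python
from sympy import Matrix

B = Matrix([[1,  1],
            [2,  0],
            [3, -1],
            [4,  0]])               # columns are b1, b2 (basis of V)
C = Matrix([[ 1,  2,  3],
            [-1,  0, -2],
            [ 1,  0,  1],
            [-1,  1,  0]])          # columns are c1, c2, c3 (basis of W)
A = Matrix([[0, -2],
            [5, -2],
            [3,  1]])               # the six stored numbers: T relative to B and C

v_B = Matrix([2, -1])               # a vector of V in B-coordinates: v = 2*b1 - b2
v   = B * v_B                       # the same vector written out in R^4
Tv  = C * (A * v_B)                 # its image under T, reconstructed from A alone
print(v, Tv)
```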