Welcome back to our lecture series, Linear Algebra Done Openly. As usual, I'm your professor, Dr. Andrew Misseldine. As we have seen before, linear transformations are those maps between vector spaces which preserve the vector structure; that is, they preserve vector addition and scalar multiplication. We've seen that matrix multiplication is in fact a linear transformation, hence all the geometric transformations in the previous lecture were linear transformations. In Section 3.7, we will now take a look at the fact that all linear transformations are essentially just matrix multiplication. In fact, the way we defined matrix multiplication at the beginning of this chapter had the sole purpose of making matrix multiplication coincide with linear transformations. In other words, matrices are just linear transformations in coordinates.

Let's see an example to illustrate the point. Suppose we have a linear transformation T which goes from R^2 to R^3. Take the standard basis for R^2, the vectors e1 = (1, 0) and e2 = (0, 1), and suppose we know what the linear transformation T does to this standard basis. Let's say that T(e1) = (1, 2, 3) and T(e2) = (3, -5, 0). It turns out that with this little bit of information, we actually have enough to know exactly what T will do to every vector. How do we know that? Well, because e1 and e2 form a basis for R^2, every vector in R^2 can be expressed as a linear combination of e1 and e2. But wait: linear transformations preserve linear combinations. That is, a linear transformation sends a linear combination to a linear combination, and those two ingredients together make a delicious pie. That is to say, we know exactly what this transformation will do to any vector.

Let's take a generic vector x in R^2. Since it's in R^2, it has two coordinates, x1 and x2, and it can be decomposed as the linear combination x = x1 e1 + x2 e2. You can convince yourself of that very quickly. Now, since T is a linear transformation, when we take T(x), we can replace x with the linear combination x1 e1 + x2 e2, and this combination can be broken up in the following way. The sum inside of a linear transformation becomes a sum outside of it, because linear transformations preserve addition, and each of the scalars can be brought out of the linear transformation as well, giving us x1 T(e1) + x2 T(e2). But wait a second: we know what T(e1) is. That was the vector (1, 2, 3). And we also know what T does to e2. That was the vector (3, -5, 0). So we actually know what's going to happen to the vector x: it's going to be x1 (1, 2, 3) + x2 (3, -5, 0). And if we add these vectors together, we get the following formula: T(x) gives you x1 + 3x2 for the first coordinate, 2x1 - 5x2 for the second coordinate, and 3x1 for the third coordinate. So we can actually get a formula for the linear transformation, the type of formula we've seen many times. But wait a second: this linear combination can actually be written as a matrix-vector product. It's the product of the matrix whose first column is (1, 2, 3) and whose second column is (3, -5, 0), times the vector x. This matrix-vector product is identical to the linear transformation formula we just found.
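If you'd like to check this example numerically, here is a minimal NumPy sketch; the function name T and the test vector are illustrative choices of mine, not anything from the lecture.

```python
import numpy as np

# Matrix whose columns are T(e1) = (1, 2, 3) and T(e2) = (3, -5, 0)
A = np.array([[1,  3],
              [2, -5],
              [3,  0]])

def T(x):
    # The formula derived above: T(x) = (x1 + 3*x2, 2*x1 - 5*x2, 3*x1)
    x1, x2 = x
    return np.array([x1 + 3*x2, 2*x1 - 5*x2, 3*x1])

x = np.array([4, -1])    # any vector in R^2
print(T(x))              # [ 1 13 12]
print(A @ x)             # [ 1 13 12] -- the same thing
```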
So what we've done here is show that this formula is actually equal to matrix multiplication. And this isn't just a coincidence; this works for every single linear transformation there is, by the same basic argument. Let's write the general principle. Let e_i be the vector in F^n which has a 1 in its i-th position and 0s everywhere else; together these vectors form the standard basis for F^n. Then let T be any linear transformation from F^n to F^m, and take the m x n matrix A whose first column is T(e1), whose second column is T(e2), whose third column is T(e3), and so on down to T(en). So the column vectors of A are the images of the standard basis under the map T. This gives us a matrix, and we claim that evaluating the linear transformation at the vector x is identical to multiplying the vector x by this matrix. The matrix A is referred to as the standard matrix representation of the linear transformation T, and oftentimes it is denoted [T]. We use brackets to describe matrices, so putting the transformation inside the brackets indicates that we're taking the matrix that represents this linear transformation.

Let's see an example of this. Take the linear transformation T(x1, x2) = (3x1 + x2, 2x1 - 4x2). Just so you're aware, this is a linear transformation from F^2 to F^2; we didn't really specify the scalars here, so we'll keep it simple and say it goes from R^2 to R^2. We want to know what this transformation does to e1. Well, e1 is the vector with a 1 in the first coordinate and a 0 in the second coordinate, so we plug in 1 for x1 and 0 for x2. Writing the result as a column vector, we get 3(1) + 0 for the first coordinate and 2(1) - 4(0) for the second coordinate, which simplifies to (3, 2). If we do this for T(e2), same basic idea: we're taking T(0, 1), which gives us 3(0) + 1 and 2(0) - 4(1), that is, (1, -4).

So let's finish up the situation here. The standard matrix representation of T is the matrix whose first column coincides with the image of e1, giving us 3 and 2, and whose second column coincides with the image of e2, which is 1 and -4. And so multiplication by this matrix A is the same thing as applying the linear transformation. Let's verify that: if you take the matrix with rows (3, 1) and (2, -4) and multiply it by a generic vector x = (x1, x2), notice you end up with 3x1 + x2 and 2x1 - 4x2. This is the exact same formula we had before, right?
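As a sanity check, here is a small NumPy sketch that builds the standard matrix column by column by feeding the standard basis vectors through T, exactly as the general principle says; the variable names are my own for illustration.

```python
import numpy as np

def T(x):
    # T(x1, x2) = (3*x1 + x2, 2*x1 - 4*x2)
    x1, x2 = x
    return np.array([3*x1 + x2, 2*x1 - 4*x2])

n = 2
I = np.eye(n)                    # the columns of the identity are e1, ..., en
A = np.column_stack([T(I[:, i]) for i in range(n)])
print(A)
# [[ 3.  1.]
#  [ 2. -4.]]

x = np.array([5.0, 7.0])
print(T(x), A @ x)               # both print [ 22. -18.]
```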
Although this one I've written inline and the other one as a column vector, it's the exact same formula. And when you write your formula like this, if you line up your variables so that x1 comes first and x2 comes second, then it's very easy to see what the transformation matrix is going to be: you get 3, 2 for the first column and 1, -4 for the second. So if you line up your variables, you can actually see the matrix representation inside of the formula.

Now another really neat thing about matrix representations of linear transformations is the following; this is critical right here. Remember what it means to compose two functions, which you might remember from college algebra or calculus: you put one function inside of the other. I like to think of it as conveyor-belt machines. Say we have some machine S and some other machine T. T takes an input, some vector x; x goes inside the machine and gets processed, and the machine spits out T(x). But then the output of the first machine becomes the input of the second machine: it gets processed by the machine S, which spits out the element S(T(x)). The tandem of these two machines put together is what we call S composed with T, and its output is (S o T)(x). Okay? That's what function composition is all about: S o T just means you apply the function T before S, putting T's output inside S. So you can think of it in terms of this analogy with machines on a conveyor belt. There's actually a link in the course materials if you want a little more detail on function composition.
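Here is that conveyor-belt picture as a minimal Python sketch; the helper compose and the toy machines are illustrative names I'm introducing, not anything from the lecture.

```python
def compose(S, T):
    # (S o T)(x) = S(T(x)): machine T runs first, and its output
    # becomes the input of machine S.
    return lambda x: S(T(x))

# Toy machines on the conveyor belt:
T = lambda x: x + 1        # first machine
S = lambda y: 2 * y        # second machine
S_of_T = compose(S, T)
print(S_of_T(3))           # 2 * (3 + 1) = 8
```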
What I want to do now is connect this to matrix multiplication. Suppose we have two linear transformations: S is a transformation from F^m to F^p, and T goes from F^n to F^m. Think of it this way: you start with the vector space F^n; T transforms it into the vector space F^m by some type of manipulation of the geometry; and then S translates F^m into F^p. When you put these processes together, the composite function S o T goes from F^n to F^p; it's the tandem of the two processes.

What does this have to do with matrix representations? Suppose A is the standard matrix representation of S, so A = [S], and suppose B is the standard matrix representation of T, so B = [T]. Then it turns out that when you compose the two linear transformations, the standard matrix of the composition S o T is the product AB. In other words, composition of linear transformations coincides with matrix multiplication. The argument is actually quite simple and basically looks like the following. T(x) is the same thing as Bx, and S applied to some vector y is the same thing as Ay; each transformation acts like its matrix.

So what happens when we do these things together and take S(T(x))? Well, S(T(x)) means you do T(x) before S, and T(x) is the same thing as Bx: applying the transformation T is the same thing as multiplying by the matrix B. And what does S do to Bx? Bx is just our vector y, and applying the transformation S to a vector is the same thing as multiplying by the matrix A. So you end up with A(Bx). And since matrix multiplication is associative, this is equal to (AB)x. So performing the composite of the functions S and T is the same thing as multiplying by the product of their matrix representations, AB. We designed matrix multiplication to be this representation of linear transformations precisely so that multiplying by the matrix is the same as applying the function, and multiplying two matrices together is the same as composing the two linear transformations.
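To see [S o T] = AB numerically, here is a quick NumPy check; the dimensions and the random integer matrices are just an illustrative choice, assuming S and T act on real coordinate spaces.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(2, 3))   # standard matrix of S : R^3 -> R^2
B = rng.integers(-5, 6, size=(3, 4))   # standard matrix of T : R^4 -> R^3

S = lambda y: A @ y                    # applying S is multiplying by A
T = lambda x: B @ x                    # applying T is multiplying by B

x = rng.integers(-5, 6, size=4)
print(S(T(x)))        # apply T first, then S
print((A @ B) @ x)    # multiply once by the product AB -- same result
```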