So we've already learned that we can use matrices to encode a system of linear equations, and by row reducing that matrix we can learn about the solution set: consistency, where the pivots are, things like that. But we've also introduced matrix multiplication between a matrix and a vector, and in some regard this matrix multiplication encodes linear combinations, which admittedly are equivalent to the linear systems we saw before. But in all reality, the reason we care so much about matrices in linear algebra is that matrices encode linear transformations. Let me explain how that works. Given a matrix A, which we'll say is m by n, so it has m rows and n columns, take your favorite vector x inside of F^n. Then the product A times x is well-defined, and using this matrix multiplication we can define what we call a matrix transformation. By transformation we just mean a function: a rule that assigns to any input vector an output vector. So we associate to the vector x the matrix product A times x, and this gives us a function which we call a matrix transformation. Now, as this course is on linear algebra, for the most part we probably don't care about transformations unless they're linear transformations. And it turns out that multiplying by a matrix is in fact a linear transformation. Because recall, to be a linear transformation, we need the following properties: for all vectors x and y inside of our vector space F^n, it must be true that T(x + y) is the same thing as T(x) + T(y), so the transformation must preserve vector addition. We also need that T(cx), so if you scale x by c, is the same thing as c times T(x). Because remember, T(x) is itself just going to be a vector; it'll live inside of F^m.
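As an illustrative sketch (not part of the lecture itself), here is the rule x maps to Ax written as an actual function in Python with NumPy; the particular 3 by 2 matrix is just an example choice:

```python
import numpy as np

# An example 3x2 matrix: it sends vectors in R^2 to vectors in R^3.
A = np.array([[0.0, -2.0],
              [1.0, -3.0],
              [2.0, -3.0]])

def T(x):
    """The matrix transformation induced by A: T(x) = A x."""
    return A @ x

# Evaluating the transformation at a vector in R^2 yields a vector in R^3.
image = T(np.array([3.0, -1.0]))
```

The point is just that a fixed matrix A determines a function: every input vector x in R^2 is assigned the output vector Ax in R^3.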
And so a linear transformation preserves vector addition and scalar multiplication. Is matrix multiplication going to do that? Take two vectors, let's this time call them u and v, and take a scalar c. Notice that A times (u + v) is equal to Au + Av; after all, this is matrix multiplication, and we can distribute the matrix factor across the vector sum, getting Au and Av. So by the distributive property of matrix multiplication (we'll talk about the properties of matrix multiplication in greater detail in chapter three), we can distribute across the vector sum, and therefore multiplication by A preserves vector sums. And likewise, when it comes to A times cu, well, this is just a product of things: we can factor the scalar out and get c times Au. So matrix multiplication preserves scalar multiples as well. This tells us that matrix multiplication does in fact induce a linear transformation: every matrix transformation is actually a linear transformation. And what's going to be impressive for us is that later in this series we will see that every linear transformation is actually just matrix multiplication, with the right perspective. But that's getting a little ahead of ourselves. Let's just play around with this matrix transformation in a quick example. Consider the three by two matrix you see right here, with rows (0, -2), (1, -3), (2, -3). This induces a transformation from R^2 to R^3. Notice, this is a little bit backwards: we write the matrix as three by two, so three rows and two columns, but it induces a map from R^2 to R^3. It kind of gets swapped around, and that's because the matrix A has to be multiplied by a vector in R^2, since it has two columns.
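To make the two preservation properties concrete, here is a quick numerical spot-check in Python with NumPy, using the three by two matrix above; the particular vectors u, v and scalar c are arbitrary choices, not from the lecture:

```python
import numpy as np

A = np.array([[0, -2],
              [1, -3],
              [2, -3]])

# Arbitrary test vectors in R^2 and an arbitrary scalar.
u = np.array([3, -1])
v = np.array([4, 2])
c = 7

# A(u + v) = Au + Av: multiplication distributes over the vector sum.
preserves_sums = np.array_equal(A @ (u + v), A @ u + A @ v)

# A(cu) = c(Au): the scalar factors out of the product.
preserves_scaling = np.array_equal(A @ (c * u), c * (A @ u))
```

Of course, checking one choice of u, v, and c isn't a proof; the distributive and scalar-factoring properties hold for every choice, which is exactly what the argument above establishes.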
And then when you're done, you'll get a vector with three entries, which is in R^3. So this is the matrix transformation induced by A: T(x) = Ax. We get the formula right here: multiplying by A gives us the matrix transformation. So suppose we want something like T(u), where u is the vector (3, -1). That means we multiply by the matrix A, which we see above, with rows (0, -2), (1, -3), (2, -3), times the vector (3, -1). What we've learned about matrix multiplication is that when you multiply a matrix by a vector, you get a linear combination: a linear combination of the column vectors, scaled by the entries of the vector. So this gives us 3 times (0, 1, 2) plus (-1) times (-2, -3, -3). Simplifying: multiplying the first column by 3 gives (0, 3, 6), and multiplying the second by -1 gives (2, 3, 3). Combining terms, that is, combining components, you get 0 + 2, which is 2; then 3 + 3, which is 6; and 6 + 3, which is 9. So (2, 6, 9) is the evaluation: the vector u got mapped to the vector (2, 6, 9). Now, one thing I want to mention here is what happens if you do this generally with this matrix transformation. You end up with x1 times the vector (0, 1, 2), plus x2 times the vector (-2, -3, -3). And when you combine those together, the first entry you get is 0·x1 - 2·x2, and the next is x1 - 3·x2.
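The column-combination evaluation we just did by hand can be mirrored directly in code; a small NumPy sketch of T(3, -1) computed as 3 times the first column plus -1 times the second:

```python
import numpy as np

A = np.array([[0, -2],
              [1, -3],
              [2, -3]])
u = np.array([3, -1])

# Build Au explicitly as a linear combination of the columns of A,
# scaled by the entries of u.
col0, col1 = A[:, 0], A[:, 1]        # (0, 1, 2) and (-2, -3, -3)
combo = u[0] * col0 + u[1] * col1    # 3*(0,1,2) + (-1)*(-2,-3,-3)

# NumPy's built-in matrix-vector product agrees with the hand computation.
agrees = np.array_equal(combo, A @ u)
```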
And then lastly, you get 2·x1 - 3·x2. So you can see right here we get a formula very similar to how we described linear transformations back in chapter one. Didn't I tell you that a linear transformation would be formed by writing a vector whose components are linear combinations of the original variables? Well, that looks like exactly what we've got going on right here. This idea that matrix-vector multiplication gives a linear combination of the column vectors produces exactly that. In the future, we'll show how to reverse this process: given a linear transformation, how do we write it as a matrix? But while we're on matrix multiplication, I do want to mention a general principle. Suppose you have a general matrix with first row a11, a12, up to a1n; then a21, a22, up to a2n; and this continues on until you end with am1, am2, up to amn. And take the vector (x1, x2, ..., xn). As you write out the linear combination, you end up with x1 times the first column (a11, a21, all the way down to am1), plus x2 times the second column (a12, a22, all the way down to am2), and so on, with the last term being xn times (a1n, a2n, all the way down to amn). So when you do the multiplication, you get this linear combination. But then what happens when you carry out these scalar products? You multiply the first column by x1, the next by x2, the last by xn. Putting that together, the first entry, written the way we usually do for a linear system, is a11·x1 + a12·x2, continuing on to a1n·xn.
That's the first entry. Then the second entry is a21·x1 + a22·x2, all the way to a2n·xn. This continues down, and you're left with the last entry, am1·x1 + am2·x2, all the way to amn·xn. So when you look at the final product here, you get a single vector where, if you look at the first entry for a moment, what you do is take all of the elements in the first row of the matrix A, multiply them by the corresponding entries of the vector x, and add them up. Then for the second entry, you take all of the entries in the second row of the matrix, multiply them by the entries of the vector x, and add them together. And you proceed to do this for each entry. So if we redo the original problem one more time, you'd see the following. Take the first row times the vector: you end up with an entry that looks like 0·3 - 2·(-1). Then the second row times the vector, the row times the column, gives 1·3 + (-3)·(-1). And the last one, the third row times the column (when you multiply a row by a column), gives 2·3 + (-3)·(-1). As you simplify these, you get the same values as before: the first entry is 0 + 2, which is 2; the second is 3 + 3, which is 6.
And then the last one gives 6 + 3, which is 9. So this gives you an alternative way of computing matrix-vector multiplication: it jumps over the middle step of the linear combination, and you're basically just taking rows times columns. Some people like this approach, and by all means, feel free to use it as you do these matrix-vector multiplications. We'll do more and more of them when we start doing general matrix multiplication, and it's just plain convenient right here and now. So we saw how to evaluate a matrix transformation, but we often have to go the other way around. What if we have something in the codomain? Can we determine whether that vector is in the range of the function? Is there some vector x that maps onto the vector b = (-8, -7, -2)? So we're trying to solve the equation T(x) = b. But be aware that's the same thing as solving the equation Ax = b, which is the same thing as determining whether b is inside the column space of the matrix. That's what we basically have to do right here. With this matrix transformation, if our vector b is in fact in the image of the function, that means there has to be some vector x which, when multiplied by the matrix A, gives us the vector b. This matrix equation is a linear system, and b will be in the column space exactly when this system is consistent. That is what we have to determine. So we convert over to the associated augmented matrix, where the coefficient matrix is just the matrix of the transformation, and the vector in question, the one we're asking about, is the augmented column. To start solving the system of equations, our pivot position is going to be the one-one spot. That's a zero, and we need something nonzero, so I'm just going to swap in the second row, since there's a one right there.
And so now our pivot position has a one in it. To get rid of the two in the bottom row, we'll take row three minus two times row one: so we add minus two, plus six, and plus fourteen to its entries. Modifying the bottom row, we get zero, then minus three plus six is three, and minus two plus fourteen is twelve. That finishes the first column, so our pivot position moves to the two-two spot. I notice that everything in the second row is divisible by negative two, so I'm going to scale by negative one-half: negative one-half times row two. And while I'm at it, everything in the third row is a multiple of three, so we'll divide that row by three. Looking at the next matrix down, my second row is zero, one, four, but that's also my third row, so this is going to be easy: row three minus row two. That gives us the matrix we see right here. You do have a row of zeros, but that's no problem for consistency. So far, so good. I should mention that this matrix is in echelon form, and because it's in echelon form and there are no contradictions, the system in question is consistent. That means the vector in play here is inside the column space of the matrix A: it turns out this vector is in the image of the linear transformation we're considering. But what vector maps onto it? We have to continue solving the system to see. Noticing that we only need one more step to reach reduced row echelon form, let's finish it: take row one and add three times row two, so we add a three, making that entry zero, and we add a twelve, making the augmented entry five. And now we see the RREF of our matrix right here. So we should set x1 equal to five and x2 equal to four, and what we claim is that T of the vector (5, 4) will do exactly what we need it to do. And we can verify this, right?
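As a computational cross-check (not the lecture's hand row-reduction), we can ask NumPy whether b is in the column space by comparing ranks, since appending b as an extra column leaves the rank unchanged exactly when b is a combination of the columns, and then recover the solution with a least-squares solve:

```python
import numpy as np

A = np.array([[0.0, -2.0],
              [1.0, -3.0],
              [2.0, -3.0]])
b = np.array([-8.0, -7.0, -2.0])

# b lies in the column space of A exactly when appending it as an
# extra column does not increase the rank.
augmented = np.column_stack([A, b])
consistent = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(augmented)

# Least squares returns the best-fitting x; for a consistent system
# with independent columns, that x solves Ax = b exactly.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

This matches the hand computation: the system is consistent and the solution is x = (5, 4).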
What was our matrix again? It had rows (0, -2), (1, -3), (2, -3). We multiply this by the vector (5, 4). Taking the first row times the column gives us 0 - 8. The second row times the column gives us 5 - 12. The third row times the column gives us 10 - 12. Simplifying, we get -8, we get -7, and we get -2, which, in case we had forgotten, was in fact the vector b we were trying to show is inside the column space of this matrix, that is, inside the image of the function. So it turns out we can consider all the same kinds of questions with a matrix transformation that we did with linear transformations before, and the fact that we have a matrix makes life a little bit easier for us. We'll get some practice with these matrix transformations in this section. Thank you for watching. That brings us to the end of section 2.2. In the next section we'll talk about linear independence. I hope you'll take a look at that video. See you, everyone.