One of the important uses of linear algebra is in what are called linear transformations. Suppose I have a function T that takes vectors in some set A to vectors in some set B. We say that T is a linear transformation if, for all vectors U and V, and for all scalars C, the following two statements are always true. First, whenever we apply T to the sum of two vectors U plus V, we get T applied to U plus T applied to V. In other words, our transformation transforms sums into sums. Next, if we apply T to a scalar multiple of U, that scalar multiplier can be pulled out, and what we'll obtain is C times the transformation applied to U. In other words, linear transformations turn scalar multiples into scalar multiples.

Now since T is a function, we typically represent it using function notation. However, an alternative is to use what's called operator notation, and that's just function notation without the parentheses. So by Tx equals y, we mean the transformation T applied to the vector x gives us the vector y.

So let's consider the following problem. We want to prove, or possibly disprove, that a given transformation is a linear transformation. So we'll check our requirements for what makes a linear transformation. The first requirement is that a linear transformation must take a sum of vectors into the sum of the transformed vectors. It's often convenient to proceed in the following manner. What I'd really like to happen is that T applied to the sum U plus V will give me T applied to U plus T applied to V. So I have this starting point, T applied to U plus V, and I have this ending point, T of U plus T of V. And what I want to do is to see if I can build a mathematical bridge from the starting point to the ending point. So the question you should always ask yourself is, self, what do I know? We might begin here at our starting point, and we know that our transformation in this problem takes whatever vector we give it and multiplies it by 3. In other words, T applied to any vector x is 3x.
So we'll take the vector that we're given, U plus V, and multiply it by 3. Now since U and V are vectors, we know that we can distribute this scalar multiple 3 over both of them, so this becomes 3U plus 3V. And again, we ask ourselves, self, what do you know? We know that 3U is T of U, and likewise 3V is T of V. And what this means is that once we get to this point, we can immediately conclude the last statement. And because we've maintained equality throughout, we know that T applied to U plus V is therefore equal to T applied to U plus T applied to V. And so we have the first requirement of a linear transformation. And just like getting change, as long as the first couple of bills are correct, we know that all the rest are going to be there as well. And if you believe that, I'd like to make change for you someday.

We have to check the next requirement of a linear transformation, which is that T applied to a scalar multiple of U has to be that scalar multiple times the transformation of U. So again, let's write down what we want to start with: T applied to a scalar C times our vector U. That's our starting point. And our destination is C times the transformation applied to U. As before, I know that T applied to any vector is 3 times that vector, so T applied to CU is 3 times CU. Again, since U is a vector and 3 and C are scalars, we know we can change the order of the scalar multipliers, and this gives us C times the vector 3U. But we know that the vector 3U is the same as T of U. And so we go from our starting point T of CU to our destination C times T of U. And because we've maintained equality throughout, that gives us our second requirement for a linear transformation. And so, since everything we need is in fact present, we can conclude that T is in fact a linear transformation.

Let's take a look at another problem where we have a different transformation.
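The two requirements we just verified by hand can also be checked numerically. Here's a minimal Python sketch, assuming the lecture's example map T of x equals 3x acting on vectors with three components; the helper names `add` and `scale` are mine, not from the lecture.

```python
def T(x):
    """The lecture's example map: multiply the input vector by 3, componentwise."""
    return tuple(3 * xi for xi in x)

def add(x, y):
    """Vector addition, componentwise."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def scale(c, x):
    """Scalar multiplication of a vector."""
    return tuple(c * xi for xi in x)

u, v, c = (1, -2, 5), (4, 0, -3), 7

# First requirement: T(u + v) == T(u) + T(v)
assert T(add(u, v)) == add(T(u), T(v))

# Second requirement: T(c*u) == c * T(u)
assert T(scale(c, u)) == scale(c, T(u))
```

Of course, passing these checks for one choice of U, V, and C is evidence, not proof; the bridge-building argument above is what establishes linearity for every choice.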
This time, T applied to any vector x gives us Mx plus B. So again, we have our starting point, T of the vector sum U plus V, and we have our desired ending point, T of U plus T of V. And it may be helpful to work one step backwards from our ending point. We know what T does to any vector x, so we know that T of U is MU plus B, and likewise T of V is MV plus B. So T of U plus T of V is MU plus B plus MV plus B, and that's the same as MU plus MV plus 2B. Now we'll apply our transformation to the vector sum U plus V, and that's going to be M times the quantity U plus V, plus B. Rearranging things a little bit, that's MU plus MV plus B. And we'd need that to equal MU plus MV plus 2B. Wait a minute. Unless B is the zero vector, B is not equal to 2B, so we in fact have an inequality here. And that inequality means we can start with T of U plus V, but we can only get so far. After this point, the bridge is out, and we are not able to get to our destination. We fail the first requirement of being a linear transformation. And as soon as we fail one requirement, the others are irrelevant. Whatever this thing is, it is not a linear transformation.

Why do we care? Well, one of the nice things about a linear transformation is that once you know what the transformation does to a couple of vectors, you can figure out what the transformation does to any linear combination of those vectors. For example, suppose we know that T applied to V is the vector a, and T applied to U is the vector b. Let's find T applied to 3V minus 4U. First of all, remember that we're treating the subtraction of two vectors as the sum of the additive inverse, so 3V minus 4U is 3V plus negative 4 times U. Because this is a linear transformation, the transformation applied to a sum of vectors is the same as the sum of the transformations applied to the individual vectors. So this becomes the transformation applied to 3V plus the transformation applied to negative 4U. But wait, there's still more. Remember that the transformation applied to a scalar multiple becomes a scalar multiple of the transformation. So the transformation applied to 3V is the same as 3 times the transformation applied to V.
Likewise, the transformation applied to negative 4U is negative 4 times the transformation applied to U. And we know what these individual transformations are, so this is going to be 3 times the vector a plus negative 4 times the vector b, or we can simply write this as 3a minus 4b.

So one of the important strategies in linear algebra is to always ask yourself, self, what is the equation that's going to show up here? And so we claim the following. Suppose T is a linear transformation that takes vectors with n components and transforms them into vectors with m components. Then the new components can be described by a system of linear equations, where our first component is equal to some linear combination of the components of the input vector, our second component is equal to some linear combination of the components of the input vector, and so on. And because I'm on the internet, you know that this must be true. But maybe you're one of those people who don't believe everything you see on the internet, or even if you do, remember that this is an advanced mathematics course, so one of the important things you'll always want to do is to prove anything that somebody claims to be true.

This leads to the following idea. Every linear transformation can be described using some system of linear equations, and once we have a system of equations, the coefficients can be used to produce a matrix, which is known as the transformation matrix. And conversely, what this suggests is that if I have any matrix, I can read the entries of the matrix as the coefficients of a set of formulas that describe a linear transformation.
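As a sketch of that claim, here's a hypothetical 2-by-3 coefficient matrix read as a transformation taking vectors with 3 components to vectors with 2 components. The numbers are illustrative, not from the lecture, and the same sketch replays the T of 3V minus 4U computation for this concrete choice.

```python
# Hypothetical transformation matrix: each row holds the coefficients of one
# linear combination, so row i produces component i of the output vector.
A = [[2, -1, 0],
     [1,  3, 5]]

def T(x):
    """Apply the transformation by forming each row's linear combination of x."""
    return tuple(sum(row[j] * x[j] for j in range(len(x))) for row in A)

v = (1, 0, 2)
u = (0, 1, 1)
a = T(v)   # what the transformation does to v
b = T(u)   # what the transformation does to u

# Linearity says T(3v - 4u) must equal 3a - 4b, with no need to
# touch the coefficient formulas again:
w = tuple(3 * vi - 4 * ui for vi, ui in zip(v, u))          # the vector 3v - 4u
expected = tuple(3 * ai - 4 * bi for ai, bi in zip(a, b))   # the vector 3a - 4b
assert T(w) == expected
```

Reading the matrix row by row like this is exactly the "conversely" direction above: any grid of numbers can serve as the coefficients of such a system, and the map it defines passes both linearity requirements.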