Welcome back to our lecture series, Linear Algebra Done Openly. As usual, I'll be your professor today, Dr. Andrew Misseldine. So far in Chapter 1 we have seen many important linear structures; that is, we've seen the word linear used in several different contexts. We've learned about linear equations, linear systems, and, in the previous section, linear combinations. The point of Chapter 1 is to expose us to things which are linear, to help us better understand what linear algebra is all about. The next important linear structure we've actually seen before, assuming you have some calculus under your belt, and this is the idea of a linear operator, sometimes called a linear transformation. Linear transformations are functions of vectors that preserve the linear operations. Let me explain what that means more precisely. Imagine we have two vector spaces, which we'll call X and Y, both over the same field F. For all practical purposes, F could just be the field of real numbers, but it could be any arbitrary field, like we learned about earlier in the chapter. Consider a function T from the vector space X to the vector space Y. We have a function between two vector spaces, which means the elements of the domain and of the codomain are vectors. We say that such a function is a linear transformation if the following two properties hold. First, the image of a sum of vectors should be the sum of the images: T(u + v) = T(u) + T(v). Notice what's going on here. The vectors u and v belong to the domain of the function, so we can add them together. If you add the vectors together and then put the result into the machine, you get the same thing as putting the two vectors into the machine separately and then adding their outputs together.
And so addition before the function is the same thing as addition after the function. Now, I should mention that the addition sign on the left side means addition inside the vector space X, while the addition sign on the right side means addition inside the vector space Y, and those could be very different notions of vector addition. With property one, we often say that our function preserves vector addition. The second property of a linear transformation is similar. Suppose we have some vector u which lives inside the vector space X, and some scalar c, where scalar just means a member of the field F. Since X is a vector space, we can scale u by c. If we scale the vector and then put it inside our function machine, we want this to equal c times T(u): T(cu) = cT(u). The image of a scalar multiple is the scalar multiple of the image. This idea that the scalar can be brought out of the transformation, the fact that you can factor the scalar out, is what we mean when we say the transformation preserves scalar multiplication, much in the same way that we could bring the plus sign out of the transformation in property one. So a linear transformation is a function of vectors that preserves the vector operations: it preserves addition and it preserves scalar multiplication.
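The two defining properties are easy to spot-check numerically. Here is a minimal sketch, assuming NumPy, where the map T(u) = Au given by a matrix A serves as our example of a linear transformation from R^3 to R^2; the matrix and the test vectors are made up for illustration.

```python
import numpy as np

# Any matrix A defines a map T(u) = A @ u from R^3 to R^2.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

def T(u):
    return A @ u

u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.0, 4.0])
c = 2.5

# Property 1: addition is preserved, T(u + v) = T(u) + T(v).
print(np.allclose(T(u + v), T(u) + T(v)))  # True

# Property 2: scalars are preserved, T(c u) = c T(u).
print(np.allclose(T(c * u), c * T(u)))  # True
```

A numerical check like this can't prove linearity for all inputs, of course, but it is a quick way to see the two properties in action.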
Now a little bit more vocabulary, just as a reminder. When you have any function of the form T : X → Y, the set X is called the domain, the set we're coming from, and the set Y is called the codomain, the set we're mapping into. The codomain is not the same thing as the image, which is sometimes called the range. The image of T, denoted im(T), is the set of all vectors of the form T(x), where x is a vector inside the vector space X. We say that T(x) is the image of the vector x, and this image vector belongs to the codomain Y. The set of all images of vectors is what we call the image of the function, im(T). That's something we can define for any function whatsoever; linearity isn't required to make sense of it. But with a linear transformation, we're also interested in another object which we possibly haven't seen before, what we refer to as the kernel of the linear transformation. The kernel, ker(T), is the set of all vectors x such that T(x) = 0, where 0 here is the zero vector. Since the codomain Y is itself a vector space, it has a zero vector, and the kernel of T keeps track of all the vectors that map to that zero vector. I'll show you how to compute images and kernels of linear transformations in the next video, but I did want to present some examples of linear transformations. I mentioned we'd seen these before in calculus. In calculus, linear transformations are everywhere. Let me give you two examples.
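For a matrix map T(x) = Ax, the kernel and image correspond to the null space and column space of A, which a computer algebra system can compute directly. This is a hedged sketch, assuming SymPy; the matrix here is a made-up example whose second row is twice the first, so the map collapses a lot of vectors to zero.

```python
from sympy import Matrix

# T(x) = A x as a map from R^3 to R^2.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])   # second row is twice the first

kernel_basis = A.nullspace()     # basis for {x : A x = 0}
image_basis = A.columnspace()    # basis for {A x : x in R^3}

print(len(kernel_basis))  # 2 -> the kernel is a plane in R^3
print(len(image_basis))   # 1 -> the image is a line in R^2
```

Note how the dimensions add up to 3, the dimension of the domain; that pattern is worth keeping in mind for when the rank-nullity theorem appears later.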
The derivative operation is a linear operator. You can view the derivative as a function from the set X to the set X, where X is the set of differentiable real-valued functions, that is, differentiable functions of the form f : R → R. This is the set we use in calculus one, calculus two, maybe calculus three if you've taken such a class. The derivative is a map from those functions back to themselves. And look at the properties of the derivative: the derivative of a sum of two functions is the sum of the derivatives. The plus sign comes out of the derivative process; you can take the derivatives of f and g individually and then add them together, or you can add the functions first and then take the derivative. The derivative preserves function addition. Likewise, if you have a scalar multiple of a function, if you take a function and vertically stretch it by a factor of c, you learned in calculus one that you can factor the scalar out: the derivative of cf is c times the derivative of f. Derivatives preserve scalar multiplication of functions. Therefore, the derivative operation is a linear operation; it preserves the linear operations. And you can genuinely think of functions as vectors here: they are vectors because we can add them and we can scale them. Now, if the derivative is a linear operator, then it should have a kernel, right? What is the kernel of the derivative operator? It is the constant functions, because constants are exactly the functions whose derivative is zero, zero here being the zero function, the zero vector inside this vector space. Limits are linear too.
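We can watch the derivative behave linearly with symbolic computation. A small sketch, assuming SymPy; the particular functions sin(x) and x^3 are arbitrary choices for illustration.

```python
import sympy as sp

# d/dx (f + g) = df/dx + dg/dx  and  d/dx (c f) = c df/dx.
x, c = sp.symbols('x c')
f = sp.sin(x)
g = x**3

print(sp.diff(f + g, x) == sp.diff(f, x) + sp.diff(g, x))  # True
print(sp.diff(c * f, x) == c * sp.diff(f, x))              # True

# The kernel of the derivative operator: constants differentiate
# to the zero function.
print(sp.diff(sp.Integer(5), x))  # 0
```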
If you take a fixed real number a and you take the limit as x approaches a of functions which are, say, continuous at x = a, then the limit operator is linear. The limit of a sum is the sum of the limits, and the limit of a scalar multiple is the multiple of the limit. The limit operator preserves the vector operations of addition and scalar multiplication. And it doesn't stop there. The antiderivative operation is linear; the definite and indefinite integral operators are linear; series and sums also satisfy this linearity property. Linear algebra was everywhere in calculus; we just maybe didn't see it. We had the same experience as Princess Rapunzel in the movie Tangled. When she returns to her tower after running away for a few days, she is struck by the symbol of the sun, the crest of the royal family of the kingdom. It was significant to her, but she didn't know why, because she saw it all over the kingdom. But when she returns to her home, she realizes that the sun insignia is everywhere inside her house; she had been painting it subconsciously her whole life. It was always there, she just never saw the significance of it. The same thing is true for linear operations: they were everywhere in calculus, and we never saw them. Now, one more thing I want to mention, Property 1.4.3 here: because linear transformations preserve addition and scalar multiplication, we can actually show that linear transformations preserve linear combinations. The image of a linear combination is itself a linear combination: T(c1 u1 + ... + cn un) = c1 T(u1) + ... + cn T(un). A consequence of this is that if you take the image of the zero vector under a linear transformation, you always get the zero vector back. Not necessarily the same zero vector, mind you, because the zero vector of X lives in X while its image lives in Y; that is, T(0_X) = 0_Y.
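The two consequences just mentioned, preservation of linear combinations and of the zero vector, can be checked numerically as well. A sketch, assuming NumPy; the matrix and coefficients are made up for illustration, with T mapping R^2 into R^3 so the two zero vectors really do live in different spaces.

```python
import numpy as np

# T(u) = A @ u is a map from R^2 to R^3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0],
              [1.0, -1.0]])

def T(u):
    return A @ u

u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])
c1, c2 = 4.0, -0.5

# Linear combinations are preserved: T(c1 u + c2 v) = c1 T(u) + c2 T(v).
print(np.allclose(T(c1 * u + c2 * v), c1 * T(u) + c2 * T(v)))  # True

# The zero vector of R^2 maps to the zero vector of R^3: T(0_X) = 0_Y.
print(np.allclose(T(np.zeros(2)), np.zeros(3)))  # True
```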