Hello, and welcome to this video on eigenvectors of a matrix. That's how you pronounce that word: E-I-G-E-N, eigenvectors. So what is an eigenvector for a matrix? Well, to understand this, we're going to take a little trip back in time to linear transformations. Remember that a linear transformation is basically just a fancy kind of function: it's a function that accepts vectors as input and produces vectors as output. And one of the most important notions about linear transformations, maybe the most important, is that they can always be implemented through matrix multiplication. So let's have a look at this particular linear transformation I have in blue here, going from R2 to R2. This is a function into which I'm going to be plugging vectors from R2. And to get the output, I take the input x and multiply it by the 2 by 2 matrix 2, 0, 3, negative 1. I want to think of this in a couple of ways here, both algebraically and graphically. Graphically is very important for what we're about to discuss. So I have four sample inputs, and I'd just like to compute their images under T. Those vectors are 2, 0; 2, 2; negative 3, 0; and 0, 3. And over here on the right, you can see I've drawn each of these in orange on a regular set of x, y axes in R2. So let's do some computing first, and as I compute, I'm going to try to draw the result. T of 2, 0: that means I need to take the matrix 2, 0, 3, negative 1 and multiply it by 2, 0. Let me just pull this up here for a moment so I don't run out of room. The vector that I get as an output would be 2 times 2 plus 0 times 0, that's a 4, and then 3 times 2 plus negative 1 times 0, that's a 6. So 4, 6 is my output. I'm going to switch over to red pen here. Now 2, 0 is right here; you can see it on the x-axis. But its image is moved around somewhat: it goes to 4, 6, which is this vector way up here. 
I'm just going to try to draw it like so. So there is 4, 6; that is T of 2, 0. I'm actually going to erase that vector in just a minute so the drawing doesn't get too crowded, but I want you to notice that 2, 0 gets moved quite a bit by this linear transformation T. If I start with the vector 2, 0, it is moved in a very significant way: it's rotated through roughly 60 degrees, and it's stretched. So a lot of movement happens when I run that vector through T. I'm going to go ahead and erase this now because I need more room for the other vectors that are coming up. Let's do T of negative 3, 0. Again, I take 2, 0, 3, negative 1 and multiply by negative 3, 0, and I've got enough room down here to write the result: that's going to be negative 6, negative 9. I may not have enough room to draw that one here, but I can at least get the right idea. So again, negative 3, 0 is the vector you see on the negative x-axis right there, but here's its image: it goes to negative 6, negative 9. It's going to fall off the screen, but it's moving in this direction here. So again, there's a pretty big difference between those two vectors, the original vector and its image; the transformation moves that vector around in a pretty significant way in R2. Let me clear the screen here and get that out of the way. Now, pay attention to what's going to happen next. The vector I'm about to use is 2, 2, the one right here sticking out at a 45 degree angle in the first quadrant. But let's see what happens when I run it through my transformation. Well, same process: I multiply 2, 0, 3, negative 1 by 2, 2 and just do the matrix multiplication here. Let's see what I get. I get 4 in the first component, and in the second component I get 3 times 2 plus negative 1 times 2; that's also 4. 
So I'm going to draw the result here, and the image of that vector is, interestingly enough, parallel to the original vector. So here is T of 2, 2. Now notice what really happened here. I started with this vector 2, 2, and when I ran it through the transformation, it didn't really move that much. In fact, it stretched, but it didn't rotate at all. So the output is not exactly the same as the input, but notice that the output is locked onto the same line that the input is locked onto, namely this line y equals x, it appears. So that vector 2, 2 did not get moved in a really significant way by this transformation. Let's do one more here, and that's T of 0, 3. That's right here; I'll put the result down here. For T of 0, 3, again I multiply 2, 0, 3, negative 1 by 0, 3. And when you calculate that, I'll do this over here, you will find that you get 0 in the first component and negative 3 in the second component. So to draw the result: here is the original, and here is its image. The transformation merely reflects it. So again, we have another example of a vector whose image is not really significantly moved; it's simply reflected. And in particular, the original vector that I'm plugging in and its image lie on the same line. They are locked in the same overall place in R2. So these two vectors, 2, 2 and 0, 3, have a special property: when I multiplied by this matrix, all I did was rescale. In other words, notice I started with 2, 2 and got 4, 4, which is the original scaled by 2. That's the algebraic way of thinking about the geometric effect of saying locked onto the same line. And when I started with 0, 3 and multiplied by that matrix, I got 0, negative 3, yet another scalar multiple. Neither of the other two vectors did that. Multiply 2, 0 or negative 3, 0 by this matrix, and you get something that's very, very different from just a scalar multiple of the original vector. So this has got some pretty interesting geometric interpretations. 
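If you'd like to check these four computations yourself, here is a minimal sketch in plain Python (no libraries) that applies the transformation T(x) = Ax, with A the 2 by 2 matrix from the video, to the four sample vectors:

```python
# The transformation T(x) = A x for A = [[2, 0], [3, -1]],
# applied to the four sample vectors from the video.

def mat_vec(A, x):
    """Multiply an n-by-n matrix A (list of rows) by a vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2, 0], [3, -1]]

print(mat_vec(A, [2, 0]))    # [4, 6]   -- rotated and stretched
print(mat_vec(A, [-3, 0]))   # [-6, -9] -- rotated and stretched
print(mat_vec(A, [2, 2]))    # [4, 4]   -- just 2 times the input
print(mat_vec(A, [0, 3]))    # [0, -3]  -- just -1 times the input
```

The last two outputs are scalar multiples of their inputs, which is exactly the "locked onto the same line" behavior described above.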
Let's move to a general question that we're going to answer now. So the question we're going to address here is: what vectors x have the property that A times x is just a scalar multiple of x? In other words, what are the vectors such that when I multiply them by A, they don't really move around that much? They just simply move to a different location on the line that they started on; maybe they reverse direction, maybe they stretch a little bit, but there's no rotation and no significant movement. Well, this is the subject of this section, so we're going to start with an important definition of the main character in this screencast and all the others. So let's let A be an n by n matrix, any square matrix we wish. We can think of this as a matrix that implements a linear transformation from Rn to Rn if we wanted to. You know the word vector; the prefix eigen is from the German meaning self, like a reflexive pronoun, as in self-correcting or something like that. So here is the definition: if A is an n by n matrix, a non-zero vector x is called an eigenvector for A if A times x equals lambda times x for some, importantly here, scalar lambda. It's traditional to call that scalar lambda, the lowercase Greek letter. So in other words, multiplying that vector by the matrix results in just simply multiplying it by a scalar. That's exactly the effect that we were seeing in those last two vectors on the previous slide. Multiplying them by A didn't really change their trajectory at all. It maybe changed the length of the vector or flipped its direction, but it didn't change the overall line they were moving along. So let's do some examples here. We have a long way to go with this, and so I'm going to start back with the original example. 
And that is this 2 by 2 matrix, 2, 0, 3, negative 1. So I have these four vectors here, and I just want to check to see if they are eigenvectors of A, first of all. Now again, to be an eigenvector of A, I just need to check to see if Ax equals some scalar times x, for any scalar that I wish. If it does, then I'm going to put a check mark; if it doesn't, I'm going to put an x or something like that. Some of these we've seen. We do know that 2, 2, for instance, is an eigenvector of A. And the reason for that is that when I took A times 2, 2, that is, 2, 0, 3, negative 1 multiplied by 2, 2, we ended up getting 4, 4. Which is, of course, the scalar 2, so lambda equals 2 here, times the original vector. Multiplying by the matrix, I simply rescaled and nothing more. The vector 0, 3 was also an eigenvector of A, because if I take 2, 0, 3, negative 1 times 0, 3, then we found that we got 0, negative 3. And of course, that is a scalar times the original vector; the lambda in this case would be negative 1. Now 2, 0, we found, was not an eigenvector for A. And that's because if I check and take 2, 0, 3, negative 1 times 2, 0, then what was the result? We ended up getting 4, 6. And that is not just a scalar times 2, 0; that's a pretty significant change in direction here. Now what about this last one that we haven't checked yet, 5, 5? Well, I don't know whether it is an eigenvector or not. I have to check to see if 2, 0, 3, negative 1 times 5, 5 is a scalar multiple of 5, 5. This would be a good chance for you to pause the video, work this out yourself, and think about what's going to go right there when we're done. OK, so I'm assuming you've paused and tried this out for yourself. Here's how it plays out. When I take this 5, 5 vector and multiply it by the matrix, I get 10 and 10. And interestingly enough, that is 2 times the vector 5, 5. So since A times x is just a scalar times x, that makes 5, 5 an eigenvector. 
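The check we just did by hand follows straight from the definition, and it can be sketched in plain Python: x (non-zero) is an eigenvector of A exactly when Ax = lambda times x for some scalar lambda, so we solve for a candidate lambda from one non-zero component of x and then verify it works for every component. (The helper names here are my own, not from the video.)

```python
# Eigenvector check from the definition: does A x = lam * x
# hold for some scalar lam?

def mat_vec(A, x):
    """Multiply an n-by-n matrix A (list of rows) by a vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def eigen_check(A, x):
    """Return (True, lam) if A x = lam * x for some scalar lam,
    else (False, None). Assumes x is a non-zero vector."""
    y = mat_vec(A, x)
    # Solve for the candidate scalar using any non-zero entry of x.
    i = next(i for i, xi in enumerate(x) if xi != 0)
    lam = y[i] / x[i]
    ok = all(abs(yi - lam * xi) < 1e-9 for xi, yi in zip(x, y))
    return (True, lam) if ok else (False, None)

A = [[2, 0], [3, -1]]

print(eigen_check(A, [2, 2]))   # (True, 2.0)  -- lambda = 2
print(eigen_check(A, [0, 3]))   # (True, -1.0) -- lambda = -1
print(eigen_check(A, [2, 0]))   # (False, None)
print(eigen_check(A, [5, 5]))   # (True, 2.0)  -- same lambda as 2, 2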
You might have noticed that 5, 5 and 2, 2 are parallel to each other; they lie on the same line if you drew them in R2. And again, you might have also noticed that the lambdas that corresponded to those eigenvectors were both the same. That is no coincidence. We're going to talk a little bit more about that in the next screencast. But lastly, I want to do one more, a little bit more expansive example here with a 3 by 3 matrix. So I have this 3 by 3 here, and I want to show that 1, 2, 0 is an eigenvector for that matrix A and that 2, 1, 0 is not. This is no different a process than we have been seeing before. So let's just set it up. I need to take A times this vector, so this is 4, negative 1, 6, 2, 1, 6, 2, negative 1, 8 times 1, 2, 0. Let's just quickly go through the multiplication on this. The result is going to be a 3 by 1 vector. In the first component, I'm going to have 4 minus 2 plus 0; that's 2. In the second component, I'll have 2 plus 2 plus 0; that's a 4. And in the last one, I'm going to have 2 minus 2 plus 0; that is a 0. Now the result of that A times x is 2 times the original vector, and so therefore that vector 1, 2, 0 is indeed an eigenvector for A. This is in three dimensions, so it's kind of hard to visualize. But if you think about that vector, multiplying by A wouldn't rotate it in any direction. It would simply stretch it by a factor of 2. Now on the other hand, if I take the other vector, 2, 1, 0, and multiply by 4, negative 1, 6, 2, 1, 6, 2, negative 1, 8, let's see what we get. In the first component, 4 times 2 plus negative 1 times 1: that's a 7. In the second component, I would have 4 plus 1; that's a 5. And in the last component, I would have 4 minus 1; that's a 3. And 7, 5, 3 is not equal to any scalar times 2, 1, 0. That vector didn't just get stretched; it got bent around in three-space somehow or another. So 2, 1, 0 is not an eigenvector. 
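The two 3 by 3 checks above can be reproduced the same way as the 2 by 2 ones, again in plain Python:

```python
# A quick check of the 3-by-3 example: 1, 2, 0 should come back as
# 2 times itself (an eigenvector with lambda = 2), while the image
# of 2, 1, 0 is not a scalar multiple of 2, 1, 0.

def mat_vec(A, x):
    """Multiply an n-by-n matrix A (list of rows) by a vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[4, -1, 6],
     [2,  1, 6],
     [2, -1, 8]]

print(mat_vec(A, [1, 2, 0]))  # [2, 4, 0] = 2 * [1, 2, 0]: eigenvector
print(mat_vec(A, [2, 1, 0]))  # [7, 5, 3]: not a multiple of [2, 1, 0]
```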
So the vector 1, 2, 0 is an eigenvector; 2, 1, 0 is not. Now there are quite a few things left undone. First of all, all we know how to do at this point is check to see if a given vector is an eigenvector for a matrix. What we're eventually going to need to do is, given the matrix, find its eigenvectors, not simply check a proposed solution. And that's coming up in a couple of videos. Thanks for watching.