Welcome back to Section 6.2, the characteristic polynomial, for the textbook Linear Algebra Done Openly. In this example, I want to find the eigenvalues of another two by two matrix. You can see the previous example for a pretty benign example of calculating those. To find the eigenvalues of a matrix, we typically calculate the characteristic polynomial, so let's do that for this matrix right here. From the get-go it seems quite similar, and you might not expect much different to be going on here. The characteristic polynomial is the determinant of A minus lambda I, which you see right here. As the matrix is two by two, we can just take the products along the two diagonals and subtract one from the other. Doing that, we get negative lambda times negative lambda, minus negative one times one, which simplifies to lambda squared plus one. You look at that thing and think, okay, to get the eigenvalues we set it equal to zero and factor. We'd need factors of one that add up to zero: one plus one is two, negative one plus negative one is negative two. Oh, that seems weird. Maybe I don't know how to factor today. I could always use the quadratic formula, which always works, but instead I'm just going to treat this as an equation with two terms. Subtract one from both sides and you end up with lambda squared equals negative one. If we take the square root of both sides, we now see the issue that's going on here: lambda equals plus or minus the square root of negative one, a.k.a. plus or minus i. So according to the characteristic polynomial, the eigenvalues of this matrix are plus or minus i. And I want you to notice that the original matrix is a real matrix, but its eigenvalues are not real numbers. They're imaginary numbers, and they sit inside the field of complex numbers. This is something you sometimes have to deal with when you work with eigenvalues: even if you have a matrix whose scalars come from a certain field F, it might be necessary to extend the field to find the eigenvalues. In the literature this is often called an extension field — sometimes you have to go to a bigger field to find the eigenvalues. This is some interesting stuff you can see in abstract algebra, and I don't want to dive too deeply into it at this moment, but in particular, when we have real matrices, we sometimes have to extend to the complex number field to find those eigenvalues. This is actually one reason we've justified working with complex numbers this entire course: even though our main focus is going to be real matrices and real vectors, sometimes we have to use complex numbers as we study eigenvalues and the like. So if we have these imaginary eigenvalues, what does that say about the matrix? Well, we can start finding eigenspaces. Could we find the i-eigenspace? The answer, of course, is yes. What we want to do is look at the null space of the matrix A minus i I — horrible pun right there, the little i next to the identity matrix I. So let's look at that for a second. If we take the matrix A minus i I, we subtract i down the diagonal of the original matrix: zero minus i is negative i, so we get negative i, negative one, one, negative i, like so. Let's try to row reduce this thing, and I want to do it in the least painful way possible.
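(A quick aside before we row reduce: if you want to double-check those eigenvalues by machine, here is a minimal sketch in Python using SymPy — my own illustration, not something from the video — that rebuilds the matrix from this example and reproduces the characteristic polynomial and its roots.)

```python
import sympy as sp

# The two by two matrix from this example, read off from the calculation above.
A = sp.Matrix([[0, -1],
               [1,  0]])
lam = sp.symbols("lambda")

# Characteristic polynomial: det(A - lambda*I).
char_poly = (A - lam * sp.eye(2)).det()
print(sp.expand(char_poly))      # lambda**2 + 1
print(sp.solve(char_poly, lam))  # [-I, I], i.e. the eigenvalues are plus or minus i
```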
So I want to row reduce this so I can find a basis for the null space. Let's switch the order of the rows, which gives us one, negative i, negative i, negative one. With my pivot in the one-one position, I can get rid of the negative i below it by taking row two and adding i times row one to it. In the first column, negative i plus i times one is zero; in the second column, i times negative i is actually positive one, and negative one plus one is zero as well. So as we row reduce, we end up with one, negative i, zero, zero. Now, it turns out I didn't have to do this calculation at all, because one thing you should remember is that eigenvalues were chosen precisely so that the matrix A minus lambda I is singular. A singular square matrix has an echelon form with a row of zeros, and if I know there's going to be a row of zeros, then back in the original problem I could have just killed off my least favorite row and gone from there. Here I'd keep the second row, since it already starts with a one, and we would have gotten to this a lot quicker. Anyway, that's a little trick you can use with eigenvalues: you can ignore a row, because you know the matrix is going to be singular. You do have to be a little careful with bigger matrices, since the dependence relationships only make some rows redundant — some rows may be independent of the others — but dropping a row like this can save you a step in these calculations. So with this reduced matrix, we want to find the vectors x that give us the eigenspace, that is, a basis for the null space. We can interpret x1 as the dependent variable and x2 as the free variable. From the first row, x1 equals i times x2, so every solution is x2 times the vector (i, 1). This gives us a basis for the null space, so we can write the null space as the complex span of the vector (i, 1). You have to be careful with this vector — it can make you feel cocky, because it reads like "i won." We can choose (i, 1) to be an eigenvector, and this is going to be our representative, the eigenvector we want for this matrix. Can we replicate this process for negative i, the negative-i eigenspace? That is, we want to find the null space of A plus i I. When we write out A plus i I, it looks like the matrix i, negative one, one, i, like so. Repeating the row operations from before, we can switch the rows to get one, i, i, negative one. And like I was saying earlier, we can ignore the second row, because we know a row of zeros is going to pop up — no arithmetic necessary. We get one, i, zero, zero. Looking for the general solution to that homogeneous system, x2 is our free variable, and x1 has to be negative i times x2. Factoring out the x2, we get the vector (negative i, 1) — I guess that means I lose. Anyway, this gives us a complex basis for our eigenspace, which is just the span of (negative i, 1).
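(Another hedged aside, if you like checking by machine: SymPy's nullspace method reproduces both of these bases directly. Again, this is my own illustration, assuming SymPy is available, and not part of the video.)

```python
import sympy as sp

A = sp.Matrix([[0, -1],
               [1,  0]])

# Basis for the i-eigenspace: the null space of A - i*I.
print((A - sp.I * sp.eye(2)).nullspace())  # [Matrix([[I], [1]])], i.e. (i, 1)

# Basis for the (-i)-eigenspace: the null space of A + i*I.
print((A + sp.I * sp.eye(2)).nullspace())  # [Matrix([[-I], [1]])], i.e. (-i, 1)

# Sanity check that (i, 1) really satisfies A*v = i*v.
v = sp.Matrix([sp.I, 1])
print(sp.simplify(A * v - sp.I * v))  # Matrix([[0], [0]])
```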
And in particular, we get this eigenvector right here — eigenvector, I meant to say, not eigenvalue. Now I want you to look at the connection between these two vectors. The first eigenvalue was i, and its eigenvector was (i, 1). The second eigenvalue was negative i, and its eigenvector was (negative i, 1). Notice the relationship between i and negative i: they're complex conjugates of each other. What about the two vectors, (i, 1) versus (negative i, 1)? They're likewise complex conjugates of each other. And this is a pattern we see in much more generality. Suppose we have a matrix A and all of its entries are real, so it's a real matrix. By the properties of conjugation — that's what the bar means — the conjugate of A x equals the conjugate of A times the conjugate of x, and because A is a real matrix, conjugating it doesn't do anything. So the conjugate of A x equals A times the conjugate of x. Now, if x is an eigenvector, that means A x equals lambda x. Taking the conjugate of both sides, A times x-bar equals lambda-bar times x-bar. So what's going on here is that for a real matrix, complex eigenvalues come in conjugate pairs. That's kind of interesting: once you find one complex root, you find the other by taking conjugates. And the same is true for eigenvectors — if you find an eigenvector for one eigenvalue, the eigenvector for the conjugate eigenvalue likewise comes from conjugation. So one of the neat things is that finding a non-real eigenvalue almost seems like sad news — oh no, I like real numbers — but actually it's like, oh sweet: once you find the eigenspace for that one eigenvalue, you don't have to calculate the second eigenspace at all. That is to say, in this problem we didn't need to do the second half of the calculation, because once we had the eigenvalue i and the eigenvector (i, 1), we could get the other eigenvalue by conjugation, and the other eigenvector by conjugation as well. That's a neat little trick for working with complex eigenvalues and eigenvectors. Now I want to come back to the original matrix A and explain why this matrix ended up with imaginary eigenvalues and eigenvectors. What does this matrix do geometrically? We talked previously about how two by two matrices affect the plane, and you'll notice that this matrix right here is rotation by 90 degrees — it's a rotation matrix. If you take a point in the plane, the matrix moves it 90 degrees counterclockwise; a vector on the x-axis, for example, rotates onto the y-axis. But an eigenvector has the property that when you act on it by the matrix, you end up with something in its original span, and no line through the origin is left in place by a rotation of 90 degrees.
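(One last numerical aside, a NumPy sketch of my own to illustrate both claims — the conjugate-pair pattern and the rotation-versus-scaling picture.)

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# The eigenvalues of this real matrix come in a conjugate pair.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # roughly [0.+1.j, 0.-1.j]

# If v is an eigenvector for lambda, conj(v) is an eigenvector for conj(lambda).
lam, v = eigenvalues[0], eigenvectors[:, 0]
print(np.allclose(A @ np.conj(v), np.conj(lam) * np.conj(v)))  # True

# Geometrically: A rotates real vectors by 90 degrees counterclockwise...
print(A @ np.array([1.0, 0.0]))  # [0. 1.]: the x-axis lands on the y-axis

# ...but it merely scales the complex eigenvector (i, 1), by the factor i.
w = np.array([1j, 1.0])
print(np.allclose(A @ w, 1j * w))  # True
```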
If we keep our real-number blinders on and only see the real axes, then we're not going to see any vectors that escape the rotation. That's why we have to expand our horizons and go to the extension field, a.k.a. the complex numbers: all the real vectors get rotated by this matrix, but the complex eigenvectors get scaled by it. So sometimes, in order to find the eigenvalues and eigenvectors, we have to extend to the complex numbers, even if the original setting was a real one. That might seem like, well, why would we want to do that? At this stage we're really just calculating what the eigenvalues and eigenvectors are; we'll see very soon why we like them so much, and even a complex eigenvector is still pretty awesome. We'll talk about more of these in upcoming videos. Stay tuned for them. Bye.