Welcome back, everyone. We are continuing our discussion of Section 6.1, Eigenvalues and Eigenvectors, from the textbook Linear Algebra Done Openly. We saw in a previous video that if we take the specific matrix on the screen, with rows (1, 6) and (5, 2), and multiply it by the vector (1, 1), we get the vector (7, 7), which, if you factor out the 7, is 7 times (1, 1). This is an example of an eigenvector and an eigenvalue: Ax = lambda x, where here A is that 2 by 2 matrix, x is the vector (1, 1), and lambda is the scalar 7. It turns out, though, that eigenvectors are not unique. In fact, it's very easy to come up with lots of eigenvectors once you've found one of them. For example, take the exact same matrix A that we had before: what happens if I multiply it by the vector (2, 2)? Let's go through the calculation. You end up with 2 plus 12 in the first entry and 10 plus 4 in the second. Simplifying, we get (14, 14), which, if we factor out a 7, is 7 times (2, 2). Notice that the same vector shows up twice: we take A times the vector (2, 2) and we end up with 7 times the vector (2, 2). What's the significance of (2, 2)? The relationship is that (2, 2) is just two times the original eigenvector we had, which is (1, 1). Taking a scalar multiple of an eigenvector gives us another eigenvector. In general, if we have the equation Ax = lambda x, you can replace x with any non-zero multiple of x. If you take A times cx, then by the properties of matrix multiplication the c factors out, so you get c times Ax. Because x is an eigenvector, Ax becomes lambda x, and then, since lambda and c are just scalars, you can bring the c back in and end up with lambda times cx. So once you have an eigenvector, any non-zero scalar multiple of that eigenvector gives you another eigenvector.
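As a quick numerical sanity check of this scalar-multiple property, here is a small sketch in NumPy; the matrix and vectors are exactly the ones from the example, and nothing else is assumed:

```python
import numpy as np

# The 2x2 matrix from the example and its eigenvector (1, 1).
A = np.array([[1, 6],
              [5, 2]])
x = np.array([1, 1])

# A x = 7 x, so 7 is an eigenvalue with eigenvector (1, 1).
print(A @ x)        # [7 7]

# Any non-zero scalar multiple c*x is again an eigenvector for 7:
c = 2
print(A @ (c * x))  # [14 14], which is 7 * (2, 2)
```

The same check works for any non-zero c, which is the point of the A(cx) = c(Ax) = lambda(cx) argument above.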
So eigenvectors are not unique, per se. When we have an eigenvalue in hand, like the eigenvalue 7 for this matrix over here, we're not going to be interested so much in one specific eigenvector. We're going to be more interested in the so-called eigenspace, because, after all, how did we find the vector (1, 1) in that previous video? We looked at the null space of the matrix A - 7I, and we found that this null space was non-trivial. That null space is what we call the 7-eigenspace of the matrix A. In general, given an n by n matrix A, the lambda-eigenspace is the null space of the matrix A - lambda I. And if lambda is an eigenvalue, the matrix A - lambda I has nullity at least 1, so the eigenspace has dimension at least 1. The dimension of the eigenspace is what we call the geometric multiplicity of the eigenvalue. So we can have multiple linearly independent eigenvectors for a single eigenvalue, and it's going to be of great interest to us, once we have an eigenvalue, to look at the entire eigenspace rather than at individual eigenvectors. I actually want to show you a quick example of this. How does one compute the eigenspace for a matrix? And when I say compute the eigenspace, what I'm really after is: can we find a basis for the eigenspace? If we have a basis, then we can describe every eigenvector for a specific lambda as a linear combination of the basis elements, and the basis itself gives us a linearly independent set of eigenvectors. So how does one go about it? The matrix A is going to be given to us, and for the time being, let's suppose we already know what the eigenvalue is. Say we have an eigenvalue of 2; let's calculate a basis for the eigenspace.
To begin with, we have to compute A - 2I. Remember, 2I is just the matrix with twos along the diagonal and zeros everywhere else, so combining the two just subtracts two from the diagonal entries: we get four minus two, which is two; one minus two, which is negative one; and eight minus two, which is six. All the other entries are left unaffected, so you can just transcribe them below. Now, when you look at this matrix, every single row is identical. If I'm going to row reduce it, which I am, I take row two minus row one and row three minus row one. When you do that, it row reduces to the echelon form whose first row is (2, -1, 6) with two rows of zeros below it. There's a pivot in the first column, but we end up with two free variables, and so this tells us that the nullity of A - 2I equals two. So already we've discovered that the geometric multiplicity of the eigenvalue 2 equals 2: it's just the dimension of the eigenspace, that is, how many free variables turned up in this homogeneous system. So we have the geometric multiplicity, but I still want to find a basis for this eigenspace, and we'll get it from this echelon form (not the reduced echelon form, just an echelon form). We could divide everything by two and work with fractions from there; we've seen how to do that, but I really want to postpone division if I can avoid it. So if we read off that first row as an equation, we get 2x1 - x2 + 6x3 = 0. The usual idea would be to solve for the first variable, because we often treat the first one as a pivot position, right?
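If you'd like the computer to do this row reduction, SymPy's nullspace method returns a basis for the null space of A - 2I directly. A minimal sketch, assuming the matrix entries from this example; note that SymPy solves for the pivot variable x1, so its basis vectors differ from the ones derived by hand, which is fine, since bases are not unique:

```python
from sympy import Matrix, eye

# The 3x3 matrix from the example and the eigenvalue lambda = 2.
A = Matrix([[4, -1, 6],
            [2,  1, 6],
            [2, -1, 8]])
lam = 2

M = A - lam * eye(3)   # every row becomes (2, -1, 6)
basis = M.nullspace()  # basis for the 2-eigenspace

# Rank 1, so nullity = 3 - 1 = 2: geometric multiplicity 2.
print(M.rank(), len(basis))
for v in basis:
    print(v.T)
```

Each vector in `basis` satisfies (A - 2I)v = 0, so each is an eigenvector of A for the eigenvalue 2.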
But honestly, if you dislike fractions as much as I do; well, I don't hate them, let's just say I'm an introvert when it comes to fractions; we could instead treat the second variable as the dependent one and the others as free. That works out just by adding x2 to the other side, giving x2 = 2x1 + 6x3. So the generic vector in the null space has entries x1, x2, x3, where x1 can be whatever it wants and x3 can be whatever it wants, because they're free variables, but x2 has to look like 2x1 + 6x3. Decomposing it, we end up with x1 times the vector (1, 2, 0) plus x3 times the vector (0, 6, 1). These give us candidates for eigenvectors of this matrix, so let's verify that we really did find eigenvectors. We take the matrix A, and we'll call the first vector u and the second one v. For Au: remember, the original matrix had rows (4, -1, 6), (2, 1, 6), and (2, -1, 8). Multiply it by the vector (1, 2, 0). Working through the entries (each one has a 0 term I won't write down), you get 4 minus 2, then 2 plus 2, and lastly 2 minus 2. Simplifying, that's (2, 4, 0), and factoring out the eigenvalue 2 gives 2 times (1, 2, 0). So (1, 2, 0) is an eigenvector, like we said. Now let's check the other one, Av. I'll copy down A again and multiply by the vector (0, 6, 1). Going through the multiplication (again skipping the zeros), we get negative 6 plus 6, then 6 plus 6, and then negative 6 plus 8.
Simplifying this, you get 0 in the first entry; 6 plus 6 is 12 in the second; and in the last entry, that's negative 1 times 6 plus 8 times 1, so negative 6 plus 8, which is 2. Factoring the 2 out, you get (0, 6, 1), which was our vector v from before. So we see that we did find eigenvectors (1, 2, 0) and (0, 6, 1). We can also see that these vectors are linearly independent: they're not scalar multiples of each other. In conclusion, we've now found the 2-eigenspace of A. This, of course, is the null space of A - 2I, and as we've seen, that null space is the span of the two vectors (1, 2, 0) and (0, 6, 1); in particular, those two vectors are the basis we found. Now, since bases for a vector space are not unique, there are other bases of eigenvectors you could find. In fact, looking at my script, I actually deviated from how I did it in the textbook; look there for an alternative example if you want. We do have a genuine basis of eigenvectors right here, and the one in the textbook is also a basis of eigenvectors; it just took a slightly different direction, and that's OK. So we did find a basis of eigenvectors here. To summarize: to find this basis of eigenvectors, we first compute the matrix A - lambda I, then we row reduce it to some echelon form, and from that echelon form we find a basis for the null space of A - lambda I.
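The hand verification above can be replayed numerically. A small NumPy sketch using the same A, u, and v from the example:

```python
import numpy as np

A = np.array([[4, -1, 6],
              [2,  1, 6],
              [2, -1, 8]])
u = np.array([1, 2, 0])
v = np.array([0, 6, 1])

# Both vectors are eigenvectors for the eigenvalue 2.
print(A @ u)  # [2 4 0], i.e. 2 * u
print(A @ v)  # [0 12 2], i.e. 2 * v

# u and v are linearly independent: stacking them gives rank 2.
print(np.linalg.matrix_rank(np.column_stack([u, v])))  # 2
```

The rank check is just the "not scalar multiples of each other" observation in matrix form.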
So finding the eigenspace of an eigenvalue of a matrix is really just finding the null space of an associated matrix: instead of taking the null space of the matrix A itself, we take the null space of the matrix A - lambda I. And that's all there is to it. All right, there's one more installment of this section, 6.1, where I want to show you a nice little shortcut you can use in the case of a triangular matrix. Stay tuned for that.
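As a final cross-check (a sketch, not something done in the video), numpy.linalg.eig computes all the eigenvalues of A at once. For this A, the eigenvalue 2 shows up twice in the list, consistent with the dimension-2 eigenspace we found, and the remaining eigenvalue works out to 9 (the three eigenvalues must sum to the trace, 4 + 1 + 8 = 13):

```python
import numpy as np

A = np.array([[4, -1, 6],
              [2,  1, 6],
              [2, -1, 8]])

# eig returns the eigenvalues and a matrix whose columns are eigenvectors.
vals, vecs = np.linalg.eig(A)
print(np.sort(vals.real))  # approximately [2. 2. 9.]
```

In the next video's triangular-matrix shortcut you'll see cases where the eigenvalues can be read off without any of this computation.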