Now, the standard method of finding eigenvalues and eigenvectors relies on finding determinants, and as a result, it's painful. So can we find eigenvalues and eigenvectors without determinants? The answer is, well, I hope so, otherwise this will be a very short video.

The path to finding eigenvectors without determinants begins with the following. Suppose v is any non-zero vector. Let's show that if {v, Av} is a dependent set, then v is an eigenvector, and while we're at it, let's find the corresponding eigenvalue. Remember, definitions are the whole of mathematics, all else is commentary, and this problem talks about dependent sets and eigenvectors, so it'll help to have the definitions of a dependent set and of an eigenvector. So let's pull those in.

Remember, a set of vectors is said to be dependent if one or more of the vectors in the set can be expressed as a linear combination of the others. Since there are only two vectors in our set, we can write one of them in terms of the other. If Av is a multiple of v, we're essentially done, so suppose our vector v is c times the other vector, Av. If c were equal to zero, our vector v would also be equal to zero, but that's not true: we've been assuming that v is a non-zero vector. So c can't be zero, and we can divide by c: whatever 1/c is, that scalar times v is equal to Av. And if it's not written down, it didn't happen, so let's go ahead and write it down: if {v, Av} is dependent, then (something)·v = Av, where that something is some scalar. Again, definitions are the whole of mathematics, all else is commentary. This problem also asks about eigenvectors, so let's pull in that definition, and we see that this is exactly our definition of an eigenvector-eigenvalue pair. So v is an eigenvector, with eigenvalue whatever that scalar multiple is.

Well, now suppose {v, Av} is an independent set, but if we add in the vector A²v, we get a dependent set. Let's show that there have to be scalars r₁ and r₂ where (A - r₁I)(A - r₂I)v is equal to the zero vector. Now, this is quite a handful, so let's see if we can make it a little easier to understand. First off, let's multiply out (A - r₁I)(A - r₂I); so we'll expand that. Now, because we're dealing with matrices, there are certain things we can do and certain things we can't do. Here we have the product AI, which we get from multiplying this A by this r₂I, and we also have the product IA, which we get from multiplying this r₁I by this A. Because matrix multiplication is not in general commutative, we can't equate AI with IA. Or can we? Remember, I is the identity matrix, the do-nothing matrix. So one of these is "do nothing, then do A", and the other is "do A, then do nothing"; in both cases you're just doing A, so AI is A and IA is A. But wait, there's more. This I² is I times I: do nothing, and then do nothing again. That's really just the same as doing nothing, so I² is just I. We still have the A², and we're subtracting r₂A and r₁A, so we can collect those and rewrite the whole thing as A² - (r₁ + r₂)A + r₁r₂I. And since r₁ and r₂ are just elements of our field, so are the combinations of them: this -(r₁ + r₂) we'll call p, and this r₁r₂ we'll call q. What that means is that if I have an expression like A² + pA + qI, I can factor it as (A - r₁I)(A - r₂I), where r₁ and r₂ are the roots of x² + px + q. (Over the complex numbers, such roots always exist.)
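Written out, the expansion we just walked through looks like this:

    (A - r₁I)(A - r₂I) = A² - r₂AI - r₁IA + r₁r₂I²
                       = A² - (r₁ + r₂)A + r₁r₂I
                       = A² + pA + qI,   with p = -(r₁ + r₂) and q = r₁r₂.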
All right, let's put it together. Since v, Av, and A²v are dependent, we must be able to express one of them in terms of the others. Now, since {v, Av} was an independent set, the new vector, A²v, must be the one we can express in terms of the others. That means there must be some scalars b and c with A²v = c·v + b·Av. We can rearrange that a little bit: A²v - b·Av - c·v = 0. And since everything is applied to v, I can factor that: (A² - bA - cI)v = 0. But now I have exactly the kind of expression we just discussed: given an expression like A² + pA + qI, I can factor it as (A - r₁I)(A - r₂I) for some scalars r₁ and r₂. So I can factor our matrix expression, (A - r₁I)(A - r₂I)v = 0, which is what we wanted to show.

So now suppose v is any non-zero vector and we have the matrix equation (A - λ₁I)(A - λ₂I)v = 0. Let's show that either v is an eigenvector for λ₂, or (A - λ₂I)v is an eigenvector for λ₁. Definitions are the whole of mathematics, all else is commentary, so let's bring in our definition of an eigenvector-eigenvalue pair. One of the most useful things we can do in mathematics is to engage in wishful thinking; here that takes the shape of what's sometimes known as a proof by cases. In this particular case, our wishful thinking might be this: either (A - λ₂I)v is the zero vector, or it isn't.

So suppose (A - λ₂I)v is the zero vector. We can expand that out: Av - λ₂Iv = 0. Remember, I is the do-nothing transformation, so this Iv is really the same thing as v, and we can rearrange: Av = λ₂v. This is exactly our definition of an eigenvector-eigenvalue pair, and so v is an eigenvector for λ₂. What this case comes down to is this: sometimes we're fantastically lucky, and the vector we picked happens to be an eigenvector. And if you want, you could live your entire life based on luck. Maybe you got lucky and got born to multimillionaire parents and went on to become the most powerful con man in history. But if you don't want to rely on luck, you have to rely on planning, and planning centers around considering all possibilities. So here, either this product is zero, or it isn't.

So suppose (A - λ₂I)v is some vector w, where w is not the zero vector. Well, we still have our equation (A - λ₁I)(A - λ₂I)v = 0, and that gives me (A - λ₁I)w = 0, which we can expand: Aw - λ₁Iw = 0. Again, I is the do-nothing transformation, so Iw is the same as w; we don't need to include this I. Rearranging gives us the eigenvector-eigenvalue relationship again: Aw = λ₁w. And so w, this (A - λ₂I)v, is an eigenvector for λ₁.
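If you'd like to see the procedure run numerically, here is a minimal sketch in Python/NumPy, under the assumption that {v, Av} is independent while {v, Av, A²v} is dependent; the particular matrix A and starting vector v are arbitrary choices for illustration, and for a general real matrix the roots may come out complex.

```python
import numpy as np

# A rough numerical sketch of the determinant-free argument above, for the case
# where {v, Av} is independent but {v, Av, A^2 v} is dependent.
# The matrix A and the starting vector v are arbitrary choices for illustration.

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 0.0])          # any non-zero vector

Av = A @ v
A2v = A @ Av

# Express A^2 v = c*v + b*Av by solving the linear system [v | Av] [c, b]^T = A^2 v.
# (This solve requires {v, Av} to be independent, as in the argument above.)
c, b = np.linalg.solve(np.column_stack([v, Av]), A2v)

# Then (A^2 - bA - cI) v = 0, so factor x^2 - b x - c = (x - r1)(x - r2).
r1, r2 = np.roots([1.0, -b, -c])

I = np.eye(2)
w = (A - r2 * I) @ v

if np.allclose(w, 0):
    eigval, eigvec = r2, v        # we got lucky: v itself is an eigenvector for r2
else:
    eigval, eigvec = r1, w        # otherwise (A - r2 I) v is an eigenvector for r1

print(eigval, eigvec)
print(A @ eigvec - eigval * eigvec)   # should be (numerically) the zero vector
```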