As a general rule, finding the eigenvalues of an n by n matrix requires solving an nth degree polynomial. So for a 3 by 3, we'll need to solve a cubic equation; for a 4 by 4, a quartic; and if we have something like a 5,000 by 5,000 matrix, we'll need to solve an equation of degree 5,000. But how? The obvious answer is that we could use a numerical method to find the roots. However, that still requires finding the characteristic or minimal polynomial, and that by itself is quite a task. So the obvious question to ask is: can we use a numerical method directly on the matrix? And the answer is yes, sometimes. The basic idea works something like this. Suppose A is a non-defective matrix and λ1 is the eigenvalue of largest magnitude. Then any vector x can be written as a linear combination of the eigenvectors: x = c1 v1 + ... + cn vn. So Ax is going to be the linear combination where A is applied to each of the eigenvectors. But since vi is an eigenvector of A with eigenvalue λi, we can rewrite this as Ax = c1 λ1 v1 + ... + cn λn vn. And now, lather, rinse, repeat: if we keep applying A, we get A^k x = c1 λ1^k v1 + ... + cn λn^k vn. Now remember, we assumed that λ1 was the largest of the eigenvalues. So λ1^k is going to be larger, significantly larger, than λi^k for sufficiently large k. That means for sufficiently large k, A^k x is approximately just the first term, c1 λ1^k v1. Now, once more unto the breach, let's apply A again. And again, since v1 is an eigenvector for λ1, A v1 is going to be λ1 v1. Let's rearrange these terms a little bit, and we find that A applied to A^k x is approximately λ1 times A^k x. In other words, A^k x is an approximate eigenvector. And this suggests the following numerical method: we can find an eigenvector of A by picking any vector x (one with a nonzero component along v1), picking a large value of k, and computing A^k x, which will approximate the eigenvector. Well, let's try it out. Let's try to find an eigenvector for a 2 by 2 matrix.
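The naive method just described can be sketched in a few lines. The lecture's 2 by 2 matrix is not shown, so the matrix below is a hypothetical stand-in with eigenvalues 7 and 2, so λ1 = 7 dominates; a modest k already makes the component ratios settle near λ1.

```python
# Naive idea: repeatedly apply A to a seed vector x to approximate
# an eigenvector for the dominant eigenvalue.
# Hypothetical 2x2 matrix (eigenvalues 7 and 2, eigenvector (3, 2) for 7).
A = [[5.0, 3.0],
     [2.0, 4.0]]

def mat_vec(A, x):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

x = [1.0, 0.0]        # any seed with a nonzero component along v1
for _ in range(30):   # k = 30 is "sufficiently large" here;
    x = mat_vec(A, x) # much larger k would overflow (see below)

# x is now approximately an eigenvector: A x is close to lambda1 * x,
# so both component ratios are close to 7.
Ax = mat_vec(A, x)
print(Ax[0] / x[0], Ax[1] / x[1])
```

Note that the components of x grow like 7^k, which is exactly why pushing k much higher runs into trouble.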
We'll pick our initial seed vector to be (1, 0), and we'll evaluate A to the power of, oh, I don't know, how about 1,000, applied to x. Now, of course, it would be unreasonable to do this by hand, but fortunately we have machines. So we'll drop this into a computer and see what happens. We run our program, and our computer chokes. So what happened? For large k, the components of A^k x become very large and lead to an overflow error. The obvious fix is to remember that a scalar multiple of an eigenvector is still an eigenvector, so let's rescale our vectors every time they get too large. Now, we can use any scale factor we want. For example, we might scale down by a factor of 10 every time we got a component that exceeded 10. But let's simply normalize the vectors. This gives us a new algorithm: we compute y = A xk, and then we let xk+1 be the normalized version of y. So again, we'll use our traditional starting vector (1, 0). We first compute y = A x0, which gives us the vector (5, 2), and then we normalize that, keeping four decimal places of accuracy, to get x1. We apply A to x1, which gives us a new value of y, which we normalize to get x2. We apply A to x2, which gives us a new value for y, which we normalize to find x3. And because the magnitudes of the components stay small, we can continue applying this as far as we want. And if we apply this a thousand or so times, we end up with a vector that looks like this. Now, even once we have our eigenvector, we still want to find the eigenvalue. So let's think about this. Suppose xk is an approximate eigenvector for eigenvalue λ1. Then A applied to xk is going to be approximately λ1 times our vector. And so now we invoke our universal strategy: every problem in linear algebra can be solved by reducing it to a system of linear equations. So we've found, or at least we think we've found, an approximate eigenvector.
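The normalized iteration can be sketched as follows, again using a hypothetical 2 by 2 matrix (the lecture's is not shown) whose dominant eigenvalue is 7 with eigenvector proportional to (3, 2). Because every step renormalizes, k = 1,000 no longer causes overflow.

```python
import math

# Power iteration with normalization at every step.
# Hypothetical matrix: dominant eigenvalue 7, eigenvector (3, 2).
A = [[5.0, 3.0],
     [2.0, 4.0]]

def mat_vec(A, x):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

def normalize(y):
    """Scale a 2-vector to unit length."""
    n = math.hypot(y[0], y[1])
    return [y[0]/n, y[1]/n]

x = [1.0, 0.0]            # traditional starting vector x0
for _ in range(1000):     # k can now be as large as we like
    y = mat_vec(A, x)     # y = A x_k
    x = normalize(y)      # x_{k+1} = y / ||y||

# x converges to the unit eigenvector (3, 2)/sqrt(13).
print(x)
```

The components never exceed the largest entry of A in magnitude, so the overflow problem disappears.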
And so to find λ1, we'll apply A to this vector. If we compare the two sides componentwise, we actually get two equations in which λ1 is the unknown. And if we solve these two equations, we find that λ1 is approximately 7.24. Now, this is actually an important check: if this really is close to an eigenvector, then applying A should give us close to an exact multiple of it. And here we do see that our output vector is close to about 7.25 times the original vector. Actually, we can be a little bit more clever about it. Remember, always ask: can we improve this process? We've been normalizing the vectors, so each xk is a unit vector. And for a unit eigenvector v1, the dot product of v1 with A v1 is λ1. So that means we can take the dot product of our approximate eigenvector xk with A xk, which will give us our approximate eigenvalue.
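The dot-product shortcut can be sketched as follows, using the same hypothetical matrix as before (dominant eigenvalue 7) and its exact unit eigenvector standing in for the converged xk.

```python
import math

# Recovering the eigenvalue from a unit approximate eigenvector:
# lambda1 is approximately x . (A x) when x is a unit vector.
# Hypothetical matrix with dominant eigenvalue 7, eigenvector (3, 2).
A = [[5.0, 3.0],
     [2.0, 4.0]]

def mat_vec(A, x):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

# Unit eigenvector (3, 2)/sqrt(13), standing in for the converged x_k.
x = [3 / math.sqrt(13), 2 / math.sqrt(13)]

Ax = mat_vec(A, x)
lam = x[0]*Ax[0] + x[1]*Ax[1]   # dot product x . (A x)
print(lam)                      # close to the dominant eigenvalue 7
```

This avoids solving any equations at all: one matrix-vector product and one dot product give the eigenvalue estimate.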