To find the largest eigenvalue of a matrix A, we can use an iterative method known as the power method. We pick any non-zero seed vector x_0, we let y_{k+1} = A x_k, and we let x_{k+1} be the normalized version of y_{k+1}. Under the right conditions, for sufficiently large n, x_n will approximate an eigenvector, and the corresponding eigenvalue will be the dot product of x_n with A x_n.

We should always consider the possibilities for improvement. Remember, we normalized A x_k to avoid overflow errors in our computations. So really, we only need to normalize when there's a risk of an overflow error, and this means we can improve our algorithm. Let's again pick any non-zero seed vector x_0, but this time let y_{m+k} = A^k x_m, and then normalize. This allows us to skip the computation of most of the x_k's.

So, for example, let's say we want to find the largest eigenvalue of a 3×3 matrix. We'll use fast powering to find a high power of A, but not so high that we run into overflow problems: A^2 is A times A, A^4 is A^2 times A^2, and A^8 is A^4 times A^4. If we try to find A^16, the entries get large enough that we run into floating point trouble, so we'll stop here. Now, this depends on your machine. I'm using just a spreadsheet, so anything higher than these numbers will get converted to scientific notation, we'll lose accuracy, and we don't want that.

So we'll pick a seed vector, x_0 = (1, 0, 0), and apply A^8 to this vector; this is our y_8. There's an obvious simplification we can make here: applying A^8 to (1, 0, 0) just picks out the first column of A^8. Then we can normalize this vector to get x_8. Next, we'll find A x_8, and here's the important thing to remember: we're applying the original matrix A, not the matrix A^8. If we do that, we'll get a new vector, and the dot product of x_8 with A x_8 will give us the eigenvalue.

Now, this leads to a natural question: why bother with characteristic or minimal polynomials? After all, if we can use a numerical method to find an eigenvalue, why waste time trying to solve a complicated polynomial equation? There are two problems with this approach. First, it only finds the largest eigenvalue. And second, it might not even find that.

For example, let's suppose we apply the numerical method to the 2×2 matrix A with rows (0, 1) and (1, 0). We find A^2, and since this is the identity matrix, any even power of A is also going to be the identity matrix, so we can find A^8 immediately. We'll choose a seed vector, say x_0 = (1, 0), and since A^8 is the identity matrix, we know that A^8 applied to x_0 is (1, 0), which is already normalized, so x_8 = (1, 0). We'll apply A, our original matrix, to x_8, and that's going to give us the vector (0, 1), and our eigenvalue is going to be the dot product of x_8 with A x_8, which is 0.

And this is all very well and good except for one problem: how do we know the eigenvalue isn't 0? Well, we applied A to what we thought was an eigenvector, and what we got was not 0 times that vector; if the eigenvalue really were 0, then A x_8 would have been the zero vector. Remember, if x is an eigenvector for A, then A x is going to be lambda x for some scalar lambda, and here A x_8 = (0, 1) is not a scalar multiple of x_8 = (1, 0) for any lambda at all. In other words, the answer this numerical method gives is completely wrong. And at least in mathematics, that's a good thing, because it means that there's something interesting going on. So let's take a look at that next.
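Before moving on, here's a minimal sketch of the accelerated power method described above, in Python with NumPy. The function name power_method is my own, and the 3×3 matrix is an illustration, not the matrix from the example; the sketch assumes the power k is a power of 2, so repeated squaring applies directly.

```python
import numpy as np

def power_method(A, x0, k=8):
    """Estimate the dominant eigenvalue of A, computing A^k by
    repeated squaring and normalizing only once at the end."""
    Ak = A.copy()
    for _ in range(int(np.log2(k))):
        Ak = Ak @ Ak           # square the current power: A^2, A^4, A^8, ...
    y = Ak @ x0                # y_k = A^k x_0
    x = y / np.linalg.norm(y)  # normalize once, at the end
    return x @ (A @ x)         # dot product x . (A x), using the ORIGINAL A

# Illustrative 3x3 matrix (not the one from the example above); its
# largest eigenvalue is 4, well separated from the others (1 and 2).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
x0 = np.array([1.0, 0.0, 0.0])
print(power_method(A, x0, k=8))        # close to 4
print(max(np.linalg.eigvals(A).real))  # check against a direct solver
```

Note that applying A^8 to x0 = (1, 0, 0) really does just read off the first column of A^8, which is the simplification mentioned above.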
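And to see the failure from the last example concretely, here is the same computation on the 2×2 swap matrix, again assuming NumPy. The eigenvalues of this matrix are 1 and -1, which have the same absolute value, and that is exactly the kind of situation where the method breaks down.

```python
import numpy as np

# The matrix from the failure example: A^2 = I, so A^8 = I.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
x0 = np.array([1.0, 0.0])

x8 = x0                      # A^8 x0 = x0, and it is already normalized
Ax8 = A @ x8                 # the original A applied to x8: gives (0, 1)
print(x8 @ Ax8)              # 0.0, but 0 is NOT an eigenvalue of A
print(np.linalg.eigvals(A))  # the actual eigenvalues: 1 and -1
```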