We can find the eigenvalues of a matrix A by choosing a random seed vector v and finding the minimal polynomial f of A relative to v, that is, the polynomial of least degree with f(A) applied to v giving the zero vector. Now, every root of the minimal polynomial is an eigenvalue, and we can use these to find the corresponding eigenvectors. But we might not find all the eigenvalues, and so we might have to choose a new seed vector u linearly independent of the known eigenvectors, find a new minimal polynomial g of A where g(A) applied to u is the zero vector, and lather, rinse, repeat. As always, the question we want to ask is, will there be pi? Wait, wrong card. Can we make this more efficient?

So let's think about that. Suppose our seed vector v gives us a minimal polynomial f of A with eigenvalues lambda 1, lambda 2, and so on up to lambda k. Also, suppose the linearly independent eigenvectors form a basis, and note that by linearly independent eigenvectors we mean all of the eigenvectors, not just the ones we happen to know about. Now we can express any vector x as a linear combination of the known eigenvectors plus a linear combination of the unknown eigenvectors. And notice the indexing: we have i known eigenvectors but k eigenvalues, and you might wonder why the eigenvectors run v1 through vi instead of through vk. That's because a single eigenvalue can have more than one linearly independent eigenvector, so the two counts don't have to match.

And here's the important connection to make. If we apply our minimal polynomial to this vector x, what we get is a linear combination of the unknown eigenvectors only. Remember, that's because our minimal polynomial f(A) can be written as a product of factors A minus lambda j I over the known eigenvalues, and the factors can be rearranged. So if I apply f(A) to a scalar multiple of any of the known eigenvectors, I get the zero vector. In other words, f(A) applied to my vector x eliminates the components of x that are linear combinations of the known eigenvectors.

And so this suggests the following modification to our procedure. First, we'll choose a seed vector v1 and find a minimal polynomial f1 of A, the eigenvalues, and the corresponding eigenvectors. If we don't find all of the eigenvalues and eigenvectors, we'll choose a random vector x and compute f1(A) applied to x. If this is a nonzero vector, we'll let v2 be f1(A) applied to x and proceed as before. Extending our botanical analogy, we'll call this vector v2 a seedling vector: in effect, we've grown our vector x into a better vector for finding the remaining eigenvalues. A quick code sketch of this machinery appears below, and then we'll try it on an example.
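Here's a minimal sketch of the seed-and-seedling machinery in Python with sympy, using exact rational arithmetic. The function names minimal_polynomial_rel, poly_of_matrix, and grow_seedling are my own, not standard sympy API, and the sketch assumes the seed vector is nonzero.

```python
import sympy as sp

x = sp.symbols('x')

def minimal_polynomial_rel(A, v):
    """Monic polynomial f of least degree with f(A) v = 0, for v != 0.

    Stack the Krylov vectors v, Av, A^2 v, ... as columns and stop at
    the first one that is a linear combination of its predecessors."""
    cols = [v]
    while True:
        nxt = A * cols[-1]                      # next Krylov vector A^m v
        M = sp.Matrix.hstack(*cols)
        if M.rank() == sp.Matrix.hstack(M, nxt).rank():
            # nxt = c0 v + c1 Av + ... + c_{m-1} A^(m-1) v, so
            # f(x) = x^m - c_{m-1} x^(m-1) - ... - c1 x - c0.
            c, _ = M.gauss_jordan_solve(nxt)
            m = len(cols)
            return sp.Poly(x**m - sum(c[i] * x**i for i in range(m)), x)
        cols.append(nxt)

def poly_of_matrix(f, A):
    """Evaluate the polynomial f at the matrix A by Horner's rule."""
    result = sp.zeros(*A.shape)
    for coeff in f.all_coeffs():                # highest degree first
        result = result * A + coeff * sp.eye(A.shape[0])
    return result

def grow_seedling(f1, A, x_vec):
    """Grow a random vector x into a seedling u = f1(A) x: the known
    eigenvector components of x are killed off, and only components
    along the unknown eigenvectors survive.  Returns None if u = 0."""
    u = poly_of_matrix(f1, A) * x_vec
    return None if u.is_zero_matrix else u
```

The rank test is just the "use the Krylov vectors as columns and row reduce" step: the first time a new column fails to raise the rank, we've found our dependence, and its coefficients are the coefficients of the minimal polynomial.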
So for example, let's say we want to find the eigenvalues and eigenvectors of a 4x4 matrix. We'll pick the most exciting seed vector possible, or maybe not. We find Av, and our set {v, Av} is obviously dependent, so our minimal polynomial is x - 1, which gives us the eigenvalue lambda = 1. And once we know the eigenvalue, we can find the eigenvectors, and we find we actually have two eigenvectors associated with it. So lambda = 1 has two eigenvectors. Now, because this is a 4x4 matrix, we could have up to four linearly independent eigenvectors. So let's pick another random vector, and this time we'll go with the exciting new vector. Okay, so excitement is relative. But since we know lambda = 1 is an eigenvalue, we can grow this new seed vector into a seedling by computing (A - I) applied to it, and that gives us our seedling vector u. And we know something else: since the minimal polynomial has degree at most 4 and u already carries a factor of A - I, the minimal polynomial with respect to u has degree at most 3.

And what this means is that we only need to find u, Au, A^2 u, and A^3 u. So we find those, and as before, we use them as column vectors and row reduce. We find our minimal polynomial, which turns out to be quadratic, and we can factor it, giving us two more eigenvalues: lambda = 3 and lambda = -2. And as before, once we know the eigenvalues, we can find the eigenvectors. For lambda = 3, our eigenvector satisfies (A - 3I)x = 0, so row reducing gives us a corresponding eigenvector. Similarly, for lambda = -2, our eigenvector satisfies (A + 2I)x = 0, so again, row reducing gives us the eigenvector.

And now we have one, two, three, four eigenvectors, and since a 4x4 matrix has at most four linearly independent eigenvectors, we've found all of the eigenvectors and all of the eigenvalues. But before we celebrate too much, we should keep in mind that this only worked because we actually had four linearly independent eigenvectors. And so the question to ask is, will there be cake? Who's writing these cards? The question we're supposed to ask is, what if we don't have a full set of eigenvectors? Well, maybe we don't have to worry about it. In a perfect world, every matrix would have a full set of eigenvectors. But we don't live in that world. So what do we do then? We'll take a look at that next.
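For completeness, here's the walkthrough above run end to end. The 4x4 matrix is a stand-in I built to have the same spectrum as the example (eigenvalues 1, 1, 3, -2, with two independent eigenvectors for lambda = 1), not the matrix from the video, and it reuses the three functions from the sketch above.

```python
import sympy as sp

# Stand-in 4x4 matrix: conjugating diag(1, 1, 3, -2) by an invertible P
# gives a matrix whose eigenvectors are the columns of P.
P = sp.Matrix([[1, 0, 1, 1],
               [0, 1, 1, 0],
               [0, 0, 1, 1],
               [0, 0, 0, 1]])
A = P * sp.diag(1, 1, 3, -2) * P.inv()

# Step 1: the "exciting" seed vector v1 = e1.  Here {v1, A v1} is
# dependent, so the minimal polynomial relative to v1 is x - 1.
v1 = sp.Matrix([1, 0, 0, 0])
f1 = minimal_polynomial_rel(A, v1)          # Poly(x - 1, x)

# lambda = 1 has a two-dimensional eigenspace: the null space of A - I.
eig1 = (A - sp.eye(4)).nullspace()          # two basis vectors

# Step 2: grow a fresh random vector into a seedling u = (A - I) x.
x_vec = sp.Matrix([1, 2, 3, 4])
u = grow_seedling(f1, A, x_vec)             # nonzero, so we proceed

# Minimal polynomial relative to u: quadratic, factors as (x - 3)(x + 2).
f2 = minimal_polynomial_rel(A, u)
new_eigenvalues = sp.roots(f2)              # {3: 1, -2: 1}

# Eigenvectors for the new eigenvalues, again via null spaces.
eig3 = (A - 3 * sp.eye(4)).nullspace()
eig_m2 = (A + 2 * sp.eye(4)).nullspace()

# 2 + 1 + 1 = 4 linearly independent eigenvectors: a full set.
print(f1.as_expr(), sp.factor(f2.as_expr()),
      len(eig1) + len(eig3) + len(eig_m2))
```

If grow_seedling had returned None, meaning the random vector was itself a combination of the known eigenvectors, we would simply pick another random vector and try again.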