So remember that if M is a square matrix and v is a non-zero vector, then there is a least value k for which the sequence v, Mv, M^2 v, ..., M^k v is linearly dependent. Consequently, there is a non-trivial solution to c_0 v + c_1 Mv + ... + c_{k-1} M^{k-1} v + M^k v = 0 (we can scale so the leading coefficient is 1, since minimality of k forces it to be non-zero), and we can use this equation to find eigenvalues without ever computing a determinant. Because remember, we want to avoid algorithms that require finding the determinant. But the important question here is: can we find all the eigenvalues this way?

So we'll introduce a useful theorem. Suppose M is a square matrix and v is a non-zero vector, and let k be the least power for which the equation above has a non-trivial solution. Then every root of the polynomial p(x) = c_0 + c_1 x + ... + c_{k-1} x^{k-1} + x^k is an eigenvalue of M. The proof is relatively straightforward. The vector equation can be factored over the roots lambda_1, ..., lambda_k of p, giving (M - lambda_1 I)(M - lambda_2 I) ... (M - lambda_k I) v = 0. The tail w = (M - lambda_2 I) ... (M - lambda_k I) v can't be zero, because if it were, there would be a non-trivial solution to an equation of lesser degree, and we assumed k was the least power for which one existed. Since w is non-zero and (M - lambda_1 I) w = 0, w is an eigenvector corresponding to the eigenvalue lambda_1. The factors commute, so we can rearrange them and the same argument holds for every root. So every root of the equation is an eigenvalue. For convenience, we'll call p the minimal polynomial of M with respect to our vector v.

Now that's very good, but there's a more important and somewhat concerning result. Suppose our vector v is a linear combination of the eigenvectors for k distinct eigenvalues. Then the minimal polynomial of M with respect to v has degree k, and its roots are exactly the eigenvalues of the eigenvectors appearing in that combination. We won't prove this; we have to leave something for the exercises. But here's why this is concerning: suppose we, by chance, picked a seed vector v that was a linear combination of only some of the eigenvectors. Then our minimal polynomial would only find those eigenvalues, and we might end up missing many of the actual eigenvalues.

So we'll deal with the easy case first. Suppose we have an n x n non-defective matrix. Then there are n linearly independent eigenvectors, so every vector in R^n can be expressed as a linear combination of the eigenvectors. Suppose we pick a random seed vector v; it is a linear combination of some of the eigenvectors, and by our theorem we'll find all the eigenvalues of the corresponding eigenvectors. So every seed vector gives us a set of eigenvalues and eigenvectors. Now, since we know how many linearly independent eigenvectors we're supposed to get, we can just check whether we've found them all. And if we haven't, we pick a new seed vector that cannot be expressed as a linear combination of the known eigenvectors. Since our matrix is non-defective, this new vector must still be a linear combination of the eigenvectors, and this means it must include some of the eigenvectors we haven't found yet. And that means we'll be able to find some additional eigenvalues. Since the number of eigenvalues is finite, we'll eventually find all of the eigenvalues and all of the eigenvectors.
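Here's a minimal numpy sketch of that idea; the lecture gives no code, so the function name and the tolerance handling are my own. It builds the sequence v, Mv, M^2 v, ... until it first goes dependent, solves for the combination, and returns the roots of the minimal polynomial, with no determinant anywhere:

```python
import numpy as np

def minimal_polynomial_eigenvalues(M, v, tol=1e-10):
    """Roots of the minimal polynomial of M with respect to seed v.

    Builds v, Mv, M^2 v, ... until the vectors first become dependent,
    solves c_0 v + ... + c_{k-1} M^(k-1) v + M^k v = 0 for the c_i,
    and returns the roots of that polynomial; each one is an eigenvalue.
    """
    krylov = [np.asarray(v, dtype=float)]
    for _ in range(M.shape[0]):
        krylov.append(M @ krylov[-1])
        K = np.column_stack(krylov)        # columns v, Mv, ..., M^k v
        if np.linalg.matrix_rank(K, tol) < K.shape[1]:
            # First dependence: M^k v is a combination of the earlier
            # vectors, so solve for the (monic) minimal polynomial.
            c, *_ = np.linalg.lstsq(K[:, :-1], -krylov[-1], rcond=None)
            return np.roots(np.concatenate(([1.0], c[::-1])))
    raise RuntimeError("no dependence found; check the tolerance")

# Quick check on a 2 x 2 example: the seed (0, 1) reaches both
# eigenvalues, but (1, 1) happens to be an eigenvector, so it finds
# only one; exactly the concern raised above.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(minimal_polynomial_eigenvalues(M, [0.0, 1.0]))  # [3. 2.]
print(minimal_polynomial_eigenvalues(M, [1.0, 1.0]))  # [3.]
```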
So, for example, let's consider this 4 x 4 matrix. We'll pick a random seed vector; no need to be fancy, we'll go with v = (1, 0, 0, 0), and we find that applying the matrix to this vector gives the same vector back. That means (1, 0, 0, 0) is an eigenvector corresponding to lambda = 1.

But since A is a 4 x 4 matrix, the characteristic polynomial has degree 4, and we should expect four eigenvalues. So how do we find the rest? To find the rest, it's important to remember that an eigenvalue can correspond to more than one linearly independent eigenvector. So let's see if lambda = 1 corresponds to multiple eigenvectors by finding all eigenvectors associated with lambda = 1: we solve (A - 1I)x = 0. Row reducing our coefficient matrix, we have solutions, and since there are two free variables, the eigenspace for lambda = 1 is two-dimensional. So we'll take one of our eigenvectors as (1, 0, 0, 0) and the other as (0, 1, -2, 1).

To find another eigenvalue, we take a vector u that is linearly independent of the known eigenvectors. For convenience, we'll choose, well, how about u = (0, 1, 0, 0)? And you know it's independent because it's online, so it must be true. Once you've satisfied yourself that u really is independent of our two known eigenvectors, we compute Au, A^2 u, and A^3 u, find the non-trivial dependence, and get our minimal polynomial, which we factor. So we know the roots 1, -2, and 3 are eigenvalues. We've already determined the eigenvectors for the eigenvalue 1, so we want the eigenvectors for the eigenvalues -2 and 3, which we find by solving (A + 2I)x = 0 and (A - 3I)x = 0. Going through that process, lambda = -2 has eigenvector (1, 1, -3, 4), and lambda = 3 has its own eigenvector. And since a 4 x 4 matrix has at most four linearly independent eigenvectors, and we have found four linearly independent eigenvectors, we have found all of the eigenvectors and all of the eigenvalues.

And if we put this all together, it tells us when to stop. Since an n x n matrix has at most n linearly independent eigenvectors, we can stop looking for eigenvalues once we've found n eigenvectors. And if we have a non-defective matrix, we will eventually find all of them; a sketch of the full loop follows below.

And so we're done with the problem. What's that? A question in the back? Oh, right, this only works for non-defective matrices. What if we have a defective matrix? How do we know when we've found all of the eigenvalues? We'll look at that next.
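To close out the non-defective case, here's a sketch of the whole loop. The transcript doesn't reproduce the lecture's 4 x 4 matrix, so the demo matrix A_demo below is a hypothetical stand-in built by similarity to be non-defective with a repeated eigenvalue; the helper names eigenvectors_for and all_eigenpairs_nondefective are my own, and the code reuses minimal_polynomial_eigenvalues from the earlier sketch.

```python
import numpy as np
from scipy.linalg import null_space

def eigenvectors_for(M, lam, tol=1e-8):
    """Basis for the eigenspace of lam: the null space of (M - lam*I),
    found by elimination rather than determinants."""
    return null_space(M - lam * np.eye(M.shape[0]), rcond=tol)

def all_eigenpairs_nondefective(M, tol=1e-8):
    """Find every eigenvalue and eigenvector of a non-defective M.

    Repeatedly: pick a seed independent of the eigenvectors found so
    far, take the roots of its minimal polynomial as new eigenvalues,
    and collect each eigenvalue's whole eigenspace. Stop once n
    independent eigenvectors have accumulated.
    """
    n = M.shape[0]
    eigvals, basis = [], np.zeros((n, 0))
    while basis.shape[1] < n:
        # Any standard basis vector outside the span of the known
        # eigenvectors works as the next seed.
        for j in range(n):
            e = np.eye(n)[:, j]
            if np.linalg.matrix_rank(np.column_stack([basis, e]), tol) > basis.shape[1]:
                break
        # Helper from the earlier sketch.
        for lam in minimal_polynomial_eigenvalues(M, e):
            lam = np.real_if_close(lam)
            if not any(np.isclose(lam, known) for known in eigvals):
                eigvals.append(lam)
                basis = np.column_stack([basis, eigenvectors_for(M, lam, tol)])
    return eigvals, basis

# Demo on the hypothetical stand-in: a non-defective 4 x 4 matrix with
# eigenvalues 1, 1, -2, 3 by construction.
rng = np.random.default_rng(0)
P = rng.standard_normal((4, 4))
A_demo = P @ np.diag([1.0, 1.0, -2.0, 3.0]) @ np.linalg.inv(P)
vals, vecs = all_eigenpairs_nondefective(A_demo)
print(np.sort(np.real(vals)))  # distinct eigenvalues: [-2.  1.  3.]
print(vecs.shape)              # (4, 4): four independent eigenvectors
```

The stop rule is the one from the lecture: quit once n independent eigenvectors have accumulated, which is guaranteed to happen when the matrix is non-defective.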