So what do we know about our eigenvectors once we've found them? Suppose M is a matrix with distinct eigenvalues lambda 1 through lambda k and corresponding eigenvectors v1 through vk. Let's see if we can prove an important property: if one eigenvector is a scalar multiple of another eigenvector, then they must be eigenvectors for the same eigenvalue. Part of the reason for proving things in mathematics is that it reinforces what we should already know or understand, and in this particular case, the important thing to know and understand is what an eigenvalue-eigenvector pair is. Remember that we have an eigenvalue-eigenvector pair when the matrix applied to the eigenvector gives us a scalar multiple of that eigenvector.

So take two of our eigenvectors, vi and vj. The one thing I do know is that, since vi and vj are eigenvectors, the matrix applied to vi gives us lambda i times vi, and the matrix applied to vj gives us lambda j times vj. What else do we know? We know by assumption that vi is some constant c times vj. So when I apply the matrix M to vi, it's the same as applying M to c times vj, which is c times the matrix applied to vj, and since vj is an eigenvector, that's c times lambda j times vj.

Now let's put that together. Since vi is an eigenvector, the matrix applied to vi equals lambda i times vi. Since vi is c times vj and vj is an eigenvector, the matrix applied to vi is also c times lambda j times vj. So lambda i times vi must be the same as c times lambda j times vj. But again, vi is c times vj, so lambda i times vi is lambda i times c times vj, and comparing our coefficients of the vector vj, we see that lambda i times c must equal c times lambda j. Since c is not equal to zero, that tells us lambda i must be equal to lambda j.

Another important result emerges as follows. Suppose I have a matrix with distinct eigenvalues lambda 1 through lambda k, and corresponding eigenvectors v1 through vk. We'll prove that if any one eigenvector can be written as a linear combination of the other eigenvectors, then those remaining eigenvectors form a dependent set.

So let v1 be some linear combination of the other eigenvectors, say a2 times v2 plus all the way up through ak times vk, where at least one of the coefficients a i is not equal to zero. If I form the scalar multiple lambda 1 times v1, I'm just multiplying every one of these vectors by lambda 1. But remember that the vi's are eigenvectors, so the matrix applied to v1 is lambda 1 times v1, and the matrix applied to v1 in the form of that linear combination gives the same coefficients, each multiplied by the corresponding eigenvalue: a i times lambda i times vi. So this gives me two different expressions for lambda 1 times v1: one from the scalar multiple of v1, and one from applying the matrix M to v1. If I subtract them, I get the zero vector as a linear combination of the vectors v2 through vk, where the coefficient on each vi is a i times (lambda 1 minus lambda i). Since at least one of the a i's is not equal to zero, and the lambda i's are distinct, at least one of these coefficients has to be non-zero. And we know that if we can find a non-trivial linear combination equal to the zero vector, then our set of vectors is dependent. So that tells us that our set of vectors v2 through vk is a dependent set of vectors.
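Since everything above is spoken out in words, it may help to see the two computations written down. Here is a minimal LaTeX sketch of both arguments, using the transcript's notation (M for the matrix, lambda i for the eigenvalues, vi for the eigenvectors); the labeling of the coefficients as a2 through ak follows the linear combination described above.

```latex
% Claim 1: if v_i = c\,v_j with c \neq 0, then \lambda_i = \lambda_j.
% Compute M v_i two ways:
\[
  \lambda_i c\, v_j \;=\; \lambda_i v_i \;=\; M v_i \;=\; M(c\, v_j)
  \;=\; c\, M v_j \;=\; c\, \lambda_j v_j .
\]
% Comparing coefficients of v_j (nonzero, since it is an eigenvector)
% gives \lambda_i c = c\,\lambda_j, and c \neq 0 forces \lambda_i = \lambda_j.

% Claim 2: if v_1 = a_2 v_2 + \cdots + a_k v_k with some a_i \neq 0,
% then v_2, \dots, v_k are dependent. Subtract
% M v_1 = \sum_{i=2}^{k} a_i \lambda_i v_i from
% \lambda_1 v_1 = \sum_{i=2}^{k} a_i \lambda_1 v_i:
\[
  0 \;=\; \lambda_1 v_1 - M v_1
    \;=\; \sum_{i=2}^{k} a_i\,(\lambda_1 - \lambda_i)\, v_i .
\]
% Some a_i \neq 0 and \lambda_1 \neq \lambda_i, so some coefficient
% a_i(\lambda_1 - \lambda_i) is nonzero: a nontrivial linear combination
% of v_2, \dots, v_k equals the zero vector.
```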
Now this leads to an important result. Remember: if I can write one vector as a linear combination of the others, then that set of vectors is dependent. And we just showed that if any eigenvector can be written as a linear combination of the remaining eigenvectors, then the remaining eigenvectors form a dependent set.

So if I can write v1 as a linear combination of the eigenvectors v2 through vk, then the vectors v2 through vk are dependent. But that means I can write v2 as a linear combination of the remaining vectors v3 through vk, and the argument we just did shows that the remaining vectors v3 through vk form a dependent set. So I can write v3 as a linear combination of the remaining vectors, and so on and so forth. Lather, rinse, repeat. But if I do that, sooner or later I run out of vectors, and I'll end up with the next-to-last vector as a scalar multiple of the last vector. So eventually we find one eigenvector that is a scalar multiple of another. But this can only happen if they're eigenvectors for the same eigenvalue, and our underlying assumption is that these are eigenvectors for different eigenvalues. That's a contradiction: if we allow any eigenvector to be a linear combination of the other eigenvectors, our entire set of eigenvectors collapses down into nothing. Since this can't be allowed to happen, no eigenvector is a linear combination of the others, and that tells us the set of eigenvectors is independent.

So if we put all of these things together, we obtain the following theorem. Let lambda 1 through lambda k be distinct eigenvalues of a matrix, with corresponding eigenvectors v1 through vk. Then the set of eigenvectors v1 through vk is a set of independent vectors.
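As a quick sanity check of the theorem, not a proof, here's a short numerical illustration, assuming NumPy is available. The particular 3-by-3 matrix is an arbitrary example chosen so its eigenvalues are distinct; independence of the k eigenvectors shows up as the eigenvector matrix having full rank k.

```python
# Numerical illustration: a matrix with distinct eigenvalues should have
# linearly independent eigenvectors, i.e. the matrix whose columns are
# those eigenvectors has full rank.
import numpy as np

# Upper triangular, so its eigenvalues are the diagonal entries 2, 3, 5
# (distinct by construction).
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

eigenvalues, V = np.linalg.eig(M)   # columns of V are the eigenvectors
print("eigenvalues:", eigenvalues)  # expect 2.0, 3.0, 5.0

# Independence check: 3 independent vectors in R^3 give rank 3.
print("rank of eigenvector matrix:", np.linalg.matrix_rank(V))  # expect 3
```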