One of the reasons eigenvalues and eigenvectors are so important is that they allow us to perform what's called an eigendecomposition of a matrix. So let's consider a special case first. Suppose the eigenvectors of A form a set V, and suppose we have some vector x in the span of V. Then the action of the matrix A on the vector x can be found easily. That's because we can write x as a linear combination of the eigenvectors, and when we apply A to x, the distributive property of matrix multiplication (the fact that A is a linear operator) lets us find the result by applying A to each of the eigenvectors, and we know exactly what A does to an eigenvector: it just scales it by the corresponding eigenvalue. So that gives us Ax as a linear combination of the eigenvectors.

Since a matrix is completely determined by what its linear transformation does to every vector, this suggests we can use the eigenvalues and the eigenvectors to build the eigendecomposition of a matrix. There are some necessary preliminaries, though. Remember that only square matrices have eigenvalues and eigenvectors (in a later course you'll find out what happens when the matrix is not square), so A has to be an n by n matrix. In addition, because we have to define A in terms of what it does to any vector in its vector space, the span of the eigenvectors must be the entire vector space, which means we must have n linearly independent eigenvectors. But if that happens, any vector x can be expressed as a linear combination of the eigenvectors, which means x can be written as V times x_V, where V is the matrix whose columns are the eigenvectors and x_V is the vector of coordinates of x with respect to our set of eigenvectors. Consequently, V inverse acting on x gives us that set of coordinates.

So let's put these things together. Suppose A is an n by n matrix with n linearly independent eigenvectors, and let V be the matrix whose columns are these eigenvectors. Then any vector x can be represented as V acting on the coordinates of x with respect to V, and since A applied to each column of V just scales that column by its eigenvalue, A acting on x is V times Lambda times the coordinates of x with respect to V, where Lambda is the diagonal matrix whose entries are the eigenvalues of A. But the coordinates of x with respect to V are just V inverse acting on x, and so A acting on x is the same as V Lambda V inverse acting on x. And if two matrices do the same thing to every vector x, it follows that the matrix A and the matrix V Lambda V inverse must be the same matrix.

That's quite abstract, so let's take a look at an example. Take the matrix A with rows 1, 3 and 2, 2. We'll find an eigendecomposition, and then, to motivate all of this work, we'll find A to the 100th power. Previously we found the eigenvalues and eigenvectors for this matrix, so our matrix Lambda is the diagonal matrix whose entries are the eigenvalues negative 1 and 4, and V is the matrix whose columns are the corresponding eigenvectors: (negative 3, 2) for negative 1 and (1, 1) for 4. We'll find V inverse using the adjoint method. The determinant of V is negative 5, and so V inverse is the matrix with rows negative 1/5, 1/5 and 2/5, 3/5. That tells us our matrix A is V Lambda V inverse: I can copy that down, because I know what the matrix V is, I know what the matrix Lambda is, and I know what the matrix V inverse is.
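To make that concrete, here is a minimal numerical check of the decomposition we just found, written as a NumPy sketch (the library choice is a convenience for illustration, not part of the lecture). It rebuilds A from V, Lambda, and V inverse.

```python
import numpy as np

# The matrix from the example.
A = np.array([[1.0, 3.0],
              [2.0, 2.0]])

# Columns of V are the eigenvectors for the eigenvalues -1 and 4.
V = np.array([[-3.0, 1.0],
              [ 2.0, 1.0]])

# Lambda: diagonal matrix of the eigenvalues, in the same order.
Lam = np.diag([-1.0, 4.0])

V_inv = np.linalg.inv(V)      # equals [[-1/5, 1/5], [2/5, 3/5]]

print(V_inv)
print(V @ Lam @ V_inv)        # reconstructs A: [[1, 3], [2, 2]]
```

Scaling either eigenvector by a nonzero constant changes V and V inverse, but not the product V Lambda V inverse, so any choice of eigenvectors in the same order gives back the same A.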
What about A to the 100th power? Well, the important thing to remember is that V inverse times V gives us the identity matrix. So when we multiply A by A repeatedly, writing each copy of A as V Lambda V inverse, every V inverse ends up next to a V, and all of those intermediate factors drop out. What's left is Lambda to the 100th power in the middle, with V at the beginning and V inverse at the end. And because Lambda is a diagonal matrix, Lambda to the 100th power is very easy to find: we just raise each diagonal entry to the 100th power. So we find Lambda to the 100th power, multiply by V on the left and V inverse on the right, and that gives us our expression for A to the 100th power.
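Here is the same example carried through for A to the 100th power, again as a NumPy sketch. Lambda to the 100th power is computed entry by entry on the diagonal, and the result is checked against repeated multiplication of A.

```python
import numpy as np

A   = np.array([[1.0, 3.0],
                [2.0, 2.0]])
V   = np.array([[-3.0, 1.0],
                [ 2.0, 1.0]])
Lam = np.diag([-1.0, 4.0])

# Lambda is diagonal, so its 100th power is just each diagonal entry
# raised to the 100th power: diag((-1)**100, 4**100).
Lam_100 = np.diag(np.diag(Lam) ** 100)

# A**100 = V times Lambda**100 times V inverse.
A_100 = V @ Lam_100 @ np.linalg.inv(V)

# Sanity check against repeated multiplication of A itself; the two
# agree up to floating-point round-off (entries are near 4**100).
assert np.allclose(A_100, np.linalg.matrix_power(A, 100))
print(A_100)
```

Carrying the product V times Lambda to the 100th power times V inverse out by hand, and using the fact that negative 1 to the 100th power is 1, gives A to the 100th power as one fifth of the matrix with rows (2 times 4^100 plus 3, 3 times 4^100 minus 3) and (2 times 4^100 minus 2, 3 times 4^100 plus 2). The payoff of the decomposition is that a hundred matrix multiplications collapse into a single scalar power of each eigenvalue.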