One of the most important concepts of linear algebra is that of an eigenvalue and its associated eigenvector. We'll define them this way. Suppose A is a matrix. An eigenvalue-eigenvector pair is a value lambda and a nonzero vector x such that the matrix A applied to the vector x gives us lambda times the vector x. If we look at this carefully and think of A as some sort of linear transformation, then what this equation tells us is that the linear transformation A applied to x gives us a scalar multiple of x. Now, it's very important to understand that the eigenvector x cannot be the zero vector; however, the eigenvalue lambda can be equal to zero. One more note: the prefix "eigen" is German for "characteristic," because in some sense these eigenvalues and eigenvectors are characteristics of A.

Before we try to find eigenvalues and eigenvectors, let's make a few observations. First of all, in order for there to be any possibility of having an eigenvalue or eigenvector, A has to be a square matrix. You should prove that this is the case. Next, suppose I have an eigenvector for a given eigenvalue; then any nonzero scalar multiple of that eigenvector will also be an eigenvector for that eigenvalue. And again, this is something you should be able to prove very easily. And finally, suppose I have two eigenvectors for a given eigenvalue; then their sum, as long as it isn't the zero vector, will also be an eigenvector for that eigenvalue.

Now let's see how we can solve this problem of finding eigenvalues and eigenvectors. Let A be the 2 by 2 matrix with first row 1, 3 and second row 2, 2, and let's see if we can find all eigenvalues and eigenvectors of our matrix A. If we want to find the eigenvalues and eigenvectors, we can try solving a system of equations. What we want is the linear transformation applied to a column vector x1, x2 to give us a scalar multiple lambda of that column vector. And so this gives us a system of equations whose coefficient matrix is A minus lambda times the identity. We can row-reduce this system of equations.
And remember that if x is an eigenvector for eigenvalue lambda, then so is any nonzero scalar multiple of our vector x. That means there must be an infinite number of solutions x1, x2, and so we must be able to parameterize the solutions: we need at least one free variable. Well, the only way that can happen is if we have a row of zeros someplace, so the row reduction has to end with a row of zeros. And so it's necessary that this last term, (2 - lambda)(1 - lambda) - 6, be equal to zero. We solve that equation and find the solutions lambda = 4 and lambda = -1. And this gives us two eigenvalues: lambda = -1 and lambda = 4.

So how do we find the eigenvectors? Remember that when our linear transformation acts on an eigenvector, it produces a scalar multiple of that eigenvector. And since we know the eigenvalues, this allows us to set up a system of equations. So for lambda = -1, we know that our matrix acting on the vector x1, x2 gives us -1 times x1, x2. We can rewrite this as a system of linear equations and then solve to get the solutions in parameterized vector form: x1, x2 = s times -3, 2. Since any nonzero scalar multiple of an eigenvector will also be an eigenvector, we can let s = 1 and get -3, 2 as our representative eigenvector.

Maybe. This process is complicated enough, and there are plenty of places to make mistakes, so let's actually verify that this works. If we did everything correctly, then applying the linear transformation A to the vector -3, 2 should give us a scalar multiple of our original vector. In fact, it should be -1 times our original vector, because that's the eigenvalue. So let's check it out. Let's multiply our matrix by our column vector -3, 2. If we do that, we find we get 3, -2, which is in fact -1 times our original vector. And so A applied to our eigenvector gives us -1 times our eigenvector, as we wanted.
What about our other eigenvalue, lambda = 4? So again, for lambda = 4, we want our matrix A applied to whatever the eigenvector is to be 4 times the eigenvector. That gives us the system of equations -3x1 + 3x2 = 0 and 2x1 - 2x2 = 0. We take our coefficient matrix, row-reduce it, and get the parameterized solutions in vector form: x1, x2 = s times 1, 1. And again, we let s = 1 to get our representative eigenvector 1, 1. Now, since the first eigenvector worked for the first eigenvalue, you might assume that this eigenvector works too. But I'd check it anyway. So again, the matrix A applied to the vector 1, 1 should give us a scalar multiple of the vector 1, 1. In particular, it should give us 4 times the vector 1, 1. We apply our matrix to the vector 1, 1 and get the vector 4, 4, which is in fact 4 times our original vector, as we wanted. And so now we have two eigenvalues and their corresponding eigenvectors.