So what happens in a system of linear differential equations when the eigenvalues are repeated? Suppose that in our linear system dx/dt = Ax, the coefficient matrix A has an eigenvalue λ with algebraic multiplicity greater than 1. If the geometric multiplicity of λ matches its algebraic multiplicity, we can still produce solutions of the form c1 e^(λt) v1 + c2 e^(λt) v2 and so on, where v1, v2, and so on are the linearly independent eigenvectors. But what if we don't have enough linearly independent eigenvectors, and therefore not enough arbitrary constants?

Well, let's reason by analogy with the one-dimensional case: when r was a repeated root of the characteristic polynomial of our linear differential operator, we obtained solutions that were linear combinations of e^(rt), t e^(rt), t² e^(rt), and so on. So maybe, if λ is a repeated eigenvalue of A corresponding to eigenvector u, our solutions could be of the form e^(λt) times some vector, t e^(λt) times some other vector, and so on. Unfortunately, the term t e^(λt) u by itself doesn't work. After mucking around a bit, we find that we have to assume solutions of the form e^(λt) (t u + v) for some constant vector v.

So let's consider this system of differential equations. We'll rewrite it in matrix form, find the characteristic polynomial, and find the eigenvalue and its eigenvector. That gives us a general solution that includes c1 e^(2t) (−1, 1) — but we should have a second arbitrary constant. To get a second solution, we'll assume our function has the form e^(2t) (t (−1, 1) + v), that is, t times the first eigenvector plus some other vector v. We'll apply our differential operator to this. Since the derivative of the vector function must also equal the matrix applied to the vector function, we get a second expression, which we expand by performing the matrix multiplication. This gives us two expressions for the derivative of our vector function.
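The system itself is shown on screen rather than read aloud; the component equations later in the derivation are consistent with the matrix A = [[1, −1], [1, 3]], which is an assumption here. A minimal sketch of the eigenvalue computation under that assumption:

```python
# Sketch of the eigenvalue computation for the ASSUMED coefficient
# matrix of the worked example: x' = x - y, y' = x + 3y.
A = [[1.0, -1.0],
     [1.0,  3.0]]

# Characteristic polynomial of a 2x2 matrix:
#   lambda^2 - (trace) lambda + (det)
trace = A[0][0] + A[1][1]                      # 4
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # 4
disc = trace**2 - 4 * det                      # 0 => repeated eigenvalue
lam = trace / 2                                # lambda = 2, multiplicity 2

# Eigenvector: (A - lam I) v = 0 reads -x - y = 0, so v = (-1, 1).
v = (-1.0, 1.0)
residual = [A[0][0] * v[0] + A[0][1] * v[1] - lam * v[0],
            A[1][0] * v[0] + A[1][1] * v[1] - lam * v[1]]
print(lam, disc, residual)   # 2.0 0.0 [0.0, 0.0]
```

Because the discriminant is zero, λ = 2 is a double root, and A − 2I has rank 1, so there is only one independent eigenvector — exactly the shortage of constants the lecture is addressing.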
One we obtained through direct differentiation, and the other because the derivative is supposed to equal the matrix applied to the vector function. For these to be equal, the corresponding components have to be equal.

Comparing our two equations, the first components give −e^(2t) − 2t e^(2t) + 2 e^(2t) c1 from direct differentiation, which must equal −2t e^(2t) + e^(2t) (c1 − c2) from the matrix product. Cleaning up the algebra a little, and since this has to hold for all values of t, the coefficients −1 + 2c1 and c1 − c2 must be equal, which tells us that c1 + c2 = 1.

What about the second components? There we get e^(2t) + 2t e^(2t) + 2 e^(2t) c2 from the first equation, which must equal 2t e^(2t) + e^(2t) (c1 + 3c2). Again cleaning up the algebra, and again since this must hold for all t, the coefficients 1 + 2c2 and c1 + 3c2 must be equal. And so we find c1 + c2 = 1 — the same equation, which might worry us a little. But actually, we should have expected that. Remember, we're going to multiply these solutions by arbitrary constants, and a scalar multiple of a solution is also a solution. The freedom in v amounts to adding a multiple of the eigenvector, which just adds a multiple of the first solution and gets absorbed into its constant, so the vector v is not unique. In fact, you can think of this as an important check on your work: if you've done everything correctly, there should be infinitely many valid values for this constant vector. Any values of c1 and c2 satisfying c1 + c2 = 1 will work, so let's take c1 = 0, in which case c2 = 1.
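The coefficient-matching above amounts to solving (A − 2I)v = u for the constant vector v, where u = (−1, 1) is the eigenvector. A short check, again under the assumption A = [[1, −1], [1, 3]], that the choice v = (0, 1) — and indeed any v with c1 + c2 = 1 — satisfies that equation:

```python
# Solve (A - 2I) v = u for the generalized eigenvector v = (c1, c2),
# with the ASSUMED matrix A and the eigenvector u for lambda = 2.
A = [[1.0, -1.0],
     [1.0,  3.0]]
u = (-1.0, 1.0)
lam = 2.0

B = [[A[0][0] - lam, A[0][1]],
     [A[1][0], A[1][1] - lam]]   # A - 2I = [[-1, -1], [1, 1]]

# Both rows of B v = u encode the single equation c1 + c2 = 1,
# so v has one free parameter.  The lecture's choice is (0, 1):
v = (0.0, 1.0)
Bv = (B[0][0] * v[0] + B[0][1] * v[1],
      B[1][0] * v[0] + B[1][1] * v[1])
print(Bv == u)   # True

# Any other point on c1 + c2 = 1 works too, e.g. (1, 0):
w = (1.0, 0.0)
Bw = (B[0][0] * w[0] + B[0][1] * w[1],
      B[1][0] * w[0] + B[1][1] * w[1])
print(Bw == u)   # True
```

The two choices of v differ by (1, −1), a multiple of the eigenvector u, which is exactly the non-uniqueness discussed above.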
And that'll give us our second solution, e^(2t) (t (−1, 1) + (0, 1)). As before, any linear combination of the two solutions is also a solution, so the general solution is a1 e^(2t) (−1, 1) + a2 e^(2t) (t (−1, 1) + (0, 1)) for arbitrary constants a1 and a2.
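As a sanity check, we can verify numerically that this general solution satisfies x' = Ax, again under the assumption A = [[1, −1], [1, 3]], with the arbitrary constants fixed at sample values and the derivative approximated by a central finite difference:

```python
import math

# Check that x(t) = a1 e^(2t) (-1, 1) + a2 e^(2t) (t(-1, 1) + (0, 1))
# satisfies x' = A x for the ASSUMED matrix A.
A = [[1.0, -1.0],
     [1.0,  3.0]]

def x(t, a1=2.0, a2=-3.0):      # arbitrary sample constants
    e = math.exp(2 * t)
    return (a1 * e * -1.0 + a2 * e * -t,
            a1 * e *  1.0 + a2 * e * (t + 1.0))

def deriv(t, h=1e-6):
    """Central finite-difference approximation of x'(t)."""
    xp, xm = x(t + h), x(t - h)
    return ((xp[0] - xm[0]) / (2 * h), (xp[1] - xm[1]) / (2 * h))

for t in (0.0, 0.5, 1.0):
    lhs = deriv(t)
    xt = x(t)
    rhs = (A[0][0] * xt[0] + A[0][1] * xt[1],
           A[1][0] * xt[0] + A[1][1] * xt[1])
    assert abs(lhs[0] - rhs[0]) < 1e-3 and abs(lhs[1] - rhs[1]) < 1e-3
print("x' = A x holds at the sampled points")
```

The check passes for any choice of the constants, since both basis solutions satisfy the system individually.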