So let's introduce a completely useless way to solve equations. Why would we introduce something that's completely useless? Because in mathematics, as in life, it's the journey, not the destination. So let's see if we can find a formula for the solutions to a system of equations. We can do this by reducing the augmented coefficient matrix. Our row echelon form gives us a system of equations, and provided the coefficient of x2 is not equal to zero, we can divide by it to get our solution for x2, and then use back substitution to find x1. This gives us a set of formulas for x1 and x2 in terms of the coefficients and constants of the original equations. And this result illustrates something about the nature of time. If you'd done this 300 years ago, people today would be talking about your rule. But since you didn't do this 300 years ago, people are talking about Cramer's rule, because a Swiss mathematician by the name of Gabriel Cramer gave the first generalized statement of this form. He essentially said the solutions to this system of linear equations are going to be this. Cramer didn't include the proviso that the denominator can't be equal to zero; it was implied by the formulas. Cramer went on to give a similar rule for solving systems with three equations and three unknowns. As stated by Cramer, these rules are rather horrific-looking formulas, but let's see if we can simplify them a little bit. A little analysis goes a long way, and in this case, two important pieces of analysis are: what do these formulas have in common, and where have I seen this before? The first thing we might notice is that both of these formulas have this same expression as their denominator. And, well, I have no idea where I've seen this before, but I do know that these formulas come from the solution to the system of equations. So maybe I'll write down my system of equations and get the coefficient matrix.
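Writing out the general 2×2 case described above (the notation here is assumed, since the lecture's board work isn't visible: a_ij for the coefficients and b_i for the constants), elimination and back substitution give:

```latex
\begin{aligned}
a_{11}x_1 + a_{12}x_2 &= b_1\\
a_{21}x_1 + a_{22}x_2 &= b_2
\end{aligned}
\qquad\Longrightarrow\qquad
x_1 = \frac{b_1 a_{22} - b_2 a_{12}}{a_{11}a_{22} - a_{12}a_{21}},
\qquad
x_2 = \frac{a_{11} b_2 - a_{21} b_1}{a_{11}a_{22} - a_{12}a_{21}},
```

provided the shared denominator a11 a22 − a12 a21 is not zero.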
And if we look at the determinant of our coefficient matrix, we get exactly the denominators of our two solutions. What shall we do next? As a general rule, figuring out the next step is a matter of intuition and insight, most of which is gained through having solved many, many, many problems. But there are some ways of gaining that insight and intuition without having solved quite so many problems. And here's a case where mathematics diverges from the real world. In the real world, wishing it were so doesn't work. In mathematics, it's sometimes useful, but you have to make the right wish. In general, the wish that works in mathematics is wishing things were more consistent. In this particular case, our denominator is a determinant, so we might wish that our numerators were also determinants. If we look at our numerator for x1, we see that it has the factors a22 and a12, so we'll want to keep those. It also has the factors b1 and b2, which seem to have replaced the a11 and the a21. Well, if I just replace them, then I find that the determinant of this matrix, which is the coefficient matrix with the first column replaced by the column of constants, gives us the numerator of x1. And we make the same wish: I wish that the numerator of x2 were also a determinant. And wishing will make it so, provided we do a little bit of work. We're not quite so divergent from the real world in that respect. We see that if we replace the second column with the column of constants, then the determinant of the resulting matrix will in fact be the numerator of the fraction that gives us x2. We can generalize this result to any system of equations. So let A be the coefficient matrix of a system of equations, and let A_i be the coefficient matrix where the i-th column, corresponding to the coefficients of the variable x_i, has been replaced by the constant vector.
Provided our determinant is not equal to zero, x_i is the quotient of the determinant of this A_i matrix divided by the determinant of A. So let's say I want to solve a system of equations using Cramer's rule. I need to find a bunch of determinants. First, I have my matrix of coefficients, and we'll find that determinant. Next, we'll replace the first column with the column of constants and find the determinant, and that'll give us negative 22. Next, we'll take our coefficient matrix, replace the second column with our column of constants, and the resulting matrix has determinant 7. We can substitute these values into Cramer's rule and find the solutions for x1 and x2. Now, on the surface, Cramer's rule seems to be a great thing: here's a way that we can solve a system of equations very easily. The problem is that when we actually try to use it, we find it's very difficult to implement, because we have to find all of these determinants. So in practice, nobody uses Cramer's rule to solve anything more complicated than a 2×2 system. However, the generalization of Cramer's rule does give us a very important result. If the determinant of A is not equal to zero, then the x_i's have unique solutions, because the determinants have unique values. And the real importance is the journey, not the destination. In this particular case, the journey tells us the following. Let A be the coefficient matrix for a system of equations. If the determinant of A is not equal to zero, then the system of equations has a unique solution. On the other hand, if the determinant of A is equal to zero, or fails to exist because the coefficient matrix is not square and so has no determinant, then the system of equations does not have a unique solution. There are two possibilities at this point: either the system of equations has infinitely many solutions, or it has no solutions at all.
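The general rule x_i = det(A_i)/det(A) can be sketched in a few lines of code. This is a minimal illustration, not the lecture's own example (the numbers in the usage below are mine, chosen for a clean check), and it uses exact fractions to avoid rounding:

```python
from fractions import Fraction

def det(m):
    # Determinant by Laplace expansion along the first row.
    # Slow for large matrices, but fine for the small systems
    # Cramer's rule is ever used on in practice.
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer(A, b):
    # Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A),
    # where A_i is A with its i-th column replaced by b.
    d = det(A)
    if d == 0:
        # det(A) = 0: no unique solution (either none or infinitely many).
        raise ValueError("det(A) = 0: the system has no unique solution")
    xs = []
    for i in range(len(A)):
        A_i = [row[:i] + [b_k] + row[i + 1:] for row, b_k in zip(A, b)]
        xs.append(Fraction(det(A_i), d))
    return xs

# Hypothetical 2x2 system:  2*x1 + x2 = 3,  x1 + 3*x2 = 5.
print(cramer([[2, 1], [1, 3]], [3, 5]))  # [Fraction(4, 5), Fraction(7, 5)]
```

Plugging the answers back in confirms them: 2(4/5) + 7/5 = 3 and 4/5 + 3(7/5) = 5. The `ValueError` branch is exactly the dichotomy stated above: when det(A) = 0, the rule gives no answer, and the system has either no solutions or infinitely many.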