Provided that it exists, we can always find the inverse of a matrix by solving a system of equations, and a good habit for linear algebra students is to always identify what system of equations needs to be solved. So if I want to find the inverse of a 2x2 matrix with entries a, b, c, d, I can solve a system of four equations in four unknowns x1, x2, x3, x4. Another good habit for mathematicians, or human beings in general, is asking: how can we do this more efficiently?

Now, one of the things we notice about this system is that our variables actually split into two sets. The first and third equations have the variables x1 and x3, but they don't have the variables x2 and x4; that is, the coefficients of x2 and x4 are zero. Likewise, the second and fourth equations do not include the variables x1 and x3, but do include the variables x2 and x4. Because the variables split this way, I can solve for the variables in pairs. For example, since x1 and x3 occur only in the first and third equations, we can find them by solving the system ax1 + bx3 = 1, cx1 + dx3 = 0, and we can solve it by performing a sequence of row operations on the augmented coefficient matrix. Likewise, x2 and x4 occur only in the second and fourth equations, so we can find them by solving the system ax2 + bx4 = 0, cx2 + dx4 = 1, again by performing a sequence of row operations on the augmented coefficient matrix.

But it's important to remember that the row operations depend only on the coefficients. So whatever row operations we need to solve for x1 and x3, we apply the same row operations to solve for x2 and x4. In fact, the only difference between the two systems is what happens to the constant terms. So what we can do is perform all the necessary row operations on a doubly augmented coefficient matrix. We form a double-wide matrix where the left-hand side is the original matrix we're trying to invert and the augmented portion is the identity matrix of the appropriate size. If we row-reduce this double-wide matrix, the entries on the right-hand side will be the entries of the inverse.

This leads to the following theorem. Let A be a square matrix and I the identity matrix of the same size. If there is a sequence of elementary row operations that transforms the block matrix [A | I], with A on the left and the identity on the right, into a block matrix [I | B], with the identity on the left and some new matrix B on the right, then this matrix B is the inverse of our matrix A.

So let's apply that. Suppose I want to find the inverse of the matrix with rows 4, 5 and 5, 6. Our theorem says that we can take this matrix, augment it with the appropriately sized identity matrix, and then row-reduce to find the inverse. So we'll augment our 2x2 matrix with the 2x2 identity matrix and perform our row reductions. And after all the dust settles, we have the identity matrix here on the left-hand side, and this thing on the right-hand side is the inverse. Or so I claim.

Here's a good rule of life that's also applicable in mathematics: anytime somebody comes around and says, "Hey, here's a better way of doing something you've already done," you should ask yourself, does it actually work? So we claim that the matrix with rows -6, 5 and 5, -4 is the inverse matrix. Well, let's see what happens when we multiply. If we multiply on the right by this matrix, we get the identity matrix, and so this really is the inverse.
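As a concrete check on the procedure, here is a minimal sketch in Python of the doubly augmented row reduction, applied to the 2x2 example above. The lecture works this out by hand; the function name invert_via_row_reduction, the use of numpy, and the partial-pivoting step are my own choices for illustration, not part of the lecture.

```python
import numpy as np

def invert_via_row_reduction(A):
    """Row-reduce the doubly augmented matrix [A | I]; if the left block
    becomes I, the right block is the inverse of A (Gauss-Jordan)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # form the double-wide matrix [A | I]
    for col in range(n):
        # Partial pivoting: swap up the row with the largest candidate pivot.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular; no inverse exists")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                      # scale the pivot row to a leading 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]     # clear the rest of the column
    return M[:, n:]                                # right-hand block is the inverse

A = np.array([[4, 5],
              [5, 6]])
A_inv = invert_via_row_reduction(A)
print(A_inv)        # [[-6.  5.]
                    #  [ 5. -4.]]  -- matching the claimed inverse
print(A @ A_inv)    # identity, so A_inv is a right inverse
print(A_inv @ A)    # identity again, so it is a left inverse as well
```

The last two lines carry out exactly the "does it actually work?" check from the lecture, verifying the product in both orders.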
At least, if we multiply on the right we get the identity, so this is a right inverse. We should verify that it also works as a left inverse.

What about a 3x3 matrix? Here's where this block matrix form is particularly useful, because if we wanted to find the inverse of a 3x3 matrix, we'd need to set up a system of nine linear equations in nine unknowns, which is not impossible, just a little tedious. So here I've compacted that system of nine linear equations down to a three-row doubly augmented matrix. We augment our original coefficient matrix with the 3x3 identity matrix and perform a sequence of row operations. And again, after all the dust settles, the right-hand side, if we've done everything correctly, should hold the multiplicative inverse of our matrix.
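The lecture doesn't give the entries of its 3x3 matrix, so the matrix below is a made-up invertible example, chosen only to illustrate the same [A | I] procedure at size 3. This sketch uses sympy, whose rref method carries out the full sequence of row operations in one call and keeps exact fractions rather than floating-point approximations.

```python
import sympy as sp

# Hypothetical 3x3 example (the lecture's actual matrix isn't specified).
A = sp.Matrix([[1, 2, 0],
               [0, 1, 1],
               [2, 0, 1]])

augmented = A.row_join(sp.eye(3))   # form the 3x3 block matrix [A | I]
reduced, _ = augmented.rref()       # row-reduce the double-wide matrix
A_inv = reduced[:, 3:]              # right-hand block after the dust settles
print(A_inv)

# The same sanity check as in the 2x2 case: multiply in both orders.
assert A * A_inv == sp.eye(3) and A_inv * A == sp.eye(3)
```

The assertion at the end is the 3x3 version of the "or so I claim" verification: the candidate inverse is only accepted once both products give the identity.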