So now an interesting thing happened when we found the determinant of this 3 by 3 matrix. We computed it this way, but notice that the determinant is 2 times the quantity 3 times 4 minus 5 times 1. But 3 times 4 minus 5 times 1 is the determinant of the 2 by 2 matrix 3, 1, 5, 4, which is the determinant of part of the original matrix. This particular type of expression shows up a lot when we find determinants, and so it's worth defining a minor as follows. Let M be a square matrix. The i-jth minor, designated M i comma j, is the determinant of the sub-matrix formed by deleting the i-th row and j-th column of M.

So let's find M 2 comma 2 for this given matrix. Definitions are the whole of mathematics, all else is commentary, so let's pull in that definition. What we want to do is delete the second row and the second column, and that leaves us with the matrix 5, 3, 2, 3. The minor itself is the determinant of that matrix.

We can use the minor to find the determinant of any matrix. That does require we establish one minor step here. Suppose we have an n by n block matrix, which I'll write with some coefficient a in the upper left-hand corner, a row of zeros to its right, a column vector v below it, and some smaller square matrix. Now, using linearity, we know that the determinant of this matrix is going to be the sum of the determinants of two matrices, where I'll break this first column apart into (a, 0) and (0, v). But now notice the second matrix: it has a row of zeros, and since a matrix with a row of zeros has determinant 0, the determinant of our original block matrix is the same as the determinant of the slightly simpler matrix. And because of linearity, we can remove the factor of a from the first row, so I just need to find the determinant of this last matrix. We leave the rest as an exercise for the viewer, hint: induction, and this gives us the following theorem.
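The minor definition above is easy to sketch in code: delete one row and one column, then take the determinant of what remains. A minimal sketch in Python, where the 3 by 3 example matrix is hypothetical (the lecture only shows that deleting row 2 and column 2 leaves the matrix 5, 3, 2, 3, so the middle-row and middle-column entries here are made up for illustration):

```python
def submatrix(M, i, j):
    """Delete row i and column j (1-indexed, as in the lecture) from M."""
    return [[M[r][c] for c in range(len(M)) if c != j - 1]
            for r in range(len(M)) if r != i - 1]

def det2(A):
    """Determinant of a 2 by 2 matrix: ad - bc."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def minor(M, i, j):
    """The i-jth minor of a 3 by 3 matrix M: a 2 by 2 determinant."""
    return det2(submatrix(M, i, j))

# Hypothetical matrix; only the corner entries 5, 3, 2, 3 come from the lecture.
M = [[5, 1, 3],
     [4, 6, 7],
     [2, 8, 3]]

print(minor(M, 2, 2))  # det of [[5, 3], [2, 3]] = 5*3 - 3*2 = 9
```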
If we want to find the determinant of a block matrix where the first row and first column are almost all zeros, with the exception of the leading term, the determinant is going to be that leading term times the corresponding minor. So that's great if our first row only has one non-zero entry, but what if our first row has more than one non-zero entry? For that, we'll use linearity twice. First, we'll expand along a row or a column, and then we'll expand in a way that we might describe as being perpendicular. We'll see what that means in a moment.

So let's say we want to expand the determinant along the first row. Take a look at that first row, which is (2, 7, 3), and which we can write as the vector sum (2, 0, 0) plus (0, 7, 0) plus (0, 0, 3). We'll make each of these the first row of a new matrix and leave the remaining entries unchanged, so our determinant can be written as the sum of three determinants. Now let's take a look at that first determinant. If we expand it along the first column, we get three more matrices, but notice that the second and third of these have a row of zeros, so their determinants are zero, and the only thing left is the first matrix. Similarly, if we expand the second matrix along the second column, the rows of zeros again mean that two of the resulting matrices don't matter, and the third matrix works the same way. So the determinant is expressible in terms of three other determinants, and we can apply our theorem. But there's a problem: our theorem only tells us what to do when the first row, first column entry is non-zero. Well, that's okay, we just have to rearrange the columns. For the second matrix, if I switch the first and second columns, that will move this non-zero entry to the first row, first column position, and since we've switched adjacent columns, that will change the sign of the determinant. Equivalently, we'll be subtracting that determinant.
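Both steps above lean on linearity and on the sign flip from a column swap, and we can check each numerically. A sketch using a brute-force determinant (the permutation formula, so nothing here depends on the expansion we're trying to justify); the first row (2, 7, 3) is from the lecture, but the second and third rows are assumed for illustration:

```python
from itertools import permutations

def det(M):
    """Brute-force determinant via the permutation (Leibniz) formula."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        # Sign of the permutation: -1 to the number of inversions.
        inv = sum(1 for a in range(n) for b in range(a + 1, n)
                  if perm[a] > perm[b])
        prod = 1
        for r in range(n):
            prod *= M[r][perm[r]]
        total += (-1) ** inv * prod
    return total

rest = [[1, 4, 2], [6, 0, 5]]        # assumed second and third rows
full = [[2, 7, 3]] + rest
pieces = [[[2, 0, 0]] + rest,        # first row split by linearity
          [[0, 7, 0]] + rest,
          [[0, 0, 3]] + rest]
print(det(full), sum(det(P) for P in pieces))  # the two agree

# Switching the adjacent first and second columns flips the sign.
swapped = [[row[1], row[0], row[2]] for row in pieces[1]]
print(det(pieces[1]), det(swapped))
```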
For the last matrix, I have to switch the second and third columns, which introduces a factor of minus one, and then switch the first and second columns, which introduces a second factor of minus one. The two factors cancel, so the determinant of the rearranged matrix is the same as the determinant of the original, and now our theorem applies. So the determinant of the first matrix is going to be 2 times the minor M 1 comma 1. The second matrix subtracts 7 times the minor M 1 comma 2, and then we add 3 times the minor M 1 comma 3. And since these are 2 by 2 matrices, we can compute the determinants directly and get our final answer.

And so we have a general approach, sort of. This suggests the following way to compute the determinant: pick any row or column, multiply each entry by its minor, and then add or subtract those products. And there's just one little problem: how do we decide when to add and when to subtract? We'll take a look at that next.
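The expansion just derived, 2 times M 1,1 minus 7 times M 1,2 plus 3 times M 1,3, can be sketched directly. As before, only the first row (2, 7, 3) comes from the lecture; the lower rows are assumed for illustration:

```python
def minor(M, i, j):
    """The i-jth minor of a 3 by 3 matrix (1-indexed): a 2 by 2 determinant."""
    sub = [[M[r][c] for c in range(3) if c != j - 1]
           for r in range(3) if r != i - 1]
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

M = [[2, 7, 3],
     [1, 4, 2],   # assumed entries
     [6, 0, 5]]

# Expand along the first row, with the signs derived in the lecture: +, -, +.
d = (M[0][0] * minor(M, 1, 1)
     - M[0][1] * minor(M, 1, 2)
     + M[0][2] * minor(M, 1, 3))
print(d)  # 2*20 - 7*(-7) + 3*(-24) = 17
```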