Let me generalize the way we calculated determinants before; the calculation we did is actually a special case of how one usually computes a determinant. But before we do the more general calculation, let's make another definition. Again, we have an n by n matrix A. We define the ij cofactor, written C_ij, to be the determinant of the ij minor matrix times the factor (-1)^(i+j). So you add together the two indices of the position, and that gives you the power of negative one. I want to point out that the formula we had for determinants simplifies if we use cofactors. We had a_11, the entry in the 1,1 position, times the determinant of the 1,1 minor; in this notation, that's just a_11 times the cofactor C_11. The next term is a_12 times C_12, and although there was a negative sign in front of it before, that sign is now hidden inside the cofactor because of the alternating factor (-1)^(i+j). So the determinant formula we used before becomes det A = sum over j from 1 to n of a_1j C_1j. Notice that it's the column index j that changes; the row index always stays at one. With cofactors, the determinant formula looks a lot simpler. When working with cofactors, we do want to keep track of the signs, plus, minus, plus, minus. The sign is just determined by the sum of the indices of the position, but here is a pattern that can be useful as you work with cofactors: if you go along the first row, the signs look like plus, minus, plus, minus, plus, minus, the same alternating scheme we saw before. But what if you go along some other row?
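As a minimal sketch of the definition, here is the minor and the cofactor in Python. I'm using 0-based indexing, which is safe because (i-1)+(j-1) = i+j-2 has the same parity as i+j, so the sign (-1)^(i+j) comes out the same; the helper names `minor`, `det`, and `cofactor` are my own, not anything from the lecture.

```python
def minor(A, i, j):
    """The ij minor matrix: delete row i and column j (0-indexed) from A."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    # a_0j times C_0j, summed across the columns of the first row
    return sum(A[0][j] * (-1) ** j * det(minor(A, 0, j)) for j in range(len(A)))

def cofactor(A, i, j):
    """C_ij = (-1)^(i+j) * det(M_ij), with 0-indexed i and j."""
    return (-1) ** (i + j) * det(minor(A, i, j))
```

For a quick check, with A = [[1, 2], [3, 4]] the cofactor C_00 is just det([[4]]) = 4, and C_01 is -det([[3]]) = -3.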
Along any row, the signs will look like plus, minus, plus, minus, plus, or like minus, plus, minus, plus, minus; it depends on which row you're on. The first row goes plus, minus, plus, minus; the second row goes minus, plus, minus, plus; the third row goes plus, minus, plus, minus; the fourth row goes minus, plus, minus, plus. So it depends on whether you're in an even row or an odd row. The same idea works along columns: odd columns go plus, minus, plus, minus, and even columns go minus, plus, minus, plus. So why do we care about these cofactors? Well, the way people often compute determinants is a technique due to Laplace, what we'll call the Laplace expansion. If we have an n by n matrix A whose entries are the numbers a_ij, then it turns out we can compute the determinant by a so-called cofactor expansion along any row or any column. The definition of the determinant we took before was across the first row, but you can cofactor expand across any row or any column, and that gives you the determinant. If we expand across the i-th row, we get the formula det A = sum over j from 1 to n of a_ij C_ij: the row i is fixed, and the column j ranges across all the columns, first column, second column, third column, up to the n-th column, both for the entry of the matrix and for the associated cofactor. But you can also expand down the j-th column, in which case the column becomes fixed: det A = a_1j C_1j + a_2j C_2j + a_3j C_3j + all the way down to a_nj C_nj. In closed form, that's det A = sum over i from 1 to n of a_ij C_ij.
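The claim that every row and every column gives the same answer is easy to check numerically. Here is a small sketch, assuming 0-based indexing as before (so (-1)**(i+j) still produces the right sign pattern); `expand_row` and `expand_col` are hypothetical helper names of mine.

```python
def minor(A, i, j):
    """Delete row i and column j (0-indexed) from A."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    """Determinant, defined here via expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum(A[0][j] * (-1) ** j * det(minor(A, 0, j)) for j in range(len(A)))

def expand_row(A, i):
    """Laplace expansion along row i: sum_j a_ij * C_ij."""
    return sum(A[i][j] * (-1) ** (i + j) * det(minor(A, i, j))
               for j in range(len(A)))

def expand_col(A, j):
    """Laplace expansion down column j: sum_i a_ij * C_ij."""
    return sum(A[i][j] * (-1) ** (i + j) * det(minor(A, i, j))
               for i in range(len(A)))
```

Running every row and every column of one matrix through these should produce the same number each time, which is exactly Laplace's theorem in action.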
When you compare these two formulas, they look almost identical: the sum of a_ij C_ij is the exact same in both. The difference, of course, is in the dummy variable. With a fixed row, it's the column index j that changes; with a fixed column, the column index j is fixed and it's the row index i that changes. And you can expand across any row or column. Let's come back to the three-by-three example we did earlier, and consider how to compute the determinant by expanding across the second row. This time we're not going to expand across the first column or the first row; we're going to do the second row. So how does that work? Pay attention to the cofactor signs: the first row is plus, minus, plus, so since we're in the second row, it's going to be minus, plus, minus. Expanding across the second row, the first term is minus zero times the determinant of the minor we get by deleting the first column: seven, zero, negative one, negative six. Admittedly, though, since that entry is a zero, we couldn't care less what the minor looks like, because zero times anything gives us zero. So I'll come back to that in a moment; we get a zero. For the next term, we get plus three: the cofactor sign here is a plus. If we keep track of things by position: the first entry in the row is in the 2,1 position, and 2 + 1 = 3 is odd, which gives an odd power of negative one. The second entry is in the 2,2 position, and 2 + 2 = 4 is even; evens give positive powers of negative one. And for the sake of later, the 2,3 position gives 2 + 3 = 5, which is odd, so that one gets a negative sign. But let's go back to doing the minors.
The plus or minus is part of the cofactor. If we delete the second column (and, as always in this expansion, the second row), we get the minor five, zero, nine, negative six. Finally, we get minus zero times the minor we find by deleting the third column: five, seven, nine, negative one. But I want to confess that I'm doing way too much work in this expansion. Because the first entry in the second row is a zero, I don't care about that minor; it just gets multiplied by zero. And because the last entry in the row is also a zero, that minor doesn't make a difference whatsoever either. It's only the middle minor that makes any difference here. This is exactly the reason I'd want to expand across the second row: the second row has a lot of zeros in it. If we continue, the first term is zero; the second is three times (five times negative six minus zero times nine), which is three times negative 30; and the third is minus zero again. The zeros make life easier for us; we like zeros when it comes to determinant calculations. So we end up with three times negative 30, which gives us negative 90, the same determinant we calculated before. The row or column you choose does not make a bit of difference to the final result: the determinant is unaffected by which row or column you expand across. But the computation is affected. For the matrix in front of you here, some rows and columns are much better than others. The second row was nice because it had two zeros.
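Putting the whole worked example together: the matrix below is my reconstruction from the three minors read out above (rows [5, 7, 0], [0, 3, 0], [9, -1, -6]), so treat it as an assumption rather than something stated outright in the lecture. The expansion along the second row, with the sign pattern minus, plus, minus, looks like this:

```python
def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Matrix reconstructed from the minors quoted in the lecture.
A = [[5, 7, 0],
     [0, 3, 0],
     [9, -1, -6]]

# Expand along the second row; the cofactor signs are -, +, -.
d = (-A[1][0] * det2([[7, 0], [-1, -6]])   # delete row 2, col 1
     + A[1][1] * det2([[5, 0], [9, -6]])   # delete row 2, col 2
     - A[1][2] * det2([[5, 7], [9, -1]]))  # delete row 2, col 3
```

The two zero entries kill the first and third terms, leaving 3 * (-30) = -90, matching the determinant computed earlier.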
The third column would be equally nice to expand across, because it also has two zeros. We first did the first row, which was nice because it had a zero in it. The worst case would be the third row, which has no zeros, and the second column is similarly bad. So if you can pick and choose your rows or columns, it's good advice to pick one with a lot of zeros in it. Now let's consider the following situation: a five by five matrix, kind of big. Normally a determinant calculation can be pretty horrible because of the recursive definition. If we look at the complexity in the worst case, expanding across, say, the first row, you have to do five 4 by 4 determinants; each of those is potentially four 3 by 3 determinants, and each of those breaks into three 2 by 2 determinants. If you keep track of all this, the difficulty of computing a determinant grows with complexity O(n!), which from a complexity point of view is horrible. This general calculation is not good. But when zeros are present, one can pick a row or column with the maximum number of zeros to simplify the calculation. Notice the first column of our matrix has a three and then lots of zeros. If we cofactor expand down it, we have to pay attention to the signs, plus, minus, plus, minus, plus; but the signs on the zero entries don't make a bit of difference, and we only care about the first position. The determinant equals three times the minor we get by deleting the first row and first column: two, negative five, seven, three; zero, one, five, zero; zero, two, four, negative one; and then zero, zero, negative two, zero.
All the other minors we don't have to consider, because in the linear combination each one has a factor of zero in front of it; the other cofactors are irrelevant because of those factors of zero. We really love factors of zero. With the original matrix, the fifth row was also a good choice by the same principle, but I like the first column because I'm less likely to mess up the sign of the cofactor. Now, for this minor, I want to do the same thing again: can I pick a row or column with the maximal number of zeros? The first column again becomes preferable, because we have a two in the first position and then zeros everywhere else. So I get three times two times the minor we find by deleting the first row and first column: one, five, zero; two, four, negative one; zero, negative two, zero. Again, maximizing the number of zeros helps simplify this calculation a lot. So we get three times two, which of course is six. Admittedly, that three should distribute onto all of the cofactors that showed up in the expansion, but because the others were all multiplied by zero, we didn't have to worry about any but the first one. Now, in this three by three minor, which row or column do we care about? The first column does have a zero in it, which is nice. And the third row has two zeros, so it's more preferable than the first column. But I'm actually going to elect to choose the third column: it also has two zeros, and its nonzero entry is a negative one as opposed to a negative two, which I like a little better in terms of multiplication. So let's pick the third column to expand down. We have to be careful about our cofactor signs; I like to play this like a little maze with the cofactor signs.
We start at the plus in the corner and look for a path to the entry we care about, the signs alternating with each step we take: plus, minus, plus, minus, plus. Admittedly, we could also do this purely by positions: the first entry is in the 1,3 position, and 1 + 3 = 4 is even, so we get a plus. The negative one is in the 2,3 position, and 2 + 3 = 5 is odd, so we get a minus. The last is in the 3,3 position, and 3 + 3 = 6 is even, so we get a plus. That works, but again, I always visualize this path of alternating signs down the matrix. Because of the zeros, we don't have to worry about any entry except the one in the 2,3 position. So we get six times the cofactor sign, negative one, times the entry, negative one, times the 2 by 2 minor we get by deleting the third column and the second row: one, five, zero, negative two. Of course, negative times negative one is just plus one, so we're left with six times that minor's determinant. For the last part, since it's a 2 by 2 minor, we can just use the formula we had from before: one times negative two minus five times zero, which is negative two. So that simplifies to six times negative two, and we end up with negative 12. That's the determinant of this matrix. For a five by five, we potentially had to do something like 120 of these products, but because of our selection of zeros, we were actually able to calculate it. Not too bad; not so horrible a calculation. Now I want to mention that the example we just did was nearly triangular: there were a lot of zeros below the diagonal of the matrix.
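The whole five by five calculation can be checked in a few lines. One caveat: the lecture never reads out the rest of the first row beyond the 3, so those entries are placeholders of mine, set to 0; they genuinely don't affect the answer, because when we expand down the first column, the only surviving cofactor deletes the first row entirely. The zero-skipping in the loop is the same trick we used by hand.

```python
def det(A):
    """Determinant by cofactor expansion along the first row, skipping zeros."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue  # zero entries contribute nothing, as in the lecture
        m = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 1, column j
        total += (-1) ** j * A[0][j] * det(m)
    return total

# Reconstructed from the lecture; the 0s after the 3 in the first row are
# placeholder values (those entries were never specified, and the zeros
# below the 3 make them irrelevant to the determinant).
A = [[3,  0,  0,  0,  0],
     [0,  2, -5,  7,  3],
     [0,  0,  1,  5,  0],
     [0,  0,  2,  4, -1],
     [0,  0,  0, -2,  0]]
```

Evaluating `det(A)` reproduces the chain 3 * 2 * (-2) = -12 from the expansion above.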
So one could imagine how to do the determinant of a triangular matrix. In fact, the determinant of a triangular matrix is just the product of its diagonal entries. For example, take the matrix two, five, three; zero, seven, negative four; zero, zero, negative two. Because of how the Laplace cofactor expansion works, one only has to worry about the diagonal entries, and the determinant is just their product. So we get two times seven times negative two, which turns out to be negative 28. Triangular matrices are a cinch to calculate determinants for; we like triangular matrices for that reason.
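This last fact is a one-liner, and it's easy to confirm against a full cofactor expansion; here's a small sketch with a `det_triangular` helper of my own naming.

```python
import math

def det_triangular(A):
    """Determinant of an upper- or lower-triangular matrix:
    the product of the diagonal entries."""
    return math.prod(A[i][i] for i in range(len(A)))

def det(A):
    """Full cofactor expansion along the first row, for comparison."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

# The triangular example from the lecture.
T = [[2, 5,  3],
     [0, 7, -4],
     [0, 0, -2]]
```

Both routes give 2 * 7 * (-2) = -28, but the diagonal product takes n multiplications instead of a factorial-sized expansion.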