What I want to mention here is: if you perform row operations on a matrix, how does that affect its determinant? The three operations we know about are replacement, scaling, and interchange.

Start with replacement: we take a row and add a multiple of another row to it. How does that affect the determinant? The cool thing is, it does nothing. Replacement is free — it costs nothing in terms of determinant calculations. The basic reason is the following. If a multiple of one row of A is added to another row to produce a matrix B, then B = EA, where E is the elementary matrix associated with that replacement operation. Taking determinants, det(B) = det(EA) = det(E) det(A). So what is det(E)? For a replacement operation, E is a unit triangular matrix, either upper or lower: ones along the diagonal, and zeros everywhere off the diagonal except for a single nonzero entry. Since replacement matrices are triangular, their determinant is the product of the diagonal entries, and since they are unit triangular, that product equals one. That's why replacement has no effect: det(B) = 1 · det(A).

Scaling is a little different. If you scale a row of A by c to get B, the corresponding elementary matrix has ones along the diagonal except for a single c, with zeros everywhere else. This scaling matrix is also triangular, so its determinant is the product of the diagonals: a bunch of ones and a single c. So when you scale a row, the determinant gets multiplied by c: det(B) = c · det(A). One has to be a little cautious here, because we're going from A to B, but it's usually the determinant of A we want — we row reduced A, and now we have to work backwards: det(A) = (1/c) det(B). It's really the inverse operation that matters. This is like when we factored elementary matrices in the past: whatever row operation you performed, its inverse is the one that shows up in the factorization, so you have to take reciprocals. Should you multiply the determinant by two or by one half when you scale by two? It turns out there's an easy trick using the multilinearity of the determinant that avoids the confusion, and I'll show you how it works in the forthcoming examples.

The last one is interchange. If you interchange two rows, the determinant changes by a factor of negative one. Again, this runs backwards: to recover the determinant of A, you have to divide by negative one — but dividing by negative one is the same thing as multiplying by negative one, so the inverse operation does exactly the same thing and you don't have to worry about it. Likewise, since replacements are free, there's no change to the determinant there either; if you forget to take a reciprocal, no big deal, because there's none to take. Scaling is the problematic one, but as I said, the multilinearity of the determinant handles it quite nicely.
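To collect the three rules in one place, here they are in display form — this is just a restatement of what was said above:

\[
\det(B) =
\begin{cases}
\det(A), & \text{replacement: } R_i \mapsto R_i + kR_j,\\
c\,\det(A), & \text{scaling: } R_i \mapsto cR_i,\\
-\det(A), & \text{interchange: } R_i \leftrightarrow R_j.
\end{cases}
\]

Read backwards, for recovering det(A) after a row reduction step: det(A) = det(B) for replacement, det(A) = (1/c) det(B) for scaling, and det(A) = −det(B) for interchange.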
I wanted to finish this lecture by doing a couple of examples of using row reduction to compute determinants, because the cofactor expansion is very difficult for large matrices. The basic idea: write down the numbers as before, do row operations as if you were solving a system of linear equations, and just remember the toll you have to pay for each row operation.

For the first determinant, notice the pivot in the (1,1) position. If I were row reducing, I would want zeros below that pivot, so I take row two and add two times row one to it, and I take row three and add row one to it — that means adding +2, −8, +4 to the second row and +1, −4, +2 to the third. Because these are two row replacements, they have no consequence for the determinant, so the new determinant equals the original one. After that step, the second row is 0, 0, −5 and the third row is 0, 3, 2.

Continuing to row reduce, the next elementary operation I want is an interchange, because my pivot position is now the (2,2) entry. Switching rows two and three gives 1, −4, 2, then 0, 3, 2, then 0, 0, −5. There is a cost to the interchange: while replacement is free, an interchange requires you to multiply by negative one. So just remember to put a negative sign in front of the determinant, and that takes care of it.

Now notice that the matrix is in echelon form — in particular, it's triangular — so to find the determinant, I can simply multiply the diagonal entries together. We get the negative one from the interchange, times 1 (the first diagonal entry), times 3, times −5 (the second and third diagonal entries). Multiplying those together, we end up with positive 15, which is the determinant of this matrix. That was pretty slick — a lot easier than the Laplace expansion. Although, to be fair, we did use the Laplace expansion: once you've row reduced to a triangular matrix, the product-of-diagonals rule comes from expanding from there. So that's the goal: row reduce to a triangular matrix, then take the product of the diagonals.
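Written out, the whole computation looks like this (the starting entries are my reconstruction from the operations and results described above):

\[
\begin{vmatrix} 1 & -4 & 2 \\ -2 & 8 & -9 \\ -1 & 7 & 0 \end{vmatrix}
\;\overset{\substack{R_2 \mapsto R_2 + 2R_1 \\ R_3 \mapsto R_3 + R_1}}{=}\;
\begin{vmatrix} 1 & -4 & 2 \\ 0 & 0 & -5 \\ 0 & 3 & 2 \end{vmatrix}
\;\overset{R_2 \leftrightarrow R_3}{=}\;
-\begin{vmatrix} 1 & -4 & 2 \\ 0 & 3 & 2 \\ 0 & 0 & -5 \end{vmatrix}
= -(1)(3)(-5) = 15.
\]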
So what about this next one? There are a couple of options. You could interchange the first and second rows if you want a one in the first pivot position. But what I'm actually going to do is notice that everything in the first row is divisible by two. If I were thinking of this as a system of equations to row reduce, I would scale that row by one half, and that approach is perfectly fine — Theorem 527, which we mentioned earlier, tells us exactly what happens if you scale the first row by one half. But my experience, personally and also working with students, is that when we scale by one half, we sometimes forget: are we supposed to multiply the determinant by one half, or by two? So I want you to think of it the following way. By the multilinearity property mentioned before, there's a common factor of two in the first row, so factor the two out of the first row. The first row becomes 1, −4, 3, 4, nothing else changes, and the two sits out in front of the determinant. Don't think of it as a scaling operation; think of it as factoring a two out of the first row, and then it's much more natural. Should there be a two or a one half out in front? A two — because when you factor out a two, a two is what's left sitting in front.

Now, thinking in terms of row operations, we have a one in the pivot position, and I want to zero out everything below it. So I take row two minus three times row one, row three plus three times row one, and row four minus row one. That means adding −3, +12, −9, −12 to the second row; the same numbers with different signs, +3, −12, +9, +12, to the third; and −1, +4, −3, −4 to the fourth. Because row replacement is free, I don't have to worry about any cost from these operations — I just simplify the matrix. It's really cool. For the second row, we get 0, 3, −4, −2; for the third row, we get 0, −12, 10, 10; and for the last row, we get 0, 0, −3, 2. That looks pretty good.

We could keep row reducing from here — we absolutely could — but another thing I want to mention: what if we cofactor expand down the first column? Since the first column is a one followed by zeros, the cofactor expansion gives two times one times the three-by-three minor with rows 3, −4, −2, then −12, 10, 10, then 0, −3, 2. We don't actually need the first row anymore: multiplying that one by its minor gives this three-by-three determinant, and multiplying zero by all the other minors makes them disappear. And of course, two times one is just a two out in front. So we can drop the first row and column once that pivot position is ready to go.
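In display form, the computation so far (again, the starting entries are my reconstruction from the operations described):

\[
\begin{vmatrix} 2 & -8 & 6 & 8 \\ 3 & -9 & 5 & 10 \\ -3 & 0 & 1 & -2 \\ 1 & -4 & 0 & 6 \end{vmatrix}
= 2\begin{vmatrix} 1 & -4 & 3 & 4 \\ 3 & -9 & 5 & 10 \\ -3 & 0 & 1 & -2 \\ 1 & -4 & 0 & 6 \end{vmatrix}
= 2\begin{vmatrix} 1 & -4 & 3 & 4 \\ 0 & 3 & -4 & -2 \\ 0 & -12 & 10 & 10 \\ 0 & 0 & -3 & 2 \end{vmatrix}
= 2\begin{vmatrix} 3 & -4 & -2 \\ -12 & 10 & 10 \\ 0 & -3 & 2 \end{vmatrix}.
\]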
Moving on, our pivot position is now the (1,1) entry of this three-by-three determinant. If we wanted to, we could factor out a scalar again, because everything in the second row is divisible by two. Should we? Why not — let's do it, it's good practice. Factoring a two out of the second row gives two times two out in front, since two times one was two; the rows are now 3, −4, −2, then −6, 5, 5, and we leave the third row alone.

Now we do a replacement: row two plus two times row one (I don't need to do anything to the third row, since there's already a zero below the pivot). Two times three gives six, so we add +6, −8, −4 to the second row: the second entry is 5 − 8, which is −3, and the third entry is 5 − 4, which is 1. So out in front we have two times two, which is four, and the rows are 3, −4, −2, then 0, −3, 1, then 0, −3, 2 — we didn't do anything to that last row.

Again, cofactor expand down the first column: everything below the 3 is a zero, so we end up with four times three times the two-by-two minor with rows −3, 1 and −3, 2. We could try to simplify further with row replacements, but since it's a two-by-two, we might as well just take the difference of the diagonal products, like we did before. Four times three is 12. For the minor: negative three times two is negative six, minus one times negative three, which adds three — so the minor is negative three. And 12 times negative three gives us a determinant of negative 36.
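Finishing the computation in display form:

\[
2\begin{vmatrix} 3 & -4 & -2 \\ -12 & 10 & 10 \\ 0 & -3 & 2 \end{vmatrix}
= 2 \cdot 2\begin{vmatrix} 3 & -4 & -2 \\ -6 & 5 & 5 \\ 0 & -3 & 2 \end{vmatrix}
= 4\begin{vmatrix} 3 & -4 & -2 \\ 0 & -3 & 1 \\ 0 & -3 & 2 \end{vmatrix}
= 4 \cdot 3\begin{vmatrix} -3 & 1 \\ -3 & 2 \end{vmatrix}
= 12\,\bigl((-3)(2) - (1)(-3)\bigr) = 12(-3) = -36.
\]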
I'll mention that I did this calculation a little differently than the book — feel free to take a look at the book (the link is in the description of this video) to see an alternative approach. I just did a few more row replacements along the way, but it all works out great.

Let's do one more example, where we combine row operations with the cofactor expansions we did before. In this matrix, I notice there's a two in the first entry of the second row, and also a negative two in the first entry of the fourth row. So I can kill that negative two quite easily by taking row four plus row two, which adds +2, +5, −7, +3. Nothing happens to the first, second, or third rows; the fourth row changes to 0 (from −2 plus 2), 0 (from −5 plus 5), −3 (from 4 minus 7), and +1 (from −2 plus 3). So we perform that row operation, and notice that now the first column — I keep saying "row," but I mean the first column — is just 0, 2, 0, 0. If we cofactor expand down this column, we have to pay attention to the signs, which alternate plus, minus, plus, minus. So the expansion gives a negative two out in front, times the only three-by-three minor that's left: 1, 2, −1 (we ignore the second row, because that's the one we're expanding on), then 3, 6, 2, then 0, −3, 1. Everything else zeros out. Notice we could instead have interchanged two rows to get the two to the top, and then the cofactor expansion would come with a positive sign — but the interchange itself costs a negative sign. It's kind of wonderful how these different techniques end up doing the same thing, so you have a lot of liberty in deciding how you want to proceed. I just cofactor expanded down the first column because it was full of zeros; you don't really have to interchange that much. Replacement is free, factoring pulls out scalars, and then you can either interchange or just use cofactors, making sure you have the right sign.

All right — for this three-by-three, I want to do another replacement: row two minus three times row one, which adds −3, −6, +3. Replacement is free, so it does not affect the determinant whatsoever. We get 1, 2, −1, then 0, 0, 5, then 0, −3, 1. My advice now is to cofactor expand down the first column (I said it right that time — booyah). With that coefficient of one, we end up with negative two, times one, times the two-by-two determinant with rows 0, 5 and −3, 1. In this situation, you could keep going with row replacements or the other properties, but it's a two-by-two, so I'm just going to take the difference of the diagonal products — that's the nice little trick for two-by-twos. Zero times one is zero, and then we subtract five times negative three, which gives a positive 15. So negative two times 15 gives us negative 30 as our determinant.

I hope these examples give you some good experience with simplifying determinant calculations for large matrices — three-by-three, four-by-four, five-by-five, and so on. When we combine the multilinearity of the determinant with row replacements and cofactor expansions, we can compute determinants very effectively. It really becomes no more difficult than solving a system of linear equations. If you have any questions about this video, please post them in the comments below and I'll be happy to answer them. If you liked this video, please give it a like, and subscribe if you want to see more updates in the future. We'll talk some more linear algebra next time. Have a great day, everyone. Bye.
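For reference, here is that last example written out in one chain (the starting entries are my reconstruction from the operations described in the video):

\[
\begin{vmatrix} 0 & 1 & 2 & -1 \\ 2 & 5 & -7 & 3 \\ 0 & 3 & 6 & 2 \\ -2 & -5 & 4 & -2 \end{vmatrix}
= \begin{vmatrix} 0 & 1 & 2 & -1 \\ 2 & 5 & -7 & 3 \\ 0 & 3 & 6 & 2 \\ 0 & 0 & -3 & 1 \end{vmatrix}
= -2\begin{vmatrix} 1 & 2 & -1 \\ 3 & 6 & 2 \\ 0 & -3 & 1 \end{vmatrix}
= -2\begin{vmatrix} 1 & 2 & -1 \\ 0 & 0 & 5 \\ 0 & -3 & 1 \end{vmatrix}
= -2\begin{vmatrix} 0 & 5 \\ -3 & 1 \end{vmatrix}
= -2(0 + 15) = -30.
\]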
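And if you want to sanity-check all three answers numerically, here is a minimal sketch using NumPy. The matrices A1, A2, and A3 are my reconstructions of the three examples from the operations described in the video, so treat them as assumptions rather than a transcription of the board:

import numpy as np

# Example 1: two free replacements, one interchange; expected det = 15
A1 = np.array([[ 1, -4,  2],
               [-2,  8, -9],
               [-1,  7,  0]])

# Example 2: factor out 2 twice, replacements, cofactor expansions; expected det = -36
A2 = np.array([[ 2, -8,  6,  8],
               [ 3, -9,  5, 10],
               [-3,  0,  1, -2],
               [ 1, -4,  0,  6]])

# Example 3: one replacement, cofactor expansion down column 1; expected det = -30
A3 = np.array([[ 0,  1,  2, -1],
               [ 2,  5, -7,  3],
               [ 0,  3,  6,  2],
               [-2, -5,  4, -2]])

for name, A in (("A1", A1), ("A2", A2), ("A3", A3)):
    # np.linalg.det works in floating point, so round for display
    print(name, round(np.linalg.det(A), 6))

# Expected output: A1 15.0, A2 -36.0, A3 -30.0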