In the previous video, we learned about the inversion algorithm, which is a process we can use to compute the inverse of a non-singular matrix, and it came from the following idea. If we take a matrix A and row-reduce it to the identity matrix, that same sequence of elementary row operations that converts A into the identity will convert the identity into A inverse. And where does that come from? Well, performing an elementary row operation is equivalent to multiplying by an elementary matrix on the left. So if we do row operations 1, 2, 3, all the way up to P, then we can factor the identity matrix in the following way: multiplying A by E1 performs the first operation, E2 performs the second operation, all the way up to EP, which performs the last operation. And so we have this product of elementary matrices multiplied by A equal to the identity. Well, this tells us that the product EP down to E1 must be the inverse of A. In other words, A inverse is equal to EP times EP minus 1 times EP minus 2, all the way down to E2 and E1. We can turn this back into A by taking the inverse of both sides. If we take the inverse of the inverse, that is, the double inverse, we get back the original matrix, so A inverse inverse is the matrix A. On the other hand, when we take the inverse of the right-hand side, because we have a product of matrices, the shoe-sock principle comes into play. That is, you first put your socks on, then your shoes, but you have to take your shoes off before your socks. So we reverse the order of the product: E1 comes first, then E2, and at the end we get EP, but we have to take the inverse of each and every one of these matrices. So we get E1 inverse times E2 inverse, all the way up to EP inverse. Now, these matrices E1, E2, up to EP are elementary matrices, and their inverses will likewise be elementary matrices.
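The claim that a row operation is the same as left multiplication by an elementary matrix is easy to check numerically. Here is a minimal sketch in Python with NumPy (not part of the video), using the matrix A and the first row operation from the example below:

```python
import numpy as np

# The matrix A from the example in this lesson.
A = np.array([[0., 1., -3.],
              [1., -2., 5.],
              [-5., 4., 3.]])

# Elementary matrix for "interchange rows 1 and 2":
# apply that operation to the identity. (Rows are 0-indexed here.)
E1 = np.eye(3)
E1[[0, 1]] = E1[[1, 0]]

# Doing the operation directly on A...
B_direct = A.copy()
B_direct[[0, 1]] = B_direct[[1, 0]]

# ...matches left-multiplying A by the elementary matrix.
assert np.allclose(E1 @ A, B_direct)
```

The same check works for replacement and scaling operations: build the elementary matrix by performing the operation on the identity, then compare against the direct row operation.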
So this same algorithm that found the inverse of A also gives us a factorization of A into elementary matrices. And that's what we're going to do: I'm going to show you how to do an elementary factorization, based on an example from the previous video, Example 3.4.5, which you can now see on the screen. I'm going to keep it zoomed out so we can see the entire process all at once; I do apologize if it looks a little small on your screen right now. Remember, the matrix A was a 3 by 3 matrix. Its first row is 0, 1, negative 3. Its second row is 1, negative 2, 5. And its third row is negative 5, 4, and 3. You can take a look at that video, in which we found the inverse of this matrix by row reducing; you can see the inverse down here. But we're not focused on the inverse this time. I'm focusing on the operations that turned A into the identity. The first thing we did was interchange rows 1 and 2. Then we replaced row 3 with row 3 plus 5 times row 1. We then replaced row 3 with row 3 plus 6 times row 2. We then scaled row 3 by one-tenth, followed by replacing row 1 with row 1 minus 5 times row 3, followed by replacing row 2 with row 2 plus 3 times row 3, and then lastly we replaced row 1 with row 1 plus 2 times row 2. That's the sequence of operations we took. Now let's take each and every one of those row operations and turn it into an elementary matrix. The first one was to interchange rows 1 and 2. So we take the identity matrix and interchange rows 1 and 2 to get the following: 0, 1, 0, then 1, 0, 0, then 0, 0, 1. That's a typical interchange matrix. Now, most of the operations we did are going to be replacements. If we want to replace row 3 with row 3 plus 5 times row 1, we take the identity matrix, which has ones along the diagonal and zeros everywhere else, except in the 3, 1 position, where we put the number 5.
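Before building the elementary matrices, it's worth confirming that the seven operations listed above really do reduce A to the identity. A short sketch (my own check, not from the video), with rows 0-indexed so "row 1" in the lecture is index 0:

```python
import numpy as np

# The matrix A from the example.
A = np.array([[0., 1., -3.],
              [1., -2., 5.],
              [-5., 4., 3.]])

# Apply the seven row operations from the lecture, in order.
M = A.copy()
M[[0, 1]] = M[[1, 0]]      # interchange rows 1 and 2
M[2] = M[2] + 5 * M[0]     # row 3 <- row 3 + 5*row 1
M[2] = M[2] + 6 * M[1]     # row 3 <- row 3 + 6*row 2
M[2] = M[2] / 10           # scale row 3 by one-tenth
M[0] = M[0] - 5 * M[2]     # row 1 <- row 1 - 5*row 3
M[1] = M[1] + 3 * M[2]     # row 2 <- row 2 + 3*row 3
M[0] = M[0] + 2 * M[1]     # row 1 <- row 1 + 2*row 2

# The sequence reduces A to the identity.
assert np.allclose(M, np.eye(3))
```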
So in the 3, 1 position, third row, first column, instead of a 0 we put a 5, because we're replacing row 3 with row 3 plus 5 times row 1. This is why I always wrote the operations the way I did, row 3 plus 5 times row 1, so that we can read off the right coordinates, 3 comma 1. The next operation we did was to replace row 3 with row 3 plus 6 times row 2. So this time we take the 3, 2 position and put a 6 in that spot to perform this row operation. That's the associated elementary matrix. The next operation was to scale row 3 by one-tenth, so we take the diagonal matrix with ones along the diagonal, except in the 3, 3 position, since we're scaling row 3, we put a one-tenth there. That's how we handle scaling matrices. Scaling and interchange are fairly straightforward. It takes a little getting used to the replacement ones, but that's something we get used to, and replacements are honestly most of the examples here. Next, we took row 1 and replaced it with row 1 minus 5 times row 3. So in the 1, 3 position, first row, third column, we put a negative 5. Next, we replace row 2 with row 2 plus 3 times row 3, which tells us the 2, 3 position gets the number 3; you just put the coefficient there. And for the last one, we replace row 1 with row 1 plus 2 times row 2, so in the 1, 2 position we put the number 2 to represent this elementary row operation. So we have these seven matrices, and I claim we can factor A using the inverses of these seven matrices. We go in the exact same order we had before. So the first matrix, which was an interchange matrix, comes first, but we have to take its inverse.
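The seven elementary matrices described above can be built and checked in a few lines. This is a sketch of my own (the helper names are mine, not from the video), again with 0-indexed rows:

```python
import numpy as np

def interchange(i, j, n=3):
    """Elementary matrix that swaps rows i and j."""
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

def replacement(i, j, c, n=3):
    """Elementary matrix for: row i <- row i + c * row j."""
    E = np.eye(n)
    E[i, j] = c
    return E

def scaling(i, c, n=3):
    """Elementary matrix that scales row i by c."""
    E = np.eye(n)
    E[i, i] = c
    return E

E1 = interchange(0, 1)       # interchange rows 1 and 2
E2 = replacement(2, 0, 5)    # row 3 <- row 3 + 5*row 1
E3 = replacement(2, 1, 6)    # row 3 <- row 3 + 6*row 2
E4 = scaling(2, 1/10)        # scale row 3 by one-tenth
E5 = replacement(0, 2, -5)   # row 1 <- row 1 - 5*row 3
E6 = replacement(1, 2, 3)    # row 2 <- row 2 + 3*row 3
E7 = replacement(0, 1, 2)    # row 1 <- row 1 + 2*row 2

A = np.array([[0., 1., -3.],
              [1., -2., 5.],
              [-5., 4., 3.]])

# Applying the operations in order means multiplying on the left,
# newest first: E7 ... E1 A should be the identity.
assert np.allclose(E7 @ E6 @ E5 @ E4 @ E3 @ E2 @ E1 @ A, np.eye(3))
```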
Now, the good news is that interchange matrices are equal to their own inverses: they have the property that E inverse is actually equal to E. So we don't have to make any change for the interchange matrix. Then we had the replacement where we replaced row 3 with row 3 plus 5 times row 1. That comes second, because it was the second operation we performed, but notice the 5 we had before switches sign to a negative 5 in the inverse. So we make that change right there. The next matrix was also a replacement matrix: we replaced row 3 with row 3 plus 6 times row 2. So we have that exact same matrix, but instead of a plus 6 we have a negative 6, because that's the inverse operation. The fourth operation was scaling: we scaled row 3 by one-tenth. So that comes next in our sequence, but instead of scaling by one-tenth, the inverse operation scales by the reciprocal, which is 10. The fifth one in our sequence was another replacement operation, where we subtracted 5 times row 3 from row 1; the inverse operation has a positive 5 in that position. The next matrix was a replacement where row 2 was replaced by row 2 plus 3 times row 3, so that plus 3 turns into a negative 3. And lastly, for the final replacement matrix, because we took row 1 plus 2 times row 2, we put a negative 2 instead. So we took all of the inverse operations. Again, interchange matrices are their own inverses. For scaling matrices, you just take the reciprocal. And for replacement matrices, you switch the sign: if it was negative, it becomes positive, and if it was positive, it becomes negative. And this shows how A can be written as a product of elementary matrices; in this case, A turned out to be a product of seven elementary matrices.
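The three inversion rules just stated (interchange is its own inverse, scaling inverts by the reciprocal, replacement inverts by flipping the sign) can each be verified numerically. A small sketch of my own, using representative matrices from this example:

```python
import numpy as np

# Rule 1: an interchange matrix is its own inverse.
E_int = np.array([[0., 1., 0.],
                  [1., 0., 0.],
                  [0., 0., 1.]])
assert np.allclose(np.linalg.inv(E_int), E_int)

# Rule 2: a scaling matrix inverts by taking the reciprocal.
E_scale = np.eye(3)
E_scale[2, 2] = 1/10          # scale row 3 by one-tenth
E_scale_inv = np.eye(3)
E_scale_inv[2, 2] = 10        # scale row 3 by 10
assert np.allclose(np.linalg.inv(E_scale), E_scale_inv)

# Rule 3: a replacement matrix inverts by switching the sign.
E_repl = np.eye(3)
E_repl[2, 0] = 5              # row 3 <- row 3 + 5*row 1
E_repl_inv = np.eye(3)
E_repl_inv[2, 0] = -5         # row 3 <- row 3 - 5*row 1
assert np.allclose(np.linalg.inv(E_repl), E_repl_inv)
```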
And I want to tell you that this factorization is not unique: if we took a different sequence of elementary row operations to row reduce A into the identity, then we would get a different factorization. So this factorization depends on the sequence of elementary row operations you used. And I just want to remind us what the matrix A looked like: its first row was 0, 1, negative 3. The second row was 1, negative 2, and 5. And the third row was negative 5, 4, and 3. I'll leave it up to the viewer to double-check that the product of these seven matrices is in fact equal to the original matrix A, which gives us this elementary factorization of the matrix. Now, in our next section, we're going to talk a lot more about matrix factorizations. Factorizations of matrices are very important, and this elementary factorization is the key principle behind the LU factorization, which we'll see in that section.
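The check left to the viewer can be sketched in a few lines: multiply the seven inverse elementary matrices in the original order, E1 inverse through E7 inverse, and compare against A. (The helper names below are mine, and rows are 0-indexed.)

```python
import numpy as np

def interchange(i, j, n=3):
    E = np.eye(n); E[[i, j]] = E[[j, i]]; return E

def replacement(i, j, c, n=3):
    E = np.eye(n); E[i, j] = c; return E

def scaling(i, c, n=3):
    E = np.eye(n); E[i, i] = c; return E

# The inverses of E1 through E7, in the original order.
factors = [interchange(0, 1),      # E1 inverse = E1 (its own inverse)
           replacement(2, 0, -5),  # E2 inverse: sign of 5 flipped
           replacement(2, 1, -6),  # E3 inverse: sign of 6 flipped
           scaling(2, 10),         # E4 inverse: reciprocal of 1/10
           replacement(0, 2, 5),   # E5 inverse: sign of -5 flipped
           replacement(1, 2, -3),  # E6 inverse: sign of 3 flipped
           replacement(0, 1, -2)]  # E7 inverse: sign of 2 flipped

# The product E1^-1 E2^-1 ... E7^-1 recovers A.
product = np.linalg.multi_dot(factors)

A = np.array([[0., 1., -3.],
              [1., -2., 5.],
              [-5., 4., 3.]])
assert np.allclose(product, A)
```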