In the non-singular matrix theorem, we saw that an n by n matrix A is invertible if and only if it's row equivalent to the identity matrix. Now, it turns out that the process of row reducing A into I can also produce the inverse matrix A inverse, and this is going to come about from using elementary row operations to convert A into I. But we can also turn those elementary row operations into matrix multiplication using the elementary matrices we introduced in the previous video. So let me explain that. Let A be an invertible matrix. Then any sequence of elementary row operations that reduces A to the identity will also transform the identity into A inverse. How are we going to see this? We know by the non-singular matrix theorem, as I just mentioned, that if A is non-singular, it's row equivalent to the identity matrix, so there's some sequence of row operations that gets you there. We start with A, which we'll call A0 for a moment; we perform a row operation to get A1, we perform another row operation to get A2, and we continue, say p times, until we get Ap, which is the identity matrix. So we have some sequence of row operations that does that. Now, associated to each of these row operations is an elementary matrix, which we'll call Ei. So what this tells us is the following: A0 is just A. A1 can be factored as E1 times A. A2 can be factored as E2 times E1 times A. The next one would look like E3 times E2 times E1 A. We continue down the line until we end up with Ep times Ep minus 1, all the way down to E2 E1, times A, and this product is supposed to equal the identity. That's what I'm trying to illustrate right here.
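To make the idea concrete, here is a minimal sketch in Python. The 2 by 2 matrix and the four row operations are my own illustration, not from the lecture; the point is that each row operation is left-multiplication by an elementary matrix, and the product of those elementary matrices is A inverse.

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two matrices stored as lists of lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def identity(n):
    return [[F(int(i == j)) for j in range(n)] for i in range(n)]

# A small 2x2 example of my own (not the lecture's matrix): A = [[2, 1], [1, 1]].
A = [[F(2), F(1)], [F(1), F(1)]]

# Four row operations reduce A to I; each one is an elementary matrix.
E1 = [[F(1, 2), F(0)], [F(0), F(1)]]   # scale R1 by 1/2
E2 = [[F(1), F(0)], [F(-1), F(1)]]     # R2 <- R2 - R1
E3 = [[F(1), F(0)], [F(0), F(2)]]      # scale R2 by 2
E4 = [[F(1), F(-1, 2)], [F(0), F(1)]]  # R1 <- R1 - (1/2) R2

# E = E4 E3 E2 E1; since E A = I, the product E is A inverse.
E = matmul(E4, matmul(E3, matmul(E2, E1)))
assert matmul(E, A) == identity(2)
assert E == [[1, -1], [-1, 2]]
```

Notice that the elementary matrices are applied in the order the row operations were performed, so the leftmost factor in the product is the last operation.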
Each row operation we perform can be rewritten as matrix multiplication, where Ei is an elementary matrix. So now we have this factorization of the identity matrix. By regrouping the parentheses, notice that if we put all the elementary matrices together, we have a product: Ep, Ep minus 1, all the way down to E1. Call that matrix E for a moment. Then what we've said here is that E times A equals the identity, which means that E must actually have been the inverse of A. So the product of all these elementary matrices is the inverse of A: A has an inverse, and we can find it by multiplying these elementary matrices together. How do we turn that into an algorithm? The theorem we just proved gives us what we call the inversion algorithm, a process for computing matrix inverses. So consider the following situation: we have A augmented with the identity. We saw in the proof on the previous slide that A inverse is the product of all these elementary matrices. Let's expound on that for one more second: if we take Ep all the way down to E2, E1 and multiply that by the identity, which of course doesn't change anything, that still gives you A inverse. Compare that to what we saw earlier: multiplying by an elementary matrix performs an elementary row operation. So the same sequence of elementary row operations that converts A into the identity will take the identity into A inverse. And so if we take the augmented matrix A augmented with the identity, then when we row reduce A into the identity, this will row reduce the identity into A inverse. When in doubt, row reduce; that seems to be the solution to every linear algebra problem.
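The inversion algorithm can be sketched in a few lines of Python. This is my own implementation, not the lecture's, and it uses the standard library's exact rational arithmetic so the fractions come out exactly; it row reduces the augmented matrix [A | I] and reads the inverse off the right half.

```python
from fractions import Fraction

def invert(A):
    """Inversion algorithm: row reduce [A | I]. If the left half reaches the
    identity, the right half is A inverse; otherwise A is singular."""
    n = len(A)
    # Build the augmented matrix [A | I] with exact rational arithmetic.
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Find a row at or below `col` with a nonzero entry in this column.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None                          # singular: no pivot here
        M[col], M[pivot] = M[pivot], M[col]      # interchange rows
        p = M[col][col]
        M[col] = [x / p for x in M[col]]         # scale the pivot row to 1
        for r in range(n):                       # eliminate above and below
            if r != col and M[r][col] != 0:
                c = M[r][col]
                M[r] = [a - c * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]                # right half is A inverse

A = [[0, 1, -3], [1, -2, 5], [-5, 4, 3]]
Ainv = invert(A)    # equals (1/10) * [[26, 15, 1], [28, 15, 3], [6, 5, 1]]
```

One design note: this version eliminates above and below each pivot as it goes, rather than doing a separate forward phase and backward phase as in the worked example; the result is the same reduced row echelon form.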
So let's consider the three by three matrix A, given as zero, one, negative three; one, negative two, five; and negative five, four, three. If we were performing row operations here, what would we get? Now, at this point in the series I've been skipping over row operations a lot, because this becomes quite elementary, but in this problem I do want to emphasize them, because the elementary row operations are the key. So if we were to do some row operations, what would we do? Our first pivot, in the one-one position as indicated right here, has a zero in it. We want something else; a one would be great. So let's interchange rows one and two. This puts a one in the pivot position. To get rid of the negative five below, we're going to replace row three with row three plus five times row one. Five times row one contributes plus five, minus ten, plus twenty-five, and a plus five; I ignore the columns that have zeros in them. So when we come down here: negative five plus five is zero, four plus negative ten is negative six, three plus twenty-five is twenty-eight, zero plus zero is zero, zero plus five is five, and one plus zero is one. Now we can move our pivot to the two-two position. We want to get rid of the negative six below the pivot, so this time we take row three and add six times row two: we add six, subtract eighteen, add six, and everything else is a zero. Negative six plus six is zero, twenty-eight minus eighteen is ten, zero plus six is six, five plus zero is five, and one plus zero is one. That finishes the forward phase of our Gauss-Jordan elimination. The next thing to do is scale. We scale row three by one tenth: the ten goes to one, the six goes to six tenths, which is three fifths, the five goes to five tenths, or one half, and the one goes to one tenth. Now we want to get rid of the entries above the pivot position.
So to get rid of this five, for example, we replace row one with row one minus five times row three. Negative five times row three contributes minus five, minus three, minus five halves, and minus one half. Working through the fractions: five take away five is zero, zero minus three is negative three, one minus five halves is negative three halves, and zero minus one half is negative one half. Next, we have to get rid of this negative three right here, so we take row two plus three times row three. You might wonder: why not just do both of those at the same time? Well, although we could have, I'm trying to emphasize the step-by-step sequence of row operations, so I'm separating those two steps so that each one is a single step along the way. Three times row three contributes plus three, plus nine fifths, plus three halves, and plus three tenths. So the negative three goes to zero, one plus nine fifths is fourteen fifths, zero plus three halves is three halves, and zero plus three tenths is three tenths. We're almost there. The last thing to do is move our pivot back to the two-two spot; we need to get rid of that negative two. So this time we take row one plus two times row two: we add two, then, being careful here, twenty-eight fifths; for the next one we add six halves, or if you prefer, just three; and for the last one we add six tenths, or three fifths if you prefer. In the end it's all the same: you get the identity matrix on the left, and negative three plus twenty-eight fifths, where negative three is negative fifteen over five, combines to give positive thirteen fifths.
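The whole sequence of row operations from this example can be replayed mechanically. Here is a short Python sketch (my own code, using exact fractions from the standard library) that applies exactly the seven operations above to the augmented matrix [A | I]:

```python
from fractions import Fraction as F

# The augmented matrix [A | I] for the worked example, in exact fractions.
M = [[F(0), F(1), F(-3), F(1), F(0), F(0)],
     [F(1), F(-2), F(5), F(0), F(1), F(0)],
     [F(-5), F(4), F(3), F(0), F(0), F(1)]]

def add_multiple(M, dst, src, c):
    """Row replacement: R_dst <- R_dst + c * R_src."""
    M[dst] = [a + c * b for a, b in zip(M[dst], M[src])]

M[0], M[1] = M[1], M[0]          # interchange R1 and R2
add_multiple(M, 2, 0, F(5))      # R3 <- R3 + 5 R1
add_multiple(M, 2, 1, F(6))      # R3 <- R3 + 6 R2
M[2] = [x / 10 for x in M[2]]    # scale R3 by 1/10
add_multiple(M, 0, 2, F(-5))     # R1 <- R1 - 5 R3
add_multiple(M, 1, 2, F(3))      # R2 <- R2 + 3 R3
add_multiple(M, 0, 1, F(2))      # R1 <- R1 + 2 R2

# The left half is now the identity; the right half is A inverse.
left = [row[:3] for row in M]
right = [row[3:] for row in M]
assert left == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

Running this confirms the hand computation: the right half comes out to thirteen fifths, three halves, one tenth; fourteen fifths, three halves, three tenths; three fifths, one half, one tenth.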
For the next one, you get negative three halves plus three, which is positive three halves, and for the last one, negative one half plus three fifths, that is, negative five tenths plus six tenths, which is one tenth. And so in the end we get it, and this right here is our inverse matrix. So we see that A inverse equals thirteen fifths, three halves, one tenth; fourteen fifths, three halves, three tenths; three fifths, one half, one tenth. This is the inverse matrix. Now, if you want to, you can factor out a scalar of one tenth. That leaves behind twenty-six, fifteen, one; the next row is twenty-eight, fifteen, three; and the last row is six, five, one. This also gives us our inverse matrix, if you like whole numbers. You can't throw away the one tenth, though; you need it. If you multiplied A by just the whole-number matrix, you'd get tens along the diagonal and zeros everywhere else, so the one tenth is necessary. And don't be too worried that the inverse matrix has lots of fractions. Remember the whole point of inverse matrices: these are the reciprocal matrices, the matrices that, when you multiply by them, cancel out the multiplication by A. Essentially, multiplying by the inverse matrix is matrix division, and when you divide whole numbers, you get fractions whenever things don't divide evenly. That's going to happen with inverse matrices too: you'll get fractions sometimes, and it's not such a big deal. You'll get fractions working over the real numbers, and over the complex numbers too. The nice thing about some of the finite fields we've looked at, like Z2, Z5, Z7, is that fractions are never actually necessary there, in which case you could simplify these things in that regard if you so chose. But the inversion algorithm is a very nice method for finding the inverse of a matrix.
If you augment a non-singular matrix with the identity, you can row reduce that to get the identity augmented with the inverse, right here. But what if A were a singular matrix? Well, then it won't be row equivalent to the identity; you'll get something else. And so what's interesting about the inversion algorithm is the following: it's also a decision procedure. It can determine whether the matrix is singular or non-singular. If you get anything other than the identity in the left half of the matrix, that means it was a singular matrix. But in the case that it turns out to be non-singular, you'll also have the inverse matrix in hand. So you can put this into your favorite calculator, compute, and then, interpreting the reduced row echelon form, you'll see: oh, the left half is the identity, so the matrix is non-singular, and here's the inverse matrix. Or, if it's not the identity, then we can record that the matrix was in fact singular. It's pretty impressive that this algorithm doesn't take anything more than a few row operations.
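To see the singular case concretely, here is a tiny sketch (my own example, not the lecture's): a 2 by 2 matrix whose second row is twice its first. One row operation produces a zero row on the left side of [A | I], so the left half can never become the identity.

```python
from fractions import Fraction as F

# A singular 2x2 example of my own (row 2 is twice row 1), augmented with I.
M = [[F(1), F(2), F(1), F(0)],
     [F(2), F(4), F(0), F(1)]]

# R2 <- R2 - 2 R1 wipes out the entire left half of row two.
M[1] = [a - 2 * b for a, b in zip(M[1], M[0])]

left = [row[:2] for row in M]
# left == [[1, 2], [0, 0]]: the left half can never reach the identity,
# so the algorithm decides that the matrix is singular.
assert left[1] == [0, 0]
```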