If we don't want to use it to solve systems of equations, why do we even care about it then? It turns out that Cramer's rule is a formula with great theoretical benefits, even if practically you don't want to use it for computations. So I want to show you an example of how one can use Cramer's rule in a theoretical sense. Again, we have an n-by-n matrix, and we're going to define the adjugate of the matrix. In some linear algebra textbooks this is called the adjoint of the matrix, but unfortunately "adjoint" can be used to describe other things; for example, sometimes people use "adjoint" to describe the conjugate transpose. Because "adjoint" is a little bit ambiguous in linear algebra, we're going to use the word "adjugate" here, which seems like a made-up word. I would recommend you not use it in Scrabble, unless of course your official dictionary for the game is this textbook. But "adjugate" does have the advantage that the abbreviation adj is the same whether you say adjoint or adjugate, so it helps clarify the context here. The adjugate of the matrix will be the matrix whose entries are cofactors. The 1,1 entry is the 1,1 cofactor. The 1,2 entry is going to be the 2,1 cofactor. And this progresses onward: the last entry of the first row, the 1,n entry, is the n,1 cofactor. I do want you to notice that this is the transpose of the cofactor matrix, so the cofactors will look like they're in the wrong spots: the 1,2 position holds the 2,1 cofactor, and the 2,1 position holds the 1,2 cofactor. The reasons for that will make a little more sense in a moment. As an application of Cramer's rule, we can actually compute a formula for the inverse of a matrix. If A is an n-by-n matrix, then the adjugate has the property that A times the adjugate — well actually, the adjugate commutes with the matrix.
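To make the definition concrete, here's a minimal sketch in plain Python (no outside libraries) that builds the adjugate as the transpose of the cofactor matrix. The helper names `det`, `minor`, `cofactor`, and `adjugate` are my own choices for illustration, not notation from the lecture's textbook.

```python
def minor(M, i, j):
    """The submatrix of M with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def cofactor(M, i, j):
    """The (i, j) cofactor: signed determinant of the (i, j) minor."""
    return (-1) ** (i + j) * det(minor(M, i, j))

def adjugate(M):
    """adj(M)[i][j] is the (j, i) cofactor -- the transpose of the cofactor matrix."""
    n = len(M)
    return [[cofactor(M, j, i) for j in range(n)] for i in range(n)]

print(adjugate([[1, 2], [3, 4]]))   # [[4, -2], [-3, 1]]
```

Note the swapped indices inside `adjugate`: the entry at position (i, j) is the cofactor at position (j, i), which is exactly the transpose the lecture describes.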
And this will equal the determinant times the identity. In particular, if the determinant is non-zero — if it's a non-singular matrix — then you can divide by the determinant, and this gives you a formula for the inverse, because now we've found a matrix whose product with A is equal to the identity. If it walks like an inverse and quacks like an inverse, then we have the inverse matrix right here: the inverse of A will be one over the determinant times the adjugate of A. And again, the equation we saw above works even when the matrix is singular. So I want to point your attention to a two-by-two matrix. Take this to be our matrix A, just a generic two-by-two. If we start computing the adjugate, the adjugate of A is going to be the matrix with entries C11, C21 — make sure you do them backwards here — then C12 and C22. So what happens is that to build the adjugate we have to do cofactors, and the 1,1 cofactor means take away the first row, first column; you're left with just a d, so I'll record that right here. If you do the 2,1 cofactor — 2,1 means take away the second row, first column — you're left with just a b. But remember that cofactors have signs built into them: some are positive, some are negative. So you end up with this negative b right here. The next one, the 1,2 cofactor: take away the first row, second column, and you're left with a c. This one likewise has a negative sign built in, so you end up with a negative c right here. And finally, if you do the 2,2 cofactor — second row, second column — you're left with just an a. That one is positive, so you get the a right here. Now, it feels like I've seen this before somewhere, right? Don't we know the formula for A inverse for a two-by-two matrix? You take one over ad minus bc times this matrix right here: d, negative b, negative c, a.
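As a quick sanity check of the two-by-two case, this snippet hard-codes the four cofactors we just found and multiplies A by its adjugate. The particular numbers are an arbitrary example of my own, not from the lecture; I picked them so the determinant comes out to one.

```python
# A generic-looking 2x2 matrix [[a, b], [c, d]] with determinant ad - bc = 1.
a, b, c, d = 2, 5, 1, 3
A   = [[a, b], [c, d]]
adj = [[d, -b], [-c, a]]         # adjugate: transpose of the cofactor matrix

det_A = a * d - b * c            # = 1 for these numbers

# A times its adjugate should be det(A) times the identity.
product = [[sum(A[i][k] * adj[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
print(product)                   # [[1, 0], [0, 1]]
```

Because the determinant here is one, the adjugate is already the inverse, with no division needed at all.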
Now, this number in the bottom — we've mentioned it before — is none other than the determinant of the matrix A. And this matrix right here is none other than the adjugate of this two-by-two matrix. So the inverse formula we've had all along is none other than Theorem 5 in the special case of two-by-two matrices. Now I want to show real quick that this works in general. We've been using it for two-by-two matrices a lot, but we can actually do something like this for larger matrices as well. The argument is basically the following. The jth column of A inverse is going to be a vector x such that — let me make sure my color's right — Ax = e_j. Here x is the jth column of A inverse: A inverse has the property that A times A inverse gives you the identity, so if we take the jth column of A inverse and multiply it by A, we get the jth column of the identity. Hence Ax = e_j right there. Well, if we apply Cramer's rule, which we saw before, the ith entry of x is given by the following: x_i is the determinant of A_i(e_j) over the determinant of A. We're thinking of Ax = e_j as a system of equations we have to solve; even though we don't know x yet, Cramer's rule gives us a formula that looks like this. But now think of it in terms of cofactor expansions. We know the ith column of A_i(e_j) is just the vector e_j, so we can cofactor expand down that ith column. If we do, we end up with the following — let me slide this up a little bit. We get the expression that the determinant of A_i(e_j) equals negative one to the i plus j power times the determinant of the minor A_ji right here.
So basically what happens when you cofactor expand: this matrix A_i(e_j) looks like you have a_1, a_2, and so on, with e_j dropped in as the ith column partway through. If you cofactor expand down that column, since it's just a bunch of zeros with a single one in it, all of the cofactors multiplying the zeros disappear, except for the one multiplying that single one. We do have to pay attention to the sign: you get negative one to the i plus j power, because that one resides in the j,i position. And if you look at the associated minor — everything left after you delete the row and column through that one — it's exactly the A_ji minor, because once you take away that ith column, A_i(e_j) looks just like A does. So what you see right here is negative one to the i plus j power times the determinant of the minor A_ji. This is just a cofactor of A, only with the indices swapped around: it's C_ji instead of C_ij. And so we get that x_i equals C_ji over the determinant of A. Since x_i was the ith entry of the jth column, when you put this all back together, the i,j entry of A inverse equals C_ji over the determinant of A, which brings us back up to the formula right here. And so that verifies the formula we had before: using Cramer's rule, we were able to compute a formula for the inverse of the matrix — one we'd already been using. For two-by-twos it's not so bad. I want to show you how this would work for three-by-threes. This one's a little more involved, so I have a lot of the details already presented on the screen. Suppose we have a three-by-three matrix A: two, one, three; one, negative one, one; one, four, negative two. Since it's three-by-three, there are going to be nine cofactors, and to find the adjugate you'll have to calculate each and every one of them.
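Since nine cofactors is a lot of arithmetic, here's a short sketch that enumerates all of them for this matrix, so you can check your hand computations against it. The helper names `det` and `minor` are my own for illustration.

```python
def minor(M, i, j):
    """The submatrix of M with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

A = [[2,  1,  3],
     [1, -1,  1],
     [1,  4, -2]]

# The full cofactor matrix: entry (i, j) is (-1)^(i+j) times the (i, j) minor.
cof = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(3)]
       for i in range(3)]
print(cof)   # [[-2, 3, 5], [14, -7, -7], [4, 1, -3]]
```

Transposing `cof` gives the adjugate; the output above matches the nine values computed by hand in the lecture.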
So there's the 1,1 cofactor, for which you take away the first row, first column. That one is positive. By the usual determinant calculation you get two minus four, which is negative two. Then we do the 1,2 cofactor: take away the first row, second column, and you get this two-by-two determinant. It's negative this time: you get negative the quantity one times negative two, minus one, which comes out to positive three when you're done. Then you do the 1,3 cofactor: you take one times four, plus one, which gives you five. And you do this six more times to get all the remaining cofactors. The 2,1 cofactor is 14. The 2,2 cofactor is negative seven. The 2,3 gives us negative seven. The 3,1 gives us four. The 3,2 gives you one, and the 3,3 gives you negative three. So those are all the cofactors; you can double-check the details yourself. Now, with the cofactors in hand, the adjugate takes them with the transpose of where everything goes. So for example, the 1,2 cofactor goes in the 2,1 position, and likewise the 2,3 cofactor goes in the 3,2 position — make sure you get those in the right locations. So we get this; this is our adjugate. A lot of work goes into computing all nine of those cofactors, and I don't want to minimize that calculation. But once you have the adjugate, we can double-check: if we multiply this matrix by the original matrix A right there, negative two times two is negative four, 14 times one is 14, and four times one is positive four. The positive four cancels with the negative four, giving us 14 right here. Next, if we take this row times the second column, you end up with negative two, minus 14, plus 16.
And so, yeah, 16 minus 14 minus two gives us zero — sorry, I stumbled there a little bit. If you take this first row times the third column, you get negative six plus 14 minus eight, which gives you zero again right here. You can go through the rest of the arithmetic and see that this matrix we computed for the adjugate multiplies by A to give you 14 down the diagonal and zeros everywhere else. So this is just 14 times the identity matrix I_3. And if we divide the adjugate by 14 — and 14 is the determinant of this matrix — we end up with the inverse matrix, which you can see right here. That is a way of computing the inverse matrix. It's a little more drawn out than what we've seen in the past, where we take A, augment with the identity, and row reduce to get the identity augmented with A inverse. That method, I think, is going to be the preferred method in general, but be aware that one can use Cramer's rule to calculate inverses of matrices. So in practice it's not very practical to solve systems or compute inverses via Cramer's rule, and I wouldn't recommend doing it. You will be asked to do it in the homework — and you do need to do it that way, because those are the instructions — but that's mostly to see the comparison of why this method doesn't work as effectively. The method of row reduction is in general much more efficient, and we've mentioned that already; that's why I'm not a huge fan of Cramer's rule for computation. You should just use row reduction. It's only in the two-by-two case that they're almost the same, and even there row reduction is still more efficient. On the other hand, it's the theoretical application of Cramer's rule, and of adjugates, that's really important in linear algebra.
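To see the whole computation at once, this snippet hard-codes the adjugate we found, checks that adj(A) times A really is 14 times the identity, and then divides by the determinant using exact fractions. The layout and variable names are my own.

```python
from fractions import Fraction

A   = [[2,  1,  3], [1, -1,  1], [1,  4, -2]]
adj = [[-2, 14,  4], [3, -7,  1], [5, -7, -3]]   # transpose of the cofactor matrix

# adj(A) times A should be det(A) = 14 down the diagonal, zeros elsewhere.
product = [[sum(adj[i][k] * A[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
print(product)   # [[14, 0, 0], [0, 14, 0], [0, 0, 14]]

# Dividing the adjugate by the determinant gives the inverse exactly.
A_inv = [[Fraction(entry, 14) for entry in row] for row in adj]

# Sanity check: A times A inverse should be the identity.
identity = [[sum(A[i][k] * A_inv[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
print(identity)  # the 3x3 identity, printed as Fractions
```

Using `Fraction` rather than floating point keeps entries like 3/14 exact, which is in the spirit of the hand computation.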
For example, if A were an integer matrix, like the one in this example — all of its entries are integers — then its determinant is going to be an integer, and all of its cofactors are going to be integers as well. So if A is an integer matrix, I can actually predict — I know for a fact — that its adjugate, which is just a bunch of cofactors, will likewise be an integer matrix. Because to compute cofactors, we only have to multiply, add, and subtract; we never divide when we do cofactor computations. Same thing with the determinant: it's going to be an integer as well, no division required. What about the inverse here? There are a bunch of fractions in it. That's because I had to divide by the determinant, which was 14, and some of those numbers were not divisible by 14, as you saw in the computation. But what if the determinant turned out to be plus or minus one? If the determinant of A were plus or minus one, then dividing by one does nothing, and dividing by negative one just changes the sign. So if you knew your determinant was plus or minus one, then any system of equations with that coefficient matrix would have an integer vector as its solution, and in that situation the inverse of the matrix, by this formula, would also be an integer matrix. So if you were trying to write a homework question where students had to compute the inverse of a matrix, and you wanted the answer to be adorable little integers, all you'd have to do is start with a matrix whose determinant is plus or minus one. Hmm, I wonder if there's a math instructor who's ever used that to create homework questions for his students, so they can feel happy when they get whole numbers at the end. Hmm.
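Here's a small illustration of that observation, using an integer matrix of my own with determinant one; the adjugate formula then hands back an all-integer inverse with no fractions anywhere. The `det` and `minor` helpers are the same kind of sketch as before.

```python
def minor(M, i, j):
    """The submatrix of M with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

A = [[1, 2, 3],
     [0, 1, 4],
     [0, 0, 1]]          # an integer matrix with determinant 1

d = det(A)               # = 1, so dividing the adjugate by it changes nothing
A_inv = [[(-1) ** (i + j) * det(minor(A, j, i)) // d for j in range(3)]
         for i in range(3)]
print(A_inv)             # [[1, -2, 5], [0, 1, -4], [0, 0, 1]] -- all integers
```

Every entry of the inverse is a cofactor divided by the determinant, so when the determinant is plus or minus one, every entry stays an integer.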
I wonder, if you looked at some past homework questions and computed the determinants, would any of them be one or negative one? You'll notice I've actually used those a lot in the homework. Now, that's just one application; it's not how people use this all the time. But I want to point out to you that it's the theoretical benefits of Cramer's rule that are really why we care about it so much. This book focuses more on the computational aspects of things, but we should always be aware of the theoretical aspects as well. So I do want you to have some exposure to Cramer's theorem, and you'll see it in the homework as well. And that ends our lecture today. Thanks for watching. If you haven't done so already, please do subscribe so you get updates about future videos, particularly about linear algebra and other projects I'm working on. Feel free to like this video, and if you have any questions, comment below — I'll be happy to answer them. Other than that, I hope you have some fun with Cramer's rule here; if I've done my job correctly, you'll hate using it as much as I do. Wink, wink. See you next time. Bye.