In the previous video, we talked about the nice algebraic properties of matrix multiplication: it's associative, it's distributive, and it has a multiplicative identity. But compared to something like matrix addition, which we also talked about, some things were omitted from that list. For example, we did not say that matrix multiplication is commutative. In fact, I said quite the opposite: matrix multiplication is non-commutative. We cannot assume that A times B equals B times A.

First of all, the two products might not even be compatible. For AB to make sense, if A is an m by n matrix, then B has to be an n by p matrix: the number of columns of A must equal the number of rows of B. That gives you an m by p matrix. But if you switch the order around, you're multiplying an n by p matrix by an m by n matrix, and you can't match up the p and the m unless they happen to be the same number. So it's very possible the reversed product doesn't make sense at all: AB could exist while BA does not, and obviously a defined matrix cannot equal an undefined matrix.

But even in a situation where the product can be reversed, the sizes may not match. Take, for example, a 2 by 3 matrix times a 3 by 2 matrix. For AB, you get a 2 by 2 matrix. If we reverse the order, BA is a 3 by 2 times a 2 by 3. In this situation the product is compatible even when reversed, but you get a 3 by 3 matrix. Clearly a 2 by 2 matrix and a 3 by 3 matrix cannot equal each other. So even if the reversed product exists, the two products might be different sizes. So let's then get to the real heart of it.
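To make the shape bookkeeping concrete, here's a small Python sketch (not from the lecture; the `matmul` helper is my own) that multiplies matrices stored as lists of rows and refuses incompatible shapes:

```python
def matmul(A, B):
    """Row-times-column multiplication for matrices stored as lists of rows."""
    if len(A[0]) != len(B):  # columns of A must equal rows of B
        raise ValueError("incompatible shapes")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 1, 1], [1, 1, 1]]       # 2 by 3
B = [[1, 1], [1, 1], [1, 1]]     # 3 by 2

AB = matmul(A, B)   # 2 by 2
BA = matmul(B, A)   # 3 by 3
print(len(AB), len(AB[0]))  # 2 2
print(len(BA), len(BA[0]))  # 3 3
```

Here both AB and BA exist, but they land in different sizes, so they cannot be equal.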
What about square matrices? Say A and B are both n by n. Then AB is n by n, and BA is n by n times n by n, which is also n by n. So if they're both square matrices of the same size, this is a setting where we could at least have the possibility that AB equals BA. But here is a counterexample showing that we cannot assume it. Take the 2 by 2 matrices A, with rows (1, 2) and (-1, 3), and B, with rows (0, -3) and (1, 1).

For A times B: take the first row times the first column, and you get 0 plus 2, which is 2. First row times second column: -3 plus 2, which is -1. Second row times first column: 0 plus 3, which is 3. Second row times second column: 3 plus 3, which is 6. So the product AB has rows (2, -1) and (3, 6).

On the other hand, if we swap the order and compute BA: first row times first column gives 0 plus 3, which is 3. Already we can see a disconnect with AB, so you could stop there, but let's keep going. First row times second column: 0 plus -9, which is -9. Second row times first column: 1 minus 1, which is 0. And lastly, second row times second column: 2 plus 3, which is 5. So BA has rows (3, -9) and (0, 5).

We can see that these two matrices completely disagree with each other. So we cannot assume that AB equals BA: matrix multiplication does not commute; it's a non-commutative operation. Now, that doesn't mean matrices can never commute under multiplication. It just means we can't assume it; we need some other evidence to establish it in a particular case.
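Here is the same counterexample checked in Python (a sketch; the `matmul` helper is my own, not from the lecture):

```python
def matmul(A, B):
    # Row-times-column multiplication for matrices stored as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [-1, 3]]
B = [[0, -3], [1, 1]]

AB = matmul(A, B)
BA = matmul(B, A)
print(AB)        # [[2, -1], [3, 6]]
print(BA)        # [[3, -9], [0, 5]]
print(AB == BA)  # False
```

A single disagreeing entry is already enough to conclude that AB and BA differ.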
I can give you some examples of commuting matrices, but this doesn't happen in general. We can't assume that matrices commute, so we have to be cautious to avoid such a mistake.

Now, let's look at something else that we cannot assume. With addition, if you had an equation like A + B = A + C, you could say, "I'm going to subtract A from both sides," the A's cancel out, and you end up with B = C. This is called the cancellation principle, and we use it to solve linear equations all the time. But with matrix multiplication, we have to be careful: we cannot cancel. What I mean is the following. If we have the equation AB = AC as established truth, we cannot conclude that B = C. There exist matrices A, B, and C such that AB = AC but B does not equal C. We can't cancel out the A.

Let's first look at an example of why we cannot necessarily cancel. Take A with rows (0, 2) and (0, -1), B with rows (1, 2) and (3, 4), and C with rows (5, 6) and (3, 4).

What happens when we multiply these together? Let's do AB in detail. First row times first column: you get a 0 plus a 6, so you get 6. First row times second column: a 0 plus an 8, so you get 8. Second row times first column: a 0 minus a 3, which is -3. And second row times second column: a 0 minus a 4, which is -4. So the product AB has rows (6, 8) and (-3, -4).

Now let's do the same thing with A and C. First row of A times the first column of C: 0 times 5 plus 2 times 3, which is 6. First row times second column: 0 plus 8, which is 8. Second row times first column: 0 minus 3, which is -3. And the last one: 0 minus 4, which is -4. So AC also has rows (6, 8) and (-3, -4).

So let's look at this: AB equals the 2 by 2 matrix with rows (6, 8) and (-3, -4), and AC equals that same matrix. We can then establish the fact that AB = AC. But look at the matrices B and C: they do not equal each other. So we cannot guarantee that B and C are equal, even though AB = AC.

Now, if you were paying attention to this example, you can see what the trick was. The first column of A is a column of zeros, so when we did the matrix multiplication, we essentially ignored the first row of B and the first row of C. And if you cover up the first rows of B and C, look: their second rows are identical. That's really where the agreement came from. In retrospect we can see how such examples get cooked up, but it turns out we can construct more exotic examples than this: even if A has no zero entries whatsoever, we can still produce this phenomenon. The column of zeros is the basic idea, but we can hide it in many ways; we can hide the proverbial dirt under the proverbial rug.

So we cannot assume that B = C just because AB = AC: matrix multiplication is non-cancellative. We cannot cancel matrices. Because after all, if you had an equation like AB = AC, how would you cancel the A? The usual strategy would be something like the following: divide both sides by A so that the A's cancel, and you get B = C. Why can't we do that with matrix multiplication?
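The failed cancellation can be checked the same way (again a sketch; the `matmul` helper is mine, not from the lecture):

```python
def matmul(A, B):
    # Row-times-column multiplication for matrices stored as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[0, 2], [0, -1]]   # first column of A is all zeros
B = [[1, 2], [3, 4]]
C = [[5, 6], [3, 4]]    # differs from B only in the first row

AB = matmul(A, B)
AC = matmul(A, C)
print(AB)               # [[6, 8], [-3, -4]]
print(AC)               # [[6, 8], [-3, -4]]
print(AB == AC, B == C) # True False
```

Because A's first column is zero, the first rows of B and C never influence the product, which is exactly why changing them leaves AB = AC intact.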
Well, in order to divide by A, we would need matrix division, which is not something we have introduced yet. We'll talk about what matrix division would mean in the next video, but what this example shows us so far is that matrix division might not be possible for everything. Let's use an example from the integers. If I take 0 times 7 and set that equal to 0 times 3, that is a true statement, because both sides are just equal to 0. But I can't cancel out the 0 and end up saying 7 equals 3. If I could divide by 0, I would be forcing the real numbers 7 and 3 to actually equal each other. This is exactly why we don't divide by 0: it's non-cancellative. The same issue is apparently going on with matrices: there are some matrices where multiplying by them is kind of like multiplying by 0, and we can't undo the process. We'll talk more about this in the next two lectures when we discuss matrix inverses.

A third property that we cannot assume for matrix multiplication, closely related to the second one about cancellation, is the zero product property. The zero product property, remember, is the property that if you multiply two things together and get 0, then one of the factors was equal to 0. This is a key tool when we solve polynomial equations. If you had something like x² + x − 6 = 0, how do we solve it? Well, we look for a factorization of the polynomial: we get (x + 3)(x − 2) = 0. At this moment, we use this so-called zero product property, or "zip-zap" or whatever you want to call it.
We use the zero product property, which tells us the only way this product can equal 0 is if one of the factors equals 0. If the first factor equals 0, we get x + 3 = 0, which says x = −3. If the second factor equals 0, we get x − 2 = 0, which says x = 2. And so the two solutions to the equation are −3 and 2. That's the zero product property at work: the only way two real numbers can multiply to zero is if at least one of the factors was already zero.

It turns out that for matrices, we do not get that luxury. It is possible to construct a product of two matrices that equals the zero matrix even though neither of the matrices themselves is zero. For such an example, consider the following. Take A with rows (0, 2) and (0, −1), and B with rows (4, 2) and (0, 0). When you multiply them together: first row times first column gives 0 plus 0, which is 0. First row times second column gives 0 plus 0, which is 0. Second row times first column gives 0 plus 0, which is 0. And second row times second column gives 0 plus 0, which is 0. So notice that A is not the zero matrix, and neither is B, but their product turned out to be zero.

Now again, this example may seem underhanded: the first column of A is zero and the second row of B is zero, and those zeros annihilated everything non-zero, so the product came out to be zero. So it's like, "okay, I can see what you did there." But just as with non-cancellation, I used obvious examples with rows and columns of zeros. If I were forced to, I could come up with 2 by 2 matrices with no zero entries whatsoever whose product is still zero.
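The zero-product counterexample checks out the same way (a sketch; the `matmul` helper is mine, not from the lecture):

```python
def matmul(A, B):
    # Row-times-column multiplication for matrices stored as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[0, 2], [0, -1]]   # not the zero matrix
B = [[4, 2], [0, 0]]    # not the zero matrix either

AB = matmul(A, B)
print(AB)  # [[0, 0], [0, 0]]
```

Two non-zero factors, yet the product is the zero matrix, so the zero product property fails for matrices.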
This is the essential problem: there can be "secret" rows and columns of zeros. I don't want to define exactly what I mean by secret right now, but we can hide the dirt under the rug and still get products of completely non-zero matrices that equal zero in the end. So we cannot assume the zero product property. Matrix multiplication is non-zip-zappy, or whatever vowel sounds you want to attach to the zero product property; we cannot guarantee that principle for matrix multiplication.

Now, I don't want you to come away with a bad taste for matrix multiplication. Multiplication of matrices is pretty awesome, but unlike multiplication of real numbers or complex numbers, or multiplication in a field, matrix multiplication does not have all of the niceties of a field. So here are three things we should never assume about matrix multiplication: we cannot assume that it commutes, we cannot assume that it cancels, and we cannot assume the zero product property. There are situations, of course, where these do hold, but they do not hold in general. We can only apply these principles in special cases, and by default we should assume that they do not hold for matrix multiplication.