Suppose A is an n-by-n matrix. There's a sequence of elementary matrices E_1, E_2, ..., E_k that row-reduces A to its row-reduced form R, so that E_k ⋯ E_2 E_1 A = R. But since every elementary row operation can be undone by another elementary row operation, each E_i has an inverse that is also an elementary matrix, and so we can write A = E_1^(-1) E_2^(-1) ⋯ E_k^(-1) R. Consequently, any square matrix can be expressed as a product of elementary matrices applied to its row-reduced form.

So, now suppose A and B are n-by-n matrices whose determinants are not zero. Then the row-reduced form of each is the identity matrix, so we can express A and B as products of elementary matrices with the identity matrix: A = E_1^(-1) ⋯ E_k^(-1) I and B = F_1^(-1) ⋯ F_m^(-1) I. And remember, our theorem says that when a sequence of elementary matrices is applied to a matrix, the determinant of the result is the product of the determinants, which gives us expressions for det(A) and det(B). Since I is the identity matrix and det(I) = 1, we can include or omit I anywhere in any product of matrices. So, if we form the product AB, we can simply drop the identity matrix sitting between the two strings of elementary matrices: AB = E_1^(-1) ⋯ E_k^(-1) F_1^(-1) ⋯ F_m^(-1) I. Taking the determinant of both sides, and using our theorem that the determinant of a product of elementary matrices applied to a matrix separates into a product of determinants, we can split the right-hand side factor by factor. And since det(I) = 1, we can include the identity matrix wherever we like, so let's put one back in after the E's. Notice we now have both the expression for det(A) and the expression for det(B), so we can rewrite the whole thing as det(AB) = det(A) det(B). And this proves that if the determinants of A and B are not zero, then the determinant of the product is the product of the determinants.

What if the determinant of A or B, or both, is zero? We can make a similar argument if we introduce something we could do as a row operation: multiplying a row by zero. Now remember, multiplying a row by zero is not a legitimate row operation, but if we allow it in that case, well, let's leave that as a homework problem.
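The chain of equalities the lecture walks through can be written out compactly. This is a sketch, with E_1, ..., E_k and F_1, ..., F_m standing for the elementary matrices that row-reduce A and B respectively:

```latex
\begin{align*}
E_k \cdots E_1 A = I &\;\Longrightarrow\; A = E_1^{-1} \cdots E_k^{-1}\, I,
\qquad
F_m \cdots F_1 B = I \;\Longrightarrow\; B = F_1^{-1} \cdots F_m^{-1}\, I,\\[4pt]
AB &= E_1^{-1} \cdots E_k^{-1}\, F_1^{-1} \cdots F_m^{-1}\, I,\\[4pt]
\det(AB) &= \underbrace{\det\!\big(E_1^{-1}\big) \cdots \det\!\big(E_k^{-1}\big)\det(I)}_{\det(A)}
\;\underbrace{\det\!\big(F_1^{-1}\big) \cdots \det\!\big(F_m^{-1}\big)\det(I)}_{\det(B)}
= \det(A)\det(B),
\end{align*}
```

where the extra factor of det(I) = 1 after the E's is the "include or omit the identity anywhere" step from the argument above.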
And putting our results together gives us an important theorem: let A and B be square matrices of the same size; then the determinant of their product is the product of their determinants, det(AB) = det(A) det(B). This leads to several important consequences. If A is a square matrix and A inverse exists, then the determinant of A inverse is the reciprocal of the determinant of A. We can prove this by applying the product theorem to A A^(-1) = I; the details are left as homework. In addition, suppose A is an n-by-n coefficient matrix; then the corresponding system of equations has a unique solution if and only if the determinant of A is not zero.

Now, while these are important theorems, they do require us to compute the determinant. And this leads to an important idea. We can compute the determinant of an n-by-n matrix using cofactors. Never do this. Okay, do it when you're asked to do it, but on anything larger than a 2-by-2, it requires a lot of work. For example, suppose A is a 20-by-20 matrix and we want to decide whether the corresponding system of equations has a unique solution. Without going into the details, if we row-reduce A, we'll need to perform about 3,800 multiplications. Or we could use our theorem and compute the determinant. But if we compute the determinant of A using cofactors, we'll need on the order of 20! multiplications, roughly 2.4 × 10^18 of them. And real-world applications of linear algebra typically use much larger matrices. And so as a general rule, using the determinant to solve a problem requires more work than solving the problem another way.
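To make the cost comparison concrete, here is a small sketch (all function names are illustrative, not from the lecture) that computes a determinant both ways on the same matrix while counting the multiplications and divisions performed. Cofactor expansion grows like n!, while elimination grows like n³/3, so the gap explodes quickly even for modest n:

```python
def det_cofactor(A, counter):
    """Determinant by cofactor expansion along the first row.
    Counts one multiplication per entry-times-cofactor term in counter[0]."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        counter[0] += 1  # the multiplication A[0][j] * cofactor
        total += (-1) ** j * A[0][j] * det_cofactor(minor, counter)
    return total

def det_row_reduce(A, counter):
    """Determinant by Gaussian elimination (no pivoting, so this sketch
    assumes nonzero pivots). Counts multiplications/divisions in counter[0].
    The determinant is the product of the pivots."""
    n = len(A)
    A = [row[:] for row in A]  # work on a copy
    det = 1.0
    for i in range(n):
        det *= A[i][i]
        for r in range(i + 1, n):
            factor = A[r][i] / A[i][i]
            counter[0] += 1  # the division
            for c in range(i, n):
                A[r][c] -= factor * A[i][c]
                counter[0] += 1  # one multiplication per updated entry
    return det

A = [[2.0, 1.0, 0.0, 3.0],
     [1.0, 4.0, 1.0, 0.0],
     [0.0, 1.0, 3.0, 1.0],
     [2.0, 0.0, 1.0, 5.0]]

c1, c2 = [0], [0]
d1 = det_cofactor(A, c1)
d2 = det_row_reduce(A, c2)
print(d1, c1[0])  # cofactor expansion: already more work at 4-by-4
print(d2, c2[0])
```

On this 4-by-4 both methods agree (the determinant is 17), with cofactor expansion using 40 multiplications against 26 for elimination; the cofactor count satisfies M(n) = n(1 + M(n−1)), which is roughly e·n!, while elimination stays near n³/3.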