So, just a quick recap of the topics we covered last time. We looked at the definition of the determinant and at several of its properties, and today we will continue this discussion of the properties of the determinant and then move on to discussing norms. To recall: if you take a square matrix of size m by m, we define the determinant through the cofactor expansion. We consider the matrix A_ij, which is the matrix obtained by deleting the i-th row and the j-th column of A. This is an (m-1) by (m-1) square matrix, and we call the determinant of this smaller matrix the minor associated with the element a_ij. We define C_ij to be (-1)^(i+j) times the determinant of A_ij; this is called the cofactor associated with the element a_ij. Then the determinant is defined as the summation over j from 1 to m of a_ij times C_ij, and this can be computed for any i = 1, 2, up to m. Similarly, it can also be computed as the summation over i from 1 to m of a_ij times C_ij, for any j = 1, 2, up to m. This is a recursive definition, because the determinant of an m by m matrix is defined in terms of C_ij, which is the determinant of an (m-1) by (m-1) matrix, which in turn is defined in terms of the determinant of an (m-2) by (m-2) matrix, and so on. So we need to know what the determinant of a 1 by 1 matrix is, and that is simply the element itself. With this we know how to calculate the determinant, and last time we saw several properties of the determinant. For example, if two rows of the matrix are equal, then its determinant equals 0. If you subtract a multiple of one row from another row, the determinant is unchanged; what this means is that the row reduction operations do not change the determinant of a matrix.
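[Editor's note] The recursive cofactor expansion just described can be sketched in a few lines of NumPy. This is an illustrative sketch, not part of the lecture; it expands along the first row, indexes from zero, and the example matrix is an arbitrary choice:

```python
import numpy as np

def det_cofactor(A):
    """Determinant via recursive cofactor expansion along the first row."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    if m == 1:                      # base case: det of a 1x1 matrix is the entry itself
        return A[0, 0]
    total = 0.0
    for j in range(m):
        # minor: delete row 0 and column j
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        cofactor = (-1) ** j * det_cofactor(minor)   # sign (-1)^(0+j) since i = 0
        total += A[0, j] * cofactor
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
d = det_cofactor(A)
```

Expanding along any other row, or any column, would give the same value; this O(m!) recursion is only practical for small matrices, which is why libraries compute determinants through elimination instead.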
Also, if A has a row of zeros, then the determinant of A equals 0. If A is triangular, then the determinant of A is the product of its diagonal entries, which means that the determinant of the matrix is equal to the product of the pivot elements that appear when you compute the row echelon form of the matrix, with an extra factor of plus or minus 1 depending on how many row exchanges you did in order to compute this echelon form: if you didn't do any row exchanges, or if you did an even number of row exchanges, the factor is +1; otherwise it is -1. If A is a singular matrix, then the determinant of A is 0, and if A is non-singular, or invertible, then the determinant of A is not equal to 0. The last property where we stopped is that the determinant of the product of two matrices, AB, is the product of the two determinants: det(A) times det(B). A consequence of this is that if A is invertible, then the determinant of A inverse is 1 over the determinant of A. The next property is that the determinant of the matrix A transpose is equal to the determinant of A, so taking the transpose of a matrix does not change the determinant. A related and interesting question is: what properties of a matrix remain unchanged when you take the transpose? From this you can see that the determinant is one property of a matrix which remains unchanged by the transposition operation. Similarly, the trace of a matrix is another property that remains unchanged by taking the transpose, because the trace is the sum of the diagonal entries, and the transpose operation doesn't change the diagonal entries; it only exchanges the off-diagonal entries. So the trace of A transpose is equal to the trace of A. There are many other properties that remain unchanged by the transpose operation, and we'll discuss those more later in the course. Now, how do you show this?
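[Editor's note] The product, inverse, transpose, and trace properties above are easy to sanity-check numerically. A minimal sketch (the random test matrices are my own assumption; a generic random matrix is almost surely non-singular):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

lhs_product = np.linalg.det(A @ B)                  # det(AB)
rhs_product = np.linalg.det(A) * np.linalg.det(B)   # det(A) det(B)

det_inv = np.linalg.det(np.linalg.inv(A))           # det(A^-1) = 1 / det(A)
det_T = np.linalg.det(A.T)                          # det(A^T) = det(A)
trace_T = np.trace(A.T)                             # trace(A^T) = trace(A)
```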
The rank is also a property that remains unchanged by the transpose operation. We'll later see that, at least for diagonalizable matrices, the rank is the number of non-zero eigenvalues of the matrix, so A and A transpose have the same number of non-zero eigenvalues and, consequently, the same number of zero eigenvalues counting multiplicities. Another question is whether A and A transpose have the same eigenvalues; that is something we'll see later in the course, and it will turn out that A and A transpose do indeed have the same eigenvalues. Now, coming back to showing det(A transpose) = det(A): if A is a singular matrix, it means that the rows of A are linearly dependent, so the determinant of A is zero; and since A is a square matrix with linearly dependent rows, the columns of A are also linearly dependent, or in other words, the rows of A transpose are linearly dependent, so the determinant of A transpose is also zero. So the property clearly holds for singular matrices: A is singular if and only if A transpose is singular, or rank deficient. So we'll consider the non-singular case. We saw in the last class that any square matrix A admits a decomposition of the form PA = LDU, where P is a permutation matrix, L and U are lower and upper triangular matrices with ones on the diagonal, and D is a diagonal matrix. Such a decomposition can be computed for any matrix, and later we will explicitly study matrix decompositions, where we'll also outline how to compute them. So this is true; you have to take it on faith if you haven't seen decompositions like this before. You may have seen decompositions of the form A = LU, which is a slightly different decomposition; the P just accounts for the fact that you may need to do some row exchanges in order to find the decomposition.
Okay, so if I take the determinant on both sides of this equation, then, because the determinant of a product of two matrices is the product of the determinants, the determinant of the left-hand side is det(P) times det(A), and this equals det(L) times det(D) times det(U). Now if I take the transpose of this equation, it reads A transpose P transpose = U transpose D transpose L transpose, and taking determinants of both sides gives det(A transpose) det(P transpose) = det(U transpose) det(D transpose) det(L transpose). Because L and U are triangular matrices with ones on the diagonal, det(L), det(U), det(U transpose) and det(L transpose) are all equal to 1. Also, D is diagonal, which implies D = D transpose, so det(D) = det(D transpose). So the right-hand sides of the two equations are actually equal, and therefore det(A transpose) det(P transpose) = det(P) det(A). But P is a permutation matrix, and for all permutation matrices, P P transpose is equal to the identity matrix.
If you want to see this, think of the permutation matrix as performing row exchanges; then P transpose undoes those row exchanges. You can also look up the properties of permutation matrices; that's a good exercise to do on its own. But since P P transpose equals the identity matrix, we have det(P) det(P transpose) = det(I) = 1. In fact, as a separate point, the determinant of a permutation matrix is always plus or minus 1. So if the product det(P) det(P transpose) equals 1, then either both determinants are +1 or both are -1; either way, det(P) = det(P transpose). Combined with the equation above, this implies that det(A) = det(A transpose). Okay, the next property is what happens to the determinant when you scale a matrix: what does det(cA) equal? Every row is getting scaled by c, and the determinant is linear in each row, so you can pull out a factor of c from each row, and det(cA) = c to the power m times det(A).
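[Editor's note] The facts used here, P P transpose = I and det(P) = plus or minus 1, together with the scaling rule det(cA) = c^m det(A), can be verified numerically; a small sketch with illustrative matrices of my own choosing:

```python
import numpy as np

# Permutation matrix performing a single row exchange (rows 2 and 3 of a 3x3 matrix)
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

PPt = P @ P.T                    # P transpose undoes the row exchange
det_P = np.linalg.det(P)         # one row exchange, so det(P) = -1

# Scaling property: det(cA) = c^m det(A) for an m x m matrix
rng = np.random.default_rng(1)
m = 3
A = rng.standard_normal((m, m))
c = 2.0
det_scaled = np.linalg.det(c * A)
```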
Another property that you are probably aware of is that the determinant is the product of the eigenvalues of the matrix, det(A) = the product of the lambda_i, where the lambda_i are the eigenvalues. We haven't formally defined eigenvalues yet, but this is a connection I want to make right now, and we will revisit it later. You've also seen that the magnitude of the determinant of a matrix is the volume of the parallelepiped formed by the columns of A, and so this means that the parallelepiped defined by the columns of A has the same volume as a rectangular solid, or cuboid, whose edge lengths are the magnitudes of the eigenvalues of the matrix A. The next property is that for any orthogonal matrix, that is, a matrix with orthonormal columns, the determinant is plus or minus 1; in particular a permutation matrix is an orthogonal matrix, and therefore its determinant is equal to 1 or minus 1. The next property is that if B is non-singular, then det(B inverse A B) = det(A). We've essentially already seen this, because det(B inverse A B) is det(B inverse) times det(A) times det(B), but det(B inverse) is the same as 1 over det(B), so those two cancel and you're left with det(A). So you take a matrix A and you form a matrix C = B inverse A B; this is called a similarity transform, and it has lots of very nice properties which we will see later. But one thing you can note right away is that the determinant is a property of a matrix which does not change when you apply a similarity transform to the matrix. Of course, another crucial property of the determinant is that it allows you to compute A inverse.
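[Editor's note] The eigenvalue-product formula and the invariance under similarity transforms can be checked numerically as well; a sketch with assumed random matrices (a random B is almost surely non-singular):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))   # assumed non-singular

C = np.linalg.inv(B) @ A @ B      # similarity transform of A
det_C = np.linalg.det(C)          # should equal det(A)

eig_prod = np.prod(np.linalg.eigvals(A))   # product of eigenvalues = det(A)
```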
So suppose A is a non-singular matrix, and suppose we compute the matrix A tilde, the adjugate (also called the classical adjoint) of A, defined to be the transpose of the matrix of cofactors: the matrix with entries C11, C12 up to C1m in the first row, down to Cm1 up to Cmm in the last row, transposed. Now take the i-th row of A tilde and multiply it by the i-th column of A. If you go back to the definition of the determinant, you can verify that this is exactly what the determinant's definition computes: scrolling up for a second, you can see that the determinant as defined there is the inner product between two vectors, one formed from the entries of A along a single row or column, and the other containing the corresponding cofactors. Taking the transpose in the definition of A tilde arranges the cofactors of the i-th column of A as the i-th row of A tilde, so the product of the i-th row of A tilde with the i-th column of A equals the determinant of A, and this is true for i = 1, 2, up to m. Similarly, if I take the i-th row of A tilde times the j-th column of A, for j not equal to i, what will this be equal to? Any idea? If you think about it, this product is the cofactor expansion of the determinant of a matrix in which one column of A has been replaced by a copy of another column. So it is the determinant of a matrix with two identical columns, and that determinant has to be zero. So if I compute the product A tilde times A, the diagonal entries of this product are all equal to det(A), and all the off-diagonal entries are equal to zero. So A tilde A = det(A) times the identity matrix, which means that A inverse = (1 over det(A)) times A tilde. So the determinant allows you to find the inverse of a matrix, although this is typically not the way you want to compute the inverse of a matrix.
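[Editor's note] The adjugate construction just described can be written out directly; a sketch (the helper name `adjugate` and the 2 by 2 example are my own choices, not from the lecture):

```python
import numpy as np

def adjugate(A):
    """Transpose of the matrix of cofactors C_ij = (-1)^(i+j) det(A_ij)."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    C = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)  # delete row i, column j
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
prod = adjugate(A) @ A                      # det(A) times the identity
A_inv = adjugate(A) / np.linalg.det(A)      # inverse via the adjugate formula
```

For this A, det(A) = 5, so the product adjugate(A) @ A comes out as 5 times the 2 by 2 identity.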
So for instance, if you're using MATLAB, this is not how MATLAB would compute the inverse of a matrix, but it gives us a formula, or at least one way, to find the inverse of a matrix. So those are some of the properties of the determinant that I wanted to discuss. Before I move on to norms, very briefly, I'll talk about a few uses of the determinant. The first is that it can be used to test invertibility: if det(A) is zero, then the matrix is not invertible; if it is non-zero, then it is invertible. The second is this: consider the family of matrices A - lambda I. This is a family of matrices because lambda can be anything; let's say it is any complex number for now. As I vary lambda over the complex plane I get different matrices, and I look for the values of lambda that make A - lambda I a singular matrix, that is, for solutions to det(A - lambda I) = 0. If I expand this out, it turns out to be a polynomial of degree m in lambda, which means it has exactly m roots counting repetitions, and therefore there are exactly m values of lambda which make this determinant equal to zero. For any other value of lambda in the entire complex plane this determinant is non-zero, or A - lambda I is a non-singular matrix, and these m roots give us the m eigenvalues. So the determinant is also something that helps us compute the eigenvalues of a matrix. Among the other uses: I already mentioned that the magnitude of the determinant is the volume of the parallelepiped formed by the columns of A. The next one is that the determinant can actually be used to obtain a formula for each pivot element in the row echelon form. I'll discuss this more later when we talk about matrix factorizations, but for now, for the sake of completeness, I'll put this down.
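[Editor's note] The characteristic-polynomial use of the determinant can be sketched as follows, with a small illustrative matrix of my own choosing; `np.poly` returns the coefficients of the characteristic polynomial of a square matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

coeffs = np.poly(A)              # coefficients of the degree-m characteristic polynomial
roots = np.roots(coeffs)         # its m roots, counting repetitions
eigvals = np.linalg.eigvals(A)   # the eigenvalues of A: the same m numbers
```

Here the characteristic polynomial is lambda^2 - 7 lambda + 10, whose roots 5 and 2 are exactly the eigenvalues of A.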
It gives a formula for each pivot element in the row echelon form. Finally, but certainly not least, the determinant gives us a measure of the sensitivity of a solution of y = Ax to changes in y. Specifically, suppose you are given a set of linear equations y = Ax and you are asked to find x, and suppose tomorrow somebody came back to you and said that y was measured incorrectly, so there is a small perturbation and the actual right-hand side is some y prime. You've already given them your solution, but you can compute how sensitive your solution is to this error in y. Similarly, if they said that the matrix used to obtain y was not A but instead some A prime, then you can compute how sensitive the solution is to this perturbation in the matrix A. This is another chapter by itself, and again we will study it later in the course.
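[Editor's note] As a small taste of this sensitivity question, here is a sketch with a nearly singular matrix of my own choosing. (The standard tool for quantifying this amplification is the condition number, which fits naturally into the upcoming discussion of norms.)

```python
import numpy as np

# A nearly singular matrix: det(A) is tiny, so solutions of y = Ax
# are very sensitive to small changes in y
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
y = np.array([2.0, 2.0])
x = np.linalg.solve(A, y)

y_prime = y + np.array([0.0, 1e-4])     # a small perturbation of y
x_prime = np.linalg.solve(A, y_prime)

rel_change_y = np.linalg.norm(y_prime - y) / np.linalg.norm(y)
rel_change_x = np.linalg.norm(x_prime - x) / np.linalg.norm(x)
# rel_change_x is several orders of magnitude larger than rel_change_y
```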