Welcome back, everyone, to our lecture series based upon the textbook Linear Algebra Done Openly. As usual, I'm your professor, Dr. Andrew Misseldine. It's great to have you. Today we're in section 5.2 of chapter 5 about determinants, and in particular we're going to talk about some properties of determinants.

The determinant is a function from matrices to scalars. That is, given a square matrix, it gives you back a scalar. Since the set of matrices is a vector space, and the set of scalars is also a vector space, a one-dimensional vector space, mind you, but a vector space nonetheless, it makes sense to ask the question: is the determinant map a linear transformation? It seems like everything else has been, right? Matrix transposition is linear, the trace map is linear; everything seems to be linear. So what can we say about the determinant? Well, the determinant is not actually linear, but it is something related to that, something we call multilinear. So let me talk about that and explain exactly what that means.

Suppose we have two vector spaces V and W over the same set of scalars, the field F. A linear map between V and W is a map that preserves the vector operations, the linear operations. So we're looking for maps that preserve vector addition, that is, the image of a sum is the same thing as the sum of the images, and that also preserve scalar multiplication, that is, the image of a scalar multiple is the scalar multiple of the image. It doesn't matter whether we scale before or after the transformation, and it doesn't matter whether we add before or after the transformation; we get the same thing either way. That's a linear transformation.

So what about determinants? Well, let's form a new vector space we're going to call V^n, where n is any natural number: one, two, three, four, any positive integer. We define V^n to be the set of all lists of size n. So we have an n-list v1, v2, up to vn, where these are vectors that live inside the vector space V. We're just taking a list of vectors; that's all we mean right here. This itself makes a vector space, and in fact the dimension of V^n is just n times the dimension of V. So basically we're taking n copies of a vector space that already exists and making a new vector space out of them. In terms of addition, if you have two tuples like this, v1, v2, up to vn and w1, w2, up to wn, you add them together componentwise. In terms of scalar multiplication, you just distribute the scalar onto every component and do scalar multiplication in the vector space. So we can make a vector space out of copies of a vector space, and in fact we've already been doing this along the way. If we take the vector space V to be the column vectors F^n, then V^n is going to be the vector space F^{n×n}, that is, the space of all n-by-n matrices. So V^n is just a matrix space, which is itself a vector space, right?
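Written out in symbols, the componentwise operations on V^n just described, along with the dimension count, look like this:

```latex
(v_1, \dots, v_n) + (w_1, \dots, w_n) = (v_1 + w_1, \dots, v_n + w_n), \qquad
c \, (v_1, \dots, v_n) = (c v_1, \dots, c v_n), \qquad
\dim(V^n) = n \dim(V).
```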
The reason we introduce that notation is that it's how one describes what we call a multilinear map. So again we have two vector spaces V and W over the same field F, and we pick our favorite natural number n so we can talk about V^n. A multilinear map is a map B from V^n to W. So we have a list of n vectors as our input and an output in W. The way I want you to think about this is that we have n input variables. This is something that's often discussed in multivariable calculus: single-variable calculus focuses on one variable in, one variable out, while multivariable calculus might have multiple variables in and multiple variables out. So this is a situation where we take n variables into the function and output a single value. We feed n vectors into the machine and it outputs a single vector; think of this as a multivariate vector function.

Well, if it's multivariate, we can ask when it's multilinear. Fix a choice of i and take vectors v1, v2, up to vn. Consider the transformation that sends x to B(v1, v2, ..., v_{i-1}, x, v_{i+1}, ..., vn). You'll notice that the vector vi does not show up in this list: we mapped x to the i-th position, and the fixed list of vectors filled in all the other spots. So, to say it again: the multivariate function B is multilinear if for each i and each choice of vectors v1, v2, up to vn, the function sending x to B(v1, ..., v_{i-1}, x, v_{i+1}, ..., vn), with x in place of vi, is a linear map. That can be a lot to take in, so let's say it this way. We say that a map is linear in the i-th position exactly when this condition happens: you fix everything except the i-th variable, and the map is linear in that variable. All positions are fixed except the i-th position, and then it's linear from that perspective. A map is multilinear if it is linear in all of its variables. That's a slightly different way of defining the same thing, and it might be a little more digestible.

We say that a map is bilinear if it is multilinear with two variables, and we've seen some examples of these in the past. For example, the dot product on R^n is a bilinear map. If we take any vectors u, v, and w, where w is fixed, then by the properties of the dot product we can distribute w onto the sum: (u + v) · w = u · w + v · w. So the dot product preserves addition in the first factor. And likewise, (cu) · v is the same thing as c(u · v). Taken together, these two equalities tell us that the dot product is linear in the first factor. Now come over to the second factor. If you fix the first coordinate u and distribute it over a sum, you get u · (v + w) = u · v + u · w, which tells you that addition is preserved in the second factor. Likewise, u · (cv) = c(u · v), so the dot product preserves scalar multiplication in the second factor. It preserves vector addition and scalar multiplication in the second factor, so the dot product is linear in the second factor as well. And since it's linear in the first factor and in the second factor, we say it's a bilinear map, all right?
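If you want to check those four identities numerically, here's a minimal sketch using numpy; the particular vectors and the scalar are arbitrary choices, not anything from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.random(3), rng.random(3), rng.random(3)
c = 2.5

# Linear in the first factor: (u + v) . w = u . w + v . w and (cu) . v = c(u . v)
assert np.isclose(np.dot(u + v, w), np.dot(u, w) + np.dot(v, w))
assert np.isclose(np.dot(c * u, v), c * np.dot(u, v))

# Linear in the second factor: u . (v + w) = u . v + u . w and u . (cv) = c(u . v)
assert np.isclose(np.dot(u, v + w), np.dot(u, v) + np.dot(u, w))
assert np.isclose(np.dot(u, c * v), c * np.dot(u, v))
```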
So that's the dot product on the real numbers. The Hermitian product on complex numbers is not exactly bilinear. It does preserve addition in the first and second factors, and it does preserve scalar multiplication in the second factor, but it doesn't exactly preserve scalar multiplication in the first factor. The issue is that for complex numbers, something of the form (cu) · v is actually equal to the complex conjugate of c times (u · v), and that conjugate is not the original scalar if c was a non-real number. So we don't actually say the Hermitian product is multilinear or bilinear here; it's often referred to as being sesquilinear. If you look at the derivation of the word, the prefix sesqui- means one and a half. So it's not bilinear, but it's more than linear; it's one-and-a-half linear, so we call it sesquilinear. We're not going to worry about that too much in this context. I just want to present some examples of bilinear maps before we talk about the determinant.

The tensor product, or the so-called outer product we talked about before, is likewise a bilinear map. On R^n × R^n it is bilinear for the same reason the dot product was: it preserves addition in the first factor and it preserves scalar multiplication in the first factor, so it's linear in the first factor. Likewise, looking at the second factor, it distributes, u ⊗ (v + w) = u ⊗ v + u ⊗ w, and scalar multiplication is preserved in the second factor, so it's linear in the second factor as well. Putting this together, we see that the tensor product is bilinear. Again, this outer product is bilinear for real numbers. It's only sesquilinear for complex numbers, because it preserves addition in the first and second factors and scalar multiplication in the first factor, but you pick up conjugate scalars in the second factor. So it's only going to be sesquilinear again.

All right, so what I really want to talk about in this chapter is determinants. Determinants are not linear transformations, because we have the following issue: the determinant of A + B is not, in general, the same thing as the determinant of A plus the determinant of B. You can't just add determinants and get the determinant of the sum of the matrices. So it's not a linear map, because it doesn't preserve matrix addition. But it is multilinear if we think of a matrix as a list of column vectors. It's also true that the determinant of A is equal to the determinant of the transpose of A. This comes from the Laplace cofactor expansion we talked about in the previous lecture: you can expand across any row or any column, and if you take the transpose, expanding across a column is the same thing as expanding across a row. So the determinant is not affected by the transposition operation. And so the determinant map is multilinear if we think of a matrix as a list of column vectors, but it's also multilinear if we think of it as a list of row vectors. We actually get that freedom.
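Both of those claims about the determinant are easy to see numerically. Here's a minimal sketch with numpy on random 3-by-3 matrices; again, the matrices are arbitrary, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((3, 3))
B = rng.random((3, 3))

# Not additive: det(A + B) != det(A) + det(B) in general.
print(np.linalg.det(A + B))                 # one value...
print(np.linalg.det(A) + np.linalg.det(B))  # ...and generally a different one

# Invariant under transposition: det(A) = det(A^T).
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))
```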
Since the determinant is multilinear, it preserves addition if all but one row is fixed. It preserves addition if all but one row, or column, is fixed. In other words, suppose we have matrices A, B, and C, all three of them n-by-n, which differ only in a single row, say the r-th row, and suppose the r-th row of C is obtained by adding the corresponding entries in the r-th rows of A and B. In that context, the determinant of C is equal to the determinant of A plus the determinant of B. I'll show you an example of this on the next slide in just a second. Likewise, suppose A and B are matrices that differ only in a single row, so all the other rows are fixed, say the differing row is the r-th row, and the r-th row of B is obtained by scaling the corresponding entries in the r-th row of A by some scalar c. Then we can factor that c out of the row: the determinant of B is c times the determinant of A. So when it comes to determinants, we don't factor a scalar out of the whole matrix; we factor scalars out of a single row. Out of a single row, not out of the whole matrix.

So let me show you how that might work. For example, take these two matrices right here, two 3-by-3 matrices. You see each has a 4, a 5, a 0 in the first row, so the first rows are identical, and each has 3, negative 1, 2 in the second row, so the second rows are identical. But the third rows do differ: we get 1, 2, 3 for the one and 0, 4, negative 2 for the other. The multilinearity of the determinant tells us that we can add together the third rows, leaving the first two rows fixed: you get 1 plus 0, 2 plus 4, and 3 plus negative 2. So the sum of these two determinants will be the same as the determinant of the matrix with rows 4, 5, 0, then 3, negative 1, 2, and then 1, 6, 1. I'll let you verify this fact on your own if you want to; pause the video right now if you need to. The determinant of that matrix will equal the sum of these two determinants, all right? And so this property of determinants allows us to calculate determinants whenever that's helpful to us. Also, when you look at this next matrix right here, you'll notice that everything in the second row is divisible by 2: you have a 4, a 6, and an 8. You can factor that 2 out of the row, and it goes in front of the whole determinant. You don't factor scalars out of the whole matrix, but out of individual rows; that's how scaling affects the determinant. Honestly, the first property, multilinearity over row addition, we're not going to use so much. It's okay, but we won't use it that much in calculations. The scalar property, though, factoring a scalar out of a single row, we're going to use all the time. There's a sketch of both properties just below.
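Here's a minimal numpy check of both row properties, using the exact matrices from the worked example; for the row-scaling check I just double the second row of A, since the slide's matrix with the 4, 6, 8 row isn't fully visible in the transcript:

```python
import numpy as np

# The matrices from the worked example: A and B agree in the first two rows
# and differ only in the third row.
A = np.array([[4, 5, 0],
              [3, -1, 2],
              [1, 2, 3]], dtype=float)
B = np.array([[4, 5, 0],
              [3, -1, 2],
              [0, 4, -2]], dtype=float)

# C keeps the shared rows and adds the third rows entrywise:
# (1 + 0, 2 + 4, 3 + (-2)) = (1, 6, 1).
C = np.array([[4, 5, 0],
              [3, -1, 2],
              [1, 6, 1]], dtype=float)

# Row additivity: det(C) = det(A) + det(B).
assert np.isclose(np.linalg.det(C), np.linalg.det(A) + np.linalg.det(B))

# Row scaling: multiplying a single row by a scalar c multiplies the determinant by c.
D = A.copy()
D[1] *= 2.0  # scale only the second row
assert np.isclose(np.linalg.det(D), 2.0 * np.linalg.det(A))
```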