Welcome back to our lecture series, Linear Algebra Done Openly. As usual, I'm your professor, Dr. Andrew Misseldine. This will be our first video for chapter 3, entitled The Algebra and Geometry of Matrices.

Up to this point, we have used matrices for one purpose and one purpose only: to encode information about linear systems. We first introduced augmented matrices in section 1.5 to do exactly that. Then, in chapter 2, we started using matrices to represent sets of vectors in a slightly more compact way. This happened first in section 2.2, where we introduced the matrix-vector product so that the matrix equation could encode the vector equation previously introduced in section 2.1. The matrix-vector product also allowed us to define linear transformations using matrices, and I alluded to the fact that every linear transformation can actually be described as a matrix transformation. So matrices encode linear systems, they encode vector equations, they encode linear transformations. They seem to encode all of these linear structures.

We also introduced in chapter 2 the column and null spaces of a matrix. Those vector spaces came about essentially by solving problems about spanning vectors or linear dependence of vectors, but we attributed them to the matrix and not to the collection of vectors associated to it. That was our first inkling that matrices deserve to be studied in their own right, not just as tools to better understand vectors. In fact, we will turn this paradigm on its head: viewing column vectors as n-by-1 matrices, everything we've studied so far about vectors can actually be viewed as a subset of the theory of matrices. For example, we can add and scale matrices just like column vectors, and we're gonna see exactly how to do that in this video. For this reason, we can view matrices as beefier versions of column vectors, which leads to discussions about vector spaces of matrices, the main topic of the video at hand. But we'll also see in subsequent videos, particularly the next one, that matrices can transcend the vector operations, as we will introduce a general matrix multiplication, which will be the foundation for the rest of this chapter on matrices.

So let's talk about that generalization of vector operations to matrices. When we describe matrices, one thing to remember is that we have to describe the size: what are the dimensions of this matrix? We often call this an m-by-n matrix, with m representing the number of rows of the matrix and n representing the number of columns. We will always mention rows before columns, going in reverse alphabetical order.

Another very common convention when describing matrices: in chapter 2, we often described a matrix by its column vectors, and in that way we were encoding a set of vectors as a matrix. We're gonna go beyond that now and describe a matrix by the individual scalars inside of it. So if we have the matrix A, the scalar of A in the (i, j) position, where i indexes the row and j indexes the column, will be the generic entry a_ij, as written out in the display below.
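Since a transcript can't show the board, here is that indexing convention written out as a standard display (my own typesetting of the standard notation, not a slide from the video):

```latex
% A generic m-by-n matrix: the entry a_{ij} sits in row i, column j.
% (pmatrix requires the amsmath package.)
A = \begin{pmatrix}
  a_{11} & a_{12} & \cdots & a_{1n} \\
  a_{21} & a_{22} & \cdots & a_{2n} \\
  \vdots & \vdots & \ddots & \vdots \\
  a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}
```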
We will use a lowercase a because capital A is reserved for the matrix itself, and it won't be bolded and won't have a little arrow over it, because this is not a vector quantity; it is a scalar. So for the matrix A, the generic scalar inside of it is a_ij, and the same goes for B: the generic scalar of B is b_ij, the number in the (i, j) position of the matrix B.

We say that two matrices of the same size are equal if, term by term, component by component, each of those numbers is the same. The (1, 1) positions agree, the (2, 2) positions agree, the (2, 1), the (3, 1), the (1, 2) positions, all of them. Then we say the two matrices are equal. That's exactly what we meant when we said two vectors were equal to each other: the order in which the entries are listed mattered for vectors, and it matters for matrices as well.

We can define matrix addition, and we do this term by term. The sum of two matrices A + B, again assuming they have the same size, is defined by telling you the scalar in its (i, j) position: you take the scalar from the (i, j) position of A and add it to the scalar from the (i, j) position of B. So the (i, j) entry of A + B is a_ij + b_ij. We can define scalar multiplication similarly, term by term: if c is a scalar from our field of scalars, then the matrix cA is defined by its generic entry, and the (i, j) entry of cA is c times a_ij.

Let's look at some specific examples. Take a matrix A, which is 2-by-3, and another matrix B, also 2-by-3, as you can see on the screen right here. If we wanna add the matrices, we add them term by term. Take the (1, 1) positions and add them: 3 + 0, which is 3. Then the (1, 2) position, always row then column: 9 + 5, which is 14. Then the (1, 3) position: 1 + 6, which is 7. Moving to the second row, the (2, 1) position gives -2 + 3, which is 1; the (2, 2) position gives 4 + 1, which is 5; and lastly the (2, 3) position gives 6 + 1, which is 7. So we add matrices term by term, exactly how we did it with vectors.

One thing I should mention: suppose you wanted to add the matrix A to the matrix C, where C is actually a 2-by-2 matrix. We can't really add these things together. We could add the (1, 1) spots, that would be 5; the (1, 2) spots, that would be 10; the (2, 1) spots, that's -2; and the (2, 2) spots, that would be 8. But what happens when we try to add the (1, 3) spots? C doesn't have one, so there's an incompatibility there. Same with the (2, 3) spot: C doesn't have one. How do we define such a thing? We can't. Because A and C have different sizes, the sum A + C is not possible. We say this sum is undefined, or, to borrow a phrase from calculus, that it does not exist.

Scalar multiplication couldn't be easier in these situations either.
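Before moving on: the example matrices were only visible on screen, but the entry-by-entry arithmetic above pins them down, so here they are as displays (a best-effort reconstruction from the spoken arithmetic, not a verbatim copy of the slide):

```latex
% Entries reconstructed from the entry-by-entry sums read out above.
A = \begin{pmatrix} 3 & 9 & 1 \\ -2 & 4 & 6 \end{pmatrix}, \qquad
B = \begin{pmatrix} 0 & 5 & 6 \\ 3 & 1 & 1 \end{pmatrix}, \qquad
C = \begin{pmatrix} 2 & 1 \\ 0 & 4 \end{pmatrix}

A + B = \begin{pmatrix} 3+0 & 9+5 & 1+6 \\ -2+3 & 4+1 & 6+1 \end{pmatrix}
      = \begin{pmatrix} 3 & 14 & 7 \\ 1 & 5 & 7 \end{pmatrix}

% A + C is undefined: A is 2-by-3 while C is 2-by-2.
```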
If we take the matrix 2B, that means we're gonna multiply the matrix B listed above by 2, and we just multiply each component by 2. So we take 2 times 0, which is 0; 2 times 5, which is 10; 2 times 6, which is 12; 2 times 3, which is 6; 2 times 1, which is 2; and again 2 times 1, which is 2, like you see there. That's how scalar multiplication works.

If we can scale matrices and add matrices, that means we can take linear combinations of matrices. For example, we could form the linear combination A - 2B: take the exact same matrix A from above, take the matrix 2B that we computed just a moment ago, and subtract. You take 3 - 0, which is 3; 9 - 10, which is -1; 1 - 12, which is -11; -2 - 6, which is -8; 4 - 2, which is 2; and lastly 6 - 2, which is 4. (Both of these computations are collected into displays below.) So linear combinations of matrices make sense, because we can add and scale them.

As such, if we can talk about linear combinations, we can talk about spans. Can we span a set of matrices, that is, look at the set of all possible linear combinations of a given set of matrices? Absolutely we can. Because we have linear combinations, we can talk about spans and spanning sets of matrices. We can talk about linear independence: is there a way to combine matrices to produce the zero matrix in a non-trivial way? We can answer questions like that. And we can talk about subspaces, about vector spaces of matrices. So let's introduce that.

If F is our typical field, we take F^(m×n) to be the set of all m-by-n matrices whose entries come from F. This forms a vector space using the matrix addition and scalar multiplication we introduced a moment ago, for the same reasons these properties hold for column vectors in F^n. Vector addition is associative and commutative, and it has an identity, which in this context is the zero matrix. Every matrix A has an additive inverse, since A - A gives you the zero matrix. Scalar multiplication distributes over vector addition and over scalar addition, scalar multiplication is associative, and we have an identity: 1 times a matrix still gives you that matrix back. So these operations make F^(m×n) a vector space of matrices.

It's kind of funny because, like I was saying earlier, vectors can be viewed as matrices, just as n-by-1 matrices. But we can also think of matrices as vectors, because they belong to vector spaces. It's just that instead of writing your vector as one really long array, you write it as a rectangular array, and we can identify the two in that way. The reason we want to write them as rectangles, as opposed to one really tall column vector, is that the rectangular shape will be pertinent when we talk about matrix multiplication in forthcoming videos.

Well, if we have a vector space, then we want a basis. What would be the standard basis for F^(m×n)? We're gonna introduce the matrices E_ij, capital E_ij. These are gonna be called the unit matrices.
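As promised, here are the scalar multiple and the linear combination worked out above, collected into displays (entries again reconstructed from the spoken arithmetic):

```latex
2B = \begin{pmatrix} 2 \cdot 0 & 2 \cdot 5 & 2 \cdot 6 \\
                     2 \cdot 3 & 2 \cdot 1 & 2 \cdot 1 \end{pmatrix}
   = \begin{pmatrix} 0 & 10 & 12 \\ 6 & 2 & 2 \end{pmatrix}

A - 2B = \begin{pmatrix} 3-0 & 9-10 & 1-12 \\ -2-6 & 4-2 & 6-2 \end{pmatrix}
       = \begin{pmatrix} 3 & -1 & -11 \\ -8 & 2 & 4 \end{pmatrix}
```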
The matrix E_ij is the matrix which has a 1 in the (i, j) position and 0s everywhere else. This is analogous to how we defined the vectors e_i before, where the vector e_i has a 1 in the i-th position and 0s everywhere else. So E_ij, as a matrix, has a single 1 and 0s everywhere else.

If we take E (the calligraphic E) to be the set of all unit matrices, where (i, j) runs through all the possibilities from (1, 1) up to (m, n), this gives us the standard basis of F^(m×n). Since this set E contains m times n many vectors, we can see that the vector space of m-by-n matrices can be identified with column vectors that have m times n many positions. Instead of writing the entries in a rectangle of rows and columns, you could just stack them all on top of each other. As vector spaces, these two sets are really indistinguishable, just two different ways of representing the same information: whether you take an m-by-n matrix or a column vector with m times n entries, you still have m times n pieces of data inside. In terms of how many numbers these objects can hold, they are the same, so as vector spaces the two are essentially the same thing.

But like I keep saying, the reason we talk about this vector space of matrices is that, in addition to addition and scalar multiplication, we will get a new operation which we did not have for vectors, and that's the idea of matrix multiplication.
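To make the unit matrices concrete, here is the 2-by-2 case, together with the expansion that shows why the set E spans (a small illustrative example written out here, consistent with the definition above, rather than taken from the video):

```latex
% The four unit matrices of F^{2x2}:
E_{11} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad
E_{12} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad
E_{21} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad
E_{22} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}

% Every m-by-n matrix is a linear combination of unit matrices,
% with its own entries as the weights:
A = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} E_{ij}
```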
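And here is the identification with column vectors, written out for the 2-by-2 case. Stacking the rows is one common convention; stacking the columns works just as well, so take the particular ordering as an editorial choice rather than the video's:

```latex
% Identify F^{2x2} with F^4 by stacking the rows of the matrix:
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\;\longleftrightarrow\;
\begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix},
\qquad\text{and in general}\qquad
F^{m \times n} \cong F^{mn}.
```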