Let A be an m by n matrix. Now, associated to every matrix are four vector spaces, which we call the four fundamental spaces of the matrix. In this series, we've introduced two of those fundamental spaces already. The first was what we call the column space. The column space of a matrix, remember, is the span of the column vectors of that matrix. The second fundamental space we introduced was the null space, which is the solution set of the homogeneous system Ax = 0. In this video, we introduce the third of the fundamental spaces, with one more to come. This is what we call the row space of the matrix A, denoted row(A) for short. Now, the row space is going to be the dual to the column space. In particular, we take the matrix A transpose. Remember, A transpose swaps rows to columns and columns to rows, and we're going to compute the column space of A transpose, and that will be what we call the row space of A. That is, more specifically, the row space of A is equal to the column space of A transpose: row(A) = col(A^T). Why do we call that the row space? Well, remember what the transposition operation does: it turns columns into rows and rows into columns. The column vectors of A transpose are just the row vectors of A. Then you're like, okay, I get it now. The row space is the span of the row vectors of A, in the same way that the column space is the span of the column vectors of A. Okay, it makes sense. Why didn't you just say that in the first place? Well, there's a reason we're introducing transposition here to define our row space. It really has to do with the fact that we're dropping some breadcrumbs in our forest for Hansel and Gretel to follow to a magical, beautiful candy house, which we call the fundamental theorem of linear algebra. This house does not have a witch inside of it. It has only goodness.
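As a quick sanity check of this definition, here's a small sketch in Python using sympy. The matrix is my own illustrative example, not one from the lecture; the point is just that the row space of A is literally computed as the column space of A transpose.

```python
from sympy import Matrix

# Illustrative matrix (not the one from the lecture); row 2 is twice row 1,
# so the row space is only two-dimensional.
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

# row(A) = col(A^T): a basis for the row space, read off as the
# pivot columns of A transpose.
basis = A.T.columnspace()
print([list(v) for v in basis])   # two basis vectors: [1, 2, 3] and [1, 0, 1]
```

sympy also has a direct `rowspace()` method, which instead returns the non-zero rows of an echelon form; both give bases for the same space.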
Now, in order to prepare for the fundamental theorem of linear algebra, which will actually connect all four of these fundamental spaces together, we have to define the row space using transposition. This is for an orthogonality condition that will become clear when we get to chapter four of this lecture series. In the meanwhile, though, you can think of the row space as, okay, the span of the row vectors. But there's one important caveat I should mention: if you're talking about a complex matrix, we never, ever, ever use the plain transpose symbol. For complex matrices we always use the star, the conjugate transpose. So if you have a complex matrix A, then the row space definition would modify to become the column space of A star, the conjugate transpose, which is the transpose but with conjugation as well. Now, the columns of A star are not exactly the rows of A. They are, but with conjugation. And therefore, when you calculate the row space of a complex matrix, you take the span of the conjugates of the rows of A. So the thing is, you have to remember to take the conjugates. And why is it so important that we take the conjugates? Again, follow the breadcrumbs to the fundamental theorem of linear algebra; it'll be much, much clearer in the future why we care about conjugates so much. We need the conjugation to make the orthogonality condition true for complex matrices. And that has the consequence that the row space of a complex matrix is not exactly the span of its rows; it's the span of the conjugates of its rows. Now, for a real matrix, conjugation is invisible, so you can see it as just taking the span of the rows. So the row space is, in some respect, a complementary space to the column space. And as such, we call the dimension of the row space the co-rank. This is similar to what we do in trigonometry, where we talk about cosine, cotangent, cosecant. Why the prefix co?
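For the complex case, here's a small sympy sketch, again with an illustrative matrix of my own choosing: the columns of A star are the conjugates of the rows of A, not the rows themselves.

```python
from sympy import Matrix, I

# Illustrative complex matrix (not from the lecture).
A = Matrix([[1 + I, 2],
            [0, 3 - I]])

A_star = A.H   # sympy's .H gives the conjugate transpose A*

# The columns of A* are the *conjugates* of the rows of A:
assert A_star.col(0).T == A.row(0).conjugate()
assert A_star.col(1).T == A.row(1).conjugate()
```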
Co here is just short for complementary. And so the co-rank is the complementary dimension to the rank, which is the dimension of the column space. Now, to compute the co-rank, you're finding the rank of A transpose. So think about that for a second: the co-rank of A is going to equal the rank of A transpose, which is the number of pivots we're going to see inside of A transpose. But it turns out that the number of pivots in A transpose is actually the same as the number of pivots in A; call it p. The reason is that for the row space, we're basically counting the number of pivot rows, which is the same as the number of pivot columns for the matrix. And so the co-rank, which is the dimension of the row space, in some respect isn't a new quantity. You don't actually hear a lot about the co-rank, because the co-rank, you'll see, is always equal to the rank. So I can actually amend my previous statement right here: the rank and the co-rank of a matrix are always equal to each other, because they're both equal to p. The rank counts the number of pivot columns; the co-rank counts the number of pivot rows; and both are the number of pivots. The two things are the same. But if you do see a discussion of co-rank, that's what it means: the dimension of the row space, the number of pivot rows. Now, how do we actually compute it? When we introduced the column space and the null space, we also saw that there are ways of computing bases for the column space and for the null space. So can we find a basis for the row space of a matrix? What procedures can we use here? Well, one thing we can do is just use the fact that the row space is the column space of A transpose.
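The claim that rank and co-rank always agree is easy to check in code; a minimal sympy sketch with an illustrative matrix of my own:

```python
from sympy import Matrix

# Illustrative matrix: row 2 is twice row 1, so there are only two pivots.
A = Matrix([[1, 2, 3, 4],
            [2, 4, 6, 8],
            [0, 1, 1, 0]])

# rank(A) counts pivot columns of A; rank(A^T) counts pivot columns of A^T,
# i.e. pivot rows of A. Both count the same p pivots.
assert A.rank() == A.T.rank() == 2
```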
So what you could do is compute A transpose, row reduce that, and then the columns of A transpose that are pivot columns would give you a basis for the column space of A transpose. Then you could turn those back into rows, and you would get a basis for the row space. That's perfectly acceptable; you could do that. But to do that, you would have to row reduce A transpose. Can you find a basis for the row space of A using the matrix A itself? Can we row reduce A to find a basis for the row space? That's the question at hand here. I don't want to use this intermediate matrix A transpose if I don't have to. Now, the problem is that if you just row reduce A, you have to find the pivot rows. Much like with the column space, if we could find the pivot rows of A, those would give us a basis for the row space; we could just grab the rows of A that coincide with those pivot rows. But unlike the columns, one thing I should mention is that when you row reduce a matrix, the order of the rows can switch around. When you do row reduction, columns never move. They can change their entries, but they never move around; therefore, if column one was a pivot column, that will never change. But as you row reduce a matrix, rows can end up in different orders. So the pivot rows of A might not be the first two rows; they could be the last two rows that just got switched up to the top. And so if you throw a matrix into a calculator, hit RREF, and see that the first two rows are pivot rows, that's not necessarily true of A itself. You might have interchanged rows without knowing it. So the only way to run the same algorithm we did for the column space is to keep track of all of the row operations, which, admittedly, I don't want to do. We've kind of reached the point where I want to do as much on my calculator as I can.
So I don't want to do row operations all the time. Can I get a basis using maybe the RREF of A? And you can, because it turns out that if two matrices A and B are row equivalent, then their row spaces are identical. It kind of makes sense, because the row space is the span of the rows, and so if two matrices are row equivalent, their row spans should be the same. Perfectly makes sense. We could provide a formal argument for this, but: if two matrices are row equivalent, they have the same row space. Why this is useful is that if we grab an echelon form of a matrix A, say U is an echelon form of A, then the non-zero rows of the echelon form will form a basis for the row space of U. That is, in the echelon form we can see exactly which rows are the pivot rows, and there's no confusion about interchanges. But since U is row equivalent to A, the row space of A is equal to the row space of U, and so a basis for the row space of U gives us a basis for the row space of A. That sounds complicated, but it's actually super, super simple. This will be a very simple way of finding the basis. So what I want to do is take a four by five matrix A, which you see here on the screen, and find a basis for the row space, the column space, and the null space, all from the RREF of this matrix. It's actually pretty efficient; the only difficulty is row reducing the matrix, which, if we use technology, we can do pretty quickly. So take our matrix A and row reduce it. Here's our A right here. We're going to row reduce it to its reduced row echelon form, which we'll call U. (This works for any echelon form.) And so let's identify the pivot positions. The pivot positions are in the first three rows: we see there's a pivot in the (1,1), (2,2), and (3,3) spots.
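The lecture's 4-by-5 matrix is only shown on screen, so here is the procedure sketched on a stand-in matrix of my own: row reduce, then keep the non-zero rows of the RREF as the row-space basis.

```python
from sympy import Matrix

# Stand-in 4x5 matrix (the lecture's matrix isn't reproduced in the transcript).
# Row 3 = row 1 + row 2 and row 4 = 2 * row 1, so the rank is 2.
A = Matrix([[1, 2, 0, 1, 3],
            [0, 1, 1, 2, 0],
            [1, 3, 1, 3, 3],
            [2, 4, 0, 2, 6]])

U, pivot_cols = A.rref()   # U is the RREF, pivot_cols the pivot column indices

# The non-zero rows of U (one per pivot) form a basis for row(A).
row_basis = [U.row(i) for i in range(len(pivot_cols))]
print(row_basis)
```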
So what this tells us is that to find a basis for the row space of A, we're going to take the first row of the RREF, the second row of the RREF, and then the third row of U right there. We don't take the rows of A; we take the rows of the RREF, because we don't know if rows got interchanged along the way. Things could have got mixed up. But we do know that in U, the pivots are in the first three rows, so we're going to grab those. And so this forms a basis for the row space: the row space of A is equal to the span of these three vectors right here. I deliberately wrote them horizontally, because we like to think of them as row vectors, which we often write horizontally like so. Now, if you don't like fractions (we have a 15/11, a 7/11, a 5/11, and a 6/11 in there), be aware that you can always replace a spanning vector with any non-zero scalar multiple of that same spanning vector. That won't change the span, and it won't change linear independence. The first vector right here has just ones, zeros, and negative ones; I'm good with that, so we'll keep it. But the second one has a bunch of fractions, so you can multiply the row by the least common denominator, which is 11 here. In which case, you can replace the vector (0, 1, 0, 15/11, 7/11) with (0, 11, 0, 15, 7). Those vectors have the exact same span, but you avoid some tedious fraction arithmetic if you don't want it. And then for the third row, you can also replace (0, 0, 1, 5/11, 6/11): multiply it by 11 and get (0, 0, 11, 5, 6). And so this right here gives us a basis for the row space. Therefore, if we take the span of these vectors, we get the row space of our matrix A.
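The rescaling step can be checked directly. A small sympy sketch using the second RREF row from the lecture: multiplying by 11 clears the fractions and leaves the span unchanged.

```python
from sympy import Matrix, Rational

# Second row of the lecture's RREF, and the same row scaled by 11.
v = Matrix([[0, 1, 0, Rational(15, 11), Rational(7, 11)]])
w = 11 * v   # (0, 11, 0, 15, 7): no fractions

# Same span: stacking v and w gives a rank-1 matrix, i.e. w adds nothing new.
assert Matrix.vstack(v, w).rank() == 1
```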
And that's all there is to finding a basis for the row space. You reduce the matrix to echelon form, you grab the non-zero rows, that is, the pivot rows, and those form your basis, straight from the echelon form. Then rescale them if you want whole numbers or something like that. That's all one has to do. How does one find a basis for the column space again? Remember that you still look at the pivots. The first three columns are pivot columns, so you grab the first three columns of A, and that gives you a basis for the column space, which you see right here. We grab columns one, two, and three from the matrix. I'm going to zoom out a little bit so you can see both of these on the same screen. So you grab the first three columns of A because those were the pivot columns. We actually grab vectors from A, the column vectors of A, to get the column space of A. We don't do that for the row space because, like I said, due to potential interchanges we don't actually know where the pivot rows are in A; but we do know the pivots in the RREF, and since A and the RREF have the same row space, that's what we use. One thing I should mention here is that the column space of A is not the same as the column space of the echelon form. When you row reduce a matrix, you do change the column space. You don't change the row space, though, and that's why we can grab rows from the echelon form for the row space; but for the column space, we have to grab columns from the original matrix. The good news is that while pivot rows might move around as you row reduce, pivot columns don't move. So for the column space, you use the fact that the pivot columns stay put to pick out columns of the original matrix; and for the row space, you use the invariance of the row space to get a basis from the echelon form. Now, just by contrast, let's also talk about the null space here.
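In code, the contrast looks like this (a stand-in matrix again, since the lecture's is on screen only): the pivot positions are read from the RREF, but the basis columns are taken from A itself.

```python
from sympy import Matrix

# Stand-in matrix: column 2 is twice column 1, so the pivots land
# in columns 1 and 3 (indices 0 and 2).
A = Matrix([[1, 2, 0],
            [2, 4, 1],
            [3, 6, 1]])

_, pivot_cols = A.rref()

# Basis for col(A): pivot columns taken from A, *not* from the RREF,
# because row reduction changes the column space.
col_basis = [A.col(j) for j in pivot_cols]
print(pivot_cols)   # (0, 2)
```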
This one's probably the hardest of all of these bases to find, but we can also find it from our echelon form. The thing is, we look at the non-pivot columns to find the basis for the null space. We're going to get two vectors associated to our free variables, which sit in the fourth and fifth positions. So in those last two entries we put a one and a zero, and then a zero and a one. For the first three entries, we look at the column associated to that free variable, read off the entries in the pivot rows, and take the opposite sign. So for the fourth column we take one, negative 15/11, and negative 5/11. That gives you the first vector: (1, -15/11, -5/11, 1, 0). And again, if you don't want the elevenths there, multiply everything by 11 and you get the vector (11, -15, -5, 11, 0). You can get away with that. Then for the other one, looking at the fifth column, we grab a one, a negative 7/11 (switching the sign), and a negative 6/11: that's (1, -7/11, -6/11, 0, 1). If you don't want the fractions, multiply everything by 11 and you get (11, -7, -6, 0, 11). And so we find a basis for the null space as well. So all three of these bases of the fundamental spaces can come from the RREF. From the RREF, we can extract the basis for the null space. From the RREF, we know which columns of A to grab for a basis of the column space. And for the row space, we grab the pivot rows of the RREF to get a basis. There is a fourth fundamental space, known as the left null space, which we will introduce in chapter four. Its basis is a little more technical and involves some orthogonality conditions which we have not introduced yet, which is why we'll delay that one until then.
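Since row-equivalent matrices have the same solution set for Ax = 0, the null space can be computed straight from the RREF. Here is that RREF as read off from the lecture's spoken numbers (pivots in columns 1 through 3, free variables in columns 4 and 5), with sympy reproducing the two free-variable vectors:

```python
from sympy import Matrix, Rational

# The RREF as described in the lecture: pivots in columns 1-3,
# free variables in columns 4 and 5.
U = Matrix([[1, 0, 0, -1, -1],
            [0, 1, 0, Rational(15, 11), Rational(7, 11)],
            [0, 0, 1, Rational(5, 11), Rational(6, 11)],
            [0, 0, 0, 0, 0]])

null_basis = U.nullspace()             # one basis vector per free variable
scaled = [11 * v for v in null_basis]  # clear the elevenths
print([list(v) for v in scaled])  # [[11, -15, -5, 11, 0], [11, -7, -6, 0, 11]]
```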