Welcome back, everyone, to our lecture series based upon the textbook Linear Algebra Done Openly. As usual, I am your professor, Dr. Andrew Missildine. Good to have you here today. This video lecture is going to focus on Section 4.6 from the book, entitled "The Fundamental Theorem of Linear Algebra." As the name probably suggests, it's sort of a big deal. We only like to give names to theorems we actually want to remember beyond just "Theorem 5.6.7" or what have you. And the term "fundamental" is not used that often in mathematics. We have the Fundamental Theorem of Calculus, the Fundamental Theorem of Arithmetic, the Fundamental Theorem of Algebra. The name is meant to signal a big deal, something very all-encompassing, the focus of the whole subject. Now, I wouldn't necessarily say this theorem covers everything in a typical first semester of linear algebra. Certainly there are two more chapters in the textbook, but it does summarize and encapsulate a lot of what we've done up to this point. Before we start talking about the theorem itself, let me remind you about the fundamental spaces of a matrix. We've talked about some of these already in this lecture series. So for this discussion, let A be an m-by-n matrix. It does not have to be square. It's the matrix representation of some transformation T, for example, and let's say the matrix has p pivots in it. Well, what is the null space of that matrix? As a reminder, the null space represents the set of all solutions to the homogeneous system of equations. We want all vectors x such that Ax equals the zero vector. This is our null space; null(A) is just short for null space. And the null space should be viewed as a subspace of the vector space F^n. So, as a reminder here, this transformation T is a map from F^n to F^m, where F is just some field and m and n are positive integers.
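As a quick sanity check of this definition, the null space can be computed symbolically. Here's a minimal sketch using sympy with a made-up matrix (not one from the lecture); the matrix and its dimensions are purely for illustration:

```python
from sympy import Matrix

# A hypothetical example matrix: we compute null(A) = { x : Ax = 0 } exactly.
A = Matrix([[1, 2, 1],
            [2, 4, 2]])   # the second row is twice the first

basis = A.nullspace()      # list of column vectors spanning null(A)
for v in basis:
    assert A * v == Matrix([0, 0])   # each basis vector solves Ax = 0
print(len(basis))          # dimension of the null space
```

Each vector returned corresponds to one free variable of the homogeneous system, which is exactly how we built null space bases by hand.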
Actually, zero is okay as well, so they're non-negative integers, I should say. And so the matrix representing this transformation would be an m-by-n matrix. Its null space is the set of all vectors which, multiplied on the right of the matrix, give you zero, and this is a subspace of F^n. The dimension of the null space is what we mean by the nullity of a matrix, and this is going to equal the number of non-pivot columns in the echelon form, that is, n minus p. For the null space, each free variable in the system of equations will correspond to a vector which helps generate the null space, and so we get each of the vectors in the basis by pulling them from the free variables present. This was, I think, actually the first fundamental space we talked about for matrices. The second one is the column space. The column space is the set of all vectors of the form Ax, where x is just some generic vector from F^n. As a subspace, this will be a subspace of F^m, because if you take a vector with n components and multiply it by A, that'll transform it into a vector with m components. Originally, we had defined the column space to be the span of the column vectors of the matrix A; what we have written on the screen is equivalent to that. The dimension of the column space is what we call the rank of the matrix, and it's going to equal the number of pivot columns inside the matrix. So if p is the number of pivots, the rank is p. All right. And then the third one that we had introduced before was the so-called row space. Officially speaking, the row space was defined to be the column space of A transpose, which of course is the set of all vectors of the form A transpose times y, where y is a vector in F^m. But it can be a little bit more helpful to think of it in the following way.
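Numerically, the rank p and the nullity n − p can be checked with numpy; this is a small sketch on a hypothetical 3-by-4 matrix (any m-by-n matrix works the same way):

```python
import numpy as np

# A hypothetical 3x4 matrix chosen for illustration.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])   # third row = first row + second row

m, n = A.shape
p = np.linalg.matrix_rank(A)   # number of pivots = rank = dim col(A)
nullity = n - p                # dim null(A) = number of non-pivot columns

print(p, nullity)
```

Note that the rank plus the nullity always totals n, the number of columns, which is the rank-nullity theorem.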
We're going to think of all of the row vectors y transpose times A, where again y is a generic vector from F^m. With this perspective, this is a subspace of the vector space F^n. I kind of prefer the second approach because we like to think of the row space as consisting of row vectors, as opposed to the column space, which consists of column vectors. Now, the dimension of the row space of A is what we call the co-rank of A. And similar to the rank, it's equal to the number of pivots: the dimension of the column space comes from the number of pivot columns in the echelon form, and the dimension of the row space comes from the number of pivot rows. These two things are actually equal to each other. This is not a coincidence; this is actually part of the fundamental theorem we'll talk about in a moment. The rank and the co-rank are always equal to each other. Now, I should caution you that when we talk about the transpose, we're referring to a real matrix. If we had a complex matrix, we'd have to replace this with A star, and likewise this would become y star as well. The main difference here is that we would then have to take the conjugate of all the complex numbers involved. So there's a slight distinction. We're not really going to do many examples in this lecture using complex numbers, but be aware that if one were working with the row space of a complex matrix, we do need to make sure we take conjugates of these vectors. Now, the fourth and final fundamental space, which I'm now going to reveal, is what we refer to as the left null space. This is the fundamental space that often gets ignored. Actually, in preparation for this lecture, I was looking through a lot of other linear algebra textbooks that I had access to, and many, many don't even mention the left null space whatsoever. Now, it's defined analogously to how we defined the row space.
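The claim that rank and co-rank always agree can be spot-checked numerically; here's a sketch using the three-by-three matrix from the example later in this lecture:

```python
import numpy as np

# rank(A) = dim col(A) and corank(A) = dim row(A) = rank(A^T) always agree.
A = np.array([[ 3.0, -3.0, -2.0],
              [-5.0,  4.0,  3.0],
              [ 1.0, -5.0, -2.0]])   # the lecture's example matrix

rank = np.linalg.matrix_rank(A)
corank = np.linalg.matrix_rank(A.T)
print(rank, corank)   # the two values are equal
```

Of course, one numerical check is not a proof; the equality for every matrix is exactly the content of the fundamental theorem.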
So the left null space is in fact going to be the null space of A transpose, the adjoint of A. Again, for complex matrices we would use A star, and everything else would change accordingly. Now, if you think of it as the null space of A transpose, what we're doing is looking for all vectors y such that A transpose y equals the zero vector, and this of course is naturally viewed as a subspace of F^m. Although the way we actually prefer to think of the left null space, and this is actually where it gets its name, is as the set of all vectors y such that y transpose A equals zero. So we want to think of it as the set of all row vectors which, if you multiply A on the left, give you zero, whereas the regular null space is all the column vectors which you multiply on the right of A to get zero. Hence the name left null space: you multiply on the left. And you want to think of the row space and the left null space as consisting of row vectors, not column vectors. It doesn't make much of a difference whether you write them as rows or columns, but in terms of matrix multiplication it will matter, and in terms of the products, these should be row vectors for these two fundamental spaces. By analogy, we call the dimension of the left null space the co-nullity, and we'll denote the space as lnull(A), for left null space. Just like the null space, this dimension will be computed as m minus p, where m is the number of rows in the matrix and p is the number of pivots. So we're counting the number of non-pivot rows in the matrix, and this gives us the dimension of the left null space. If you were to reduce the matrix to echelon form, the non-pivot rows are going to correspond to rows of zeros. And this actually helps us better understand what the left null space is doing.
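The co-nullity m − p, and a basis for the left null space itself, can be computed numerically. One common trick (not from the lecture, just a sketch) pulls a basis from the left singular vectors of A belonging to zero singular values, which is the same as computing the null space of A transpose:

```python
import numpy as np

A = np.array([[ 3.0, -3.0, -2.0],
              [-5.0,  4.0,  3.0],
              [ 1.0, -5.0, -2.0]])   # the lecture's example matrix

m, n = A.shape
p = np.linalg.matrix_rank(A)
co_nullity = m - p                 # number of non-pivot (zero) rows in echelon form

U, s, Vt = np.linalg.svd(A)
lnull_basis = U[:, p:]             # each column y satisfies y^T A = 0 (numerically)

print(co_nullity)
print(np.allclose(lnull_basis.T @ A, 0))
```

For this matrix the co-nullity is one, so the left null space is a line in F^3; the row vector (7, 4, −1) that appears later in the lecture spans it.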
The left null space is essentially measuring how much the column vectors of A fail to span the vector space F^m, much in the same way that the null space measures how much the column vectors fail to be linearly independent. Let me explain that a little bit more. We've seen that for a matrix A, the columns of A are independent if and only if the null space is trivial. We actually used that to help us understand when a linear transformation is one-to-one. Come down here for a moment. So we've seen that the transformation T is one-to-one if and only if its kernel, the kernel of T, which is none other than the null space of its standard matrix representation, is the trivial vector space {0}. And in fact, this will happen if and only if the columns of A are linearly independent. So this idea of linear independence of columns coincides with injectivity of a linear transformation, and they're connected by the triviality of the null space. In more generality, the dimension of the null space of the matrix A, its nullity, measures the size of the null space, but it also counts the number of free variables in the linear system Ax = b, as I mentioned before. This is, of course, the number of non-pivot columns in the echelon form. So let's talk about the left null space for a moment, then. Similarly, the columns of A span F^m if and only if the left null space is trivial. If we write this down: the transformation T is in fact onto if and only if the so-called cokernel of T is trivial. The cokernel is really just the left null space; these are all the same thing. It's the left null space for the matrix, and it's the cokernel of the transformation. When this thing is trivial, the transformation is onto, which happens exactly when the columns of A span F^m.
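These two characterizations, injective when the null space is trivial and surjective when the left null space is trivial, reduce to simple rank checks. Here's a minimal sketch (the helper names are my own, not from the lecture):

```python
import numpy as np

def is_injective(A):
    # T(x) = Ax is one-to-one iff null(A) = {0}, i.e. the rank equals the
    # number of columns (no free variables).
    return np.linalg.matrix_rank(A) == A.shape[1]

def is_surjective(A):
    # T(x) = Ax is onto iff lnull(A) = {0}, i.e. the rank equals the
    # number of rows (no zero rows in echelon form).
    return np.linalg.matrix_rank(A) == A.shape[0]

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])   # a 3x2 matrix with independent columns
print(is_injective(A), is_surjective(A))   # injective but not onto
```

The example is a tall matrix: two independent columns cannot span F^3, so the map is one-to-one but not onto, exactly as the dimension counts predict.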
So the null space measures the linear independence of the columns of A; the left null space measures the spanning capacity of the columns of A. And the dimension of the left null space of A, its so-called co-nullity, measures the size of the left null space, but also counts the number of rows of zeros in the reduced echelon form. This, of course, is just m minus p, like we mentioned before. The presence of a row of zeros in the echelon form of A allows for the possibility of inconsistency in the system of equations Ax = b, and it's dependent upon the choice of b. If the echelon form of A has a row of zeros, then there might be some choices of b that make Ax = b consistent, and there might be other choices that make it inconsistent. Another way I like to think of the left null space is that it is essentially the subspace of F^m of those vectors b which, apart from the zero vector, definitely make Ax = b inconsistent. So let me give you an example to explain what's going on there. Consider the following three-by-three matrix A: first row three, negative three, negative two; second row negative five, four, three; and third row one, negative five, negative two. I want you to consider the row vector seven, four, negative one. If we multiply these two together, thinking of the row vector as a one-by-three matrix, by the usual dot product multiplication between these matrices, we get for the first position seven times three, which is 21, minus four times five, which is 20, minus one. That's the first entry, and if we simplify it right now, that gives us zero. Take the row vector times the second column: we get negative 21 plus 16 plus five. Five and 16 is 21, minus 21 gives us another zero. And then lastly, seven times negative two is negative 14, four times three is 12, and negative one times negative two is plus two. Two and 12 is 14, minus 14 gives us zero.
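The computation just performed by hand can be reproduced in a couple of lines of numpy, treating the one-dimensional array as a row vector:

```python
import numpy as np

# The lecture's example: the row vector y times A is the zero vector,
# which shows y lies in the left null space of A.
A = np.array([[ 3, -3, -2],
              [-5,  4,  3],
              [ 1, -5, -2]])
y = np.array([7, 4, -1])   # treated as a row vector

print(y @ A)               # the zero vector
```

Each entry of y @ A is exactly one of the three dot products worked out above.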
And so we see that the product of this row vector, which we'll call y, by the matrix is zero. Therefore, the vector y lives inside the left null space of the matrix A; that's what it means to be in the left null space. Now, in terms of inconsistency, look at the system of equations you see in front of us with the augmented matrix. This is the system Ax = y, where A is the same matrix as before. If we try to row reduce this thing, let's go through the steps. I see there's a one at the bottom, so I'm going to interchange the rows and put that one on the top. So we have one, negative five, negative two, negative one. The next row is unchanged: negative five, four, three, and four. And the last row is the old first row: three, negative three, negative two, and seven. So now we have our pivot in the one-one position. To zero everything out below, I'm going to take row two and add to it five times row one, and take row three and subtract from it three times row one. For row two, that adds plus five, minus twenty, minus ten, and then minus five right there. For the third row, subtracting three times row one, we get minus three, plus fifteen, plus six, and plus three. We didn't do anything to the first row, so leave it alone: one, negative five, negative two, negative one. For the second row, we get zero, negative sixteen, negative seven, and negative one. And for the third row, we get zero, twelve, four, and ten. All right, so let's see what we can do now. The next thing to do will be to look at the next pivot right there. Oh, I made a mistake there, I'm sorry; my spider sense was tingling, something was going wrong here. For the second row, we need to multiply everything in row one by five, and five times negative five is negative twenty-five.
So that changes things: this isn't negative sixteen, it should be negative twenty-five plus four, so it's negative twenty-one, my bad, right there. What I want to do next is scale the second row. I see the twenty-one and the seven, so I'm going to divide everything by negative seven. And for the third row, I want to do something similar, but dividing by four. If we do that, the first row stays the same. For the second row, dividing everything by negative seven, we get zero, three, one, and one-seventh. And for the third row, dividing zero, twelve, four, ten by four, we get zero, three, one, and five-halves. You see the issue that's now approaching us here, right? Take row three and subtract from it row two; let's do that in red here. You're going to see that there's some inconsistency going on. Copying down the first and second rows, we're not doing anything to them this time, so we keep zero, three, one, and one-seventh. But when we subtract the second row from the third row, we get zero, zero, zero, and then five-halves minus one-seventh, which gives us thirty-three fourteenths. Whatever it is, it's not zero. And so this tells us that Ax = y is in fact inconsistent. This is what I meant earlier: nonzero vectors in the left null space are guaranteed to produce inconsistency. If you take any nonzero y in the left null space, Ax = y will always, always, always be inconsistent.
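The row reduction above can be summarized by a standard rank test: Ax = y is consistent if and only if the rank of A equals the rank of the augmented matrix [A | y]. A quick numerical sketch of the lecture's system:

```python
import numpy as np

# For y in the left null space (and nonzero), the augmented matrix picks up
# an extra pivot, so the ranks disagree and the system is inconsistent.
A = np.array([[ 3.0, -3.0, -2.0],
              [-5.0,  4.0,  3.0],
              [ 1.0, -5.0, -2.0]])
y = np.array([7.0, 4.0, -1.0])

rank_A = np.linalg.matrix_rank(A)
rank_aug = np.linalg.matrix_rank(np.column_stack([A, y]))
print(rank_A, rank_aug)    # unequal ranks mean Ax = y has no solution
```

The extra pivot in the augmented matrix is exactly the nonzero entry, thirty-three fourteenths, that appeared at the end of the zero row in the hand computation.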