So this is the fundamental theorem of linear algebra. Okay, so R(A) is the column space of A, and it has dimension equal to r. Again, A is m by n. The null space of A, N(A), has dimension n - r. The column space of A^T, also known as the row space of A, has dimension r. The fourth subspace is the null space of A^T, the left null space of A, and it has dimension m - r. The fifth point, which ties together the statements I already mentioned above, is this: N(A) is the orthogonal complement of R(A^T) in R^n, and R(A) is the orthogonal complement of N(A^T) in R^m. This is called the fundamental theorem of linear algebra. Okay, so the next thing I want to discuss is the rank.

Sir, can you please summarize what each statement of this fundamental theorem means, just at a glance?

Yeah, see, the first four statements are mostly definitions. R(A) is defined to be the column space of A, and you can think of the statement "the dimension of the column space of A equals r" as saying that there are r linearly independent vectors among the columns of A. The null space of A is the set of vectors that map to zero, and its dimension is n - r. That happens because the null space of A is the orthogonal complement of R(A^T), so these two dimensions together must always add up to n, the total dimension of the space R^n of which these two are orthogonal complements. The row space R(A^T) is essentially the range space of A^T, which means you've exchanged the rows and columns, and the point, which I put in red, is that R(A^T) and R(A) always have the same dimension. This is another very important point, one we'll actually show later on, and it is in no way intuitively obvious to me, but regardless of which matrix you pick, the row space and
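These dimension counts are easy to sanity-check numerically. Below is a minimal sketch assuming NumPy is available; the particular 3 x 5 matrix is my own illustrative choice, not one from the lecture.

```python
import numpy as np

# Illustrative 3 x 5 matrix of rank r = 2 (my own choice, not from the lecture):
A = np.array([[1., 2., 3., 4., 5.],
              [2., 4., 6., 8., 10.],   # = 2 * row 1, adds no new direction
              [0., 1., 0., 1., 0.]])   # independent of row 1
m, n = A.shape                         # m = 3, n = 5

r = np.linalg.matrix_rank(A)           # dim R(A) = r

# Row rank equals column rank: rank(A) = rank(A^T).
assert r == np.linalg.matrix_rank(A.T)

# The four fundamental dimensions are r, n - r, r, and m - r.
# Check dim N(A) = n - r independently by counting non-zero singular values:
num_nonzero = int(np.sum(np.linalg.svd(A, compute_uv=False) > 1e-10))
assert n - num_nonzero == n - r        # dim N(A) = n - r
print("r =", r, "dim N(A) =", n - r, "dim N(A^T) =", m - r)
```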
the column space always have the same dimension; the row rank and the column rank are always equal. What the next statement says is that N(A^T) is the left null space: it's the null space defined on A^T. And again, because R(A) is the orthogonal complement of N(A^T), the dimension of the left null space and the dimension of R(A) must add up to m. So if the dimension of R(A) is r, the dimension of N(A^T) must be m - r. These last two statements again come from the definitions: N(A) is the set of all vectors orthogonal to the rows of A, and therefore it is the orthogonal complement of R(A^T), and R(A) is the orthogonal complement of N(A^T).

Sir, can you take a simple example, say the two by three matrix one, two, three, four, five, six, and tell us what all these subspaces are for it?

That's actually not a good example to dwell on, I don't want too many numerical examples, but since you requested it, let me quickly look at this particular matrix. You can see that two of the columns of this matrix are linearly independent, and since the columns sit in two dimensions, the third column will necessarily be linearly dependent on those two. So for this 2 by 3 matrix, R(A), the column space of A, is actually equal to all of R^2, because two of the columns already span R^2. And the null space of A, the set of all vectors that map to zero, has dimension n - r, which is 3 - 2 = 1; you can find a basis vector for it, but you have to actually work it out.

Okay sir, I got it. I think I got it.

Yeah, it has dimension one, so you can find a basis. I'll be doing a couple of these problems in the problem session, so it'll be more obvious tomorrow. So similarly,
yeah, just for the sake of completeness: R(A^T). If I take A^T, its columns are the vectors (1, 3, 5) and (2, 4, 6), and R(A^T) is the span of these. They are linearly independent, so R(A^T) has dimension two and these two vectors are a basis; you can take them directly as a basis for R(A^T). And N(A^T) has dimension zero: if I look for coefficients such that alpha times (1, 3, 5) plus beta times (2, 4, 6) equals zero, this is only possible if alpha = beta = 0.

Yes sir, fine, I got it.

Okay, so the next thing I want to talk about, there's not much time left but at least I can put down the definition, is the rank. The rank of a matrix, which we already defined, is the dimension of the range space of the matrix, and it's equal to the number of linearly independent columns of A. The remarkable fact that I mentioned earlier is that rank(A) = rank(A^T); this is what we often refer to as "row rank equals column rank". Related to the rank is the following property: the system of linear equations Ax = b can have exactly one solution, no solution, or infinitely many solutions. These are the only three possibilities; it can never have, say, exactly two solutions. It has at least one solution if the rank of the augmented matrix [A | b] equals the rank of A. When this happens it means that b is in the column space of A, because appending b does not change the rank, and so there is at least one solution. It has no solution if rank([A | b]) is greater than rank(A). How do we find the rank of A? Through what are called elementary row operations. I won't go through these in detail; I'm assuming that you've seen how to compute the rank of a matrix in your undergraduate program, so you know how to do this. But maybe in the homeworks I
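The 2 by 3 example above can be verified in a few lines. This is a sketch assuming NumPy; the null-space basis vector (1, -2, 1) is worked out by hand, since the lecture leaves it as an exercise.

```python
import numpy as np

# The 2 x 3 example from the discussion, read column by column:
# A has rows (1, 3, 5) and (2, 4, 6).
A = np.array([[1., 3., 5.],
              [2., 4., 6.]])
m, n = A.shape                              # m = 2, n = 3

r = np.linalg.matrix_rank(A)                # two independent columns span R^2
assert r == 2

# dim N(A) = n - r = 1; one basis vector is x = (1, -2, 1):
x = np.array([1., -2., 1.])
assert np.allclose(A @ x, 0.0)              # A x = 0

# dim N(A^T) = m - r = 0: alpha*(1,3,5) + beta*(2,4,6) = 0 forces
# alpha = beta = 0, i.e. A^T has full column rank, so N(A^T) = {0}.
assert np.linalg.matrix_rank(A.T) == 2
print("rank =", r, "dim N(A) =", n - r, "dim N(A^T) =", m - r)
```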
will give you a couple of matrices where you can go through the motions of computing the rank by row reduction, just to refresh your memory on how it is done. These elementary row operations have the property that they preserve the rank; they don't change it. That's why, after you reduce the matrix, the result tells you the rank of the original matrix: none of the operations you did on the matrix changed its rank. So what are these elementary row operations? You can exchange rows. You can scale a row by a non-zero scalar: if you multiply a row by zero you may change the rank, so you're not allowed to multiply by zero, but you're allowed to multiply by any non-zero scalar. And you can add a scalar multiple of a row to another row. These three elementary row operations result in what is known as the RREF, the row reduced echelon form.

Sir, in the first statement, rank([A | b]) = rank(A), it means that even though you're adding one more vector to A, the dimension is ultimately not getting changed. So does it mean that whatever vector b you have added is dependent on the vectors in A?

Say that again?

When you make the augmented matrix [A | b], you're adding the column vector b to A. So when the rank stays equal to rank(A), it means the dimension of the column space is not changing.

Yes.

So that means the vector b is dependent on one of the vectors of A?

Yeah, it is not linearly independent of the columns of A. If b were linearly independent of the columns of A, then appending b would definitely increase the rank by one, and that is the no-solution case: you're adding one extra dimension after appending b.

Yes, yes. So in that case, because the rank
increased, it means that the point b cannot be reached by taking linear combinations of the columns of A, and that's the reason Ax = b will have no solution. So this row reduced echelon form reveals the rank: the rank is equal to the number of non-zero rows in the RREF. Okay, we're out of time; in the next class we will discuss this rank and related properties further. That concludes what I wanted to say today. Are there any other questions?

Sir, you said a matrix is a linear transform, and in any linear transform we are mainly interested in the range of the transform, which here is the column space of A. So what additional information does the left null space of A give that makes us interested in finding it?

It's a good question. The point is that, of the four subspaces, the null space of a matrix is the orthogonal complement of the row space of the matrix, and the column space of the matrix is the orthogonal complement of the left null space of the matrix. So in some sense, if you know what N(A) is, you completely know what R(A^T) is; knowing N(A), there is no real additional information you get by also knowing R(A^T), because they are orthogonal complements of each other. But making these connections is the core of how you look at mathematics: you try to ask what the relationships between these objects are. The way to think about it is that you start with a matrix, you can define these four subspaces, and you ask whether they are related in some way, and you find that these are the relationships between them. And then you realize that if I know the null space of a matrix, then I already know exactly what the row space of the matrix is, because it's just the orthogonal complement. So yeah, it's a long-winded answer to your question, but it's just the connection between these subspaces.
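The row-reduction procedure described above can be sketched in code. This is a minimal illustration assuming NumPy; the helper names rref and rank_via_rref, and the example matrices, are my own choices, not from the lecture. It reduces a matrix using the three elementary row operations, reads off the rank as the number of non-zero rows, and checks the solvability criterion on an augmented matrix.

```python
import numpy as np

def rref(M, tol=1e-10):
    """Row reduced echelon form via the three elementary row operations:
    row exchange, scaling by a non-zero scalar, and adding a multiple
    of one row to another. None of these changes the rank."""
    R = M.astype(float).copy()
    rows, cols = R.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Row exchange: bring the largest entry in this column up to the pivot row.
        p = pivot_row + int(np.argmax(np.abs(R[pivot_row:, col])))
        if abs(R[p, col]) < tol:
            continue                       # no pivot in this column
        R[[pivot_row, p]] = R[[p, pivot_row]]
        R[pivot_row] /= R[pivot_row, col]  # scale by a non-zero scalar
        for i in range(rows):              # eliminate above and below the pivot
            if i != pivot_row:
                R[i] -= R[i, col] * R[pivot_row]
        pivot_row += 1
    return R

def rank_via_rref(M):
    # The rank is the number of non-zero rows in the RREF.
    return int(np.sum(np.any(np.abs(rref(M)) > 1e-10, axis=1)))

A = np.array([[1., 3., 5.],
              [2., 4., 6.]])
assert rank_via_rref(A) == 2               # matches dim R(A) in the example

# Solvability of Ax = b: at least one solution iff rank([A|b]) == rank(A).
B = np.array([[1., 2.],
              [2., 4.]])                   # rank 1: second row = 2 * first row
b = np.array([[1.], [0.]])                 # b is not in the column space of B
assert rank_via_rref(B) == 1
assert rank_via_rref(np.hstack([B, b])) == 2   # rank increased => no solution
```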