Okay. So we'll begin. Last time we looked at the fundamental subspaces and started discussing the rank. Today we will finish the discussion of the rank of a matrix and then move on to the inner product and the Gram-Schmidt orthogonalization process.

So let's continue with the rank. The basic definition is that the rank of a matrix A is the dimension of the range space of A, where the dimension of a vector space is the number of vectors in a basis for it. The range space of A is the span of the columns of A, which is a vector space, and the dimension of that space is the rank of the matrix. Last time we saw that the rank equals the number of linearly independent columns of A. And there is a remarkable fact: the rank of A equals the rank of A transpose, that is, row rank equals column rank.

We also said that a linear system of equations Ax = b can have no solution, exactly one solution, or infinitely many solutions. It has at least one solution if the rank of the augmented matrix [A | b] equals the rank of A; if the rank of [A | b] is greater than the rank of A, then there is no solution.

Then we said that it is possible to find the row reduced echelon form of a matrix by performing row operations. There are three elementary row operations: the first is to exchange a pair of rows; the second is to multiply a row by a nonzero scalar; the third is to add a scalar multiple of one row to another row. None of these operations changes the rank. Therefore, if you can read off the rank of the row reduced echelon form, that tells you the rank of the original matrix. And the row reduced echelon form is arranged so that each of the first few rows has a leading nonzero (pivot) entry, and all the remaining rows are zero.
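The solvability criterion above, rank([A | b]) = rank(A), can be checked numerically. Here is a small sketch in Python with NumPy; the lecture itself does not use code, and the matrices below are made up purely for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # rank 1: the second row is twice the first
b_good = np.array([3.0, 6.0])        # lies in the range of A
b_bad = np.array([3.0, 7.0])         # does not lie in the range of A

def has_solution(A, b):
    """Ax = b has at least one solution iff rank([A | b]) == rank(A)."""
    augmented = np.column_stack([A, b])
    return np.linalg.matrix_rank(augmented) == np.linalg.matrix_rank(A)

print(has_solution(A, b_good))   # True: appending b does not raise the rank
print(has_solution(A, b_bad))    # False: rank([A | b]) = 2 > rank(A) = 1
```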
And the number of nonzero rows in the row reduced echelon form is the rank of the matrix. It's important to remember this correctly: I often find students saying that the rank is the number of nonzero elements in the row reduced echelon form. That is incorrect. It is the number of nonzero rows in the row reduced echelon form, okay?

So those were the properties we saw last time. Now, I did not actually walk you through how to find the row reduced echelon form of a matrix, but I assume this is something you have seen in your undergraduate linear algebra. If you've forgotten how to find it, you should look it up and practice on one or two matrices to make sure you know how to do it.

So we'll continue with these properties. The next property is that if the rank of A is r, then exactly r columns of A are linearly independent and exactly r rows of A are linearly independent. Also, there is an r x r submatrix of A which has a nonzero determinant. There are two keywords I've dropped here: one is submatrix and the other is determinant. A submatrix of a matrix is obtained by picking r rows and r columns of A; the elements at the intersections of those rows and columns give you an r x r submatrix. For instance, take the 3 x 4 matrix with rows (1, 2, 7, 6), (2, 3, 8, 9), (7, 2, 1, 4). If I take rows 2 and 3 and columns 3 and 4, I get the 2 x 2 submatrix with rows (8, 9) and (1, 4). So there is an r x r submatrix of A with nonzero determinant, and the determinant is something I've not defined yet.
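To make the "number of nonzero rows of the RREF" rule concrete, here is a rough sketch of row reduction in Python with NumPy, applied to the 3 x 4 example matrix above. The `rref` helper is my own illustrative implementation of the three elementary row operations, not a library routine:

```python
import numpy as np

def rref(A, tol=1e-10):
    """Reduce A to row reduced echelon form using the three elementary
    row operations: swap, scale by a nonzero scalar, add a multiple."""
    R = A.astype(float).copy()
    m, n = R.shape
    pivot_row = 0
    for col in range(n):
        if pivot_row >= m:
            break
        # Pick the row with the largest entry in this column (partial pivoting).
        p = pivot_row + np.argmax(np.abs(R[pivot_row:, col]))
        if abs(R[p, col]) < tol:
            continue                              # no pivot in this column
        R[[pivot_row, p]] = R[[p, pivot_row]]     # operation 1: swap rows
        R[pivot_row] /= R[pivot_row, col]         # operation 2: scale to pivot 1
        for r in range(m):                        # operation 3: eliminate others
            if r != pivot_row:
                R[r] -= R[r, col] * R[pivot_row]
        pivot_row += 1
    return R

A = np.array([[1, 2, 7, 6],
              [2, 3, 8, 9],
              [7, 2, 1, 4]])
R = rref(A)
# Rank = number of nonzero ROWS of the RREF (not nonzero entries).
rank = int(np.sum(np.any(np.abs(R) > 1e-10, axis=1)))
print(rank)                                   # agrees with np.linalg.matrix_rank(A)

# The 2 x 2 submatrix from rows 2, 3 and columns 3, 4 (1-based):
S = A[np.ix_([1, 2], [2, 3])]                 # [[8, 9], [1, 4]]
print(np.linalg.det(S))                       # nonzero determinant
```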
You might remember the determinant from your undergraduate program, but we will also study it in more detail later in the course. More importantly, all (r + 1) x (r + 1) submatrices have zero determinant.

Another obvious property is that the rank cannot increase when you delete rows or columns, and similarly the rank cannot decrease when you add rows or columns. This is because when you add a row or a column, you can only enlarge (or leave unchanged) the span of the columns of the matrix, so the rank cannot decrease.

The next question is: what happens to the rank when you add or multiply matrices? In general, you cannot give a universal answer for the rank of a sum or a product of two matrices, but you can give some inequalities. For example, if A is in R^(m x k) and B is in R^(k x n), so that the product AB is well defined, then we have

rank(A) + rank(B) - k <= rank(AB) <= min(rank(A), rank(B)).

In other words, you cannot increase the rank of A by post-multiplying it by some other matrix B: the rank of the product is at most rank(A). Similarly, you cannot increase the rank of B by pre-multiplying it by another matrix A: the rank of the product is at most rank(B). One way to see this is that if Bx = 0, then obviously ABx = 0 as well, so any vector which lies in the null space of B also lies in the null space of AB. Hence the dimension of the null space of AB is at least the dimension of the null space of B. Now remember the rank-nullity theorem: the dimension of the null space of AB plus the rank of AB equals n, and the same holds for B. So that means rank(AB) <= rank(B).
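The two bounds on rank(AB) can be checked numerically. A minimal sketch with NumPy, using random matrices chosen only for illustration (a Gaussian matrix has full rank with probability 1, and the factored matrix B_low is constructed to have rank 2):

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, n = 5, 4, 6
A = rng.standard_normal((m, k))                 # rank 4 with probability 1
B_full = rng.standard_normal((k, n))            # rank 4 with probability 1
B_low = rng.standard_normal((k, 2)) @ rng.standard_normal((2, n))   # rank 2

for B in (B_full, B_low):
    rA = np.linalg.matrix_rank(A)
    rB = np.linalg.matrix_rank(B)
    rAB = np.linalg.matrix_rank(A @ B)
    # Lower bound (Sylvester) and upper bound (min of the two ranks):
    print(rA + rB - k <= rAB <= min(rA, rB))    # True in both cases
```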
Similarly, you can make an argument in terms of the rows: if y-transpose A = 0, then y-transpose AB = 0, and so it goes.

Another well-known inequality concerns adding matrices. It says that

|rank(A) - rank(B)| <= rank(A + B) <= rank(A) + rank(B).

One way to get some intuition into these inequalities is to try to construct simple matrices for which each inequality is satisfied with equality. For example, the upper bound, rank(A + B) = rank(A) + rank(B), holds with equality if and only if the range space of A intersects the range space of B only in the zero vector, and the range space of A-transpose (the span of the rows of A) intersects the range space of B-transpose only in the zero vector.

Okay, there's one other small remark I want to make. This inequality here, rank(A + B) <= rank(A) + rank(B), I'll draw a star next to it. This is called the subadditivity property of the rank, and a consequence of it is that any rank-K matrix can be written as the sum of K rank-one matrices, but not fewer. So you can't write a rank-K matrix as the sum of fewer than K rank-one matrices.

As I was about to say, the Sylvester inequality for products, the lower bound rank(A) + rank(B) - k <= rank(AB) from before, is a special case of a more general inequality called the Frobenius inequality. It says that if A is in R^(m x k), B is in R^(k x p), and C is in R^(p x n), then

rank(AB) + rank(BC) <= rank(B) + rank(ABC),

with equality if and only if there exist matrices X and Y of appropriate dimensions such that B can be written as B = BCX + YAB. I'm just stating these inequalities.
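Both bounds for the sum, and the rank-K-as-sum-of-K-rank-one-matrices consequence, can be illustrated numerically. Note the sketch below builds the rank-one pieces from the singular value decomposition, which the lecture has not introduced yet; it is used here only as a convenient NumPy tool, and the matrices are random, for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 6))   # rank 3, 5 x 6
B = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 6))   # rank 2, 5 x 6

rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
rSum = np.linalg.matrix_rank(A + B)
print(abs(rA - rB) <= rSum <= rA + rB)   # True: both bounds hold

# Consequence of subadditivity: a rank-K matrix is the sum of K rank-one
# matrices (one outer product per nonzero singular value), and no fewer.
U, s, Vt = np.linalg.svd(A)
K = rA
pieces = [s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(K)]
print(np.allclose(A, sum(pieces)))       # True: K rank-one pieces recover A
```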
I'm not yet sure, in fact, whether we'll use them or not, but these are some basic rank inequalities that exist, and it's just good to know them. I'm not proving them, because that would detract from getting to the core material of this course; for now I just want to state some of these basic known results about the rank. Specifically, I've highlighted two results, one to do with the product of matrices and the other to do with the sum of matrices, and then the more general result called the Frobenius inequality, which involves three matrices.

Let me maybe do the following, since there is also a notational point here. The inequality for the sum doesn't have a standard name, so I'll just call it the rank-of-a-sum inequality; there, A and B are both m x n matrices. The inequality for the product is the one called the Sylvester inequality; there, A is of size m x k and B is of size k x n, since only then is AB defined.

Just one or two more properties. One is that the rank of A is unchanged by left or right multiplication by a full-rank matrix: you can neither decrease nor increase the rank this way. Of course you cannot increase it, since we've already seen that rank(AB) <= min(rank(A), rank(B)); the point is that you also cannot decrease it by left or right multiplication by a full-rank matrix.

[Student:] Sir, does full rank mean that the number of rows equals the number of columns equals the rank?

No, so let me clarify it here; I actually said this in the previous class. A in R^(m x n) is full rank if rank(A) = min(m, n); if rank(A) < min(m, n), then A is said to be rank deficient.

Okay, the other point I want to make, which is something I already alluded to when I talked about subadditivity, is that any A belonging to R^(m x n) of rank 1 can be written as A = x y-transpose, where x is in R^m and y is in R^n.
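A quick numerical check of the invariance of rank under multiplication by a full-rank matrix. The sketch uses square Gaussian matrices, which are invertible (hence full rank) with probability 1; the sizes are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))   # rank 2, 4 x 5
P = rng.standard_normal((4, 4))   # square Gaussian: full rank w.p. 1
Q = rng.standard_normal((5, 5))   # likewise full rank

print(np.linalg.matrix_rank(A),
      np.linalg.matrix_rank(P @ A),    # left multiplication: rank preserved
      np.linalg.matrix_rank(A @ Q))    # right multiplication: rank preserved
```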
Related to this, note that if I take x in R^m and y in R^n and construct the matrix x y-transpose, then as long as x and y are nonzero, x y-transpose is always of rank 1, no matter which x and which y I take. Okay, so one way to see this is that x is a column vector, and multiplying by y-transpose just repeats the column x, in fact n times, each time scaled by the corresponding coefficient of y, and puts these columns together as a matrix. So all the columns of x y-transpose are scalar multiples of x, and there is only one linearly independent column.
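The outer-product picture is easy to see numerically; a tiny sketch, with x and y chosen arbitrarily for illustration:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])    # x in R^3, nonzero
y = np.array([4.0, 5.0])         # y in R^2, nonzero
A = np.outer(x, y)               # A = x y^T: each column is a scaled copy of x
print(A)
# [[ 4.  5.]
#  [ 8. 10.]
#  [12. 15.]]
print(np.linalg.matrix_rank(A))  # 1: only one linearly independent column
```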