We saw yesterday that if V is a finite dimensional vector space and T from V to W is an isomorphism, then W is also finite dimensional, and in fact dim W = dim V. Let us look at the converse of this result: if I have two finite dimensional vector spaces whose dimensions are the same, are they isomorphic? The answer is yes, and that is what I will prove today, together with some consequences of the rank nullity dimension theorem, where there is a little unfinished business. So this is what we will first prove today: if V and W are finite dimensional vector spaces with the same dimension, then V is isomorphic to W.

You will see that the proof is natural: let us take two bases. Let u_1, u_2, ..., u_n be a basis of V. I know that dim W is also n, so I can enumerate a basis w_1, w_2, ..., w_n of W. The number of elements in these two bases is the same, because dim V = dim W. Now, we know that there is a unique linear transformation mapping each u_i to the corresponding w_i. So let us define the mapping T from V into W by T(u_i) = w_i; there is one such linear transformation, and it is unique. The only thing we need to do is verify that this T is invertible, that is, injective and surjective: we want to show V is isomorphic to W, so we must show T is invertible. By the rank nullity dimension theorem it is enough to show that T is injective; the dimensions are the same, so T must then be surjective as well. To show that T is injective, we show that the null space of T is the singleton {0}; we have seen that this is an equivalent condition.

So let x be an arbitrary element of the null space of T, so that T(x) = 0. This x belongs to V, and V has u_1, u_2, ..., u_n as a basis, so x is some linear combination alpha_1 u_1 + alpha_2 u_2 + ... + alpha_n u_n. Since T is linear, T(x) = alpha_1 T(u_1) + ... + alpha_n T(u_n). But T(u_1) = w_1, ..., T(u_n) = w_n, so T(x) = alpha_1 w_1 + ... + alpha_n w_n. So I have the linear combination alpha_1 w_1 + ... + alpha_n w_n equal to 0. I also know that the vectors w_1, ..., w_n form a basis of W, so in particular they are linearly independent; hence alpha_1 = alpha_2 = ... = alpha_n = 0. But what is x? x is alpha_1 u_1 + ... + alpha_n u_n, so x is the zero vector. Thus T(x) = 0 implies x = 0, and since T is a linear transformation, T is injective. By the rank nullity dimension theorem T is surjective, so T is invertible; T is an isomorphism, and hence V is isomorphic to W. So if the dimensions coincide, the spaces must be isomorphic.

There is an easy corollary to this result: let V be a real vector space of dimension n; then V is isomorphic to R^n. We can similarly show that if V is a complex vector space of dimension n, then V is isomorphic to C^n. Remember that the dimension of R^n over R is n; that is what I mean here: the real vector space R^n is the vector space over R. Look at the vector space C^n: C^n over C also has dimension n, but C^n over R is also a vector space, and that one has dimension 2n, because any complex number is an ordered pair of real numbers; to represent a complex number you need two real numbers. So if V is a complex vector space of dimension n, it is isomorphic to C^n; if you want to write it in terms of a real space, it is R^(2n) that you need.
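As a small computational sketch (my own illustration, not part of the lecture), here is how the map T with T(u_i) = w_i can be written down as a matrix when V = W = R^2, with two hypothetical bases chosen just for the example:

```python
import numpy as np

# Columns of U form a (hypothetical) basis u_1, u_2 of R^2;
# columns of Wm form a basis w_1, w_2 of the same space.
U = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Wm = np.array([[2.0, 0.0],
               [1.0, 1.0]])

# The unique linear map with T(u_i) = w_i has matrix T = Wm U^(-1):
# U^(-1) converts a vector to its coordinates in the u-basis, and Wm
# then forms the corresponding combination of the w_i.
T = Wm @ np.linalg.inv(U)

for i in range(2):
    assert np.allclose(T @ U[:, i], Wm[:, i])  # T(u_i) = w_i

# T is an isomorphism: its determinant is nonzero, so it is invertible.
print(abs(np.linalg.det(T)) > 1e-12)  # prints: True
```

The same recipe works in R^n for any two bases, since a matrix whose columns form a basis is always invertible.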
So much to summarize about isomorphisms between vector spaces. As I said, I want to go back to the rank nullity dimension theorem and look at some consequences. Please recall this result, which you must have learnt when you studied functions: suppose X is a finite set; then a function f from X to itself is injective (one-one) if and only if f is surjective, if and only if f is bijective. I hope you have seen this result. Over infinite sets this is not true; over finite sets a function is injective if and only if it is surjective, if and only if it is bijective. So over finite sets it is enough to verify that a function is either surjective or injective; it then follows that it is bijective, that is, invertible. A similar result holds, as a consequence of the rank nullity dimension theorem, for a linear transformation on a finite dimensional vector space. It is really a particular case of the rank nullity dimension theorem, which means I will not prove it, but I want you to compare the following result with the result I have just written down. This theorem is a consequence of the rank nullity dimension theorem, and the proof is left as an exercise.
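The finite-set statement is easy to check exhaustively for a small set. Here is a quick illustration of my own, enumerating every function from a three-element set to itself:

```python
from itertools import product

X = [0, 1, 2]

# A function f: X -> X is determined by the tuple (f(0), f(1), f(2)),
# so product(X, repeat=3) enumerates all 27 such functions.
for images in product(X, repeat=len(X)):
    injective = len(set(images)) == len(X)   # all values distinct
    surjective = set(images) == set(X)       # every element is hit
    assert injective == surjective           # equivalent on a finite set

print("checked", len(X) ** len(X), "functions")  # prints: checked 27 functions
```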
Let T from V to V be linear, with V finite dimensional; such a T, a linear transformation from V to itself, is called an operator on V. Then the following statements on T are equivalent: (A) T is injective; (B) T is surjective. I could include more statements, for instance that the null space of T is the singleton {0}, or that the range of T is the whole of V; these are all equivalent, an easy consequence of the rank nullity dimension theorem. For instance, look at (A) implies (B), or rather (A) if and only if (B): T is injective if and only if the null space of T is {0}, which by the rank nullity dimension theorem tells us that the rank of T is n. But the rank of T is the dimension of the range space of T; this range is a subspace of V, V has dimension n, and the subspace has dimension n, so it must be the entire V. So the range of T equals V, and T is surjective, and so on. There is nothing new in this result; it is only the particular case of the rank nullity dimension theorem in which I have taken V in place of W. It also allows us to compare with the result stated above for a function on a finite set, where injectivity is equivalent to surjectivity, which is equivalent to bijectivity.

But what happens in the infinite dimensional case? In the infinite dimensional case this result is not true, and that is the reason for the restriction. So let us look at an example; I want to give an example of a linear operator which is surjective but not injective. Consider V, the real vector space of all polynomials with real coefficients in the variable t, where t is a real variable. This is an infinite dimensional vector space. For instance, V is a subspace of C[0, 1], the space of continuous functions, which we have shown is infinite dimensional. Of course, being a subspace of an infinite dimensional space does not by itself mean V must be infinite dimensional, but we know that 1, t, t^2, ... are linearly independent, so V is infinite dimensional; the same idea that we used to prove C[0, 1] is infinite dimensional applies here.

Let us define the differentiation operator. Any polynomial is infinitely many times differentiable, so define D from V to V as follows; the definition of Df depends on the form of f. If f is in V, then f is a polynomial, say f(t) = alpha_0 + alpha_1 t + ... + alpha_n t^n; define (Df)(t) = alpha_1 + 2 alpha_2 t + ... + n alpha_n t^(n-1). This is the so-called differentiation operator: it takes a polynomial and computes its derivative. Since differentiation is linear (you can apply it term by term), this D is linear. By the way, is D injective? What is the null space of D? The null space of D is the set of all constant polynomials, so D is not injective.

Now let us define another operator, the so-called indefinite integral operator. Define T from V to V as follows; again the definition depends on the form of g. If g(t) = beta_0 + beta_1 t + ... + beta_l t^l, then take the indefinite integral: (Tg)(t) = beta_0 t + beta_1 t^2 / 2 + ... + beta_l t^(l+1) / (l + 1). This is like the integral from 0 to t of g; this T is again linear. What about the null space of T? T is injective.

Let us look at the relationship between D and T. What do you expect TD to be? You first differentiate, then integrate. Let us start with DT: DT is the identity; please check this. What about TD? Apply TD to a constant, for example, and you get 0, so TD cannot be the identity. So what is the moral of the story? This D is not injective, but it has a right inverse; from this it can be shown that D is surjective (I am going to leave this as an exercise). What about T? T is injective, but you can verify that T is not surjective, precisely because of this; that is also an exercise for you. So I have given, over an infinite dimensional space, one operator which is injective but not surjective and another which is surjective but not injective: the differentiation operator is surjective but not injective, and the indefinite integral operator is injective but not surjective. Please check these calculations; so this result is not true for infinite dimensional spaces.

There are other consequences. In particular, let me give this little exercise, which one could solve using the rank nullity dimension theorem. Let T from V to V be linear, with V finite dimensional, and suppose that the null space of T equals the range space of T. What conclusion can you draw on the dimension of V? The dimension of V is even; it is an even positive integer. That is an easy consequence of the rank nullity dimension theorem. But what I also want you to do is to give an example of such a linear transformation.
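The two operators D and T are easy to experiment with. Here is a small sketch of my own, representing a polynomial alpha_0 + alpha_1 t + ... + alpha_n t^n by its coefficient list [alpha_0, alpha_1, ..., alpha_n]:

```python
def D(p):
    # differentiation: alpha_1 + 2 alpha_2 t + ... + n alpha_n t^(n-1)
    return [k * p[k] for k in range(1, len(p))] or [0.0]

def T(p):
    # indefinite integration: beta_0 t + beta_1 t^2/2 + ...,
    # with zero constant term (the integral from 0 to t)
    return [0.0] + [p[k] / (k + 1) for k in range(len(p))]

p = [5.0, 4.0, 3.0]   # the polynomial 5 + 4t + 3t^2

print(D(T(p)))        # DT is the identity: prints [5.0, 4.0, 3.0]
print(T(D(p)))        # TD loses the constant term: prints [0.0, 4.0, 3.0]
```

Notice how T(D(p)) has forgotten the constant coefficient 5: exactly the constants that make up the null space of D.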
To sort out this second problem, I will give one hint. T is an operator that satisfies the condition null space of T = range of T. First try to show that this condition implies T^2 = 0, the zero operator. Then construct an operator that satisfies the condition; for example, let us construct a 2 by 2 example, T over R^2, from R^2 to R^2. Remember that the domain and the codomain must be the same space: the null space of T is a subspace of V and the range of T is a subspace of W, so I can talk about their equality only when the domain equals the codomain. Take T from R^2 to R^2; you want to construct T such that T^2 = 0. Start with a matrix A that satisfies the condition A^2 = 0, and use that matrix A to construct the linear transformation T(x) = Ax. I am going to leave that with you as an exercise. What you can also try is a related question: is the converse true? That is, if T^2 = 0, does it follow that the range of T equals the null space of T? So construct an example T that satisfies the condition, and verify whether the converse is true.
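Following that hint, one concrete choice (my own; the lecture leaves the matrix unspecified) is A = [[0, 1], [0, 0]]:

```python
import numpy as np

# T(x) = A x on R^2, with a nilpotent matrix A (an assumed example choice).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# T^2 = 0, since A^2 is the zero matrix:
print(np.allclose(A @ A, 0))  # prints: True

# Null space and range coincide here: A x = (x_2, 0) for any x, so the
# range is the x-axis span{(1, 0)}; and A x = 0 exactly when x_2 = 0,
# so the null space is that same line.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
print(np.allclose(A @ e1, 0), np.allclose(A @ e2, e1))  # prints: True True
```

Both subspaces are one-dimensional, consistent with dim V = 2 being even (nullity + rank = 1 + 1 = 2).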
One final application of the rank nullity dimension theorem is to show that the row rank of a matrix equals the column rank of that matrix. This statement was made a long time ago, when we discussed elementary row operations, row equivalence and so on. So we would like to prove the following theorem: the row rank of A equals the column rank of A. Before proving it, what is the row rank and what is the column rank? Let us recall the definitions. The row rank of a matrix A is defined to be the dimension of the row space of A; the column rank is the dimension of the column space of A. As we have observed, for an m by n matrix the row space is a subspace of R^n and the column space is a subspace of R^m; is that clear? Each row has n coordinates and each column has m coordinates. So it is an interesting and important result that these subspaces may lie in different spaces, yet their dimensions are the same. That is another consequence of the rank nullity dimension theorem, but we need a little work before settling this identity.
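Numerically the identity is easy to observe. Here is a quick sketch of my own using numpy, whose matrix_rank function computes the (common) rank of a matrix; the row rank of A is the column rank of A^T, so the theorem predicts that the two calls agree:

```python
import numpy as np

# A hypothetical 3 x 4 example; the third row is the sum of the first two,
# so the rows span a 2-dimensional subspace of R^4.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 2.0],
              [1.0, 2.0, 1.0, 3.0]])

col_rank = np.linalg.matrix_rank(A)    # dimension of the column space of A
row_rank = np.linalg.matrix_rank(A.T)  # column space of A^T = row space of A
print(row_rank, col_rank)  # prints: 2 2
```

The column space lives in R^3 and the row space in R^4, yet both have dimension 2.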
So let us recall what we did for row reduced echelon matrices. Let me first prove this result: start with two matrices A and B of the same order, related by the equation B = PA, where P is a square matrix of order m. So P is m by m, A is m by n, and the product B is m by n. What I want to conclude is that if B = PA, then the row space of B is contained in the row space of A.

To prove this I will invoke something we have seen earlier: for any x, Ax is in the column space of A. Why is that so? Write A = [A_1 A_2 ... A_n], where A_1, A_2, ..., A_n are the n columns of A. Then we can verify that Ax = x_1 A_1 + x_2 A_2 + ... + x_n A_n, where x is the column vector with entries x_1, x_2, ..., x_n; this we have observed before. What you have on the right is a linear combination of the columns of A, and it is precisely Ax, where A is the matrix we started with and x is this column vector. So, to reinforce: Ax is in the column space of A.

Now go back to the equation B = PA. Taking transposes, and using the fact that the transpose satisfies a reverse order law, this gives B^T = A^T P^T. Let me write Q = P^T, so B^T = A^T Q. Since P is m by m, Q is also m by m; Q has m rows and m columns, so write Q = [Q_1 Q_2 ... Q_m], where these are the columns of Q, just as A_1, ..., A_n were the columns of A. Let me also write down the columns of B^T. B is of order m by n, so B^T is n by m; in particular B^T has m columns, which I denote B_1, B_2, ..., B_m. Please be clear about the notation here: I am using A_1, ..., A_n for the columns of A and Q_1, ..., Q_m for the columns of Q, but B_1, ..., B_m for the columns of B^T; this is for simplicity.

Now look at the equation once again. B^T = A^T Q gives [B_1 B_2 ... B_m] = A^T [Q_1 Q_2 ... Q_m] = [A^T Q_1, A^T Q_2, ..., A^T Q_m]; the matrix can be pushed inside, and this can be verified by matrix multiplication (I remember having told you this before). In particular, look at B_1. On the one hand, B_1 is the first column of B^T, so B_1 is in the column space of B^T, which is the row space of B; in fact B_1 is the first row of B, so in particular it is in the row space of B. On the other hand, B_1 = A^T Q_1, a matrix times a column vector. I want to make use of the fact that Ax is in the column space of A: A^T Q_1 is in the column space of A^T, which is the row space of A; it is a linear combination of the columns of A^T, that is, a linear combination of the rows of A.

So what have we proved? On the one hand B_1 is in the row space of B; on the other hand it is in the row space of A; and the same argument applies to every B_i. So if B = PA, we have shown that the rows of B are linear combinations of the rows of A. That is the first statement I wanted to prove: if B = PA, then the row space of B is contained in the row space of A.

Now suppose P is invertible. Then I can pre-multiply by P^(-1) and write A = P^(-1) B; call this A = SB. Apply the same idea: if A = SB, then the row space of A is contained in the row space of B; the argument we have just given applies here. So if P is invertible, I get both inclusions: if B = PA with P invertible, then the row spaces of A and B are the same. In particular, row equivalent matrices have the same row space: if A is row equivalent to B, then A can be written as PB for some invertible matrix P (a product of elementary matrices), so the row space of A equals the row space of B.

Now I want to determine a basis for the row space of A. Look at the row reduced echelon form R of the matrix A; as before, R is a row reduced echelon matrix row equivalent to A. Let r (small r) be the number of non-zero rows of R (capital R). What is the dimension of the row space of A? Can you see that it is r? For the following reason. The row space of R is the subspace consisting of all linear combinations of the rows of R. The first r rows of R are non-zero and the remaining m - r rows are zero, so the zero rows contribute nothing to the row space; the only contribution comes from the first r non-zero rows of R. So the row space of R is spanned by these r vectors. Are these r vectors linearly independent? We need to verify that, but it is easy; an argument similar to the one for the standard basis can be given. Instead of just mentioning it, let me recall the form of R, and it will be clear from the first r non-zero rows that they are linearly independent. Remember how we used to write R: in each non-zero row there are some zeros, then a leading 1, then some entries; in the next row, the leading 1 appears strictly to the right of the previous one; and finally there are the m - r zero rows. These r non-zero rows behave somewhat like the standard basis vectors. Take a linear combination: alpha_1 times the first row plus alpha_2 times the second row plus ... plus alpha_r times the r-th row, and equate it to zero. Right away, the equation in the column of the first leading 1 gives alpha_1 = 0, the next gives alpha_2 = 0, and so on, because the leading entries do not lie in the same column: if the leading 1 of row i is in column C_i, then C_1 < C_2 < ... < C_r. So it is clear that the first r non-zero rows of R are independent, and they span the row space of R. Therefore the dimension of the row space of A equals r, where r is the number of non-zero rows of the row reduced echelon form of A; that is, the row rank of A equals r.

We need to show that the column rank of A is also r. For the column rank we need to do a little more; let me see how much I can cover now. I want to start all over again: consider the m by n matrix A that we started with, and look at the system Ax = 0. Collect all the solutions and call that set S; S is the solution set of the homogeneous system of equations Ax = 0. You could call this the kernel of A; strictly speaking we have defined the kernel of a linear transformation, but through this matrix you can define a linear transformation, and the kernel of that linear transformation is the kernel of this matrix. In any case, what I want to say is that S is a subspace of R^n, the set of all solutions x in R^n. I want to calculate the dimension of this subspace; it cannot exceed n. I want to conclude that the dimension of S is n - r; it will then follow that the column rank is r, the same as the row rank of A. To calculate the dimension, observe also that if R is the row reduced echelon form of A, then the solution sets of Rx = 0 and Ax = 0 are the same; I will use the row reduced echelon form R to calculate the dimension of S.

Let us go back, write down these equations, and analyze them once again. Let me in this case use the following notation: J is the subset of the indices {1, 2, 3, ..., n} obtained by removing C_1, C_2, ..., C_r.
Here C_1 is the column in which the leading non-zero entry of the first row appears, C_2 is the column in which the leading non-zero entry of the second row appears, and so on; I remove these indices from {1, 2, 3, ..., n}. Then I can expand the system Rx = 0. We have done this before, but I will do it using this notation, and you will see it is the same as before. The variables x_C1, x_C2, ..., x_Cr are those among x_1, x_2, ..., x_n that correspond to the pivot columns. The equations read:

x_C1 + sum over j in J of alpha_1j x_j = 0,
...,
x_Cr + sum over j in J of alpha_rj x_j = 0.

I will probably stop here and continue next time. Remember that I want to show that the dimension of this subspace S is n - r; that will show that the column space of A is r-dimensional, which means the column rank of A is r, the same as the row rank of A.
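The claim dim S = n - r can be checked on a small example. This sketch of my own uses sympy, whose rref method returns the row reduced echelon form together with the pivot columns C_1, ..., C_r, and whose nullspace method returns a basis of the solution set S:

```python
import sympy as sp

# A hypothetical 3 x 4 matrix A; here m = 3, n = 4, and the third row
# is the sum of the first two, so r should come out as 2.
A = sp.Matrix([[1, 2, 1, 0],
               [0, 1, 1, 1],
               [1, 3, 2, 1]])

R, pivot_cols = A.rref()       # row reduced echelon form and (C_1, ..., C_r)
r = len(pivot_cols)            # number of non-zero rows of R (the row rank)
n = A.cols

dim_S = len(A.nullspace())     # dimension of the solution set of A x = 0
print(r, dim_S, n - r)         # prints: 2 2 2
```

The non-pivot indices (the set J) supply one free variable each, which is exactly why dim S = n - r.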