In the last lecture we discussed the notions of linear independence and linear dependence of vectors; the definitions were given and we looked at a couple of examples. Let me quickly recall the definitions, look at some properties of linear independence and dependence, prove a small result, and then take up the notion of a spanning subset and the notion of a basis. The main objective of today's lecture is the concept of a basis, with some examples; if time permits we will look at the notion of dimension, otherwise dimension will be discussed in the next lecture. So let us quickly recall linear dependence and linear independence. Vectors v1, v2, ..., vr contained in a vector space V are said to be linearly dependent if there exist scalars α1, α2, ..., αr from the underlying field, not all 0, such that the equation α1 v1 + α2 v2 + ... + αr vr = 0 is satisfied. So this is linear dependence, and we saw last time why the name "dependence" is apt. On the other hand, the vectors v1, v2, ..., vr are said to be linearly independent if they are not linearly dependent. As we saw last time, this amounts to the implication: α1 v1 + α2 v2 + ... + αr vr = 0 holds only if α1 = α2 = ... = αr = 0. Thinking of the left-hand side as a linear combination, linear independence means that you can obtain 0 as a linear combination of v1, ..., vr only if every scalar is 0, while linear dependence means that there are scalars, not all zero, for which the linear combination equals 0. We looked at a few examples last time. Let us now quickly look at a few properties of linear independence and dependence.
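Before the properties, the definition can be tested numerically for vectors in R^n — a sketch using numpy (the helper name is my own, not from the lecture): r vectors are linearly independent exactly when the r×n matrix having them as rows has rank r.

```python
import numpy as np

def linearly_independent(vectors):
    """Return True when the given vectors (as rows) are linearly independent.

    Uses the fact that r vectors in R^n are independent iff the r x n matrix
    built from them has rank r.  Floating-point rank estimation is used, so
    this is a numerical sketch rather than an exact algebraic test.
    """
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == A.shape[0]

print(linearly_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True
print(linearly_independent([[1, 1], [2, 2]]))  # False: (2,2) = 2*(1,1)
```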
The first property is that the empty set is linearly independent; this is vacuously true, because you cannot pick any vectors from the empty set with which to form a dependence relation. Next, suppose X = {x1, x2, ..., xm} is a finite set of vectors in which one of the xi is the zero vector. Then X is a linearly dependent set. This is almost obvious: if xi is the zero vector, assign any non-zero scalar to xi and the scalar 0 to all the other vectors; the resulting linear combination equals the zero vector while at least one scalar is non-zero, so this is an immediate consequence of the definition of linear dependence. The next property: let X = {x1, x2, ..., xm} be linearly independent, and let A be a subset of X. Then A is linearly independent; that is, any subset of a linearly independent set must be linearly independent. I will leave this as an exercise for you to prove. Dually: let X = {x1, x2, ..., xm} be given, and suppose some non-empty subset A of X is linearly dependent. Then it can be shown that X itself is linearly dependent; this is the same as saying that any superset of a linearly dependent set must be linearly dependent. What about the linear dependence of two vectors? If two vectors x1 and x2 are given, can we decide just by looking at them whether they are linearly dependent? The answer is yes. So let us prove that; it is a little result, and I will state it as a lemma.
Lemma: two vectors x and y are linearly dependent if and only if one is a scalar multiple of the other. Proof: suppose first that x is a scalar multiple of y, say x = αy for some scalar α. I can rewrite this equation as 1·x − α·y = 0, so I have an equation of the type α1 x + α2 y = 0 in which at least one of the scalars (namely the coefficient 1 of x) is not 0; this is precisely linear dependence, so the set {x, y} is linearly dependent. That is the simpler part. The converse is that we are given that this set is linearly dependent and must show that one vector is a scalar multiple of the other. So suppose conversely that {x, y} is linearly dependent: there exist scalars α, β, not both 0, such that αx + βy = 0. Consider the two cases. If α ≠ 0, then we can write x = (−β/α) y, so x is a multiple of y. If on the other hand α = 0, then β ≠ 0 (both cannot be 0 simultaneously) and βy = 0 with β ≠ 0, and one of the properties of a vector space then implies y = 0. Now exploit α = 0 once more: y = 0 = α·x, so y is a scalar multiple of x. Either way, if the set is linearly dependent then one vector is a scalar multiple of the other, which completes the converse. Let us now prove a more important result, which will be used at least twice in the next few lectures; I will state it as a theorem.
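The lemma also gives a quick computational test for two vectors in R^n: they are dependent exactly when the 2×n matrix they form has rank at most 1. A sketch using numpy (the helper name is my own, not the lecture's):

```python
import numpy as np

def two_vectors_dependent(x, y):
    """Two vectors are linearly dependent iff one is a scalar multiple of
    the other, i.e. iff the matrix with rows x and y has rank <= 1."""
    return np.linalg.matrix_rank(np.array([x, y], dtype=float)) <= 1

print(two_vectors_dependent([1, 2], [3, 6]))  # True: (3,6) = 3*(1,2)
print(two_vectors_dependent([1, 1], [1, 2]))  # False
print(two_vectors_dependent([0, 0], [5, 7]))  # True: 0 = 0*(5,7)
```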
Theorem: non-zero vectors v1, v2, ..., vn are linearly dependent if and only if at least one of them is a linear combination of the preceding vectors. This will lead to an important result, which in turn leads to the definition of the dimension of a vector space. Let me write the second part of the statement precisely: there exists k (and we can show that this k is strictly greater than 1) such that vk can be written as a linear combination of the preceding vectors, that is, vk = α1 v1 + α2 v2 + ... + α(k−1) v(k−1). Let us refer to this as equation (*). One part of the proof is again easy: suppose that (*) holds. Is it immediate that the vectors are linearly dependent? All one has to do is rewrite (*) in the following manner: α1 v1 + α2 v2 + ... + α(k−1) v(k−1) + (−1)·vk + 0·v(k+1) + ... + 0·vn = 0. What I have obtained is a linear combination of the vectors v1, v2, ..., vn on the left and the zero vector on the right, where at least one scalar (the coefficient −1 of vk) is non-zero; this is linear dependence. So {v1, v2, ..., vn} is linearly dependent, and this part is easy. Let us prove the converse.
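The theorem suggests a computational test, sketched below under my own naming with numpy (floating-point rank is used, so this is a numerical illustration rather than an exact algebraic procedure): scan the vectors in order, and the first one that fails to increase the rank of the accumulated matrix lies in the span of its predecessors.

```python
import numpy as np

def first_dependent_index(vectors):
    """Return the first index k (0-based) such that vectors[k] is a linear
    combination of vectors[0..k-1], or None if the list is independent.

    Detection via rank: appending a vector that depends on its predecessors
    leaves the rank unchanged.  (The theorem assumes non-zero vectors; a zero
    first vector would be reported at index 0.)
    """
    A = np.array(vectors, dtype=float)
    prev_rank = 0
    for k in range(len(vectors)):
        r = np.linalg.matrix_rank(A[:k + 1])
        if r == prev_rank:   # vectors[k] added nothing new:
            return k         # it lies in the span of its predecessors
        prev_rank = r
    return None

print(first_dependent_index([[1, 1], [1, 2], [1, 3]]))  # 2: v3 depends on v1, v2
```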
Conversely, suppose that v1, v2, ..., vn are linearly dependent; I must show that at least one vector can be written as a linear combination of the preceding vectors. By the definition there exist scalars α1, α2, ..., αn, not all 0, such that α1 v1 + α2 v2 + ... + αn vn = 0. Among these scalars, choose the one with the largest subscript that is non-zero: let k be the largest positive integer such that αk ≠ 0. Now k can be equal to n, that is not a problem, but k cannot be equal to 1. Why is that so? If k were 1, then, k being the largest index among 1, 2, ..., n with αk ≠ 0, all the other scalars would be 0 and the equation would reduce to α1 v1 = 0; since v1 ≠ 0, this would mean α1 = 0, a contradiction. So k cannot be 1; k has to be at least 2, though it can be n. In any case, what does the definition of k tell us? Going back to the equation, α(k+1), α(k+2), ..., αn are all 0, so it follows that α1 v1 + α2 v2 + ... + αk vk = 0. We also know that αk ≠ 0, so all you have to do is divide by αk, keep vk on the left, and push the other vectors to the right: vk = (−1/αk)(α1 v1 + ... + α(k−1) v(k−1)). That is, I have written vk as a linear combination
of the preceding vectors v1, v2, ..., v(k−1), and that completes the proof of the theorem. Let us look at a numerical example: take the three vectors v1 = (1, 1), v2 = (1, 2), and v3 = (1, 3) in R2. Now, what is a guess about these three vectors — can they be linearly independent? The answer is no: these are three vectors in R2, and we will prove a more general result a little later showing that such a set cannot be linearly independent. For now, let us prove that they are linearly dependent by doing the actual calculation and using the previous result. Observe, however, that any two of them taken at a time are linearly independent: v2 is not a multiple of v1, v3 is not a multiple of v1, and v3 is not a multiple of v2 either, so any two of them form a linearly independent set. We will show that the three of them taken together form a linearly dependent set. By appealing to the previous theorem, we expect to be able to show that the third vector is a linear combination of the first two. Let us do that quickly; it is essentially solving linear equations. I am seeking scalars α and β such that v3 is a linear combination of v1 and v2, that is, (1, 3) = α(1, 1) + β(1, 2) = (α + β, α + 2β). This gives rise to two equations in two unknowns: α + β = 1 and α + 2β = 3. Subtracting the first from the second gives β = 2, and then α = −1. And so v3, we can verify, is the linear combination
(−1)·(1, 1) + 2·(1, 2); you can verify that this is true. So we have written v3 as a linear combination of the preceding two vectors, and this set is a linearly dependent set. Let us now look at another notion, that of a spanning subset. A subset S of a vector space V is called a spanning subset of V if every v in V is a linear combination of elements of S. Let me make one remark: in general this set S could be infinite, but for all the subsets in this course S will be finite — we will be looking at the so-called finite-dimensional vector spaces. So let me assume that S = {v1, v2, ..., vk}. Then S is a spanning subset of V if for every v in V there are scalars α1, α2, ..., αk such that v = α1 v1 + α2 v2 + ... + αk vk, a linear combination of the vectors v1, v2, ..., vk. All we are saying is that any vector in the vector space V can be written as a linear combination of v1, v2, ..., vk; if that happens, we say that the set S is a spanning subset of V. Let us look at some examples. First, take S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, a subset of R3; this is really a trivial example. Does it follow that this is a spanning subset of R3? That is almost immediate. Call these vectors e1, e2, e3, and take any x in R3, written as a row vector x = (x1, x2, x3). Then it is easy to see that x = x1 e1 + x2 e2 + x3 e3, and so S is trivially a spanning subset of the vector space R3. Next, let S be the set consisting of the polynomials p0, p1, ..., pn, where p0(t) =
1, p1(t) = t, ..., pn(t) = t^n, with t in [0, 1] for instance (it could even be all of R). So I have these n + 1 vectors, and the vector space V is Pn[0, 1], the vector space of all polynomials of degree at most n with real coefficients, where the variable t varies in the interval [0, 1]; we have seen that this is a vector space. Is S a spanning subset of V? The answer is yes; it follows immediately from the way we write a polynomial, the way we write an element of Pn[0, 1], and I will simply leave it as an exercise for you to show that S spans V. On the other hand, if you take a proper subset, for instance {p0, p1, ..., p(n−1)}, then it obviously cannot generate a polynomial whose degree is precisely n, that is, one whose coefficient of t^n is not 0: no linear combination of 1, t, t^2, ..., t^(n−1) can give you a non-zero constant times t^n. So such a proper subset cannot be a spanning subset of this vector space V. So what, then, is a basis? These two notions form part of the definition of a basis. Definition: let V be a vector space. A subset B of V (I will use script B) is called a basis of V if it satisfies two conditions: the first condition is that B is a linearly independent subset, and the second is that B spans the vector space V. So this is the definition of a basis of a vector space. Let us now look at examples of bases in the vector spaces above, without writing all the details, by going back to the previous two examples of spanning subsets. Is the set S = {e1, e2, e3} a basis of R3? It must satisfy the two constraints: that it is linearly independent and that it spans R3. That it spans R3 is what we
have just shown, and the linear independence of these three vectors was proved in the previous lecture: all one has to do is look at a linear combination α1 e1 + α2 e2 + α3 e3 = 0, and it is a trivial thing to show that this gives α1 = α2 = α3 = 0. So S is clearly a linearly independent and spanning subset, and hence a basis of R3. Coming to the second example: in the last lecture we showed that the polynomials p0, p1, ..., pn are linearly independent, by differentiating sufficiently many times to show that the scalars must all be 0, and we discussed a moment ago why this is a spanning subset. So this is also a basis — a basis for Pn[0, 1]. What about the previous example, the three vectors v1, v2, v3 in R2 — do they form a basis for R2? The answer is no, because we have shown that they are linearly dependent. They do form a spanning subset — that is something you can show, and I leave it as an exercise — but they are not linearly independent, so this set does not form a
basis for R2. You can have more than one basis; in fact there are infinitely many bases for any such vector space. Let me give one example — call it Example 4. Consider {(1, 0), (0, 1)}: from what we discussed in the first example for R3, it follows that this is a basis for R2. I am claiming that the set consisting of (1, 1) and (2, 1) is also a basis for R2. For one thing, linear independence is immediate: one is not a multiple of the other, so this is a linearly independent subset, and that is not a problem. All we need to show is that it is a spanning subset of R2, that is, any vector in R2 can be written as a linear combination of these two vectors. So let us verify the claim that the span of these vectors is R2. Take a vector x = (x1, x2) in R2, the general form of a vector in R2, and let the scalars be α and β: I am looking for α, β such that (x1, x2) = α(1, 1) + β(2, 1). The question is: given real numbers x1, x2, can I find real numbers α, β such that this equation has a solution? It would then follow that the vector x is a linear combination of these vectors. The right-hand side is (α + 2β, α + β), so I must solve α + 2β = x1 and α + β = x2. This is like a linear system: the left-hand side vector is given, and I must solve for the unknowns α and β in terms of x1 and x2. It is easy to see that one can solve immediately: subtracting the second equation from the first gives β = x1 − x2, and
α = x2 − β = 2x2 − x1. Can we quickly verify? α + β = (2x2 − x1) + (x1 − x2) = x2, since x1 cancels; and α + 2β = (2x2 − x1) + (2x1 − 2x2) = x1, since 2x2 cancels. So we have solved for the unknowns α and β in terms of the known numbers x1, x2, which shows that the claim is true, and so this is another basis of R2. We will show later that any linearly independent subset of R2 consisting of two elements is a basis, so there are indeed infinitely many choices. Let us now look at one of the consequences of the result proved a little earlier, namely that a set of non-zero vectors v1, v2, ..., vn is linearly dependent if and only if at least one of them is a linear combination of the preceding vectors. We will use that result to prove the following important theorem, which establishes a relationship between the number of elements in a linearly independent subset on the one hand and the number of elements in a spanning subset on the other. Theorem: let X = {x1, x2, ..., xm} and Y = {y1, y2, ..., yn} be subsets of a vector space V — so X has m elements and Y has n elements — such that X is linearly independent and Y is a spanning subset of V (meaning any element of V can be written as a linear combination of y1, ..., yn). Then the claim is that m ≤ n: the number of elements in a linearly independent subset of a vector space cannot exceed the number of elements in a spanning subset of the vector space. So let us see how the proof
goes. Y is a spanning subset and X is linearly independent. Consider the list (xm, y1, y2, ..., yn). Since y1, y2, ..., yn form a spanning subset of V, in particular xm can be written as a linear combination of these vectors, and so this is a linearly dependent set. By the result I quoted just now, at least one vector in the list is a linear combination of the vectors preceding it (the list now beginning with xm). Remember that in that theorem the index k is at least 2, so the offending vector cannot be the first vector in the list; that is where the fact that k ≥ 2 is important. It follows that there is some yj which is a linear combination of the preceding vectors xm, y1, y2, ..., y(j−1). So what I will do is remove that yj: define Y1 to be {xm, y1, y2, ..., yn} with this yj deleted. What is the number of elements in Y1? I have included one vector and deleted one, so Y1 again has n vectors. Moreover, Y1 spans V. How is this second part true? Take any vector x in V; it is a linear combination of y1, ..., yn. Now yj is not present in Y1, but yj is itself a linear combination of xm, y1, ..., y(j−1); so wherever yj appears with a coefficient in the representation of x, substitute that linear combination for yj. It then follows that the x we started with is a linear combination of xm, y1, y2, ..., y(j−1), y(j+1), ..., yn. So this remains a
spanning subset: we started with Y, which was a spanning subset, and Y1 remains a spanning subset with precisely the same number of elements as Y. Next, consider as before the set {x(m−1), xm, y1, ..., yn} minus yj — that is, include one more element, x(m−1), from the subset X, into Y1. This set is linearly dependent. What is the reason for that? Look at x(m−1): it is a vector in V, so it can be written as a linear combination of y1, ..., yn, because Y is a spanning subset to begin with; and the term involving yj can in turn be rewritten in terms of xm, y1, y2, ..., y(j−1). So x(m−1) can be written as a linear combination of the remaining vectors, and hence this set is linearly dependent. Again appeal to the theorem we proved today: there is at least one vector in the list which is a linear combination of the preceding vectors, and in the next step we will delete that vector. I am claiming that it is one of the y's that we will delete, and not one of the x's. The reason is that {x(m−1), xm} is a subset of the linearly independent set {x1, ..., xm}, so x(m−1) and xm are linearly independent, and you cannot write one of them as a linear combination of the other.
So when you apply the theorem to this linearly dependent set, you are deleting only some y — say yr — and not x(m−1) or xm. Define Y2 = {x(m−1), xm, y1, ..., yn} minus {yj, yr}. I have now included two vectors and deleted two vectors, so Y2 has n vectors, and by the same argument as above it follows that Y2 spans V. Repeat this procedure m times: then Ym has n vectors and it spans V. What does this mean? After the m-th step, once you have constructed Ym, you have included all the vectors x1, ..., xm, with a few y's probably left over. So what follows is that X is contained in Ym. Now X has m vectors and Ym has n vectors, so m ≤ n. That is the idea of the proof: the number of vectors in a linearly independent subset cannot exceed the number of vectors in any spanning subset of a vector space. Is this step clear? X has m elements; at every stage, the way we construct Y1, Y2, ..., each has precisely n elements, so Ym also has n elements, and since X is contained in Ym we get m ≤ n. The point is that in this process we never exhaust y1, ..., yn — there is always some y remaining — and that is the idea of the proof. So this is an important result that will be useful for us in defining
the dimension of a vector space. So let me stop here; in the next lecture we will discuss the notion of the dimension of a vector space.
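The two highlights of today's lecture can be checked numerically in a few lines — a sketch of my own using numpy, not part of the lecture: any three vectors in R2 are linearly dependent (consistent with m ≤ n, since {(1,0), (0,1)} is a 2-element spanning subset), and the coefficients expressing any (x1, x2) in the basis {(1, 1), (2, 1)} come out as α = 2x2 − x1 and β = x1 − x2, exactly as derived above.

```python
import numpy as np

# Any three vectors in R^2 are linearly dependent: a 3x2 matrix has rank
# at most 2, which can never equal the number of vectors (3).
v = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
assert np.linalg.matrix_rank(v) < 3

# {(1,1), (2,1)} is a basis of R^2: solving alpha*(1,1) + beta*(2,1) = x
# is the linear system B @ (alpha, beta) = x with columns (1,1) and (2,1).
B = np.array([[1.0, 2.0],
              [1.0, 1.0]])
x = np.array([5.0, 3.0])                   # an arbitrary vector (x1, x2)
alpha, beta = np.linalg.solve(B, x)
assert np.isclose(alpha, 2 * x[1] - x[0])  # alpha = 2*x2 - x1
assert np.isclose(beta, x[0] - x[1])       # beta  = x1 - x2
```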