So in the last lecture we were discussing the notion of subspaces. The topic for today is linear independence of vectors, but before that there are certain aspects of subspaces that are important and that I would like to cover. For instance, how do we get new subspaces from old? Let us first address this problem.

Suppose I have a vector space V with two subspaces W1 and W2. Then I get a new subspace by looking at the sum of these two subspaces: I define W = W1 + W2, the sum of the two sets. What is the definition? It is the set of all x + y such that x belongs to W1 and y belongs to W2. Whether we write x + y or y + x does not matter, since addition is commutative, but let us stick to this convention: the first summand comes from W1, the second from W2. Then W is a subspace of V. Why is it a subspace? It is easy; just appeal to the theorem that we proved last time. Take two elements of W and show that their sum is in W; take a vector in W and a scalar and show that W is closed under scalar multiplication as well. Once you show these two closure properties, it follows that W is a subspace. I am going to leave this as an exercise for you: W is a subspace of V.

Now W is, in some sense, larger than W1 and W2. Let us look at something smaller than both. That is another subspace: let us call it Z = W1 ∩ W2, the intersection. I am going to leave it as an exercise for you to show that Z is also a subspace of V. It has the property of being contained in W1 as well as in W2, so you can think of Z as a subspace of W1 as well as of W2.
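To see these two constructions concretely, here is a small Python sketch using sympy; the two subspaces chosen here (the xy-plane and the yz-plane in R^3) are my own illustration, not from the lecture.

```python
import sympy as sp

# Columns of U span W1 (the xy-plane); columns of V span W2 (the yz-plane).
U = sp.Matrix([[1, 0], [0, 1], [0, 0]])
V = sp.Matrix([[0, 0], [1, 0], [0, 1]])

# W1 + W2 is spanned by the columns of the block matrix [U V];
# its dimension is the rank of that matrix.
print(U.row_join(V).rank())              # 3, so W1 + W2 is all of R^3

# A vector lies in W1 ∩ W2 exactly when U*a = V*b for some coefficient
# vectors a and b, i.e. when (a, b) is in the null space of [U  -V].
for n in U.row_join(-V).nullspace():
    print((U * n[:2, :]).T)              # multiples of (0, 1, 0): the y-axis
```

Here the sum comes out as the whole of R^3, while the intersection is the y-axis, which is indeed smaller than both planes.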
There is another method of getting subspaces. Let me describe that. Take S to be a subset of V. I will define the span of S, denoted Sp(S), as the set of all linear combinations of elements of S. I have not defined what a linear combination is, but I will do that now. Mathematically, span of S is the set of all

alpha_1 v_1 + alpha_2 v_2 + ... + alpha_k v_k,

where the scalars alpha_1, alpha_2, ..., alpha_k come from the field and the vectors v_1, v_2, ..., v_k come from S. An expression of this form is called a linear combination of the vectors v_1, v_2, ..., v_k, and a typical element of span of S is precisely such a linear combination.

Now span of S is a subspace of V. To prove this we again use that theorem: closed with respect to addition and closed with respect to scalar multiplication. The proof is there once you write down the first step correctly, so you are going to tell me the first line and I will leave it at that. We want to show closure with respect to addition, for instance; scalar multiplication is similar. So I need to take two elements, say u, w in span of S, and we must show that u + w belongs to span of S. How do you prove that? Quite a few of you seem to know the proof. Let me just write it down and leave the other steps for you to fill in.

Since u is in span of S, it is a linear combination, so I have something like u = beta_1 u_1 + beta_2 u_2 + ... + beta_l u_l for some scalars beta_1, ..., beta_l, where the vectors u_1, u_2, ..., u_l come from S. Please note that this l need not be the same k we had above; it is just some linear combination with finitely many terms. Similarly I write w = delta_1 w_1 + delta_2 w_2 + ... + delta_r w_r for some scalars delta_1, delta_2, ..., delta_r and vectors w_1, w_2, ..., w_r from S; in general r and l are different. This is the first step, and it is the important step; the rest of the proof is obvious. Now u + w is again a linear combination: u + w = beta_1 u_1 + beta_2 u_2 + ... + beta_l u_l + delta_1 w_1 + ... + delta_r w_r, where the scalars come from the field and the vectors come from S. So span of S is closed with respect to addition. Scalar multiplication is simpler than this. So span of S is a subspace.
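Before moving on, here is a quick numerical sanity check of this closure argument in Python with numpy (a sketch of my own): take two random linear combinations of a pair of vectors and verify that their sum is again a combination of the same vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
S = np.array([[1.0, 0.0, 1.0],    # rows of S are the spanning vectors
              [1.0, 1.0, 0.0]])

u = rng.normal(size=2) @ S        # a random linear combination of the rows
w = rng.normal(size=2) @ S        # another one, with different coefficients

# Solve S^T c = u + w for the coefficients c of the sum and check the fit.
c, *_ = np.linalg.lstsq(S.T, u + w, rcond=None)
print(np.allclose(S.T @ c, u + w))   # True: u + w is back in span(S)
```

The two rows used here are in fact the vectors x_1 = (1, 0, 1) and x_2 = (1, 1, 0) of the example that comes next.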
Let us look at one or two examples of the span of a given set. Remember that this S could be an infinite subset, but for illustration let me take the following two examples.

First, take S = {x_1, x_2}, where x_1 is the vector (1, 0, 1) and x_2 is the vector (1, 1, 0); x_1, x_2 are elements of R^3. The question is: what is span of S? Let us try to determine it. Now you will see that what we learnt earlier with regard to linear equations will come in handy. What we know is that span of S consists of the linear combinations of x_1 and x_2, but what we want is a formula for the elements of span of S: given a vector, I should be in a position to decide whether it belongs to span of S or not. Is there a condition I can impose on a vector so that it is in span of S exactly when the condition is satisfied? Let us try to derive one such condition; you will see that elementary row operations are useful here.

So let b belong to span of S. Then what is b; how does b look? By definition, b = alpha_1 x_1 + alpha_2 x_2 for some alpha_1, alpha_2 in R. Let us write this in full. This b is a general vector in R^3 with components b_1, b_2, b_3, and I will put the combination on the left and b on the right, so that the familiar linear equations can be used:

alpha_1 (1, 0, 1) + alpha_2 (1, 1, 0) = (b_1, b_2, b_3).

What must be the conditions on the three numbers b_1, b_2, b_3 in order for this equation to be satisfied? You will see that it is precisely the linear system A alpha = b, where A is the matrix whose columns are (1, 0, 1) and (1, 1, 0), alpha is the vector of unknowns (alpha_1, alpha_2), and b is of course (b_1, b_2, b_3). Just write down the three equations: alpha_1 + alpha_2 = b_1, alpha_2 = b_2, alpha_1 = b_3. Those are three equations in the two unknowns alpha_1, alpha_2.

Let us do elementary row operations. I want to determine all b for which this system has a solution, and I know how to do that: look at A appended with b,

[ 1  1 | b_1 ]
[ 0  1 | b_2 ]
[ 1  0 | b_3 ]

Subtracting the first row from the third gives (0, -1 | b_3 - b_1), and adding the second row to that gives (0, 0 | b_3 - b_1 + b_2). I could actually stop here, but for the sake of completeness let me do one more operation, subtracting the second row from the first, to reach the row reduced echelon form:

[ 1  0 | b_1 - b_2 ]
[ 0  1 | b_2 ]
[ 0  0 | b_3 - b_1 + b_2 ]

This is the system R alpha = d, where R is the row reduced echelon form of A. The number of non-zero rows of R is r = 2, and the necessary and sufficient condition for the original system to have a solution is that the entry below the non-zero rows must be 0. So A alpha = b has a solution if and only if b_1 = b_2 + b_3. That is the condition for any vector to be in span of S. So span of S is the set of all x in R^3 such that x_1 = x_2 + x_3 (here x_1, x_2, x_3 denote the coordinates of x), and this determines the subspace completely.

Let us modify this example and do just one more; you will see how the notions of homogeneous and non-homogeneous equations, invertibility, and so on that we studied earlier are useful once again. This time take S = {x_1, x_2, x_3}, with x_1, x_2 as before and x_3 the vector (0, 1, 1). I would like to determine span of S now. Again, to determine span of S completely we need to solve the system A alpha = b, where this time A is the 3 × 3 matrix whose columns are the three vectors we started with, and alpha = (alpha_1, alpha_2, alpha_3). So we append the column (b_1, b_2, b_3) to A and do the elementary row operations quickly: keep the first row as the pivot row, perform the operations with respect to it to clear the first column, and continue; please fill in the arithmetic. Whatever the remaining entries turn out to be, observe that we have again reduced A alpha = b to R alpha = d, and this time the number of non-zero rows of R is 3, which is precisely the total number of rows of R. So there is no condition on b: the requirement that d_i = 0 for i greater than r is vacuously satisfied, and the system has a solution for every b. So A alpha = b has a solution for all right-hand-side vectors b, which means span of S is the whole of R^3, an improper subspace of R^3. If S is chosen as this set, then span of S is R^3. We will come back to this example; it will be useful when we discuss the notion of linear independence.
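Both computations can be checked symbolically in Python with sympy (a sketch of my own; the row operations mirror the ones above).

```python
import sympy as sp

b1, b2, b3 = sp.symbols('b1 b2 b3')

# Example 1: augmented matrix [A | b] with columns x1 = (1,0,1), x2 = (1,1,0).
M = sp.Matrix([[1, 1, b1],
               [0, 1, b2],
               [1, 0, b3]])
M[2, :] = M[2, :] - M[0, :]   # R3 := R3 - R1
M[2, :] = M[2, :] + M[1, :]   # R3 := R3 + R2
print(M[2, :])                # [0, 0, -b1 + b2 + b3]: solvable iff b1 = b2 + b3

# Example 2: with x3 = (0,1,1) as a third column, A becomes 3 x 3 and has
# full rank, so A*alpha = b is solvable for every b and span(S) = R^3.
A = sp.Matrix([[1, 1, 0],
               [0, 1, 1],
               [1, 0, 1]])
print(A.rank())               # 3
```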
So we are still discussing the notion of a subspace. There is one small extension of this idea of the span of a subset, namely the so-called row space and column space of a matrix. Let me describe this and then go to the next topic. Let A be an m × n matrix. The row space of A is defined to be the subspace of all linear combinations of the rows of A. If A is an m × n matrix, there are m rows and each row has n coordinates, so what one does is look at the span of the rows, the set of all linear combinations of the rows of A: that is the row space of A. Let us observe that the row space of A is indeed a subspace, because we have just proved that a span is a subspace; since each row has n coordinates, it is a subspace of R^n. One can define the column space of A similarly, as the set of all linear combinations of the columns of A; since each column has m coordinates, the column space of A is a subspace of R^m. So these two are subspaces of different vector spaces, but there is an important number, one that we would like to associate with a vector space or a vector subspace, that is the same for both of them. We will see that in a little while.
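As a small peek ahead (a Python sketch of my own, with a hypothetical 3 × 4 matrix), that common number can already be computed: it is the common dimension of the two spaces.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])   # third row = first row + second row

# The row space lives in R^4 and the column space in R^3, yet both
# have the same dimension, computed here as a matrix rank.
print(np.linalg.matrix_rank(A))     # dimension of the column space: 2
print(np.linalg.matrix_rank(A.T))   # dimension of the row space: also 2
```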
With this I would like to move to the notion of linear independence of vectors. Once we have this notion, we can define a basis for a vector space and then the dimension of a vector space. So what is linear independence? Let us first look at linear dependence among vectors. Consider a set of vectors, say v_1, v_2, ..., v_k, from a vector space V. Informally, we would like to say that these vectors are linearly dependent if at least one of them depends on the others, and "one of them depending on the others" means that one of them can be written as a linear combination of the others.

In other words, let us formulate this. The vectors v_1, v_2, ..., v_k are said to be linearly dependent if the equation

alpha_1 v_1 + alpha_2 v_2 + ... + alpha_k v_k = 0 (the zero vector)

holds with not all the alpha_i being zero; that is, there is at least one non-zero alpha_i for which this equation holds. Now let us see quickly that this captures the informal idea of dependence: if at least one scalar is non-zero, say alpha_s ≠ 0 for some s, then at least one of the vectors can be written as a linear combination of the others, which is why this is called linear dependence. I push all the other vectors to the right-hand side and keep just alpha_s v_s on the left:

alpha_s v_s = -alpha_1 v_1 - alpha_2 v_2 - ... - alpha_{s-1} v_{s-1} - alpha_{s+1} v_{s+1} - ... - alpha_k v_k.

Since alpha_s is not zero, I can divide by it:

v_s = beta_1 v_1 + beta_2 v_2 + ... + beta_{s-1} v_{s-1} + beta_{s+1} v_{s+1} + ... + beta_k v_k,

where beta_i = -alpha_i / alpha_s; you fill in the details. I have written v_s as a linear combination of the others, and note that v_s itself does not appear on the right-hand side. This is linear dependence, and the definition conforms to what we feel intuitively about it.

What is linear independence? Linear independence is the negation of linear dependence: v_1, v_2, ..., v_k are said to be linearly independent if they are not linearly dependent. Of course, what does that mean in terms of the equation above? We have the following:

alpha_1 v_1 + alpha_2 v_2 + ... + alpha_k v_k = 0 implies alpha_1 = alpha_2 = ... = alpha_k = 0.

That is linear independence, because if even one of the scalars were non-zero, one could push the other vectors to the right and divide by that scalar to write the vector on the left as a linear combination of the others; so all of the scalars must be zero. One also says that the only way to get the zero vector by means of the vectors v_1, ..., v_k is by the trivial linear combination; in that case we say the vectors v_1, v_2, ..., v_k are linearly independent.
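Here is the definition at work in Python with sympy (a deliberately dependent set of my own choosing): the non-trivial solutions of A alpha = 0, with the vectors as the columns of A, are exactly the dependence relations.

```python
import sympy as sp

# Columns are v1, v2 and v3 = v1 + v2, so the set is linearly dependent.
A = sp.Matrix([[1, 0, 1],
               [0, 1, 1],
               [1, 1, 2]])

# A non-zero null-space vector gives scalars, not all zero, with
# alpha1*v1 + alpha2*v2 + alpha3*v3 = 0.
print(A.nullspace())   # [Matrix([[-1], [-1], [1]])], i.e. -v1 - v2 + v3 = 0
```

Reading off this relation and dividing by the coefficient of v3 expresses v3 = v1 + v2, exactly the step performed with alpha_s above.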
Let us look at some examples to consolidate this. Take the vectors of the previous span example, and let me stick to the same notation: x_1 = (1, 0, 1), x_2 = (1, 1, 0), x_3 = (0, 1, 1). I would like to verify whether these vectors are linearly independent, so I must start with a linear combination: consider alpha_1 x_1 + alpha_2 x_2 + alpha_3 x_3 = 0, the three-dimensional zero vector, the vector with each of its three components being zero. Writing this out in full to make it more transparent, I have the three equations alpha_1 + alpha_2 = 0, alpha_2 + alpha_3 = 0, alpha_1 + alpha_3 = 0. Of course Gaussian elimination can be applied and you can get the solution immediately, but I will pretend I cannot do that, because I want to reuse what we did earlier. This system is A alpha = 0, where A is the same coefficient matrix as in the previous example (just write down the three vectors as columns to get A), and alpha this time has three unknowns. I want to know whether this homogeneous system has a non-trivial, non-zero solution.

But look at what we did earlier: for this matrix A we showed that the non-homogeneous system A alpha = b needs no condition on the right-hand side; whatever b is, A alpha = b has a solution. Now remember the result that we proved with regard to linear equations: A is invertible if and only if the homogeneous system Ax = 0 has 0 as its only solution, if and only if Ax = b has a solution for all right-hand-side vectors b. So from what we have done earlier, Ax = b being solvable for all b means that A alpha = 0 has 0 as its only solution, which means alpha_1 = alpha_2 = alpha_3 = 0, and that is linear independence. So A alpha = 0 implies alpha = 0, the computations having been performed earlier, and the vectors x_1, x_2, x_3 that we started with are linearly independent.

Now what is your guess about the other example, the one where we had only the first two vectors x_1, x_2? Just guess; do not look at the numbers, do not look at the coordinates. Given that the three vectors x_1, x_2, x_3 have been shown to be linearly independent, the claim is that x_1, x_2 are linearly independent as well. The reason is something that we can prove in general: any subset of a linearly independent set is linearly independent. So, as a precursor, you can show that in fact any two of the vectors from this set are linearly independent; that follows from the more general principle.
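The invertibility argument for this example can be confirmed directly in Python with sympy (a sketch of my own):

```python
import sympy as sp

# Columns of A are x1 = (1,0,1), x2 = (1,1,0), x3 = (0,1,1).
A = sp.Matrix([[1, 1, 0],
               [0, 1, 1],
               [1, 0, 1]])

print(A.det())         # 2, non-zero, so A is invertible
print(A.nullspace())   # []: A*alpha = 0 only for alpha = 0, hence independence
```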
What about a different vector space instead of R^3? So let me really look at a second example. Consider P_k, the vector space of all polynomials of degree less than or equal to k with real coefficients, and let me pick these k + 1 polynomials: p_i(t) = t^i, so that p_0 is the constant polynomial 1, p_1 is the polynomial with p_1(t) = t for all t, p_2(t) = t^2, and so on up to p_k(t) = t^k. Let me conclude by showing that these vectors, that is, these polynomials, are linearly independent. Let me take the case k = 3 just for simplicity, and consider a linear combination, starting with alpha_0:

alpha_0 p_0 + alpha_1 p_1 + alpha_2 p_2 + alpha_3 p_3 = 0.

If I can show that all the coefficients are 0, I can conclude that these polynomials are linearly independent. Now what does this mean? The set of polynomials is a vector space, so the left-hand side is also a polynomial, and it is identically the zero polynomial: you evaluate it at any t and the value is 0. That is,

alpha_0 p_0(t) + alpha_1 p_1(t) + alpha_2 p_2(t) + alpha_3 p_3(t) = 0 for all t in R.

Now write it in full and see what you get: alpha_0 + alpha_1 t + alpha_2 t^2 + alpha_3 t^3 = 0 for all t. How do we conclude from this that each of the scalars is 0? That is the claim I am making. Yes, one could use heavy machinery, the fundamental theorem of algebra, by which a non-zero polynomial of degree k has at most k roots, so a polynomial vanishing everywhere must have all coefficients zero. But we can do without the fundamental theorem of algebra, and that is what I wanted to illustrate in this example. Each term is a polynomial, and polynomials, we know, are differentiable functions, so differentiate this equation as many times as you want and get the desired conclusion. Differentiate once: alpha_1 + 2 alpha_2 t + 3 alpha_3 t^2 = 0. Once more: 2 alpha_2 + 6 alpha_3 t = 0. One last time: 6 alpha_3 = 0. Now look at the last equation: alpha_3 = 0. The penultimate equation then gives alpha_2 = 0, and backward substitution gives that alpha_0, alpha_1, alpha_2, alpha_3 are each 0. So this is a proof that does not use the fundamental theorem of algebra; it just uses the differentiability of these polynomials. Let me stop here for today.
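The differentiation argument transcribes directly into Python with sympy (a sketch of my own; evaluating at t = 0 is a convenient variant of the backward substitution).

```python
import sympy as sp

t = sp.symbols('t')
a0, a1, a2, a3 = sp.symbols('a0 a1 a2 a3')

# The combination alpha_0*p_0 + alpha_1*p_1 + alpha_2*p_2 + alpha_3*p_3,
# assumed to be the identically zero polynomial.
p = a0 + a1*t + a2*t**2 + a3*t**3

# Differentiating k times and setting t = 0 isolates one coefficient each time;
# since the polynomial is identically zero, every printed value must vanish.
for k in range(4):
    print(sp.diff(p, t, k).subs(t, 0))   # a0, a1, 2*a2, 6*a3
```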