So, now in this module we shall carry on with this, and we shall see how, when we extend from P to P+1, we choose the (P+1)-st vector. This assumes that we already have W_1 through W_P meeting the conditions of the Gram-Schmidt procedure. Now suppose someone extends this linearly independent set to contain one more vector than the P that have already been given to me. How do I cook up the corresponding (P+1)-st member of the set of W's while preserving the orthogonal structure and also ensuring the inclusion of spans? The idea is going to be very much the same as what we did for two vectors, which is why we put some effort behind that case.

So, let {V_1, V_2, ..., V_{P+1}} be a linearly independent set. How do we choose the (P+1)-st vector W_{P+1}? Any guesses? Same idea. V_{P+1} is the new kid on the block, but you have to take something out: what new quality is this vector bringing to the table, in a philosophical manner of speaking? Clearly, if it is expanding the set while preserving linear independence, it is bringing in something fresh in purely philosophical terms. So we had better take out what is common and retain only what is unique to this vector. We take V_{P+1} in its entirety and subtract from it whatever is not new, whatever we already had on the table: a sum over i from 1 through P of V_{P+1}'s inner product with... what, exactly? Do I need the V_i's, or can I do something better? Which would be the smarter choice? Remember, V_1 through V_P spans exactly the same subspace as W_1 through W_P. In fact, I have not yet completed the induction hypothesis: span{V_1, ..., V_k} = span{W_1, ..., W_k} for all k belonging to {1, 2, ..., P}.
So, this automatically means that {V_1, V_2, ..., V_P} spans exactly the same subspace as {W_1, W_2, ..., W_P}. So what should I use here, the V_i's or the W_i's? What is the better way to do it? The W_i's are already orthogonal, so we can probably harness some benefit out of that; that is the hope. In the case of two vectors, did it really matter? It did not, because W_1 = V_1. But now that we are writing the general step, we have to be a little smart about the choice. The V's have no special property with respect to orthogonality; the W's, on the other hand, are a ready-made orthogonal set that I have at my disposal. So why not go ahead and use them? We choose

W_{P+1} = V_{P+1} − Σ_{i=1}^{P} (⟨V_{P+1}, W_i⟩ / ‖W_i‖²) W_i.

Now, what do we have to show? Supposedly this choice is the saviour, the messiah, that is going to give me a (P+1)-dimensional subspace spanned by a set of orthogonal vectors. What are the things that need to be checked?

First: can this vector be zero? Can we have W_{P+1} = 0? That is the question. Suppose it is. Then it follows immediately that V_{P+1} equals the sum, that is, V_{P+1} belongs to span{W_1, W_2, ..., W_P}. But what is that span equal to? It is also equal to span{V_1, V_2, ..., V_P}. What would that mean? The (P+1)-st vector in the set of V's adds nothing; it is already contained in the span of V_1 through V_P. Can the set be linearly independent then? Of course not, because V_{P+1} has a representation as a linear combination of V_1 through V_P, so there is a non-trivial linear combination that equals 0. This is a contradiction, so we must have been mistaken when we supposed W_{P+1} = 0. Clearly, the vector we are adding is a non-zero vector; at least that we are sure of. So this question is addressed; let us get rid of it.
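As a small sanity check on the step just described, here is a numerical sketch. The function name `gram_schmidt_step` and the use of NumPy are my own choices, not from the lecture; the code subtracts the component of the new vector along each existing W_i and flags the zero-remainder case, which, as argued above, can only happen when the input vectors were linearly dependent.

```python
import numpy as np

def gram_schmidt_step(ws, v_new, tol=1e-12):
    """One inductive step: given a mutually orthogonal list
    ws = [W1, ..., WP] and a new vector V(P+1), return
    W(P+1) = V(P+1) - sum_i <V(P+1), Wi>/||Wi||^2 * Wi."""
    w_new = np.asarray(v_new, dtype=float).copy()
    for w in ws:
        # Remove the component of v_new along w.
        w_new -= (np.dot(v_new, w) / np.dot(w, w)) * w
    # A zero remainder would mean v_new already lies in
    # span{W1, ..., WP} = span{V1, ..., VP}, contradicting
    # the assumed linear independence of the V's.
    if np.linalg.norm(w_new) < tol:
        raise ValueError("new vector is linearly dependent on the previous ones")
    return w_new

# Example: extend the orthogonal pair e1, e2 by V3 = (1, 2, 3).
w3 = gram_schmidt_step([np.array([1.0, 0, 0]), np.array([0, 1.0, 0])],
                       np.array([1.0, 2.0, 3.0]))
print(w3)  # [0. 0. 3.]
```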
The next question is of course about orthogonality. After all the effort we put into constructing W_{P+1}, it must be orthogonal to each of the first P W_i's, that is, to W_1, W_2, ..., W_P; otherwise the set would not be orthogonal. The rest I do not need to check: mutual orthogonality among the first P is already given. The only checks I need to perform are between the (P+1)-st vector and each of the first P. So let us take an arbitrary W_k, with k in {1, ..., P}, and take the inner product:

⟨W_{P+1}, W_k⟩ = ⟨V_{P+1}, W_k⟩ − Σ_{i=1}^{P} (⟨V_{P+1}, W_i⟩ / ‖W_i‖²) ⟨W_i, W_k⟩.

When I take the inner product with W_k, what survives from the sum? Only the i = k term, because the W's are already orthogonal; that is exactly why I chose the W's and not the V's, so that we could see this more transparently. In the surviving term, ⟨W_k, W_k⟩ cancels the ‖W_k‖² in the denominator, since W_k is non-zero. So the only thing that remains is

⟨W_{P+1}, W_k⟩ = ⟨V_{P+1}, W_k⟩ − ⟨V_{P+1}, W_k⟩ = 0,

which is the orthogonality, as desired. Is it clear why the other terms vanish? Please ask if it is not; I am just writing it in one shot. There is nothing special about the manner in which I chose k; it could be any index from 1 through P. So W_{P+1} is orthogonal to W_1, W_2, and so on up to W_P.

Anything else? Of course, the most important part. By mathematical induction I do not need to check the predecessors; what I need to check is simply that span{V_1, ..., V_{P+1}} equals span{W_1, W_2, ..., W_{P+1}}. The rest is already given by the induction hypothesis, so I have to check only the (P+1)-st case. So, can I erase this now? I think this is already explained.
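The cancellation in ⟨W_{P+1}, W_k⟩ can also be watched numerically. A small sketch with made-up example vectors of my own (not from the lecture): starting from an orthogonal pair W_1, W_2, the update produces a W_3 whose inner products with both vanish, up to floating-point error.

```python
import numpy as np

# An orthogonal pair W1, W2 in R^3, plus an arbitrary new vector V3.
w1 = np.array([1.0, 1.0, 0.0])
w2 = np.array([1.0, -1.0, 0.0])
v3 = np.array([2.0, 5.0, 4.0])

# The Gram-Schmidt update: subtract the components of V3 along W1 and W2.
w3 = v3 - (v3 @ w1) / (w1 @ w1) * w1 - (v3 @ w2) / (w2 @ w2) * w2

# <W3, Wk> = <V3, Wk> - <V3, Wk> = 0: the cross terms in the sum
# vanish because W1 and W2 are orthogonal to each other.
print(np.dot(w3, w1), np.dot(w3, w2))  # 0.0 0.0
```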
So, maybe I need not have erased it; I could have continued there as well, but it is okay, we will see if we need more space. Now look at W_{P+1} again from its defining expression. Can I not say straight away that V_{P+1} belongs to span{W_{P+1}} + span{W_1, W_2, ..., W_P}? There is one component along W_{P+1}, and the rest lies in the subspace spanned by W_1 through W_P, right. And by our induction hypothesis, span{W_1, ..., W_P} is nothing but span{V_1, ..., V_P}. Now, what can you say about the two subspaces in that sum? Can W_{P+1} belong to span{W_1, ..., W_P}? Of course not: it is non-zero and orthogonal to everything there. So when I write this as a sum like so, it is in fact a direct sum, and in a direct sum the dimensions add up. The dimensions adding up means {W_1, ..., W_{P+1}} is a linearly independent set, a basis for a (P+1)-dimensional subspace.

Conversely, W_{P+1} came from the V's: it is V_{P+1} minus something in span{W_1, ..., W_P} = span{V_1, ..., V_P}, so W_{P+1} is contained in span{V_1, ..., V_{P+1}}, and the first P of the W's already lie in span{V_1, ..., V_P}. Therefore the entire span{W_1, ..., W_{P+1}} must be contained inside span{V_1, ..., V_{P+1}}. Now bring in the dimension argument: here is a subspace of dimension P+1 (the W's are linearly independent because they are orthogonal and none of them is zero), and here is another (P+1)-dimensional subspace, with one contained inside the other. They cannot help but be equal.
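The containment-plus-dimension argument can be phrased computationally: two spanning sets give the same subspace exactly when both have the same rank and stacking them together does not increase it. A hedged sketch (the helper name `same_span` and the example vectors are my own):

```python
import numpy as np

def same_span(A, B, tol=1e-10):
    """Rows of A and rows of B span the same subspace iff
    rank(A) == rank(B) == rank of the two stacked together:
    equal dimension plus containment forces equality."""
    rank = lambda M: np.linalg.matrix_rank(M, tol=tol)
    return rank(A) == rank(B) == rank(np.vstack([A, B]))

# V1, V2, V3 linearly independent; W1, W2, W3 their Gram-Schmidt output.
V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
W = np.array([[1.0, 1.0, 0.0],
              [0.5, -0.5, 1.0],
              [-2/3, 2/3, 2/3]])
print(same_span(V, W))  # True
```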
This, along with the dimension argument, leads us to conclude that span{W_1, W_2, ..., W_{P+1}} so constructed is the same as span{V_1, V_2, ..., V_{P+1}}, which proves that the Gram-Schmidt procedure works; you can keep going inductively. So if I give you a set of vectors that is nothing but linearly independent, coming from an inner product space, you can always go ahead and construct an orthogonal set of vectors and pose that same Ax = b question, and that is fantastic. Because all you are asking for is a spanning set for your column span: go ahead, find the corresponding orthogonal set using the Gram-Schmidt procedure, and solve the problem.

In the next lecture, we shall see how this sort of idea allows us to approximate vectors as best we can. That is to say, when the vector b does not necessarily belong to the column span of A, we cannot just give up. There are several reasons why it may not belong: maybe the data was noisy, maybe the experiment was flawed at some level. But we still have to do the best we can, which is to get the best possible approximation, and we shall see that it is this idea of orthogonality and projection that allows us to do so: the approximation is best precisely when the error is orthogonal to the subspace onto which you are projecting the vector. It is very similar to the example I gave earlier of a kid running after a balloon that is flying high. The kid cannot fly, so the closest the kid can get to the balloon is to stand directly beneath it, so that the error vector between the kid's location and the balloon's location is perpendicular to the ground (again assuming a flat earth). We shall see that sort of result in the next lecture. Thank you.
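Putting the inductive steps together, the whole procedure is a short loop. A minimal sketch of classical, unnormalized Gram-Schmidt as described in the lecture (the function name and test vectors are my own):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a linearly independent list V1, ..., Vn into an
    orthogonal list W1, ..., Wn with span{W1..Wk} = span{V1..Vk}
    for every k, by repeating the inductive step."""
    ws = []
    for v in vectors:
        w = np.asarray(v, dtype=float).copy()
        for u in ws:
            # Subtract the component of v along each earlier W.
            w -= (np.dot(v, u) / np.dot(u, u)) * u
        if np.linalg.norm(w) < 1e-12:
            raise ValueError("input vectors are linearly dependent")
        ws.append(w)
    return ws

ws = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
# Every pair of outputs is orthogonal (up to floating-point error):
print(all(abs(np.dot(ws[i], ws[j])) < 1e-10
          for i in range(3) for j in range(i + 1, 3)))  # True
```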