This is the Gram-Schmidt process, the formula that I have written down; the idea is to construct an orthonormal set from a linearly independent set. This procedure can also be used to determine whether the set u1, u2, etc. that we started with is linearly dependent. Let me explain that. Suppose the set is linearly dependent. Then, as we have learnt, there is at least one vector which can be written as a linear combination of the preceding vectors. Of course, I am assuming that the zero vector does not belong to the set; if the zero vector is already there, you do not have to check linear dependence. So I will assume that these are non-zero vectors, and I want to see how it follows from the procedure that the set is linearly dependent. There exists some u_{m+1} that can be written as a linear combination of the preceding vectors, say u_{m+1} = γ1 u1 + γ2 u2 + … + γm um.
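This dependence test can be sketched in code. The following is a minimal sketch, assuming NumPy; the function name, the tolerance `tol`, and the test vectors are my own choices, not from the lecture. At each stage we subtract the projections onto the vectors built so far; if the remainder w is (numerically) the zero vector, the current input vector depends on its predecessors.

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Orthonormalize `vectors` in order; return (basis, dependent_index).

    dependent_index is None if the input set is linearly independent,
    otherwise the index of the first vector that is a linear
    combination of its predecessors (the stage where w becomes zero).
    """
    basis = []
    for k, u in enumerate(vectors):
        w = np.asarray(u, dtype=float)
        for v in basis:
            w = w - np.dot(w, v) * v   # subtract the projection onto v_j
        if np.linalg.norm(w) < tol:    # w = 0 => u depends on its predecessors
            return basis, k
        basis.append(w / np.linalg.norm(w))
    return basis, None

# A dependent set: the third vector is the sum of the first two,
# so the procedure reports dependence at index 2.
_, idx = gram_schmidt([[1, 0, 1], [0, 1, 1], [1, 1, 2]])
```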
Now u_{m+1} belongs to span{u1, u2, …, um}, which in turn equals span{v1, v2, …, vm}; what is special about v1, v2, …, vm is that they are orthonormal vectors. So when u_{m+1} belongs to this span, then, as we saw yesterday, u_{m+1} can be written as a scalar times v1 plus a scalar times v2 and so on, where the scalars are the inner products of u_{m+1} with each of these vectors: u_{m+1} = ⟨u_{m+1}, v1⟩ v1 + ⟨u_{m+1}, v2⟩ v2 + … + ⟨u_{m+1}, vm⟩ vm. This is the expression for u_{m+1} in terms of the orthonormal vectors v1, v2, …, vm; in summation form, with u_{m+1} fixed, u_{m+1} = Σ_{j=1}^{m} ⟨u_{m+1}, v_j⟩ v_j. From this, can you see that w_{m+1} = 0? The expression for w_{m+1} is exactly u_{m+1} minus this sum. So what does this convey? If at any stage you get the zero vector, then you have linear dependence: w_{m+1} is the vector that we determine at the (m+1)-th stage, and if it turns out to be the zero vector, then the vectors we started with must be linearly dependent; in fact, u_{m+1} is a linear combination of the preceding m vectors. The whole argument can be traced back: w_{m+1} = 0 if and only if u_{m+1} equals this sum, which tells you that u_{m+1} is a linear combination of the preceding m vectors. So one can check linear dependence also by using the Gram-Schmidt procedure. Is that clear? Let us now look at two numerical examples and see how this procedure gives us orthonormal vectors. First the case of R^3: V is R^3 with the usual inner product, and I take the following vectors: u1 = (1, 0, 1), u2 = (0, 1, 1) and u3 = (1, 1, 0). You can verify that these vectors are linearly independent, so they form a basis for R^3, and we would like to construct an orthonormal basis for R^3 starting with this linearly independent set. So let us do the computation. The inner product is the usual inner product, so the norm is the Euclidean norm. If
you look at ‖u1‖, that is √2, and the formula for v1 is u1/‖u1‖, which gives v1 = (1/√2)(1, 0, 1); it is clear that ‖v1‖ = 1, for that is how it has been constructed. To construct v2 we must first construct w2: from the formula, w2 = u2 − ⟨u2, v1⟩ v1. Here ⟨u2, v1⟩ = (0 + 0 + 1)/√2 = 1/√2, and v1 carries another factor of 1/√2, so w2 = (0, 1, 1) − (1/2)(1, 0, 1) = (−1/2, 1, 1/2), which I will write as (1/2)(−1, 2, 1). That is w2. This does not have norm 1, so I define v2 = w2/‖w2‖. Now ‖w2‖² = (1/4)(1 + 4 + 1) = 6/4 = 3/2, so v2 = √(2/3) · (1/2)(−1, 2, 1) = (1/√6)(−1, 2, 1). This is my v2. For one thing, ‖v2‖² = (1 + 4 + 1)/6 = 1, and w2 must be orthogonal to v1, which is clear: the first component contributes −1/2 and the third +1/2, so the dot product is 0; I am just checking the calculations. Next, v3 is constructed using w3 = u3 − ⟨u3, v1⟩ v1 − ⟨u3, v2⟩ v2. Here ⟨u3, v1⟩ = 1/√2, and since v1 carries another 1/√2, the first correction term is (1/2)(1, 0, 1). Similarly ⟨u3, v2⟩ = (−1 + 2 + 0)/√6 = 1/√6, and v2 carries another 1/√6, so the second correction term is (1/6)(−1, 2, 1). Thus w3 = (1, 1, 0) − (1/2)(1, 0, 1) − (1/6)(−1, 2, 1) = (1 − 1/2 + 1/6, 1 − 1/3, −1/2 − 1/6) = (2/3, 2/3, −2/3); please check these calculations. If you look at the dot products of this vector with v1 and v2, both are 0. And for w3: what is
the norm of w3? We have v3 = w3/‖w3‖, and ‖w3‖² = (4/9) · 3 = 4/3, so 1/‖w3‖ = √3/2 and v3 = (√3/2) · (2/3)(1, 1, −1) = (1/√3)(1, 1, −1). That is v3. You can now verify that ‖v3‖ = 1, and we have already verified that it is orthogonal to v1 and v2. So this gives an orthonormal basis; let me just summarize: v1 = (1/√2)(1, 0, 1), v2 = (1/√6)(−1, 2, 1) and v3 = (1/√3)(1, 1, −1). This is what we get after applying the GS process, and it has the required properties: span{v1} = span{u1}, span{v1, v2} = span{u1, u2} and span{v1, v2, v3} = span{u1, u2, u3}. That was the discrete, that is, the finite-dimensional case. Let us now look at a linearly independent set coming from C[−1, 1] with the usual inner product. I start with the usual linearly independent set: f0(t) = 1, f1(t) = t, f2(t) = t². This vector space has a basis consisting of infinitely many elements; I am just taking three linearly independent functions, applying the GS process to this set, and obtaining orthonormal functions. This will involve computing integrals. What is ‖f0‖? I will use g0, g1, g2 for the orthonormal functions, and for the intermediate w's I think I will use h. Now ‖f0‖² = ∫_{−1}^{1} |f0(t)|² dt = ∫_{−1}^{1} 1 dt = 2, so ‖f0‖ = √2, and I define g0(t) = 1/√2; then g0 has norm 1. This is the first function. We now construct the second function, orthogonal to this one with respect to this inner product; following the procedure, from g0 = f0/‖f0‖ we construct g1 via h1, and h1(t) is given as follows. So what
is the formula? I take the second vector, f1, and compute h1 = f1 − ⟨f1, g0⟩ g0; these f's play the role of my u's, and the h's play the role of the w's. What is ⟨f1, g0⟩? It is ∫_{−1}^{1} t · (1/√2) dt = 0, since the integrand is odd, which means h1 is just f1, and I have only to divide by the norm. Now ‖h1‖² = ‖f1‖² = ∫_{−1}^{1} t² dt = 2/3, so g1 = h1/‖h1‖ = √(3/2) t. This is g1. Finally I need g2, so I need h2 = f2 − ⟨f2, g0⟩ g0 − ⟨f2, g1⟩ g1. The term ⟨f2, g1⟩ involves t² times t, an odd function, so that third term is 0; the second term, however, involves even functions and is not 0. For ⟨f2, g0⟩ we get ∫_{−1}^{1} t² · (1/√2) dt = (1/√2)(2/3) = √2/3. So h2 = t² − (√2/3) g0 = t² − (√2/3)(1/√2) = t² − 1/3. Finally we need to compute the norm of this and then divide h2 by the norm; that gives g2. So what is ‖h2‖²? It is ∫_{−1}^{1} (t² − 1/3)² dt; let us do the calculation, hopefully without mistakes. The integrand is t⁴ − (2/3)t² + 1/9, whose integral over [−1, 1] is 2/5 − (2/3)(2/3) + 2/9 = 2/5 − 4/9 + 2/9 = 2/5 − 2/9 = 8/45. That is the norm squared. Which means my g2 is h2/‖h2‖ = √(45/8) h2, and 45 is 9 into 5, so √45/√8
that is, 3√5/(2√2), times h2, where h2 = t² − 1/3. The computations are similar; the formula is the same, only this one is a little more complicated. But what is the moral of the story? Let me write down the functions that we have got: g0 is the constant function 1/√2, g1 is √(3/2) times t, and g2 is another constant times t² − 1/3. Have we seen these functions before? A constant, a multiple of t, a multiple of t² − 1/3, and so on: suitable multiples of these are the Legendre polynomials. That is the purpose of giving you this example. Legendre polynomials come up when you model certain physical problems, via the Legendre differential equation; they have some nice properties which you must have studied in your differential equations course. You apply what is called the power series method to the differential equation, and for some values of n you get polynomial solutions; these are those polynomials. So it is just to tell you that these subjects are not separate: differential equations, linear algebra and many other subjects are all interconnected, and it is only to illustrate that point that I wanted to do this example. That is the GS process and examples. I now want to discuss one application of the GS process. It might appear to be a completely different situation, but you can apply the GS process to infer the following. What I would like to do, let me tell you beforehand, is to derive a certain decomposition for a matrix with real or complex entries, given that its columns are linearly independent; it is called the QR decomposition. So what exactly is the problem? Let us say we are given a matrix A in R^m
cross n; so we have a matrix with m rows and n columns, such that the n columns are linearly independent. Let me denote the n columns by a1, a2, …, an, so I am writing A = [a1 a2 … an], where a_i is the i-th column of A; let us also remember that each a_i belongs to R^m, since there are m rows. So I am writing A just by using its columns. Now these columns are linearly independent, so I consider the set {a1, a2, …, an}, a linearly independent set of vectors, and I apply the Gram-Schmidt process to obtain an orthonormal set. That is: given a rectangular matrix A with linearly independent columns, consider the column vectors as linearly independent vectors and apply the Gram-Schmidt process to get n orthonormal vectors in R^m. I will call those vectors q1, q2, …, qn; just to emphasize again, each of these vectors belongs to R^m. These two sets together satisfy the properties we have seen: span{a1} = span{q1}, which means q1 is a multiple of a1 (in fact that is how we construct it); span{a1, a2} = span{q1, q2}; span{a1, a2, a3} = span{q1, q2, q3}; and we can proceed in this manner, the last but one relation being span{a1, a2, …, a_{n−1}} = span{q1, q2, …, q_{n−1}} and finally span{a1, …, an} = span{q1, q2, …, qn}. So we have these equations between subspaces. From the first equation, a1 is a multiple of q1; really, q1 is a non-zero multiple of a1, so you can divide, and a1 is a multiple of q1, which I will write as a1 = r11 q1. Look at a2: it belongs to span{a1, a2} and so it belongs to span{q1, q2}, so a2 is a linear combination of q1 and q2, which I will write as a2 = r12 q1 + r22 q2. The
next equation is similar, and I proceed like this. The last but one gives me a_{n−1} = r_{1,n−1} q1 + r_{2,n−1} q2 + … + r_{n−1,n−1} q_{n−1}; the second index is fixed, it is the first one which changes, and you go only up to the (n−1)-th term. The last one will have all n terms: a_n = r_{1n} q1 + … + r_{nn} qn. Can you see it when I write it here? Now, can r11 be 0? It cannot, because I started with a linearly independent set, so the zero vector is not present: if r11 were 0, then a1 would be 0, and that is not possible. Can r22 be 0? Suppose r22 = 0; then I am writing a2 in terms of q1 alone, and q1 can be written in terms of a1, so I am writing a2 in terms of a1, which is not possible since a1, a2 are independent. And so on: can r_{n−1,n−1} be 0? For the same reason it cannot: if it were 0, I would be writing a_{n−1} in terms of q1, …, q_{n−2}, that is, in terms of a1, …, a_{n−2}, which would mean {a1, …, a_{n−2}, a_{n−1}} is linearly dependent, and that is not possible. So all these diagonal coefficients are non-zero. So if I form a triangular matrix with these as diagonal entries, that matrix should be invertible. Let us call that matrix R: its first row is r11, r12, …, r1n; the second row is 0, r22, …, r2n; the third row is 0, 0, r33, …, r3n; and so on down to the last row, whose only non-zero entry is r_{nn}. This is my matrix R, an upper triangular matrix: the entries below the principal diagonal are 0, and the diagonal entries are not 0. Note that r_{ii} ≠ 0 for all i, and so this is an invertible matrix.
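Since the q_i are orthonormal, taking the inner product of a_j = r_{1j} q1 + … + r_{jj} qj with q_i shows that the coefficients are r_{ij} = ⟨a_j, q_i⟩ for i ≤ j, and 0 below the diagonal. The construction of Q and R can be sketched as follows; this is a minimal sketch assuming NumPy, with function and variable names of my own choosing, using the example matrix from earlier in the lecture.

```python
import numpy as np

def gs_columns(A):
    """Orthonormalize the columns of A (assumed linearly independent)."""
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        w = A[:, j].astype(float)
        for i in range(j):
            # subtract the projection of a_j onto q_i
            w -= np.dot(A[:, j], Q[:, i]) * Q[:, i]
        Q[:, j] = w / np.linalg.norm(w)
    return Q

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 0.]])
Q = gs_columns(A)
# r_ij = <a_j, q_i> for i <= j; entries below the diagonal are 0.
R = np.array([[np.dot(Q[:, i], A[:, j]) if i <= j else 0.0
               for j in range(3)] for i in range(3)])
```

Here R comes out upper triangular with non-zero diagonal, and the product QR reproduces A, which is exactly the decomposition being derived.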
Q is the matrix whose columns are q1, q2, …, qn. Now look at the product QR: when you do this multiplication, the first column of QR is r11 q1 plus 0 times q2 and so on up to qn, so I have only one term, r11 q1; the next column is r12 q1 + r22 q2; and so on, the last column having all the terms r_{1n} q1 + … + r_{nn} qn. But these expressions are already there in our equations: r11 q1 is a1, r12 q1 + r22 q2 is a2, and so on up to a_n. So QR is the matrix that we started with: QR = A. This is called the QR decomposition of a matrix, given that the columns of the matrix are linearly independent. But so far we have not made use of the fact that the q_i's are orthonormal. Remember Q is a rectangular matrix; the order of Q is the same as the order of A, so Q is m × n and Q^T Q is n × n. So what can you say about Q^T Q? What do you know about the columns of Q? Can you see that Q^T Q is the identity of order n? But if you look at Q Q^T, you cannot expect that to be equal to the identity, because in the first place Q Q^T is of order m. A square matrix with real entries satisfying Q^T Q = I is called an orthogonal matrix; if you have a matrix with complex entries satisfying Q* Q = I, it is called a unitary matrix. And let us remember that in the square case, if Q^T Q is the identity then Q Q^T also has to be the identity. So this matrix Q has this extra property, which comes primarily from the fact that its columns are orthonormal vectors. So what we have proved is the following theorem, the QR decomposition: let A belong to R^{m×n} whose columns, let
me put it like this, whose columns are linearly independent. Then there exist matrices Q and R, where Q has the same order as A, that is, Q ∈ R^{m×n}, and R ∈ R^{n×n} is a square matrix, such that the following conditions hold: first, the decomposition A = QR; second, Q^T Q = I_n; and third, R is invertible. The idea behind studying decompositions of a matrix is that problems involving the matrix A can be reduced to problems involving Q and R; it gives a kind of reduction. I will explain that a little later, but let us first work out an example. So the theorem has been proved; let us look at the example we did before today, the first Gram-Schmidt example. The matrix A has the three columns a1 = (1, 0, 1), a2 = (0, 1, 1), a3 = (1, 1, 0); these are linearly independent vectors. By the way, when I write vectors I interchangeably write them as rows and columns; from the context it should be clear whether a row or a column is meant, and it is only for convenience that I write them as row vectors. But remember that a1, a2, a3 are the column vectors of A, so strictly speaking I must write transposes; at times I might omit that, but just make sure that the compatibility for matrix multiplication and so on is handled correctly. For these vectors we obtained q1 = (1/√2)(1, 0, 1), q2 = (1/√6)(−1, 2, 1) and q3 = (1/√3)(1, 1, −1). So the matrix Q in this problem has first column (1/√2, 0, 1/√2), second column (−1/√6, 2/√6, 1/√6) and third column (1/√3, 1/√3, −1/√3); you have to just write down all these. This is my Q. To determine R, it seems I need to solve those equations, but
that is not necessary. We have A = QR, and I can pre-multiply by Q^T: Q^T A = Q^T Q R, and this is where we use the second property, Q^T Q = I, so the right-hand side is just R. Hence R = Q^T A: just do that one matrix multiplication to get R, and R must come out to be an upper triangular matrix (I am not simplifying the fractions; I leave them as they are). So computationally you do not have to solve those equations: the property Q^T Q = I saves the effort of computing R that way; just pre-multiply, one matrix multiplication, which is much easier. Let me stop here; next we will discuss another application of QR with regard to what are called least squares solutions of linear systems. So I will stop here.
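The computation just described, R = Q^T A for the worked example, can be checked numerically. This is a minimal sketch assuming NumPy, with variable names of my own; the columns of Q are the orthonormal vectors obtained earlier from the Gram-Schmidt example.

```python
import numpy as np

# Columns of A and the orthonormal q's from the Gram-Schmidt example.
A = np.column_stack(([1, 0, 1], [0, 1, 1], [1, 1, 0])).astype(float)
Q = np.column_stack((np.array([1, 0, 1]) / np.sqrt(2),
                     np.array([-1, 2, 1]) / np.sqrt(6),
                     np.array([1, 1, -1]) / np.sqrt(3)))

# Since Q^T Q = I, pre-multiplying A = QR by Q^T gives R in one step.
R = Q.T @ A

# R is upper triangular up to rounding, and Q R recovers A.
```

One matrix multiplication replaces solving the triangular system of equations for the r_{ij}, which is the point made in the lecture.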