So, up until this point we have made no claim about the uniqueness of the best approximation. It turns out that if you have a best approximation of a vector v ∈ V in the subspace U, there can be no other vector of U that is also a best approximation: the best approximation is unique. That is what we are going to show now.

The setup, as before: U is a subspace of an inner product space V, v ∈ V, and suppose u₁, u₂ ∈ U are both best approximations of v in U. By now you will recognize the familiar pattern. Write

‖v − u₁‖² = ‖(v − u₂) + (u₂ − u₁)‖² = ⟨(v − u₂) + (u₂ − u₁), (v − u₂) + (u₂ − u₁)⟩.

And what can I say about u₂ − u₁? Where does it come from? From U, of course, because u₁ and u₂ both come from U, so u₂ − u₁ must also lie in U. I am going to use the same fact I have just developed. Expanding, the first term is ⟨v − u₂, v − u₂⟩ = ‖v − u₂‖². What about the cross terms ⟨v − u₂, u₂ − u₁⟩ and ⟨u₂ − u₁, v − u₂⟩? Look: both u₁ and u₂ are claiming to be best approximations, so v − u₁ must be orthogonal to every vector in U, and v − u₂ must also be orthogonal to every vector in U; and u₂ − u₁ is exactly one such vector in U, so it is orthogonal to v − u₁ as well as to v − u₂. Hence both cross terms vanish, and the only other thing that remains is ⟨u₂ − u₁, u₂ − u₁⟩ = ‖u₂ − u₁‖².

But both are best approximations, which means ‖v − u₁‖ = ‖v − u₂‖, so what is left to prove? Essentially nothing: the two equal sides force ‖u₂ − u₁‖² = 0, and a norm vanishes only when its argument is the zero vector, so u₂ = u₁. Hereafter we shall speak not of "a" best approximation of a vector but of "the" best approximation, because whenever it exists, it is the unique best approximation and nothing but that one.
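For reference, here is the whole uniqueness computation in one displayed chain — a restatement of the argument above, nothing new:

```latex
\begin{aligned}
\|v-u_1\|^2 &= \|(v-u_2)+(u_2-u_1)\|^2 \\
            &= \|v-u_2\|^2
               + \underbrace{2\,\mathrm{Re}\,\langle v-u_2,\; u_2-u_1\rangle}_{=\,0,\ \text{since } v-u_2\,\perp\,U \text{ and } u_2-u_1\in U}
               + \|u_2-u_1\|^2 .
\end{aligned}
```

Since ‖v − u₁‖ = ‖v − u₂‖, this identity forces ‖u₂ − u₁‖² = 0, i.e. u₁ = u₂.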
Have we shown existence? No, we have not: that a best approximation exists has been a standing assumption, but when U is a finite dimensional vector space we no longer need to assume it; existence is guaranteed. Recall that before stating the first result I wrote "suppose v has a best approximation v̂" precisely because that is a supposition: you cannot take it for granted that a best approximation exists, you have to assume it, and under that assumption the search leads you to the best approximation once you impose the orthogonality condition on the error vector. We will now show existence when the chosen subspace U is finite dimensional. Remember, I am not assuming V is finite dimensional; V can be infinite dimensional. And by no means am I saying that an infinite dimensional subspace cannot admit a best approximation; only that for a finite dimensional subspace it always exists.

So, the next result: let U be a subspace of an inner product space V, and suppose U is a finite dimensional vector space in its own right. Then for every v ∈ V there exists v̂ ∈ U such that v̂ is the best approximation of v in U. The proof is quite straightforward: we bring back our old friend, the Gram–Schmidt procedure, and invoke it. If U is a finite dimensional subspace of an inner product space, what can we say about a basis? First, any basis is finite; and since a basis is a linearly independent set, I can always apply the Gram–Schmidt procedure to it and turn it into an orthonormal set of vectors, which is still linearly independent because it contains no zero vector. So by the Gram–Schmidt procedure there exists B = {u₁, u₂, …, uₖ} such that B is an orthonormal basis for U; the dimension of U is then k. Now look at

v̂ = ⟨v, u₁⟩u₁ + ⟨v, u₂⟩u₂ + ⋯ + ⟨v, uₖ⟩uₖ,

the sum over i = 1, …, k of ⟨v, uᵢ⟩uᵢ. Orthonormal, remember, means the vectors are pairwise orthogonal and the norm of each is unity; they have all been normalized. My claim, obviously, is that this v̂ is the best approximation of v in the finite dimensional subspace U.

How should we prove that? We will use the equivalent condition that v − v̂ be orthogonal to every vector in U. But do I actually need to check every vector in U? If I show that v − v̂ is orthogonal to every member of some basis — I am going to use this basis B, but any basis would have worked — then it is orthogonal to every vector in U, because every vector is after all a linear combination of the basis vectors. Indeed, any arbitrary u ∈ U can be written as u = α₁u₁ + ⋯ + αₖuₖ, and if ⟨v − v̂, uᵢ⟩ = 0 for each i, then term by term the inner product ⟨v − v̂, u⟩ also vanishes. So it suffices to check the k basis vectors, which we do next (a small numerical sketch of the construction comes first).
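Here is the construction as concrete code — a minimal sketch in ℝⁿ with the standard dot product; the example vectors are made up for illustration:

```python
import numpy as np

def gram_schmidt(basis):
    """Turn a linearly independent list of vectors into an orthonormal list."""
    ortho = []
    for b in basis:
        w = b - sum(np.dot(b, u) * u for u in ortho)  # remove components along earlier vectors
        ortho.append(w / np.linalg.norm(w))           # normalize to unit length
    return ortho

def best_approximation(v, basis):
    """v_hat = sum_i <v, u_i> u_i over an orthonormal basis of U."""
    return sum(np.dot(v, u) * u for u in gram_schmidt(basis))

# U = span{(1,1,0), (0,1,1)} inside R^3
basis = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])]
v = np.array([3.0, -1.0, 2.0])
v_hat = best_approximation(v, basis)
print(v_hat)                                   # the (unique) best approximation of v in U
print([np.dot(v - v_hat, u) for u in basis])   # both ~0: the error is orthogonal to U
```

The second printed line is exactly the equivalence condition we are about to verify in general.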
So let us compute. To avoid confusing the summation index with the fixed basis vector — using uᵢ for both creates a bit of confusion — fix some arbitrary r ∈ {1, 2, …, k} and consider

⟨v − v̂, uᵣ⟩ = ⟨v, uᵣ⟩ − ⟨v̂, uᵣ⟩.

The first term is just ⟨v, uᵣ⟩. What about the second? When I take the inner product of the sum ⟨v, u₁⟩u₁ + ⋯ + ⟨v, uₖ⟩uₖ with uᵣ, what gets picked out? Remember the orthonormality assumption on the set B: for any i ≠ r, ⟨uᵢ, uᵣ⟩ = 0, so the only term that survives is the one involving uᵣ itself, and since uᵣ is a unit norm vector, what survives is ⟨v, uᵣ⟩‖uᵣ‖² = ⟨v, uᵣ⟩; everything else vanishes. Opening it up, in case that is not clear: ⟨v, u₁⟩⟨u₁, uᵣ⟩ is 0 unless r = 1, ⟨v, u₂⟩⟨u₂, uᵣ⟩ is 0 unless r = 2, and so on until I hit the requisite index i = r, where by orthonormality ⟨uᵣ, uᵣ⟩ = ‖uᵣ‖² = 1. So whether I write it all out or not, the only surviving coefficient is ⟨v, uᵣ⟩, and therefore

⟨v − v̂, uᵣ⟩ = ⟨v, uᵣ⟩ − ⟨v, uᵣ⟩ = 0,

which is exactly what I was required to prove.

So what does this prove? That in a finite dimensional subspace of an inner product space there is always a best approximation, and this has very profound implications. Take the space of polynomials, for instance: it is not finite dimensional, but if I restrict the polynomials to some maximum degree, it becomes a finite dimensional vector space. So if I give you a polynomial of high degree and ask you to approximate it by a lower-degree polynomial, a best approximation always exists. In general — note I have not said that V must be infinite dimensional — say V is a space of 12th-degree polynomials. A 12th-degree polynomial has 13 coefficients, and suppose I am not comfortable dealing with 13 coefficients; I want to deal with fewer variables, say 4, which means cubic polynomials. Then the best approximation of a 12th-degree polynomial by a cubic polynomial will exist, provided you have a suitably well-defined inner product: that inner product induces a norm, and in the sense of that norm you can talk about the best approximation. You can stretch your imagination to all those abstract inner product spaces we discussed in the first class on inner products, think of finite dimensional subspaces sitting inside some vector space, take any arbitrary vector in the space, and think of its best approximation.
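To make the 12th-degree-to-cubic example tangible — a sketch under an assumed inner product, namely the coefficient inner product ⟨f, g⟩ = Σᵢ aᵢbᵢ that appears later in this lecture. Under it the monomials 1, x, x², x³ are already an orthonormal basis of the cubic subspace, so the formula v̂ = Σ⟨v, uᵢ⟩uᵢ reduces to truncating the coefficient list; a different inner product would give a different best cubic:

```python
import numpy as np

# Made-up coefficients a_0..a_12 of a 12th-degree polynomial
rng = np.random.default_rng(0)
a = rng.normal(size=13)

# Under <f,g> = sum_i a_i b_i the monomials x^i are orthonormal, so the best
# cubic v_hat = sum_{i=0}^{3} <v, x^i> x^i keeps exactly the first four coefficients.
a_hat = np.zeros_like(a)
a_hat[:4] = a[:4]

# The error v - v_hat has zero coefficients in degrees 0..3, hence it is
# orthogonal (in this inner product) to every cubic polynomial.
print(a_hat[:4])
print(np.dot(a - a_hat, a_hat))   # 0.0: the orthogonality condition holds
```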
This is exactly what you do in curve fitting: you do not really know the exact nature of the curve. You have certain data points — say the x and y coordinates of the trajectory of some vehicle, or a football, or a cricket ball — and you are asked to fit a curve to them. If you have 20 points and you want a polynomial fit, you can fit a 19th-degree polynomial exactly, but the actual trajectory need not be a polynomial at all. It can be some very complicated function: a hyperbolic function, a catenary equation, who knows what. In power systems, if you have done electrical engineering, the hanging cables have the equation of a catenary, with hyperbolic terms coming in. Now suppose you want a polynomial approximation of that, say a parabolic approximation. The catenary lives in a wider class of functions, and the space of those functions is infinite dimensional; but as long as you have a well-defined inner product on that space of functions, you can go about seeking orthogonality with respect to it and find the best possible approximation. If the parabolas also fit into that class of functions, then the set of all parabolas is a finite dimensional subspace of the entire space of suitably chosen functions, so you can go ahead and compute the best parabolic approximation of a given function (a small numerical sketch of exactly this follows below).

This has very profound implications. In fact, I would say this part of our syllabus, on inner products, is preceded in importance only by the part on eigenvalues and eigenvectors, which are so important for dynamical systems; the inner product part has the greatest number of applications coming up in various spheres, mainly because of the definition of the inner product and the best approximation. In optimization problems and elsewhere you can always talk about the error, and you will often see that the greater the number of terms you take, the smaller the error gets. So there is always a trade-off: to what degree of approximation will you go? If you approximate by a higher-degree polynomial, you have to fit more data and invest more computational resources; if you take a lower-degree approximation, you are compromising on the error. We will try to talk about certain applications of inner product spaces after we have covered every part of the theory.

(To a question from the class: no, finite dimensionality is sufficient, not necessary. It turns out these are Hilbert spaces, and on closed subspaces of Hilbert spaces you can always do this; finite dimensional subspaces are always closed, so there you will always be able to find a best approximation. We will not use the language of closed spaces here, but we will show some examples at least to motivate it.) That takes us to our next topic, and you will see some similarities with what we have discussed earlier, although the language we used there was different.
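Here is the catenary example in code — a minimal sketch assuming sampled data points and the discrete inner product ⟨f, g⟩ = Σᵢ f(xᵢ)g(xᵢ), under which least-squares fitting is precisely the best approximation in the subspace of parabolas:

```python
import numpy as np

# Sample a catenary y = cosh(x) at 20 points (a stand-in for measured data)
x = np.linspace(-1.0, 1.0, 20)
y = np.cosh(x)

# Best parabolic approximation in the discrete inner product
# <f, g> = sum_i f(x_i) g(x_i): ordinary least squares with degree 2.
coeffs = np.polyfit(x, y, deg=2)
error = y - np.polyval(coeffs, x)
print(coeffs)                    # [c2, c1, c0] of the fitted parabola
print(np.linalg.norm(error))     # small: a parabola tracks cosh well on [-1, 1]
```

Raising the degree would shrink the error further, at the cost of more coefficients: the trade-off described above.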
So, suppose you have a set. This is a subset, not necessarily a vector space; the ambient V is an inner product space, so of course a vector space, but S is just a subset, say S = {s₁, s₂, …, sₖ}. Then we define the orthogonal complement of S in the following manner:

S⊥ = {w ∈ V : ⟨w, s⟩ = 0 for every s ∈ S}.

Notice that I am not required to choose an S that is a subspace, though if I do choose a subspace that is also fine: a subset can be a subspace, every subspace is after all a subset, but not every subset is a subspace. So I am not restricting my choice to subspaces but allowing arbitrary subsets of the vector space. The point is that, irrespective of whether S is a subspace or not, S⊥ is always a subspace.

That is the claim, and it is not very hard to see why; recall the test for a subspace we discussed many moons back. Suppose w₁, w₂ ∈ S⊥ and consider ⟨αw₁ + w₂, s⟩ for some s ∈ S, where α comes from the field, real or complex — in this domain of inner products the field is always ℝ or ℂ. By linearity,

⟨αw₁ + w₂, s⟩ = α⟨w₁, s⟩ + ⟨w₂, s⟩.

But what do we know about these two terms? Since w₁ and w₂ are individually members of S⊥, each inner product vanishes, so the whole expression is 0. Note what happened: I did not choose αw₁ + w₂ knowing it belonged to S⊥; I was testing, and I found that it indeed satisfies the condition for belonging to S⊥, which is the test for a subspace. Therefore, even if S is not a subspace, S⊥ definitely is.

This brings us to the next question: the curious object S⊥⊥ — what can we say about it, we wonder. One thing we can say without fail is that S is contained inside S⊥⊥. The double orthogonal complement is the collection of all vectors orthogonal to everything in S⊥, and the vectors of S are definitely among those. But let us stretch our imagination a bit. At this point it is not even fair to compare S and S⊥⊥: S can be a mere set, whereas once you have taken one orthogonal complementation you already have a subspace, and another complementation keeps it a subspace. So, in order to be fair in our comparison, let us start with an S which is a subspace; then we can pose the question of whether there is any chance of equality between the two. What do you think — should they be equal? Under what sort of restrictions? That is what we should try to answer. (To the suggestion from the class: if you are assuming a basis and taking S to be a finite dimensional subspace, I think you are on the right track — but are you sure? Let us not get ahead of ourselves.)
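In coordinates, S⊥ is easy to compute — a minimal sketch in ℝⁿ with the dot product, where S⊥ is the null space of the matrix whose rows are the vectors of S:

```python
import numpy as np

def orthogonal_complement(S, tol=1e-10):
    """Orthonormal basis of S_perp = {w : <w, s> = 0 for all s in S}."""
    A = np.vstack(S)                    # rows are the vectors of S
    _, sing, Vt = np.linalg.svd(A)
    rank = int(np.sum(sing > tol))
    return Vt[rank:]                    # right-singular vectors for the zero singular values

# S is just a set of two vectors in R^4, not a subspace
S = [np.array([1.0, 0.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0, 0.0])]
for w in orthogonal_complement(S):
    print([round(float(np.dot(w, s)), 12) for s in S])   # all zeros: w is in S_perp
```

The function returns an orthonormal basis, which already reflects the fact just proved: S⊥ is a subspace even when S is not.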
So we do not even have to take a span to cook up a subspace from our subset: let us just concede that S is now a subspace, so that we are not comparing two altogether different types of objects. In that case the question makes sense — both S and S⊥⊥ are subspaces, and the containment S ⊆ S⊥⊥ is, as we have claimed, true — but is the reverse containment, the converse, also going to be true? That is what we are trying to investigate. Let us look at an example to sort of understand; we will not really prove this, but we will try to provoke the question in your minds.

Consider the inner product space V to be the vector space of all polynomials — remember, I am not restricting the degree, so this is an infinite dimensional vector space — and consider

U = {f ∈ V : f(1) = 0}.

Is this a subspace? Of course it is: take any two polynomials which have a root at 1; the resultant polynomial, their sum, also has a root at 1, and scaling leaves the root invariant. So what do you think the orthogonal complement of U is? (Very good question: what is the inner product? Thank you for asking.) We define it in this manner: for f₁(x) = a₀ + a₁x + ⋯ + a_{m₁}x^{m₁} and f₂(x) = b₀ + b₁x + ⋯ + b_{m₂}x^{m₂},

⟨f₁, f₂⟩ = a₀b₀ + a₁b₁ + ⋯ + aₘbₘ, where m = min(m₁, m₂).

So if one is a tenth-degree polynomial and the other a seventh-degree polynomial, you take only the first eight coefficients, starting from the constant term up to the coefficient of the seventh degree, and sum their products: that is the inner product. I should have mentioned it already — I thought of it as very analogous to the Euclidean one, but you do need to spell it out, because different sizes are now possible.

So what do you think U⊥ is going to be? I am saying it is only the zero polynomial, nothing but that. Check it: membership of f in U means that, plugging in x = 1, you get a₀ + a₁ + a₂ + ⋯ + aₖ = 0, whatever the degree k. So you might think: hang on, let me just take 1 + x + x² + x³ + ⋯ — but you have to truncate that at some point, otherwise it fails to be a polynomial, and the moment you truncate it, I will come up with a higher-degree member of U for which it fails to be orthogonal. If you go down that route, the candidate 1 + x + x² + x³ + ⋯ becomes an infinite series, not a polynomial anymore. So among the polynomials, the only fellow that, in terms of this inner product, annihilates every member of U is just the zero polynomial.
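The escalation argument can be run numerically — a sketch with hypothetical example polynomials, implementing the inner product by zero-padding coefficient lists, which gives the same value as truncating at the minimum degree since the shorter polynomial's missing coefficients are zero:

```python
import numpy as np

def coeff_inner(f, g):
    """<f, g> = sum_i a_i b_i, padding the shorter coefficient list with zeros."""
    n = max(len(f), len(g))
    f = np.pad(np.asarray(f, float), (0, n - len(f)))
    g = np.pad(np.asarray(g, float), (0, n - len(g)))
    return float(np.dot(f, g))

q = np.ones(26)                                # q(x) = 1 + x + ... + x^25

p1 = np.zeros(6);  p1[0], p1[5]  = -1.0, 1.0   # p1(x) = x^5  - 1, p1(1) = 0, so p1 in U
p2 = np.zeros(27); p2[0], p2[26] = -1.0, 1.0   # p2(x) = x^26 - 1, p2(1) = 0, so p2 in U

print(coeff_inner(q, p1))   #  0.0 -- so far q looks like a candidate for U_perp
print(coeff_inner(q, p2))   # -1.0 -- a degree-26 member of U defeats q
```

Whatever degree you truncate q at, one more degree in U breaks the orthogonality; only the zero polynomial survives every such test.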
To spell that out: if a 25th-degree polynomial f belongs to U, then a₀ + a₁ + ⋯ + a₂₅ = 0, so you might think: hang on, let me choose q(x) = 1 + x + ⋯ + x²⁵; q's inner product with any such f is exactly that vanishing sum, hence 0. So it seems the complement contains not just the trivial polynomial but this non-trivial q as well. But if I choose a member of U of degree 26, then q no longer satisfies the condition. You increase the truncation from 25 to 26; however much you increase it, I will keep increasing the degree by 1. To exhaust all possibilities you would have to take the candidate to countably infinitely many terms, but the so-called polynomial of countably infinite degree — an abuse of notation — is not really a polynomial: it is a series, and it does not belong to V at all, so forget about it being in the orthogonal complement. Hence the only polynomial whose inner product with every such member of U is 0 is the zero polynomial, and therefore U⊥ = {0}.

Then what is U⊥⊥? This rests on a very generic fact: in any inner product space, what is the orthogonal complement of the zero subspace {0}? Every vector is orthogonal to 0, so U⊥⊥ = {0}⊥ is exactly the entirety of V. And look: V is obviously going to contain U, but V is also strictly bigger than U. So in general you cannot expect equality; the containment U ⊆ U⊥⊥ is the best you can expect. Only when you are dealing with finite dimensional vector spaces — that is, if the V you start with is finite dimensional — can you definitely be assured, and you will see through something called projection maps that in finite dimensions the double orthogonal complement gives you back the original subspace.

I would just end this part with one reminder. Remember the double dual we spoke about? Unless the space was finite dimensional, the natural isomorphism with the double dual did not hold — I told you the double dual is bigger than the original vector space. In the same spirit, the double orthogonal complement in general contains the original subspace, and only in the case of finite dimensional vector spaces can you say they are one and the same — not just isomorphic, but literally the same. So that is just a little quick comment. Okay. So, yeah.
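As a postscript, a quick coordinate check of the finite dimensional case — reusing the null-space idea from the earlier sketch, in ℝ³, where taking the complement twice returns the original subspace:

```python
import numpy as np

def perp(S, tol=1e-10):
    """Orthonormal basis of the orthogonal complement of span(S) in R^n."""
    A = np.vstack(S)
    _, sing, Vt = np.linalg.svd(A)
    return Vt[int(np.sum(sing > tol)):]

U = [np.array([1.0, 2.0, 0.0])]   # a line in R^3
U_perp = perp(U)                  # a plane (two orthonormal vectors)
U_perp_perp = perp(U_perp)        # back to a line
print(U_perp_perp)                # spans the same line as U, up to sign and scale
```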