So, if S is finite dimensional, then what do you think S⊥ is going to be? We will see shortly — it is in fact the next result — that any inner product space, whether it is finite dimensional or not, admits a direct sum decomposition into S and its orthogonal complement. In particular, if the original vector space is infinite dimensional and S is finite dimensional, then the orthogonal complement of S has to be infinite dimensional.

And what about the orthogonal complement of the orthogonal complement — why would you expect to get back S, and hence something finite dimensional? We will not get there in this course, but it all comes down to the closedness property one has in Hilbert spaces: if the subspace is closed, you can go ahead and do it; if it is not closed, you cannot. We will not prove this — let us not get too technical — but note that every finite dimensional subspace is closed. As an aside: Hilbert spaces are special cases of Banach spaces, and Banach spaces are spaces with norms — we spoke about this very briefly in the very first lecture. Norms can come from different sorts of motivations, but if the norm happens to be defined in terms of an inner product, then you have a Hilbert space. So a Hilbert space has an inner product, and that inner product automatically brings a norm with it; but even if you do not have an inner product you can still have a norm — the notion of a distance can be defined irrespective of whether you have an inner product or not — and that is a Banach space.

Next, suppose U is a subspace of an inner product space V, and — this is the big assumption here — suppose that for every v in V there exists a v̂ in U such that v̂ is the best approximation of v in U. Then V = U ⊕ U⊥, and the proof is much easier than the statement makes it seem.

Consider any v in V; we can write v = (v − v̂) + v̂. Where does each piece live? From the very characterization of the best approximation, the error vector v − v̂ must be orthogonal to every vector in U, so it belongs to the orthogonal complement of U. And v̂, needless to say, belongs to U, because it is the best approximation of v in U. So this already shows that V is at least the sum of U and U⊥; whether the sum is direct is left to prove. The moment you are told that every vector has a best approximation, you can go ahead and write any arbitrary vector in this form — as a sum of two vectors, one from U and one from U⊥.

Now what we have to show is that the sum of U and U⊥ is direct, and the straightforward way to do that is to look at what lies in the intersection of U and U⊥.
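A quick numerical sketch of this decomposition may help before we check the intersection. This is a minimal illustration, assuming U is the column space of a matrix A with independent columns in R⁵; the names A, v_hat, residual are mine, chosen for the sketch, not the lecture's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))   # columns form a basis of the subspace U
v = rng.standard_normal(5)        # an arbitrary vector in V = R^5

# Best approximation of v in U: v_hat = A (A^T A)^{-1} A^T v
v_hat = A @ np.linalg.solve(A.T @ A, A.T @ v)
residual = v - v_hat              # the error vector v - v_hat

# v splits as v = v_hat + residual, with v_hat in U ...
assert np.allclose(v, v_hat + residual)
# ... and the residual orthogonal to every basis vector of U,
# i.e. residual lies in the orthogonal complement of U.
assert np.allclose(A.T @ residual, 0)
```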
So suppose v̄ belongs to U ∩ U⊥. Then it belongs to U; and when I look at the very same vector as the second argument of an inner product — it is the same vector, but it has a dual identity — it belongs to U⊥. What do we know about the inner product of a vector in U with a vector in U⊥? It is 0 by definition. So ⟨v̄, v̄⟩ = 0, which means the norm of v̄ is 0, which means anything in this intersection can be nothing other than the zero vector. Therefore U ∩ U⊥ = {0}, and together with the decomposition above this gives V = U ⊕ U⊥.

Notice there are no assumptions on dimensions here; all we have assumed is the existence of best approximations. Of course, if U is finite dimensional, then such a best approximation is guaranteed to exist — but that is only a sufficient condition, maybe even an overkill. Even if U is not finite dimensional you might still have best approximations, and provided you do, you can go ahead and run this argument.

In whatever little time is left today, I will introduce projection maps — more specifically the orthogonal projection map — which will be vital in seeing certain important results. There are things like oblique projections as well; we will not go there, we will only talk about orthogonal projections.

So what is an orthogonal projection? First of all, you have a subspace U of V sitting inside it. The projection map P is a mapping from the vector space V to this subspace: it takes vectors in V and maps them to their best approximations in U — and by now we already know what this best approximation business is. That is all it does. By rights I should put a subscript and write P_U, since for every subspace you choose you get such a map, but I will omit that; the standing assumption is that U is a subspace of the inner product space V.

The first claim is that P is linear. What that means is: suppose v₁ gets mapped to v̂₁ and v₂ gets mapped to v̂₂; does αv₁ + v₂ get mapped to αv̂₁ + v̂₂? If the answer is yes, then of course it is a linear map. We are going to use the equivalent condition for the best approximation, the one through the inner product. From the two given conditions I can write ⟨v₁ − v̂₁, u⟩ = 0 for all u in U, and ⟨v₂ − v̂₂, u⟩ = 0 for all u in U. Now look at ⟨(αv₁ + v₂) − (αv̂₁ + v̂₂), u⟩ for an arbitrary u in U. Collecting together the terms in v₁ and v̂₁, and pulling out the α in one shot, this equals α⟨v₁ − v̂₁, u⟩ + ⟨v₂ − v̂₂, u⟩ — and from the two given conditions, both of these are 0. So the whole expression is 0 for every u in U. Since the best approximation is unique, as we proved earlier, and I have found exactly one candidate satisfying this characterization, αv̂₁ + v̂₂ must be the best approximation of αv₁ + v₂. That means P takes αv₁ + v₂ to αv̂₁ + v̂₂, which is just another way of saying P(αv₁ + v₂) = αP(v₁) + P(v₂) — linearity. So the orthogonal projection map is linear.
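Here is an equally minimal check of the linearity claim, reusing the illustrative column-space setup from the previous sketch (again, the function name P and the rest are assumptions of the sketch, not the lecture's notation):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))   # columns form a basis of U

def P(v):
    """Orthogonal projection: best approximation of v in U = col(A)."""
    return A @ np.linalg.solve(A.T @ A, A.T @ v)

v1, v2 = rng.standard_normal(5), rng.standard_normal(5)
alpha = 3.7
# P(alpha v1 + v2) agrees with alpha P(v1) + P(v2) up to rounding.
assert np.allclose(P(alpha * v1 + v2), alpha * P(v1) + P(v2))
```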
What can we say about the image of the orthogonal projection map? By its very definition, the image of P is contained in U. But can we show the other way round — must every object I pick in U have a preimage in V? It will, if I just go ahead and pick the same object. Intuitively, every object in U is its own best approximation; that is the obvious, common-sense way of seeing things, since U sits inside V and so any object in U is also an object in V. If you want the best approximation of a quadratic polynomial among all quadratic polynomials, it is just that quadratic polynomial.

If you want to be a little more formal about it: for any u in U, the equivalence characterizing the best approximation says ⟨Pu − u, ũ⟩ = 0 for all ũ in U — P spits out the best approximation of u in U, so Pu − u is the error vector, and the error vector must be orthogonal to every vector in U. This ũ is arbitrary, so choose ũ = Pu − u. Why not? u is in U by my choice, and Pu is in U because P maps into U, so their difference is also an object in U. Then the norm of Pu − u is 0, which means Pu = u. So if you want a preimage of any object in U, the object is its own preimage — at least one such preimage. (There are several other vectors that also project to it; they may not belong to U, but they belong to V.) So every object in U has a preimage, hence U is also contained in the image of P, and by virtue of these two inclusions the image of P is equal to U.

There is another crucial detail: the projection map is idempotent — you must have come across this term somewhere in the context of matrices. For want of a better notation, or rather by abuse of notation, I write P² = P; that is just the composition of P with P. Why? Because once you let P act on a vector you are already in U, and thereafter, if you keep acting on it through P, it changes nothing — it just projects it back to the same vector. That is why it is idempotent.

In the next lecture, when we come back, we shall also study the kernel of this projection map, and we will see that the kernel is nothing but the orthogonal complement of U.
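These properties are easy to see numerically as well. The sketch below builds the projection as an explicit matrix — again under the illustrative assumption U = col(A) — and checks idempotence, the fact that Pu = u for u in U, and a preview of the kernel result: vectors in U⊥ are sent to 0.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 2))
# As a matrix, the orthogonal projection onto col(A) is A (A^T A)^{-1} A^T.
P = A @ np.linalg.solve(A.T @ A, A.T)

# Idempotent: projecting twice changes nothing.
assert np.allclose(P @ P, P)

# Every vector already in U is its own best approximation: P u = u.
u = A @ rng.standard_normal(2)    # an arbitrary element of U = col(A)
assert np.allclose(P @ u, u)

# Preview of next lecture's result: P annihilates the orthogonal complement.
w = rng.standard_normal(5)
w_perp = w - P @ w                # component of w lying in U-perp
assert np.allclose(P @ w_perp, 0)
```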
So, that is what we will start off with in the next lecture. Thank you.