Hello, welcome to NPTEL NOC, an introductory course on point set topology, part 2, module 3. Today we shall begin with some preliminaries required from one-variable real analysis. I could say this is one-variable calculus, but in calculus courses we do not go this deep, so I take this opportunity to do it here; I will explain why it is needed a little later. Start with any function f defined on an open interval J into R. For any point x inside J, we will define the upper and lower Dini derivatives. They are attributed to Dini, who was an Italian mathematician. Let us call them upper right-hand derivatives, although they are not right-hand derivatives in the usual sense; you are already familiar with right-hand and left-hand derivatives. The notation is D⁺f(x): it is the lim sup of the difference quotients (f(x + δ) − f(x))/δ. The only thing that makes it a right derivative is that δ tends to 0 from the positive side; δ is always taken positive here. And we are not taking the limit, which may not exist; we are taking the lim sup. The whole idea is that the lim sup always exists, no matter what the function is, provided you allow ±∞ as values; if the difference quotients are bounded, it will be finite. So let me just recall the definition of lim sup here: it is the infimum over all n of the quantities sup{(f(x + δ) − f(x))/δ : 0 < δ < 1/n}. As the interval for δ becomes smaller and smaller, 1/2, 1/3, 1/4 and so on, the supremum also becomes smaller. So, as a sequence in n, this is monotonically decreasing, and therefore its infimum exists.
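For reference, the definition just described can be written out symbolically:

```latex
D^{+}f(x) \;=\; \limsup_{\delta \to 0^{+}} \frac{f(x+\delta) - f(x)}{\delta}
         \;=\; \inf_{n \ge 1}\; \sup_{0 < \delta < 1/n} \frac{f(x+\delta) - f(x)}{\delta}.
```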
The infimum is actually the limit of this sequence, since the sequence is monotonically decreasing. So D⁺f(x) always exists; this is called the upper right-hand Dini derivative. Similarly, I can define the lower right-hand Dini derivative D₊f(x): the only change is that instead of lim sup I take lim inf, that is, you first take the infimum over δ in (0, 1/n), which gives a monotonically increasing sequence in n, and then take the supremum of that. It should be clear from this that lim inf is always less than or equal to lim sup. Both of them always exist, and if they are equal, the common value is actually the limit, namely the right-hand derivative. These things you must be knowing, but now we have the symbols D⁺ and D₊ for the upper and lower right-hand derivatives. It is easily checked that if f is differentiable at x, then D⁺f(x) = D₊f(x) = f′(x). For any two functions f and g and any constant α ≥ 0, we have D⁺(f + g) ≤ D⁺f + D⁺g, with equality when one of the two functions is differentiable at x, and D⁺(αf) = α D⁺f. These follow from the corresponding properties of lim sup; the same goes for lim inf and hence for D₊. By the way, I have only talked about the right-hand side; in exactly the same way you can define two more D's from the left-hand side. So there are four such quantities in all, and if f is differentiable, all four are equal to the derivative itself; that is the easiest way to see it. But I am not interested in the left-hand derivatives right now at all.
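As a quick numerical illustration (my own sketch, not from the lecture): take the standard example f(t) = t·sin(1/t) with f(0) = 0, whose right difference quotients at 0 are sin(1/δ), so the upper and lower right Dini derivatives at 0 are +1 and −1 while the ordinary right-hand derivative does not exist. We can approximate the sup and inf of the quotient by sampling on a fine grid of positive δ:

```python
import numpy as np

def dini_right_bounds(quotient, n, samples=200_000):
    # Approximate sup and inf of the difference quotient over 0 < delta < 1/n
    # by sampling on a fine grid (a numerical sketch only; the true Dini
    # derivatives are defined via lim sup / lim inf as n tends to infinity).
    deltas = np.linspace(1.0 / samples, 1.0 / n, samples)
    q = quotient(deltas)
    return q.max(), q.min()

# f(t) = t*sin(1/t) with f(0) = 0: the right difference quotient at x = 0 is
# (f(0 + d) - f(0))/d = sin(1/d), which oscillates between -1 and +1.
quotient = lambda d: np.sin(1.0 / d)

sup_q, inf_q = dini_right_bounds(quotient, n=10)
print(sup_q, inf_q)  # approximately +1 and -1
```

Making n larger shrinks the δ-interval, but the oscillation persists, so the two bounds stay near ±1: the lim sup and lim inf disagree and no right-hand derivative exists at 0.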
I am not going to do a lot with these things either; they are very helpful in analysis in general, so I take this opportunity just to introduce them. My main aim is to use them to prove the so-called weak mean value theorem in the case of Banach spaces. So let us go ahead with that. Here is the theorem which will help us do that job; it is a very simple thing, and it depends only on D⁺, which always exists. I start with a continuous real-valued function f defined on some open interval J containing [0, 1], and assume f(0) = 0. This is a harmless assumption: f(0) could always be equal to something, and then you subtract it, that is not a problem; so this is just a technical assumption. The basic assumption here is that f is continuous. Suppose for some α > 0 we have D⁺f(x) ≤ α for all x inside [0, 1); that is, suppose we have found an upper bound for D⁺f. Pay attention: I am assuming the bound only on [0, 1), but I want the function to be defined at 1 as well, which is why the domain is an open interval containing [0, 1]. The conclusion is that f(x) ≤ αx on the entire closed interval [0, 1]. I do not want to write 1 + ε or anything like that; nothing is lost here, there is no extra assumption, it is as general as it could be. So, just from continuity: if the Dini derivative D⁺f is bounded, that gives you a bound, a very specific bound, for the function f itself. The hypothesis is on [0, 1), but the conclusion is on the whole closed interval [0, 1]. So how do we go about it?
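In symbols, the theorem just stated is:

```latex
\textbf{Theorem.}\; \text{Let } f : J \to \mathbb{R} \text{ be continuous on an open interval }
J \supset [0,1] \text{ with } f(0) = 0. \text{ If } \alpha > 0 \text{ and }
D^{+}f(x) \le \alpha \text{ for all } x \in [0,1), \text{ then }
f(x) \le \alpha x \text{ for all } x \in [0,1].
```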
If I show that f(x) ≤ (α + ε)x for every ε > 0, then, because it is true for every positive ε, the inequality f(x) ≤ αx itself must hold, right? So I consider the function g(x) = f(x) − (α + ε)x and show that g(x) ≤ g(0). Here g(0) is obtained by putting x = 0: it is f(0), which is 0. So g(x) ≤ 0, that is, g is non-positive for all x, which is the same thing as saying f(x) ≤ (α + ε)x; since this holds for every ε, we get f(x) ≤ αx. So how do I prove that g(x) ≤ 0 for all x in [0, 1]? The only hypothesis I have is that g is continuous, because f is continuous and g is just the difference of two continuous functions. Now, a continuous function on a closed interval attains its minimum, and I apply this on each closed interval [0, x]. So fix some x inside (0, 1]; after all, I have to prove the inequality for each such x. Since g is continuous, it attains its minimum on the whole of [0, x]. We claim this minimum is attained at x itself. If the minimum is at x, then g(x) is less than or equal to every other value of g on [0, x]; in particular g(x) ≤ g(0), because g(0) is also one of those values. So we want to prove g(x) ≤ g(0), and we are actually proving something much stronger, that the function attains its minimum over [0, x] at x itself. There is a lot more being proved here than we finally use; in the end we just use that g(x) ≤ g(0) = 0. So how do we prove the claim that the minimum of g on [0, x] is at x?
This is where the Dini derivative D⁺ will help us. By the way, every time you have to deal with a lim sup, you have to do this ε business: given an ε, you add or subtract something and see what happens; that is the only way to catch a lim sup or a lim inf, and indeed even ordinary limits are handled the same way. So compute D⁺g. Remember, g was the difference of two functions: D⁺g = D⁺f plus D⁺ of the function −(α + ε)x. But what is the derivative of that second function? It is a linear map, so its derivative is −(α + ε); and whenever the derivative exists, D⁺ exists and equals the derivative. So D⁺g is nothing but D⁺f − (α + ε), and since we assumed D⁺f ≤ α, this gives D⁺g ≤ −ε < 0. Now apply this at a point y inside [0, x); it is true for all points of [0, 1), so fix x and take y in [0, x). Here I am going to use what lim sup means, correctly. Since D⁺g(y) < 0, and D⁺g(y) is the infimum over n of the suprema, there must exist an n ∈ N such that the supremum over δ in (0, 1/n) is negative: if all the suprema were greater than or equal to 0, then their infimum, the lim sup, would also be greater than or equal to 0. So for some n this supremum must be negative. That is the same thing as saying, remembering that δ is positive, that g(y + δ) − g(y) < 0 for every δ in (0, 1/n): since the supremum of the quotients is negative, each quotient, and hence each difference, must be negative.
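The step just carried out, extracting information from D⁺g(y) < 0, is the one place where the definition of lim sup is really used; in symbols:

```latex
D^{+}g(y) < 0 \;\Longrightarrow\; \exists\, n \in \mathbb{N} :\;
  \sup_{0 < \delta < 1/n} \frac{g(y+\delta) - g(y)}{\delta} < 0
\;\Longrightarrow\; g(y+\delta) < g(y) \quad \text{for all } \delta \in (0, 1/n).
```

So no point y of [0, x) can be a minimum point of g on [0, x].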
Therefore, if g(y + δ) < g(y) for all small positive δ, then g(y) cannot be the minimum value of g on the whole of [0, x], because y + δ still lies in [0, x] for δ small enough. So g does not attain its minimum at any point y of the half-open interval [0, x). But the minimum over the closed interval [0, x] is attained somewhere; since it cannot be attained inside [0, x), it must be at x. Hence g(x) is the minimum value of g on [0, x], and that is precisely what we wanted: g(x) ≤ g(0) = 0 on the entire interval. So now we are ready to do the generalization of the weak mean value theorem. Why am I calling it a generalization? The weak mean value theorem is true for all differentiable functions on a convex domain from Rⁿ into Rᵐ, that is, for vector-valued functions. Now we are going to do it for Banach spaces: the same statement, but for Banach spaces. The usual proof in the case of Rⁿ and Rᵐ is easier because the square of the Euclidean norm is differentiable on Rⁿ. What is the norm squared? The Euclidean norm squared is just Σ xᵢ². We can use that to do the job, because all other norms on Rⁿ are equivalent to the Euclidean norm. But in the general case we have no such tool. Indeed, for a given Banach space, the norm may not be differentiable; even the square of the norm may not be differentiable. Demanding that the norm be differentiable is too much of a restriction; we would almost be begging the space to be a Hilbert space. That is why the Dini derivatives are brought in, to help us prove the weak mean value theorem. The proof itself is not at all difficult now. Let us go through it.
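To see concretely why norms cause trouble outside the Euclidean case, here is a small numerical sketch (my own illustration, not from the lecture): for the ℓ¹ norm on R², even the squared norm N(x) = (|x₁| + |x₂|)² fails to be differentiable at the point (0, 1), because the one-sided difference quotients in the direction (1, 0) disagree.

```python
def l1_norm_sq(x1, x2):
    # Square of the l^1 norm on R^2: (|x1| + |x2|)^2.
    return (abs(x1) + abs(x2)) ** 2

# One-sided difference quotients at the point (0, 1) in the direction (1, 0):
# N(h, 1) = (|h| + 1)^2, so the quotient from the right tends to +2 and the
# quotient from the left tends to -2; the two disagree, so the squared l^1
# norm has no directional derivative there, let alone a total derivative.
h = 1e-8
base = l1_norm_sq(0.0, 1.0)
from_right = (l1_norm_sq(h, 1.0) - base) / h
from_left = (l1_norm_sq(-h, 1.0) - base) / (-h)
print(from_right, from_left)  # approximately +2 and -2
```

Dini derivatives sidestep exactly this obstruction: they always exist, so the norm function can be handled without any smoothness assumption.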
Start with Banach spaces V and W, and let U be a convex neighborhood of 0 in V. By the way, I have already told you that the assumption that 0 belongs to U is just a technical thing; you can do it for any other point also. Suppose g from U to W is a differentiable function on U and there exists λ > 0 such that all the derivatives are bounded by λ: the norm of Dg(u) is less than or equal to λ at every point u of U. Then ‖g(u) − g(0)‖ is itself less than or equal to λ‖u‖ for every u inside U. This is the weak mean value inequality. In one-variable calculus, this was deduced from the mean value theorem, where there is an equality, namely g(u) − g(0) = g′(c)·u for some c in the interval; g′ is bounded by λ, so you get the inequality. But we do not have the mean value theorem itself for vector-valued functions. What we do have is the inequality, and that is what we are going to prove directly in the case of Banach spaces. Of course, it will also work for Rⁿ, because Rⁿ is a Banach space. So fix one point u inside U and define h(t) = ‖g(tu) − g(0)‖. See, if you just take g(tu) − g(0), this is precisely one-variable calculus, except that it would be a W-valued function, not a real-valued one. How do we get a real-valued function? We are left with taking the norm, or the norm squared, or some such thing; instead of the norm squared, we just take the norm. Then h is a continuous function; we do not know that it is differentiable. Now we have got into one-variable calculus: for t between 0 and 1, the point tu is still inside U because U is convex. So h, from J to R, is defined on an open interval J containing [0, 1], because U is open and I can extend a little beyond 1. I could make it [0, 2] also, but then I would have to take t/2 or some such thing, an unnecessary complication.
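In symbols, the weak mean value inequality being proved is:

```latex
\textbf{Proposition.}\; \text{Let } V, W \text{ be Banach spaces, } U \subset V
\text{ a convex neighbourhood of } 0, \text{ and } g : U \to W \text{ differentiable with }
\|Dg(u)\| \le \lambda \text{ for all } u \in U. \text{ Then}
\]
\[
\|g(u) - g(0)\| \;\le\; \lambda\, \|u\| \qquad \text{for all } u \in U.
```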
I just need [0, 1] plus something positive, some open interval J containing it. We claim that the Dini derivative satisfies D⁺h(t) ≤ λ‖u‖. This is exactly the condition we needed, with λ‖u‖ playing the role of the bound α in the hypothesis of the earlier theorem; the λ here is the same λ as in the hypothesis of this proposition. Once we have proved this, we can use that earlier theorem to conclude h(t) ≤ λ‖u‖·t for all t inside [0, 1]; the factor t is there, after all. But then you can put t = 1. What is h(1)? Just see: when t = 1, h(1) = ‖g(u) − g(0)‖, and that will be less than or equal to λ‖u‖, which is what we wanted to prove. So we have to prove this formula, equation (12). That comes very easily now. Take t and δ > 0 such that both t and t + δ are inside J, because J is the interval on which the function is defined; you only have to choose δ small enough that t + δ is also inside. Once that is satisfied, I can look at h(t + δ) and h(t), take the difference, divide by δ, take the supremum and then the infimum of the suprema; that is D⁺. So start with h(t + δ) − h(t). By definition that is ‖g((t + δ)u) − g(0)‖ − ‖g(tu) − g(0)‖. Add and subtract g(tu): by the triangle inequality the norm of the sum is at most the sum of the norms, and taking one term to the other side, this difference is less than or equal to ‖g((t + δ)u) − g(tu)‖. Now, once again, add and subtract the term Dg(tu)(δu), the derivative of g at tu operating on δu, so that the norm is less than or equal to the norm of g((t + δ)u) − g(tu) − Dg(tu)(δu) plus the norm of the term Dg(tu)(δu) which I have subtracted.
From that second term the δ comes out, since δ is positive: ‖Dg(tu)(δu)‖ ≤ δ‖Dg(tu)‖·‖u‖. This is an inequality, not an equality, which is why I have put less than or equal to. I pulled out this δ so that I can now divide by it. When you divide by δ, on the left-hand side you have exactly the term appearing in the definition of D⁺, namely (h(t + δ) − h(t))/δ. And what is the first term on the right? It is what occurs in the definition of the derivative: ‖g(tu + δu) − g(tu) − Dg(tu)(δu)‖/δ. This is the difference quotient for the derivative at tu in the direction u, because of the division by δ. The last term, ‖Dg(tu)‖·‖u‖, remains as it is; its δ has gone away. When you take the lim sup, it is at most the lim sup of the first term plus the second term, which has no δ in it; it is just a constant. And what is the lim sup of the first term? The limit itself exists and is 0, by the definition of the derivative; therefore the lim sup, the lim inf, all of them are 0. So what you get is D⁺h(t) ≤ ‖Dg(tu)‖·‖u‖, because the lim sup of the left-hand side is exactly the definition of D⁺h(t). That is what we wanted to prove: D⁺h(t) is bounded by this term, and by the assumption that ‖Dg‖ ≤ λ at all points of the convex set, it is at most λ‖u‖. So what we have proved is the mean value inequality for Banach spaces, for all differentiable functions on a convex open subset. Let us now convert this into the following theorem, ready-made for use in the implicit function theorem and so on.
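Putting the estimates of the last two steps together, the chain of inequalities is:

```latex
\frac{h(t+\delta) - h(t)}{\delta}
  \;\le\; \frac{\| g(tu + \delta u) - g(tu) - Dg(tu)(\delta u) \|}{\delta}
          + \| Dg(tu) \| \, \| u \|.
```

The first term on the right tends to 0 as δ → 0⁺ by the definition of the derivative of g at tu, so taking lim sup gives D⁺h(t) ≤ ‖Dg(tu)‖·‖u‖ ≤ λ‖u‖; the earlier theorem then yields h(t) ≤ λ‖u‖·t, and t = 1 is the proposition.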
Let V and W be Banach spaces, U a convex open subset of V, and f a differentiable function on U; suppose there exist λ > 0 and a bounded linear map T from V to W. This part is the same as the previous proposition, but now I am bringing in an arbitrary bounded linear T such that ‖Df(v) − T‖ ≤ λ for all v inside U. If you put T = 0, you get back the earlier case; so this is an extension. The conclusion is also slightly different, namely ‖f(v₂) − f(v₁) − T(v₂ − v₁)‖ ≤ λ‖v₂ − v₁‖ for every v₁, v₂ inside this convex open set, which is itself a subset of the Banach space V. The proof is very easy now, of course, using the previous proposition; we do not need the Dini derivatives any more, we directly use that ready-made result. First consider the case when v₁ itself is 0. This may not be the case in general; I am treating it as a special case. Then U is a convex neighborhood of 0, because v₁ = 0 is, after all, an element of U. We have to prove that ‖f(u) − f(0) − T(u)‖ ≤ λ‖u‖ for every u inside U; this is exactly the statement in the special case v₁ = 0, v₂ = u. For this, what do I do? Put g = f − T, because in the hypothesis the norm of Df(v) − T is bounded. What is the derivative of g? It is Df − T. Therefore Dg is now bounded by λ, and we can apply the previous proposition and get the conclusion, since T is linear and T(0) = 0. Now one more simplification has to be removed, namely the assumption that v₁ is 0. How do we do the general case? In the general case, you first take the domain itself to be U − v₁: translate U by v₁, shifting the origin.
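In symbols, the statement of this theorem is:

```latex
\textbf{Theorem.}\; \text{Let } V, W \text{ be Banach spaces, } U \subset V \text{ convex open,}
\; f : U \to W \text{ differentiable, } T \in B(V, W) \text{ and } \lambda > 0 \text{ with }
\|Df(v) - T\| \le \lambda \text{ for all } v \in U. \text{ Then}
\]
\[
\| f(v_2) - f(v_1) - T(v_2 - v_1) \| \;\le\; \lambda\, \| v_2 - v_1 \|
\qquad \text{for all } v_1, v_2 \in U.
```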
That means, take all points v − v₁ where v is inside U. On this translated domain, you change the function as well: instead of f, take f̃(u) = f(u + v₁). Since v₁ is a point of U, 0 is a point of U − v₁; U is convex, therefore U − v₁ is convex. Translation by −v₁ is not linear, it is affine linear, and it is an isometry: it preserves norms, it preserves convexity, and so on. So we can apply the conclusion obtained for the case v₁ = 0, this time to f̃. What you get is precisely the general statement: look at it, f̃ is by definition f shifted by v₁, so f̃(v₂ − v₁) = f(v₂), f̃(0) = f(v₁), and so on, and the statement follows. So whatever we wanted, we have proved, just by shifting the origin, that is, by shifting the function; the shift of the function is by an affine map here. So essentially this theorem 1.21 is nothing but a small modification of our proposition, which was proved using the Dini derivative. Next time we shall prove the implicit function theorem. Just so you see what we are up to, I will begin it now, just the statement, and we will prove it next time. The statement of the implicit function theorem is somewhat long; the proof is not as long as that, so you do not have to be intimidated by the big statement. The first part is the preparation: what are the hypotheses? V and W are Banach spaces, and Y is any topological space; that is the generality we have achieved here. Take M × N to be an open subset of Y × V of product form; in other words, M is an open subset of Y and N is an open subset of V. We will use this specific form.
Let f from M × N to W be a continuous function such that for some point (y₀, v₀) belonging to M × N we have f(y₀, v₀) = 0. Next, for each y inside M, the function f_y from N to W given by f_y(v) = f(y, v), where y is fixed now and v alone is the variable, is differentiable; its derivative gives, for each y, a function from N to the bounded linear maps B(V, W). This derivative must be continuous, and not just for each fixed y: as a function of the two variables together it must be continuous, that is, jointly continuous on M × N. Finally, the third hypothesis is that the derivative at (y₀, v₀), the derivative of f_{y₀} at v₀, call it T, is a linear isomorphism. This hypothesis is very important: here we have an isomorphism. The other hypotheses are the joint continuity of the derivative and, before that, the differentiability of the function, but only in the V-variable, the variable of N; in the Y-part there is no differentiation, because Y is an arbitrary topological space. Do you understand why Y can be an arbitrary topological space? In other words, you should think of f_y as a family of differentiable functions, parametrized by y; only, the family itself is continuous. Not only is each function continuous and differentiable, the family as a whole is continuous, and the way to express that is to say that the map on M × N itself is continuous. With these hypotheses, now the conclusions. There are two conclusions here; for the second conclusion we will need a little more hypothesis, which is why it is separated out.
The first conclusion is that you can find a neighborhood M′ of y₀ inside M, a smaller neighborhood, not the whole of M, and a ρ > 0, such that for every y inside M′ there exists a G(y) belonging to B̄_ρ(v₀), the closed ball of radius ρ around v₀, with the property that f(y, G(y)) = 0. You see, f(y₀, v₀) was 0: think of f(y, v) = 0 as an equation we want to solve, for which one solution has been given. Then you want a continuous family of solutions on a small neighborhood, and that is precisely what is achieved here. In fact, in trying to get one solution, you end up getting a unique solution; that is essentially the only way to get a solution, to put sufficient conditions so that it becomes unique. In all existence theorems, the uniqueness part helps you a lot, and somehow we get uniqueness here also. The only thing is that, with these hypotheses, we have to cut down the domain properly: some open subset M′ which we do not know in advance, some ρ which we do not know. On the closed ball, for each point y inside M′, G(y) is the unique point with f(y, G(y)) = 0, and G itself is a continuous function. But one is not satisfied with just continuity; we would like to have differentiability also. For that we have to put in a little more hypothesis, because M′ is a subset of Y, and Y was just a topological space, so it does not even make sense to demand that G be differentiable. In order to make sense of that, M′ should at least be an open subset of some normed linear space; that is why we would like to assume that Y is a Banach space.
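Part (a) of the implicit function theorem, in symbols:

```latex
\textbf{(a)}\; \text{There exist a neighbourhood } M' \subset M \text{ of } y_0
\text{ and } \rho > 0 \text{ such that for every } y \in M' \text{ there is a unique }
G(y) \in \bar{B}_{\rho}(v_0) \text{ with } f(y, G(y)) = 0, \text{ and }
y \mapsto G(y) \text{ is continuous on } M'.
```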
So, further assume that Y is also a Banach space and that the function f^{v₀} from M to W, defined by f^{v₀}(y) = f(y, v₀), is differentiable at y₀. You see, I have used an upper index now to indicate that the second slot is fixed: v₀ is fixed and y is the variable, but it is the same function f, restricted to v₀ in the second slot with y varying. That must be differentiable at y₀. We are not demanding anywhere that f itself is differentiable from the product space M × N to W; we demand that it is continuous, whereas differentiability is only partial: fix y₀ and you get differentiability in v; fix v₀ and it must be differentiable in y. That is all we are demanding. So this further hypothesis says that f^{v₀}, with v₀ fixed, must be differentiable at y₀; call its derivative H = H(y₀, v₀), which is just a notation for the derivative of this function. Then what happens? G becomes differentiable at y₀. What is this G? It is the unique solution given by part (a). It is differentiable at y₀, and its derivative is given by minus T⁻¹ composed with H. See, H is a bounded linear map, T is a bounded linear map, and T is invertible, it is an isomorphism, so I can talk about T⁻¹: take T⁻¹ ∘ H and the minus of that, and that is DG(y₀). So let us prove this statement next time. Thank you.