So, what we are going to look at is functions f defined on a domain D, say in R^2 or R^3, taking values in R, and we want to analyze the local maxima and minima of f. To do that, let us first define what a local maximum and a local minimum of a function of two variables are. Definition: a point (a, b) in D is called a point of local maximum if the value there is largest in a neighborhood, that is, if there exists some delta > 0 such that f(a, b) >= f(x, y) for every point (x, y) in the ball of radius delta around (a, b). Similarly, (a, b) is a point of local minimum if the inequality is reversed: f(a, b) <= f(x, y) for every (x, y) in that neighborhood. The names say it all: at a local maximum the value is largest in a neighborhood, and at a local minimum it is smallest. The problem is: how do we identify points where the function has a local maximum or local minimum, and how do we verify that these are indeed such points? Let us look at some examples. The simplest is f(x, y) = x^2 + y^2 for every (x, y). Geometrically, this is the square of the distance from the origin. At (0, 0) the value is 0, and f(x, y) > 0 for every (x, y) not equal to (0, 0). So clearly (0, 0) is a point of local minimum. Can you visualize this geometrically? What does it look like?
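As a quick numerical sanity check, not part of the lecture, one can sample a small neighborhood of (0, 0) and confirm that f(0, 0) is the smallest value there; this is a sketch in Python, with the sample grid chosen arbitrarily:

```python
# f(x, y) = x**2 + y**2: check numerically that f(0, 0) <= f(x, y)
# for sample points in a small neighborhood of (0, 0).
def f(x, y):
    return x**2 + y**2

# an arbitrary grid of sample points inside a ball of radius ~0.71
samples = [(i * 0.1, j * 0.1) for i in range(-5, 6) for j in range(-5, 6)]

assert all(f(0.0, 0.0) <= f(x, y) for (x, y) in samples)
assert all(f(x, y) > 0 for (x, y) in samples if (x, y) != (0.0, 0.0))
print("f(0, 0) = 0 is the smallest value on the sampled grid")
```

Of course, sampling only illustrates the definition; the inequality f(0, 0) = 0 <= x^2 + y^2 holds exactly, for every point.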
Geometrically, the graph of the function is a surface, and every horizontal section x^2 + y^2 = c, with the height z fixed at some positive number c, is a circle. So that is what the surface looks like, and obviously it has a minimum at the point (0, 0) geometrically as well; here we have verified analytically that it is a minimum. If instead you take f(x, y) = -(x^2 + y^2) for every (x, y), that just inverts the surface, so (0, 0) becomes a point of local maximum: at every point the value is less than or equal to 0, and at (0, 0) the value is 0, that is, f(x, y) <= 0 = f(0, 0) for every (x, y). But examples like these are not good enough for us; we would like a systematic way of finding such points. In one variable, to find points of local maximum or minimum, we had the condition that the derivative at the point must be 0. That necessary condition gave us a collection of points where the function can possibly have a local maximum or minimum; those were the critical points. Something similar can be done here. Let us write it as a theorem. Theorem: let f(x, y), (x, y) in D, have a local maximum or minimum at a point (a, b). Then, provided both f_x and f_y exist at (a, b), the partial derivative with respect to x at (a, b) is 0 and so is the partial derivative with respect to y: f_x(a, b) = 0 = f_y(a, b). So we have found a necessary condition, as for functions of one variable, by putting the problem back into one variable. The reason is obvious: if a function of two variables has a local maximum at a point, then with one variable fixed it still has a local maximum there as a function of one variable.
If the value at a point is largest in a neighborhood and you fix one of the variables, x or y, then as a function of the remaining one variable it also has a local maximum or local minimum at that point. If the derivative with respect to x exists, then by the one-variable theorem it must equal 0, and similarly with respect to y. So the proof just says: if (a, b) is a point of local maximum or minimum, consider the function of x alone given by x going to f(x, b), with x varying; it also has a local maximum or minimum at x = a. Is that clear to everybody? Let us draw a picture and understand what we are saying. Here is the domain D, and here is the point (a, b) where f(a, b) is a local maximum, so f(a, b) >= f(x, y) for every (x, y) in a neighborhood; let us draw that neighborhood, the ball of radius delta around (a, b). Now fix y = b and let x vary, so you are moving along the horizontal line through (a, b). Along that part we still have f(a, b) >= f(x, b) for every (x, b) in the ball. So as a function of the single variable x it has a local maximum at x = a. Since f_x(a, b) exists, it must be 0: as a function of one variable there is a local maximum and the derivative exists, so the derivative equals 0. Similarly, f_y(a, b) = 0. That is the whole proof; we are just pulling the one-variable conclusion back.
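The theorem can be illustrated numerically. This is a small sketch, not part of the lecture, approximating the partial derivatives of f(x, y) = x^2 + y^2 at its minimum (0, 0) by central differences, with the step size h chosen arbitrarily:

```python
# At the local minimum (0, 0) of f(x, y) = x**2 + y**2, both partial
# derivatives, approximated by central differences, vanish.
def f(x, y):
    return x**2 + y**2

def partial_x(f, a, b, h=1e-6):
    # central difference in x with y frozen at b
    return (f(a + h, b) - f(a - h, b)) / (2 * h)

def partial_y(f, a, b, h=1e-6):
    # central difference in y with x frozen at a
    return (f(a, b + h) - f(a, b - h)) / (2 * h)

fx0 = partial_x(f, 0.0, 0.0)
fy0 = partial_y(f, 0.0, 0.0)
print(fx0, fy0)  # both ~ 0, as the theorem requires at an extremum
```

Freezing one variable and differencing in the other is exactly the "pull back to one variable" idea in the proof.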
So this gives a necessary condition, and it lets us define the relevant points. Definition: a point (x, y) in D is called a critical point for f if, like in one variable, one of the following holds. One: either or both of the partial derivatives f_x, f_y do not exist at (x, y); even one failing to exist is enough. Those are the points of non-differentiability, as in one variable. Two: f_x = f_y = 0 at the point, that is, the derivatives exist and are equal to 0 there. And possibly some boundary points. These critical points are the possible points where the function of two variables can have local maxima or minima, and we will then have to find sufficient conditions to check whether they actually are points of local maxima or minima. Let us look at some examples to understand this a bit more. We have already seen the example x^2 + y^2, so let us look at another, a simple one: f(x, y) = x^3 + y^4, where the domain is the whole of R^2. The partial derivatives exist everywhere: f_x and f_y exist for all (x, y) in R^2, with f_x = 3x^2 and f_y = 4y^3. So what are the points with f_x = f_y = 0? This gives x = 0 and y = 0, so (0, 0) is the only critical point for this function. Can we say, by looking at the function, whether it has a local maximum or a local minimum at (0, 0)? Let us put y = 0; then f(x, 0) = x^3 as a function of one variable, and its derivative at 0 exists and is equal to 0.
But we know that f(x) = x^3, as a function of one variable, does not have a local maximum or minimum at 0: for x negative the values are negative, for x positive they are positive, and at 0 the value is 0. That implies f does not have a local maximum or minimum at (0, 0). So here both partial derivatives exist at (0, 0) and are equal to 0, yet the function has neither a local maximum nor a local minimum there. As in one variable, this is only a necessary condition: the point may be a local maximum, may be a local minimum, or, as in this example, may be neither. So f_x = 0 = f_y is not sufficient to ensure that the point is a local maximum or minimum. Let us look at one more example: f(x, y) = x^2 - y^2 for every (x, y). The partial derivatives are f_x = 2x and f_y = 2y, so again (0, 0) is the only critical point. As before, f(x, 0) = x^2 and f(0, y) = -y^2. As a function of x with y fixed at 0, it is x^2, which has a minimum at 0; and f(0, y) = -y^2 has a local maximum, actually a global maximum, at 0. So I cannot say whether the function of two variables has a local maximum or a local minimum. But something more interesting is happening: whichever neighborhood of (0, 0) I take, there is a point where the value of the function is bigger than the value at (0, 0), and another point where the value is smaller. For example, for x small and nonzero, (x, 0) is a point where the value x^2 is positive, hence bigger than the value 0 at (0, 0). Similarly, if I fix x = 0 and take y small and nonzero, the value -y^2 is negative.
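The x^3 + y^4 example above can be probed numerically; here is a small sketch (the step size eps is arbitrary) showing that the gradient vanishes at the origin while the function still changes sign arbitrarily close to it:

```python
# f(x, y) = x**3 + y**4: (0, 0) is the only critical point
# (f_x = 3x**2 and f_y = 4y**3 both vanish there), but the restriction
# f(x, 0) = x**3 changes sign through the origin, so (0, 0) is
# neither a local maximum nor a local minimum.
def f(x, y):
    return x**3 + y**4

# the gradient at the origin, evaluated from the exact formulas
assert 3 * 0.0**2 == 0.0 and 4 * 0.0**3 == 0.0

eps = 1e-3  # arbitrarily small offset along the x-axis
print(f(-eps, 0.0), f(0.0, 0.0), f(eps, 0.0))  # negative, 0, positive
```

However small eps is chosen, the printed values show a sign change, which is exactly the one-variable x^3 behavior invoked in the argument.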
And I can make these values as close to 0 as I want. So whichever neighborhood of (0, 0) I take, however small, there is a point p and a point q in it, as close as you want: a point p on the x-axis where the value is positive, and a point q on the y-axis where the value is negative. The conclusion: there exist points p and q in every neighborhood of (0, 0) such that f(p) > f(0, 0) > f(q), the value at q being negative. Is it clear what we are saying? This function has a special property: not only is (0, 0) not a point of local maximum or local minimum, something more is happening. Whichever neighborhood of (0, 0) I choose, however small it may be, I can find a point where the value of the function is positive, hence bigger than the value at (0, 0), and some other point where the value is negative. Note that q is not on the x-axis; it is on the y-axis, where x is 0. So let me draw a correct picture: here is the neighborhood, here is the point p, a point on the x-axis close to (0, 0) where the value is positive, and here is the point q, a point on the y-axis close to (0, 0) where the value is negative, hence smaller. Such points are called saddle points. Let us put it as a definition. A point (a, b) in D is called a saddle point for f if for every delta > 0 there exist p and q in the ball of radius delta around (a, b) such that the value at p is strictly bigger than the value at (a, b), and that in turn strictly bigger than the value at q; strictly, we should say, not just bigger than or equal.
Otherwise, with equality allowed, every point would trivially qualify; so the condition is f(p) > f(a, b) > f(q). Such points are called saddle points. Why are they called saddle points? Let me draw another picture and look at the graph of such a function. What does it look like? A saddle is the seat you put on a horse if you want to ride it; you do not just jump on the horse and ride, you put a seat on it, and that seat is called a saddle. The surface looks like that seat: there is a curve bending one way and a curve bending the other way. If you look along one family of curves, the red ones, there is a minimum in between; if you look along the other family, the blue ones, that same point is a point of maximum. So along some curve through the point there is a maximum, and along some other curve through the same point there is a minimum. That is why it is not a point of local maximum or local minimum: arbitrarily close to that point there are points where the value is bigger and points where the value is smaller. And when you can find such points in every neighborhood, the point is called a saddle point. So there are several possibilities for a critical point: it can be a point of local maximum, it can be a point of local minimum, it can be a saddle point, or it may be none of these, and you may not be able to conclude anything about the critical point. So one would like to find what are called sufficient conditions for local maxima, minima and saddle points.
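The saddle-point definition just given can be turned into a direct numerical check for f(x, y) = x^2 - y^2 at (0, 0). This sketch tests several radii delta; the particular radii are arbitrary:

```python
# Saddle-point check for f(x, y) = x**2 - y**2 at (0, 0): in every ball
# of radius delta, a point p on the x-axis has f(p) > f(0, 0), and a
# point q on the y-axis has f(q) < f(0, 0).
def f(x, y):
    return x**2 - y**2

for delta in (1.0, 0.1, 1e-3, 1e-6):
    p = (delta / 2, 0.0)   # inside the ball, on the x-axis
    q = (0.0, delta / 2)   # inside the ball, on the y-axis
    assert f(*p) > f(0.0, 0.0) > f(*q)

print("p and q found in every ball tested: (0, 0) is a saddle point")
```

The assertions implement exactly the inequality f(p) > f(a, b) > f(q) from the definition, with (a, b) = (0, 0).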
So we can find the critical points by collecting the points where the derivatives exist and are equal to 0, together with the points where either or both of the partial derivatives do not exist. But out of these, how do we find which are the points of local maximum or local minimum? For functions of one variable we had various tests: the first derivative test and then the second derivative test, which were sufficient conditions ensuring that the function has a local maximum or a local minimum. For that here, we need to go to second order derivatives of functions of two variables. So let us look at what are called second order derivatives. We have a function (x, y) going to f(x, y), with (x, y) in a domain D, and suppose the partial derivatives f_x, f_y exist at a point (a, b). As for a function of one variable, where once the first derivative exists we can ask whether the second derivative exists, here we ask whether the partial derivatives can be differentiated again. But note that f_x and f_y, defined only at the single point (a, b) with values in R, are just numbers, constants. If we want to talk about second order derivatives, derivatives of the partial derivatives, then the functions f_x and f_y must first be defined in a neighborhood of (a, b); otherwise we cannot talk about their derivatives at (a, b). So let us assume they both exist in a neighborhood of (a, b), say in a ball of radius delta around (a, b). Then the partial derivatives are themselves functions there, just as in one variable, where the derivative, when it exists on an interval, is itself a function. Now, supposing f_x exists in a neighborhood, it is a function of two variables.
So you can ask whether f_xx and f_xy exist, because f_x is a function of two variables: you can ask for its partial derivatives with respect to x and with respect to y, both. They may exist or they may not. These, namely f_xx and f_xy, and similarly f_yy and f_yx, if they exist, are called the second order partial derivatives of f. So for a function of two variables there are four second order partial derivatives. If you have a function of three variables, how many will there be? The first order derivatives are f_x, f_y, f_z, and each can be differentiated again; for the mixed ones the order may change, f_x with respect to y and then z, or with respect to z and then y, and so on, so there are two for each pair of distinct variables, giving six mixed second order partial derivatives, and nine second order partial derivatives in total for a function of three variables. Now, a question about notation arises. If I take the function f, differentiate with respect to x, and then differentiate with respect to y, that is what we have written as f_xy. Let me also give the other notation: the partial derivative with respect to x is written ∂f/∂x, and the partial derivative of that with respect to y is ∂/∂y(∂f/∂x) = ∂²f/∂y∂x. Keep in mind that in the subscript notation f_xy you read from left to right, first x then y, while in the ∂ notation ∂²f/∂y∂x you read from right to left: first differentiate with respect to x, then with respect to y. So f_xy is the same as ∂²f/∂y∂x; these are just different notations for the second order partial derivatives. But there is no guarantee here: in general, f_xy need not be equal to f_yx.
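To make the four second order partials concrete, here is a numerical sketch, not from the lecture, using finite differences on a hypothetical smooth polynomial f(x, y) = x^3 y^2, for which the exact values at (1, 1) are f_xx = 6xy^2 = 6, f_yy = 2x^3 = 2 and f_xy = f_yx = 6x^2 y = 6:

```python
# The second order partials of f(x, y) = x**3 * y**2 at (1, 1),
# approximated by second-order finite differences.
def f(x, y):
    return x**3 * y**2

a, b, h = 1.0, 1.0, 1e-3

# pure second partials: second difference in one variable at a time
fxx = (f(a + h, b) - 2 * f(a, b) + f(a - h, b)) / h**2
fyy = (f(a, b + h) - 2 * f(a, b) + f(a, b - h)) / h**2
# mixed partial: difference in x and in y together
fxy = (f(a + h, b + h) - f(a + h, b - h)
       - f(a - h, b + h) + f(a - h, b - h)) / (4 * h**2)

print(fxx, fxy, fyy)  # approximately 6, 6, 2
```

For this smooth polynomial the two mixed partials agree, which anticipates the continuity theorem stated below; the next example shows this can fail.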
If you first differentiate with respect to x and then with respect to y, the result need not be the same as differentiating first with respect to y and then with respect to x. In general, the two need not be equal. Let me state one example here; you can try it later yourself. Take f(x, y) = xy(x^2 - y^2)/(x^2 + y^2) for (x, y) not equal to (0, 0), and f(0, 0) = 0. I leave it as an exercise for you to check that f_xy(0, 0) is not equal to f_yx(0, 0). The first order partial derivatives with respect to x and y exist at (0, 0) and at every point near (0, 0), so the second order partial derivatives at (0, 0) also make sense and exist; but check that the two mixed values at (0, 0) are not equal. So one wonders whether there are conditions under which they are equal, and one proves a theorem, which we will not prove: if f_x, f_y, f_xy, f_yx exist and the mixed derivatives f_xy and f_yx are both continuous at (a, b), then they are equal there: f_xy(a, b) = f_yx(a, b). We will not prove this, and we will not make much use of the proof; most of our functions will have this property, so it will be okay. But in general, this is a condition one should verify to ensure that the mixed partial derivatives are equal.
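The exercise can also be explored numerically. This sketch approximates the two mixed partials at (0, 0) with nested central differences (the step sizes h and k are arbitrary); the exact values, which the exercise asks you to derive, are f_xy(0, 0) = -1 and f_yx(0, 0) = +1:

```python
# f(x, y) = x*y*(x**2 - y**2)/(x**2 + y**2), f(0, 0) = 0: the classic
# example where the mixed second order partials at (0, 0) differ.
def f(x, y):
    if x == 0.0 and y == 0.0:
        return 0.0
    return x * y * (x**2 - y**2) / (x**2 + y**2)

h = 1e-6  # step for the inner (first order) differences

def fx(a, b):  # f_x by central difference
    return (f(a + h, b) - f(a - h, b)) / (2 * h)

def fy(a, b):  # f_y by central difference
    return (f(a, b + h) - f(a, b - h)) / (2 * h)

k = 1e-3  # step for the outer difference
fxy = (fx(0.0, k) - fx(0.0, -k)) / (2 * k)  # differentiate f_x in y
fyx = (fy(k, 0.0) - fy(-k, 0.0)) / (2 * k)  # differentiate f_y in x

print(fxy, fyx)  # approximately -1 and +1: not equal
```

Numerics only suggest the answer, of course; the honest verification is the limit computation from the definitions, which shows f_x(0, y) = -y and f_y(x, 0) = x near the origin.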