In the last class we discussed the solution of the unconstrained optimization problem using Newton's method, and we saw that it has two drawbacks. The first is that at each iteration you have to compute the Hessian matrix, the second derivative of the cost or objective function, and that Hessian must be a positive definite matrix at every iteration. This condition must be satisfied because we move along a descent direction, so that from the k-th iteration to the (k+1)-th iteration the function value decreases; hence the Hessian must be positive definite. The second drawback is that at each iteration you have to invert this matrix, whose dimension is n × n, where n is the number of decision variables in the objective function. We then saw that these problems can be overcome by the quasi-Newton method. It is similar to Newton's method, except that the inverse of the Hessian at each iteration is replaced by a matrix S_k, and that matrix is updated at every iteration. We stated the update rule last time; now we will derive it. Recall that we want to replace the Hessian inverse in Newton's method by a positive definite matrix S_k, and that the update computes S_(k+1) from S_k using two quantities: δ_k, the difference of the decision-variable values at two successive iterations, and γ_k, the difference of the gradients of the objective function at two successive iterations.
How this update expression is obtained is what we will discuss now. The derivation is based on the following idea: the difference of the function values at two successive points carries the information of the first derivative of the function; similarly, the difference of the gradients of the function at two successive points carries the information of the second derivative, that is, the Hessian matrix. Based on this, we write

∇f(x^(k+1)) − ∇f(x^k) = ∇²f(x^(k+1)) (x^(k+1) − x^k),

that is, the gradient difference, which carries the Hessian information, equals the Hessian multiplied by the change in the variables. Rearranging,

x^(k+1) − x^k = [∇²f(x^(k+1))]^(−1) (∇f(x^(k+1)) − ∇f(x^k)).

As I mentioned earlier, in Newton's method for the unconstrained optimization problem this creates two difficulties: first, the Hessian ∇²f must be positive definite; second, the inverse must exist, and when the number of variables is large the computational burden of inverting is heavy. So we do not take the inverse; we replace [∇²f(x^(k+1))]^(−1) by a matrix S_(k+1), which is updated at each iteration, as we will now see. Writing δ_k = x^(k+1) − x^k and γ_k = ∇f(x^(k+1)) − ∇f(x^k), the relation above becomes our equation number (1):

S_(k+1) γ_k = δ_k.    (1)
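The secant relation above can be checked numerically. Below is a minimal sketch (the quadratic and the two points are illustrative choices, not from the lecture): for a quadratic objective f(x) = ½ xᵀA x + bᵀx, the gradient is A x + b, so the gradient difference equals the constant Hessian A times the variable difference exactly.

```python
import numpy as np

# Verify the secant relation  grad f(x_{k+1}) - grad f(x_k) = H (x_{k+1} - x_k)
# for a quadratic f(x) = 0.5 x^T A x + b^T x, whose Hessian is the constant A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # example Hessian (positive definite)
b = np.array([1.0, 2.0])

def grad(x):
    return A @ x + b

x_k  = np.array([0.0, 0.0])
x_k1 = np.array([1.0, -1.0])

delta_k = x_k1 - x_k                 # change in the decision variables
gamma_k = grad(x_k1) - grad(x_k)     # change in the gradient

# gamma_k carries the Hessian information: gamma_k = A @ delta_k, exactly
print(np.allclose(gamma_k, A @ delta_k))   # True
```

For a non-quadratic function the relation holds only approximately, with ∇²f evaluated near the two points, which is precisely why the gradient difference is said to "carry the information" of the Hessian.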
So now the question is: given S_k, how is S_(k+1) obtained? It can be obtained using a correction formula, and such correction formulas are of rank-1 or rank-2 type. Various correction formulas have been developed in the literature, most of which are of rank 1 or rank 2. So we can write

S_(k+1) = S_k + Δ_k,

that is, knowing S_k, I find the approximation of the Hessian inverse at the (k+1)-th iteration by adding a correction term Δ_k, where Δ_k is a matrix of the same dimension as S_k, namely n × n, and of rank 1 or rank 2. The simplest choice of Δ_k is the outer product of a vector with itself. So I define a special rank-1 choice

Δ_k = α_k u_k u_k^T,

where u_k is any nonzero vector of dimension n × 1, so that u_k u_k^T is a matrix of dimension n × n, and α_k is a scalar quantity greater than 0. With this choice, the matrix Δ_k is always positive semidefinite.
Now, if you add a positive semidefinite matrix to a positive definite matrix, the result is a positive definite matrix. So since S_k is positive definite and Δ_k is positive semidefinite, the resulting S_(k+1) is positive definite. Let us call the update S_(k+1) = S_k + Δ_k equation number (2), and the choice Δ_k = α_k u_k u_k^T equation number (3). From (2) one can also see that S_(k+1) is symmetric if S_k is symmetric: the outer product of a vector with itself is a symmetric matrix, and a symmetric matrix plus a symmetric matrix is symmetric. Next, note what we have defined: δ_k is the change in the decision-variable values between the (k+1)-th and k-th iterations,

δ_k = x^(k+1) − x^k,    (4)

and γ_k is the difference of the gradients at the two successive points,

γ_k = ∇f(x^(k+1)) − ∇f(x^k).    (5)

The last equation numbered so far was (3), so these are (4) and (5). Our main job now is to replace α_k and u_k in equation (2), with the choice (3), in terms of the known information δ_k and γ_k. That is our next step. To do this, recall equation number (1).
If you recall equation (1), it says S_(k+1) γ_k = δ_k, where γ_k denotes the change in the gradient of the function between the two successive points. Substituting the update S_(k+1) = S_k + α_k u_k u_k^T, I can write

(S_k + α_k u_k u_k^T) γ_k = δ_k.

Taking S_k γ_k to the right-hand side,

α_k u_k u_k^T γ_k = δ_k − S_k γ_k.    (6)

Note the dimensions: γ_k is a vector of dimension n × 1 and u_k^T has dimension 1 × n, so u_k^T γ_k is a scalar quantity, and α_k is also a scalar. Now multiply both sides of (6) on the left by γ_k^T: the left-hand side becomes α_k (γ_k^T u_k)(u_k^T γ_k), and the right-hand side becomes γ_k^T (δ_k − S_k γ_k), which is again a scalar.
Since the transpose of a scalar is the same scalar, γ_k^T u_k = u_k^T γ_k, so the product of these two scalars is a square, and I can write

α_k (u_k^T γ_k)² = γ_k^T (δ_k − S_k γ_k).    (7)

We had gone up to equation (5); the relation α_k u_k u_k^T γ_k = δ_k − S_k γ_k is equation (6), and this squared relation is equation (7). Next, one can write down the following identity without any problem. Note that α_k u_k u_k^T is nothing but Δ_k, and recall from our definition that α_k is a scalar while u_k u_k^T is a matrix. The identity is

α_k u_k u_k^T = [α_k u_k (u_k^T γ_k)] [(γ_k^T u_k) u_k^T α_k] / [α_k (u_k^T γ_k)(γ_k^T u_k)],

because u_k^T γ_k is a scalar and γ_k^T u_k is the same scalar, so the same scalar quantities written in the numerator and the denominator cancel.
One α_k in the numerator cancels with the α_k in the denominator, leaving α_k u_k u_k^T, which is the same thing we started with; so this is indeed an identity. The right-hand side is well formed: the numerator, taken as a whole, is a matrix, and it is divided by a scalar quantity, since u_k^T γ_k, γ_k^T u_k and α_k are all scalars. Now go back to equation (2), which tells us how to update to the (k+1)-th iteration. Taking S_k to the other side,

S_(k+1) − S_k = Δ_k,

where S_(k+1) is the inverse-Hessian approximation at the (k+1)-th iteration and S_k the one at the k-th iteration; the difference of the two successive approximations is Δ_k. Writing Δ_k using the identity just derived,

S_(k+1) − S_k = (α_k u_k u_k^T γ_k)(γ_k^T u_k u_k^T α_k) / (α_k (u_k^T γ_k)²),

where the first four quantities are grouped together as one factor, the remaining quantities as a second factor which is just the transpose of the first, and in the denominator we have used that γ_k^T u_k and u_k^T γ_k have the same value, so their product can be written as a square.
So the denominator is α_k (u_k^T γ_k)², because γ_k^T u_k has the same value as u_k^T γ_k: the transpose of a scalar quantity is the same quantity. And you can easily verify the structure of the numerator: u_k is a column vector of dimension n × 1, and u_k^T is its transpose, a row vector; a column vector post-multiplied by a row vector gives a matrix, so the numerator is an n × n matrix. Now use equations (6) and (7): by equation (6), each factor α_k u_k u_k^T γ_k is replaced by δ_k − S_k γ_k, and by equation (7), the denominator α_k (u_k^T γ_k)² is replaced by γ_k^T (δ_k − S_k γ_k). Hence, from (6) and (7), we can write

S_(k+1) − S_k = (δ_k − S_k γ_k)(δ_k − S_k γ_k)^T / (γ_k^T (δ_k − S_k γ_k)).

Check the dimensions: S_k is a matrix multiplied by the column vector γ_k, giving a vector; δ_k − S_k γ_k is therefore a column vector, and its product with the corresponding row vector gives an n × n matrix; the denominator, a row vector times that same column vector, is a scalar quantity.
Therefore, I can write

S_(k+1) = S_k + (δ_k − S_k γ_k)(δ_k − S_k γ_k)^T / (γ_k^T (δ_k − S_k γ_k)).

So we have now derived how, at each iteration, the Hessian inversion is replaced by an equivalent matrix: knowing the previous value S_k, we obtain S_(k+1), the replacement for the inverse of the Hessian at the (k+1)-th iteration. And you know the ingredients: δ_k is the difference of the decision-variable values at the two successive points x^(k+1) and x^k, and γ_k is the difference of the gradients of the objective function at those two successive points. So I can compute this for k = 0, 1, 2, 3, … recursively, and in this way I avoid the inversion of the Hessian matrix when computing the solution of the optimization problem by Newton's method. When the inverse is replaced by such a matrix, with S_(k+1) calculated recursively in place of a matrix inversion, we call the method a quasi-Newton method. It is like Newton's method; only the way we calculate is different. One must be careful about one point: the denominator γ_k^T (δ_k − S_k γ_k) is a scalar quantity, and it should not be very small or zero; if it is zero or very small, this correction term will blow up to a very large value.
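As an illustrative sketch (the quadratic, the starting matrix and the step sequence below are example choices of mine, not from the lecture), the derived rank-1 update can be coded directly. On a quadratic with Hessian A, the recursion drives S_k toward A⁻¹, and the small-denominator guard reflects the caution just mentioned.

```python
import numpy as np

# Rank-1 quasi-Newton (SR1-type) update derived above:
#   S_{k+1} = S_k + (d - S_k g)(d - S_k g)^T / (g^T (d - S_k g)),
# with d = x^{k+1} - x^k and g = grad f(x^{k+1}) - grad f(x^k).
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # Hessian of an example quadratic
b = np.array([-1.0, 1.0])

def grad(x):
    return A @ x + b                      # gradient of 0.5 x^T A x + b^T x

def rank1_update(S, d, g, tol=1e-8):
    r = d - S @ g
    denom = r @ g
    # Guard: skip the update when the denominator is dangerously small
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(g):
        return S
    return S + np.outer(r, r) / denom

S = np.eye(2)                             # S_0: any positive definite start
x = np.array([0.0, 0.0])
for step in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    x_new = x + step
    S = rank1_update(S, x_new - x, grad(x_new) - grad(x))
    x = x_new

print(np.allclose(S, np.linalg.inv(A)))  # True: after n independent steps
```

On a quadratic, the rank-1 formula satisfies all previous secant equations, so after n linearly independent steps S equals the exact inverse Hessian; for general functions it only approximates it, which is the point of the method.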
So that is the one disadvantage of this rank-1 formula. The derivation was based on rank 1 because, if you recollect, the correction Δ_k added to S_k was chosen as the outer product of a vector with itself, α_k u_k u_k^T, whose rank is 1, and on that basis we derived the formula. So if the denominator is very small or zero, you cannot apply it; the alternative way is to go for a rank-2 choice of Δ_k. But there is also an advantage: you do not need to check the descent direction, which in Newton's method required checking that the Hessian matrix is positive definite. That is not necessary here, because it holds automatically: once you select S_0 at k = 0 as any positive definite matrix, then, as I told you, the correction Δ_k is always a positive semidefinite matrix, and a positive definite matrix plus a positive semidefinite matrix is again positive definite. So it is guaranteed that every S_k is a positive definite matrix, and you need not check the descent direction of the function at each iteration. Due to this advantage, the quasi-Newton method is widely used for unconstrained optimization problems.
So far we have discussed how to solve the unconstrained optimization problem, either by numerical techniques or directly. To solve it analytically: set the gradient of the function to zero and solve to find the stationary points; then compute the Hessian matrix and check whether it is positive definite or negative definite, and you will be able to conclude whether the function is at a minimum or a maximum. If the Hessian is positive definite, the function attains its minimum at that stationary point; if, solving analytically, you find the Hessian is negative definite, then at the corresponding stationary point the function attains its maximum. One thing I forgot to tell you about the numerical algorithms: in the steepest descent method the convergence rate is linear, meaning order 1, whereas in the conjugate gradient method the convergence rate is between order 1 and order 2. Order 1 means the error decreases linearly, that is, the function value approaches the minimum value of the function linearly. In Newton's method the convergence rate is quadratic, in other words the order of convergence is 2, so Newton's method converges much faster than the steepest descent method. So next we will consider the optimality conditions for constrained optimization problems.
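The difference between order-1 and order-2 convergence can be seen on a small one-dimensional example (my own illustration, not from the lecture): for f(x) = x − log x, the minimizer is x* = 1, with f′(x) = 1 − 1/x and f″(x) = 1/x². Newton's iteration squares the error at every step, while fixed-step gradient descent shrinks it by a roughly constant factor.

```python
# Newton vs. fixed-step gradient descent on f(x) = x - log(x), minimum at x* = 1.
def fprime(x):  return 1.0 - 1.0 / x     # f'(x)
def fsecond(x): return 1.0 / x**2        # f''(x)

# Newton: x <- x - f'(x)/f''(x); the error e = |x - 1| obeys e_new = e^2
x, newton_errors = 0.5, []
for _ in range(4):
    x = x - fprime(x) / fsecond(x)
    newton_errors.append(abs(x - 1.0))
print(newton_errors)   # approx. 0.25, 0.0625, 0.0039, 1.5e-5 (each ~ previous^2)

# Gradient descent with fixed step 0.25: error shrinks by a roughly constant factor
x, gd_errors = 0.5, []
for _ in range(4):
    x = x - 0.25 * fprime(x)
    gd_errors.append(abs(x - 1.0))
print(gd_errors)       # approx. 0.25, 0.17, 0.12, 0.08 (geometric decrease)
```

After four iterations Newton's error is already near 10⁻⁵ while gradient descent is still around 10⁻¹, which is exactly the order-2 versus order-1 behaviour stated above.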
If you recollect our first lecture, we discussed the basic structure, the mathematical formulation, of optimization problems. Now we are going to the constrained optimization problem. Most practical, real-world problems have constraints: equality constraints and inequality constraints, and in addition side constraints, for example that some of the variables cannot be negative; all this we discussed at length in our first or second lecture. The generalized constrained optimization problem is:

minimize f(x),    (1)

where x has dimension n × 1, so there are n decision variables x_1, x_2, …, x_n, and we have to minimize this function; subject to the equality constraints

h_i(x_1, x_2, …, x_n) = 0,  i = 1, 2, …, p,

where any particular constraint may involve only a few of the decision variables. So there are p equality constraints, h_1 = 0, h_2 = 0, …, h_p = 0. And also the inequality constraints

g_j(x_1, x_2, …, x_n) ≤ 0,  j = 1, 2, …, m,

so we have m inequality constraints. Our problem is to minimize this function subject to these constraints.
Call the equality constraints equation number (2) and the inequality constraints equation number (3). If (1), (2) and (3) are all linear functions, we call it a linear programming problem; otherwise, if any one of them is nonlinear, it is a nonlinear optimization problem, as we discussed earlier. So we may have a linear optimization problem or a nonlinear one. Let us now get some idea about the constraints. Suppose, just to explain constraints, we have the problem: minimize f(x), where x has 2 variables,

f(x) = (x_1 − 2.5)² + (x_2 − 2.5)²,

a quadratic function, subject to

g_1(x_1, x_2) = 2x_1 + 2x_2 − 3 ≤ 0.

Please note the notation: when there is an equality constraint we use h, and when there is an inequality constraint we use g; we will use this notation throughout our discussion. There are two further constraints:

g_2(x_1, x_2) = −x_1 ≤ 0,
g_3(x_1, x_2) = −x_2 ≤ 0.

Look at g_2: −x_1 ≤ 0 indicates that the value of x_1 is always nonnegative. So in another way I can write this constraint as x_1 ≥ 0, because −x_1 is less than or equal to zero exactly when x_1 is greater than or equal to zero.
In other words, in place of g_2 I can write x_1 ≥ 0, which indicates the region where x_1 is nonnegative. Similarly, the third inequality constraint g_3 means that x_2 is always nonnegative. So if you take x_1 along the horizontal axis and x_2 along the vertical axis, g_2 and g_3 together say that the design space, the design variables, must lie in the first quadrant. Now, what about the first inequality constraint? First draw the line 2x_1 + 2x_2 − 3 = 0; then we will decide on which side of the line the expression is less than 0 and on which side it is greater than 0. On the line, if x_1 = 0 then 2x_2 − 3 = 0, so x_2 = 1.5; and when x_2 = 0, x_1 is again 1.5. So the straight line passes through (0, 1.5) and (1.5, 0). We cannot extend it beyond the first quadrant, because x_1 and x_2 must be nonnegative; the design space is only the first quadrant, so the line is valid only up to that region. Any point on the line gives g_1 = 0, but we are given g_1 ≤ 0, so we must determine in which region g_1 is less than or equal to 0.
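A short sketch of the feasible region just described: the three inequality constraints cut out the triangle with vertices (0, 0), (1.5, 0) and (0, 1.5) in the first quadrant, and a simple feasibility test makes this concrete.

```python
# Feasibility check for the example constraints of the lecture:
#   g1 = 2*x1 + 2*x2 - 3 <= 0,  g2 = -x1 <= 0,  g3 = -x2 <= 0.
def feasible(x1, x2):
    g1 = 2 * x1 + 2 * x2 - 3   # must be <= 0
    g2 = -x1                   # must be <= 0  (i.e. x1 >= 0)
    g3 = -x2                   # must be <= 0  (i.e. x2 >= 0)
    return g1 <= 0 and g2 <= 0 and g3 <= 0

print(feasible(0.5, 0.5))    # True: inside the triangle
print(feasible(0.75, 0.75))  # True: on the line 2*x1 + 2*x2 - 3 = 0
print(feasible(2.5, 2.5))    # False: the unconstrained minimizer is infeasible
print(feasible(-0.1, 0.2))   # False: violates x1 >= 0
```

Note in particular that the unconstrained minimizer (2.5, 2.5) fails the test, which is why the constrained minimum must lie on the boundary, as the geometric argument below shows.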
When g_1 = 0 we are on the line; when g_1 < 0 we are in the half-plane below the line, the side containing the origin. But g_2 and g_3 tell us that our design variables x_1 and x_2 lie in the first quadrant, so we cannot go outside it. Hence the design space, the region where the design variables must lie, is the triangle formed by the line and the two axes. Now, what is our problem? To minimize f. The level sets of the objective are circles whose centre is (2.5, 2.5). It is obvious that if the constraints were not there, the minimum value of the function would be 0, attained at the centre; but we have the constraints. As you increase the function value, the circle around the centre grows. Suppose I pick some function value: a small circle around the centre does not satisfy our three constraints, since, as we have shown, the feasible space where x_1 and x_2 must lie is the triangle. So, going on increasing the radius, from the geometric point of view the constrained minimum of the function lies on the straight line: draw the circle that just touches the straight line. Beyond that, if you increase the size of the circle further, it certainly meets the design space, but the function value has increased, so that is not the minimum. The minimum corresponds to the circle that just touches the line. Call the centre O and the touching point P. You can easily find the point at which the circle touches the line, and at that point you get the minimum value of the function; it is a simple geometric concept.
To find where it touches, compute, from the geometric point of view, the perpendicular distance from the centre to the line. Once you know the perpendicular distance, you can find the coordinates of the touching point using some coordinate geometry, and once you know the coordinates, you immediately get the function value by substituting x_1 and x_2. By coordinate geometry, the distance from a point to the line ax + by + c = 0 is |ax + by + c| / √(a² + b²); here, with the line 2x_1 + 2x_2 − 3 = 0 and the point x_1 = 2.5, x_2 = 2.5, from which we are measuring,

OP = |2(2.5) + 2(2.5) − 3| / √(2² + 2²) = 7 / (2√2).

This is not approximate; it comes out exactly. Moreover, OP and the straight line are perpendicular to each other, so you can also use the slope. Call the unknown touching point (x̄_1, x̄_2). The line 2x_1 + 2x_2 − 3 = 0 has slope −1, so the perpendicular OP has slope 1, and the slope of the segment from (x̄_1, x̄_2) to (2.5, 2.5) is (2.5 − x̄_2)/(2.5 − x̄_1). Hence

(2.5 − x̄_2)/(2.5 − x̄_1) = 1, which implies x̄_1 = x̄_2.

So the two coordinates of the touching point are the same. Next, this point must also satisfy the circle equation.
And what is the circle equation? It is (x_1 − 2.5)² + (x_2 − 2.5)² = (OP)², and if you recollect, the distance OP is 7/(2√2). Since the touching point satisfies the circle equation at x_1 = x̄_1, x_2 = x̄_2,

(x̄_1 − 2.5)² + (x̄_2 − 2.5)² = (7/(2√2))² = 49/8.

Solve these two equations: we know x̄_1 = x̄_2, so this gives 2(x̄_1 − 2.5)² = 49/8, hence (x̄_1 − 2.5)² = 49/16, and taking the root on the feasible side, x̄_1 = x̄_2 = 2.5 − 7/4 = 0.75. Please check it: just substitute into the expressions and you will get it. So, for this specific problem, I just drew the three inequality constraints in a graph and, from a purely geometric point of view, without solving by any optimization technique, found the point at which the function value is minimum. That is one approach. But next we have to do it analytically. Now look at the inequality constraint for this specific problem: g_1(x_1, x_2) = 2x_1 + 2x_2 − 3 ≤ 0. I can always rewrite this inequality constraint in equality form. What does "this quantity is less than or equal to 0" mean? It means that if I add some nonnegative quantity to it, I can make it exactly equal to 0. So I now write the inequality constraint as an equality constraint:

2x_1 + 2x_2 − 3 + s² = 0,

where s² ≥ 0 is the added nonnegative term, needed because the left-hand side is otherwise nonpositive.
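The geometric computation above can be verified numerically (an illustrative check using the standard point-to-line distance formula; the variable names are my own):

```python
import math

# Distance from the centre (2.5, 2.5) to the line 2*x1 + 2*x2 - 3 = 0,
# and the foot of the perpendicular, which is the constrained minimizer.
a, b, c = 2.0, 2.0, -3.0
cx, cy = 2.5, 2.5

dist = abs(a * cx + b * cy + c) / math.sqrt(a**2 + b**2)
print(dist)                    # 7/(2*sqrt(2)) ~ 2.4749

# Foot of the perpendicular: move from the centre along the normal (a, b)
t = (a * cx + b * cy + c) / (a**2 + b**2)
x1_bar, x2_bar = cx - t * a, cy - t * b
print(x1_bar, x2_bar)          # 0.75 0.75

f_min = (x1_bar - 2.5)**2 + (x2_bar - 2.5)**2
print(abs(f_min - dist**2) < 1e-12)   # True: minimum value equals OP^2 = 49/8
```

So the touching point is (0.75, 0.75) and the constrained minimum value is 49/8 = 6.125, in agreement with the hand derivation.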
So whenever there is an inequality constraint, I can always convert it into an equality constraint by adding a nonnegative quantity s² ≥ 0. Look at the case s = 0: then the point satisfies g_1 = 0, that is, the boundary of the constraint. So any inequality constraint can be converted into an equality constraint by adding a slack term s_1² ≥ 0. And it cannot be negative: if s_1² had to be negative, that would indicate that g_1 is positive, which violates the given inequality constraint. So s_1² should always be greater than or equal to 0. Now, at this moment we know how to solve unconstrained optimization problems. So if we have the general problem we mentioned, minimize the function subject to equality constraints and inequality constraints, and we know how to convert an inequality constraint into an equality constraint, then the question is this: if we can transform the constrained optimization problem into an unconstrained optimization problem, we can solve it as we considered earlier, either numerically or by the analytic method. Let us see one quick way of converting. Consider the same problem: minimize

f(x_1, x_2) = (x_1 − 2.5)² + (x_2 − 2.5)²

subject to only one equality constraint,

h_1(x_1, x_2) = 2x_1 + 2x_2 − 3 = 0.
So there is only one equality constraint. How do we transform this constrained optimization problem into an unconstrained one? One can do it like this: solve the constraint for x_2. From 2x_1 + 2x_2 − 3 = 0 we get x_2 = (3 − 2x_1)/2, and the 2s cancel to give x_2 = −x_1 + 1.5. Put this value of x_2 into the objective function; what was a function of x_1 and x_2 now becomes a function of x_1 alone, which I can write as

f(x_1) = (x_1 − 2.5)² + (−x_1 + 1.5 − 2.5)².

You see, by forcing the equality constraint into the objective function, the problem has become an unconstrained optimization problem, and this problem we can solve by what we discussed earlier. Similarly, when an inequality constraint is present, first convert it into an equality constraint, as I have just mentioned, and then proceed in the same way. In the next class we will discuss the more general case, with inequality constraints along with equality constraints, and how to convert such a constrained optimization problem into unconstrained optimization problems. Thank you.
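The substitution just described can be sketched in a few lines (illustrative code, with names of my own choosing): eliminating x_2 gives a one-variable unconstrained problem whose stationary point recovers the same minimizer found geometrically.

```python
# Substitution method for the example: force x2 = 1.5 - x1 into the objective,
# turning the constrained problem into the one-variable problem
#   f(x1) = (x1 - 2.5)^2 + (-x1 - 1)^2.
def f(x1):
    x2 = 1.5 - x1                      # equality constraint eliminated
    return (x1 - 2.5)**2 + (x2 - 2.5)**2

# Stationary point: df/dx1 = 2(x1 - 2.5) + 2(x1 + 1) = 4*x1 - 3 = 0
x1_star = 3.0 / 4.0
x2_star = 1.5 - x1_star
print(x1_star, x2_star)    # 0.75 0.75
print(f(x1_star))          # 6.125, the constrained minimum value
```

The answer (0.75, 0.75) with minimum value 6.125 matches the geometric solution, confirming that the substitution preserves the problem.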