So, in the last class we saw how to solve a constrained optimization problem using the KKT conditions; we also saw the necessary and sufficient conditions and worked out some problems. Quickly recalling the basic problem: we have a function f(x) that has to be minimized subject to equality constraints and inequality constraints, and our approach is to convert this constrained optimization problem into an unconstrained one by forming a Lagrangian function. In the Lagrangian we considered, f(x) is the objective function, a multiplier λ_i is associated with each equality constraint, and a multiplier μ_j is associated with each inequality constraint. Each inequality constraint was converted into an equality by adding a slack variable s_j², so that it becomes equal to 0. The necessary conditions are that the partial derivatives of L with respect to x_k, the multipliers λ_i and μ_j, and the slack variables s_j are all set to 0. When you differentiate the Lagrangian with respect to μ_j and with respect to the slack variable s_j and set both to 0, the two conditions boil down to a single expression, the switching condition: μ_j g_j = 0 for every j. In other words, if you collect the multipliers μ_1, μ_2, ..., μ_m associated with the inequality constraints into one vector and g_1, g_2, ..., g_m into another, the two vectors are orthogonal. At the optimum point of the objective function, g_j = 0 indicates that the constraint is active, and when a constraint is active the corresponding μ_j is greater than or equal to 0. We also saw the sufficient condition to check whether the point is a minimum or a maximum: you check the Hessian matrix of the Lagrangian function. If the Hessian of the Lagrangian is positive definite, that point gives the minimum value of the function; if it is negative definite, the function value f(x) there is a maximum. That is what we have seen.

Now we will look at what is called sensitivity analysis, in other words post-optimality analysis. Let us state the problem: minimize f(x), where x is of dimension n × 1, subject to h_i(x) = 0 for i = 1, 2, ..., p equality constraints and g_j(x) ≤ 0 for j = 1, 2, ..., m inequality constraints. That is our basic problem. We want to study the variation in the optimal cost due to variations in the parameters of the original problem; in the original problem the right-hand side of each equality constraint h_i(x) = 0 is 0.
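For reference, the setup recalled above can be written compactly in the notation of this lecture (this only restates the recap, nothing new is added):

```latex
\min_{x \in \mathbb{R}^{n}} f(x)
\quad \text{s.t.} \quad h_i(x) = 0,\; i = 1,\dots,p,
\qquad g_j(x) \le 0,\; j = 1,\dots,m,

L(x,\lambda,\mu,s) = f(x) + \sum_{i=1}^{p} \lambda_i h_i(x)
                   + \sum_{j=1}^{m} \mu_j \big( g_j(x) + s_j^{2} \big),

\frac{\partial L}{\partial x_k} = 0, \qquad h_i(x) = 0, \qquad
\mu_j\, g_j(x) = 0, \qquad \mu_j \ge 0 .
```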
If in place of that 0 we have α_i, that is h_i(x) = α_i for i = 1, 2, ..., p, and instead of g_j(x) ≤ 0 we have g_j(x) ≤ β_j for j = 1, 2, ..., m, then naturally the solution will change; previously the optimum of the original problem was obtained at the point x = x*. So, if the equality constraints have α_i in place of 0 and the inequality constraints have β_j in place of 0, where α_i and β_j are very small positive quantities, that is, small variations in the neighbourhood of 0, then what is the optimal solution of the perturbed problem, and how does the optimal function value change due to this perturbation of the parameters? That is what we want to study. In other words, the main aim is this: the study of variations in the optimal solution as some of the original problem parameters are changed is known as sensitivity analysis.

First we will see the effect of changing the constraint limits. If we change a constraint limit by a small positive quantity, what is the change in the optimal solution of the problem? Naturally the new optimum obtained under this parameter variation is a function of the α_i and β_j, with i = 1, 2, ..., p and j = 1, 2, ..., m; call it x*(α, β). When all the α_i and β_j are 0 we get back the solution of the original minimization problem. The optimal function value is therefore also a function of α and β, where α collects the elements α_1, α_2, ..., α_p and β collects β_1, β_2, ..., β_m; write it as f(α, β). If α = 0 and β = 0 we get the original optimum value of the function at x*. Since the 0 on the right-hand side of the equality constraints has changed to α_i and the 0 of the inequality constraints has changed to β_j, the solution changes as well; but we do not know the explicit relationship between f and (α, β).
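Written out, the perturbed problem discussed above is:

```latex
\min_{x} f(x)
\quad \text{s.t.} \quad h_i(x) = \alpha_i,\; i = 1,\dots,p,
\qquad g_j(x) \le \beta_j,\; j = 1,\dots,m,
```

with optimal point x*(α, β) and optimal value f(α, β); f(0, 0) is the optimum of the original, unperturbed problem.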
So, since f is a function of α and β, and α and β are the perturbations in the right-hand sides of the equality and inequality constraints, we can apply a Taylor series expansion. The nominal value is 0, in the sense that in the original problem α is 0 and β is 0, so we expand around (0, 0). To first order,

f(α, β) ≈ f(0, 0) + Σ_{i=1}^{p} ∂f/∂α_i |_(0,0) · α_i + Σ_{j=1}^{m} ∂f/∂β_j |_(0,0) · β_j.

Here f(0, 0) is the value of the cost function when there is no perturbation, that is, when the right-hand side of the equality constraints is 0 and of the inequality constraints is 0. When there is a perturbation, the minimum value of the function changes, and it is a function of α and β. So the change in the cost function due to the small changes α_i and β_j is

f(α, β) − f(0, 0) = Σ_{i=1}^{p} ∂f/∂α_i |_(0,0) · α_i + Σ_{j=1}^{m} ∂f/∂β_j |_(0,0) · β_j,

and this quantity is nothing but Σ_{i=1}^{p} (−λ_i*) α_i + Σ_{j=1}^{m} (−μ_j*) β_j. Let us see how that comes. Write the Lagrangian of the perturbed problem. The equality constraint is h_i(x) = α_i; taking α_i to the left-hand side it becomes h_i(x) − α_i = 0. The inequality constraint is g_j(x) ≤ β_j, that is g_j(x) − β_j ≤ 0, and we have to add something to make it an equality constraint, so with the slack it becomes g_j(x) − β_j + s_j² = 0. Hence the Lagrangian is

L = f(x) + Σ_{i=1}^{p} λ_i (h_i(x) − α_i) + Σ_{j=1}^{m} μ_j (g_j(x) − β_j + s_j²),

which, apart from the perturbation terms, is f(x) + Σ_i λ_i h_i(x) + Σ_j μ_j (g_j(x) + s_j²), the Lagrangian of the original problem.
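Collecting the two expressions just written in one place (a restatement of the expansion and the perturbed Lagrangian, nothing new):

```latex
f(\alpha,\beta) \approx f(0,0)
  + \sum_{i=1}^{p} \left.\frac{\partial f}{\partial \alpha_i}\right|_{(0,0)} \alpha_i
  + \sum_{j=1}^{m} \left.\frac{\partial f}{\partial \beta_j}\right|_{(0,0)} \beta_j ,

L = f(x) + \sum_{i=1}^{p} \lambda_i \big( h_i(x) - \alpha_i \big)
         + \sum_{j=1}^{m} \mu_j \big( g_j(x) - \beta_j + s_j^{2} \big).
```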
This first part is the Lagrangian when there is no perturbation in the right-hand sides of the equality and inequality constraints; when there is a perturbation we get the additional terms Σ_{i=1}^{p} λ_i (−α_i) and Σ_{j=1}^{m} μ_j (−β_j). Now, if you differentiate this Lagrangian with respect to α_i you get −λ_i; that is why the derivative of f with respect to α_i around the origin, that is, with no perturbation, is −λ_i*. Similarly, differentiating with respect to β_j gives −μ_j*.

Now look at what this tells us. The perturbation β_j in the right-hand side of the inequality constraint was taken as a positive quantity at the beginning. β_j > 0 indicates that we have relaxed the constraint; that means we are giving more search space, and if the design space, or search space, is larger than at the earlier stage, there is a possibility of a further reduction in the cost function value.

Let us see this with a picture. According to the original problem we have g_j(x) ≤ 0. Think of two variables x_1 and x_2, and take as an example g_1(x) = x_2 − x_1 ≤ 0. The equation g_1(x) = 0 is a straight line in the (x_1, x_2) plane; one side of the line, one half-space, is where g_1(x) ≤ 0, the other side is where g_1(x) ≥ 0, and any point on the line has g_1 = 0. Now perturb the constraint to g_j(x) ≤ β_j, that is, g_j(x) − β_j ≤ 0; the boundary line g_j(x) − β_j = 0 is the old line shifted. Points with g_j(x) − β_j ≤ 0 lie on one side of the shifted line and points with g_j(x) − β_j ≥ 0 on the other. Previously, with no perturbation and with only this one constraint g_1(x) ≤ 0, our search space was the half-space consisting of the line and the region below it.
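Before continuing with the geometric picture, here is a quick numerical check of the relation ∂f/∂β ≈ −μ* that was just derived. This is a minimal sketch on a hypothetical one-variable problem of my own choosing, not an example from the lecture: minimize f(x) = x² subject to g(x) = 1 − x ≤ β. At β = 0 the KKT conditions give x* = 1 and μ* = 2, so the change in optimal cost should be roughly −2β.

```python
# Minimal sketch (hypothetical example): verify df/dbeta ~ -mu* for
#   minimize x^2   subject to   g(x) = 1 - x <= beta   (i.e. x >= 1 - beta).
# At beta = 0: L = x^2 + mu*(1 - x + s^2), and dL/dx = 2x - mu = 0 gives mu* = 2 at x* = 1.

def constrained_min(beta):
    """Minimizer and minimum of x^2 subject to x >= 1 - beta (closed form)."""
    x_star = max(0.0, 1.0 - beta)   # constraint stays active while 1 - beta > 0
    return x_star, x_star ** 2

mu_star = 2.0
f0 = constrained_min(0.0)[1]
for beta in (0.01, 0.05, 0.10):
    f_beta = constrained_min(beta)[1]
    # actual change in optimal cost vs. the first-order prediction -mu* * beta
    print(f"beta={beta:4.2f}  actual df={f_beta - f0:+.4f}  predicted={-mu_star * beta:+.4f}")
```

For small β the two printed columns agree to first order, and the optimal cost indeed goes down when the constraint is relaxed.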
Now, when you replace that 0 with β_j, which is a small positive quantity, a small variation, the design space or search space increases: previously it extended up to the original line, and now it extends up to the shifted line. So our search space has increased, and we may expect that the cost function, or objective function, value may reduce further from the previous one; at most there will be no change in the function value compared with the previous situation. That is the conclusion we can draw: if you relax a constraint, the design space or search space of the optimization problem is enlarged, which in turn means there is a possibility of a reduction in the cost function value compared with the earlier stage.

Keeping all this in mind, look at the expression we obtained: f(α, β) − f(0, 0). Here f(0, 0), that is α = 0 and β = 0, is the cost function value when there is no perturbation. When there is a perturbation, the optimum point changes and the corresponding function value changes, and it is a function of α and β. We have just seen that this change equals −Σ_{i=1}^{p} λ_i* α_i − Σ_{j=1}^{m} μ_j* β_j.

Now assume, for the time being, that no equality constraints are present in the minimization problem, that is, the h_i are absent for i = 1, 2, ..., p, so all the α_i terms drop out, and concentrate on the inequality constraints whose right-hand side is perturbed to β_j. In that situation the change in the function value due to the change in β_j is −Σ_j μ_j* β_j. Recall that we took β_j to be small and positive. If the Lagrange multiplier value μ_j* were a negative quantity, then negative times negative becomes positive, which indicates that the function value has increased. But this contradicts the assumption we made: relaxing the constraint enlarges the search space, so there is only the possibility of a reduction in the cost function value, yet here the cost function increases when β_j is positive and μ_j* is taken negative. So this cannot happen; the μ_j* value must be a non-negative quantity.
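In symbols, the argument just made (with the equality constraints absent) is:

```latex
f(0,\beta) - f(0,0) \;\approx\; -\sum_{j=1}^{m} \mu_j^{*}\,\beta_j .
```

Since β_j > 0 is a relaxation and a relaxation cannot increase the optimal cost, any μ_j* < 0 would force an increase and is therefore ruled out, so μ_j* ≥ 0.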
So the μ_j value must be non-negative. In what context are we saying this? Look at the constraint equation g_j(x) ≤ 0. When this constraint is active at the optimum point, the switching condition μ_j g_j = 0 holds with g_j = 0 there, and at that optimum point the value of μ_j is a non-negative number. That is what we have shown here: μ_j should be non-negative. If μ_j were less than 0, then relaxation of the constraint with β_j > 0 would result in an increase in the cost value, because minus times minus gives plus, and this contradicts our assumption about what happens when the constraint is relaxed. So it cannot happen; the μ_j value must be non-negative. If you recollect, when we were solving with the KKT necessary conditions and the switching conditions, we considered that when the active constraint is satisfied, that is g_j(x*) = 0, the μ_j value will be a non-negative number; this argument proves that it cannot be negative. So this is the condition we have obtained.

Now consider this: if you scale the cost function by a positive scalar quantity, what is the effect on the optimal point, will it change or not? So our next study is the effect of cost function scaling on the Lagrange multipliers: if you scale the objective or cost function, what happens to the Lagrange multipliers? Scaling the cost function does not change the optimum point: if the optimum point without scaling is x*, then with scaling you still attain the optimum at x*, although the optimal value of the function you get is different; but the Lagrange multiplier values do change. Let us see why. Suppose the problem is to minimize k·f(x), where k > 0 is a positive quantity and x is of dimension n × 1, subject to h_i(x) = 0, i = 1, 2, ..., p, and g_j(x) ≤ 0, j = 1, 2, ..., m. Our previous problem was to minimize f(x) subject to these same constraints, and whatever optimum point we got, call it x*. Now the objective function has been multiplied by a positive constant. The first question is: does the optimum point change? The answer is no. The second question is: what is the effect on the Lagrange multipliers λ_i and μ_j? And certainly the cost function value itself will change.
So, corresponding to this scaled objective function let us form the Lagrangian L̄. The new cost function is k·f(x), and since the cost function has been changed by the positive multiplication factor k, we give the multipliers a bar: λ̄_i is associated with the equality constraints and μ̄_j with the inequality constraints, each inequality again being converted into an equality by adding s_j². So

L̄ = k f(x) + Σ_{i=1}^{p} λ̄_i h_i(x) + Σ_{j=1}^{m} μ̄_j (g_j(x) + s_j²).

Now take the scalar k common:

L̄ = k [ f(x) + Σ_{i=1}^{p} (λ̄_i / k) h_i(x) + Σ_{j=1}^{m} (μ̄_j / k)(g_j(x) + s_j²) ].

Because we are dividing by a positive constant, the sign of λ̄_i/k is the same as the sign of λ̄_i, and since μ̄_j is greater than or equal to 0, a non-negative number, and k is positive, the division does not affect its sign either. So let me denote λ_i = λ̄_i/k and μ_j = μ̄_j/k. Then

L̄ = k [ f(x) + Σ_{i=1}^{p} λ_i h_i(x) + Σ_{j=1}^{m} μ_j (g_j(x) + s_j²) ],

and the bracketed quantity is exactly the Lagrangian function of the original problem, minimize f(x) subject to these constraints. Minimizing k times that Lagrangian, with k positive, is the same as minimizing the Lagrangian itself, so naturally the optimum point will not change; you can see it directly from this expression. Now compare the multipliers. For the old optimization problem the multiplier is λ_i, and we have just obtained that at the optimum λ_i* = λ̄_i*/k; similarly μ_j* = μ̄_j*/k. Therefore λ̄_i* = k λ_i* and likewise μ̄_j* = k μ_j*. So what is our conclusion? Our original problem is to minimize the function f(x) subject to these constraints, and the optimum point at which we get the minimum value of the function is, say, x*.
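The cost-scaling argument in one line (restating what was just derived):

```latex
\bar{L} = k f(x) + \sum_{i=1}^{p} \bar{\lambda}_i h_i(x)
        + \sum_{j=1}^{m} \bar{\mu}_j \big( g_j(x) + s_j^{2} \big)
  = k \Big[ f(x) + \sum_{i=1}^{p} \frac{\bar{\lambda}_i}{k}\, h_i(x)
        + \sum_{j=1}^{m} \frac{\bar{\mu}_j}{k} \big( g_j(x) + s_j^{2} \big) \Big]
  = k\, L ,
```

so the minimizer x* is unchanged and λ̄_i* = k λ_i*, μ̄_j* = k μ_j*.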
Now, when the cost function is scaled to k·f(x), subject to the same equality and inequality constraints, the optimum point does not change, and the Lagrange multiplier of the new optimization problem associated with each equality constraint is k times the corresponding multiplier of the original problem. Similarly, for the scaled objective function the Lagrange multiplier associated with each inequality constraint is k times the original multiplier μ_j. So we can put the conclusion like this: the optimum point x* for both cost functions, f(x) and k·f(x), each minimized subject to the same constraints, is the same; but the optimal Lagrange multipliers are related as λ̄_i* = k λ_i*, that is, the multiplier of the scaled objective function equals k times the multiplier of the original objective function, and likewise μ̄_j* = k μ_j*. The minimum cost value itself, of course, does change: at the optimal point it becomes k·f(x*).

Next is the effect of scaling a constraint. We have equality constraints and inequality constraints; we scale an equality constraint or an inequality constraint by a positive quantity and study the effect on its Lagrange multipliers. Recall once again that our problem is to minimize f(x) subject to h_i(x) = 0, i = 1, ..., p, and g_j(x) ≤ 0, j = 1, ..., m. Now we scale each equality constraint by a factor P_i, where capital P_i is a scaling factor greater than 0, a positive quantity, and each inequality constraint by a factor m_j, which is also positive. If you multiply a constraint by a positive quantity, the constraint is not changed at all: whatever constraints were there in the original problem, multiplied by positive quantities they describe the same conditions, which means the feasible region of the problem remains unchanged. We will see it in a moment: scaling a constraint does not change the constraint boundary, and if it does not change the constraint boundary it naturally has no effect on the optimal solution, because only the constraints have been multiplied by positive scalar quantities while the objective function stays the same.
So the constraint boundary does not change; in other words, the design space or search space remains unchanged even when you multiply the constraints by positive quantities, and therefore there is no effect on the optimal solution x*. Let us see why. With the scaled constraints the Lagrangian of the new optimization problem is

L = f(x) + Σ_{i=1}^{p} λ̄_i · P_i h_i(x) + Σ_{j=1}^{m} μ̄_j (m_j g_j(x) + s̄_j²).

When there was no scaling, the multiplier λ_i was associated with h_i and μ_j with g_j; now the new multipliers λ̄_i are associated with the scaled equality constraints P_i h_i(x), and μ̄_j with the scaled inequality constraints m_j g_j(x), to which we add the new slack variable s̄_j², a non-negative quantity, to make the inequality an equality. Now, in the inequality term take m_j common:

Σ_{j=1}^{m} μ̄_j m_j ( g_j(x) + s̄_j²/m_j ).

Since m_j is a positive quantity and s̄_j² is non-negative, s̄_j²/m_j is again a non-negative quantity and plays the same role as the slack did before; this is exactly where we use the fact that the scaling factors are positive, and it is also why the constraint boundary does not change. So we can write

L = f(x) + Σ_{i=1}^{p} (λ̄_i P_i) h_i(x) + Σ_{j=1}^{m} (μ̄_j m_j)(g_j(x) + a non-negative slack term),

which is nothing but the Lagrangian of the original problem, without the constants multiplying the constraints. So the optimum point will not change, because the Lagrangian function is the same and only the Lagrange multipliers have been relabelled. Hence λ_i* = λ̄_i* P_i, which implies λ̄_i* = λ_i*/P_i for i = 1, ..., p; similarly μ_j* = μ̄_j* m_j, which implies μ̄_j* = μ_j*/m_j for j = 1, ..., m. So if you multiply the constraints, equality or inequality, by positive scalar quantities, the optimum point does not change, as you can see from the Lagrangian function remaining the same; the conclusion is that the Lagrange multipliers are scaled, by 1/P_i for the equality constraints and by 1/m_j for the inequality constraints.
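Restating the constraint-scaling result compactly:

```latex
L = f(x) + \sum_{i=1}^{p} \bar{\lambda}_i P_i\, h_i(x)
         + \sum_{j=1}^{m} \bar{\mu}_j m_j \Big( g_j(x) + \frac{\bar{s}_j^{2}}{m_j} \Big)
\;\Longrightarrow\;
\bar{\lambda}_i^{*} = \frac{\lambda_i^{*}}{P_i}, \qquad
\bar{\mu}_j^{*} = \frac{\mu_j^{*}}{m_j},
```

with x* and the optimal cost unchanged, since P_i > 0 and m_j > 0 leave the feasible region exactly as it was.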
But does the cost value change? Since the optimum point does not change and the cost function itself is the same, the cost value also remains the same. So this completes what we wanted to see on post-optimality testing. To summarize what was on the first slide: we have carried out the sensitivity analysis, the post-optimality analysis. We studied the effect of a small change in the parameters of the equality and inequality constraints, and from there we drew the conclusion that μ_j, the Lagrange multiplier associated with an inequality constraint, must be greater than or equal to 0, a non-negative number, when we are dealing with minimization of a function; that is, under an active constraint the μ_j value will be non-negative. We have also seen that if the objective or cost function is multiplied by a positive scalar quantity, the optimum point does not change; only the Lagrange multiplier values change, by that scale factor. Finally, if you multiply the equality or inequality constraints by a positive quantity, the constraint boundary does not change, which in turn means, as we have shown, that the optimum point x* does not change and the optimum value of the cost function does not change; only the Lagrange multipliers change, the multiplier of the new optimization problem being λ_i* divided by P_i, and similarly for μ_j. In the next class I will discuss convex optimization problems: how to formulate them, the definitions, and all these things. Thank you.