So, let us now start with algorithms for constraint optimization. The methods for constraint optimization are extremely varied, and there are many different types of methods out there. But let us start with the simplest method, which attempts to convert a constrained optimization problem into an unconstrained one. It is a very elegant method, and it is called a penalty method. In a constrained optimization problem, what we are faced with is a problem like this: minimize a function f over a set S. What a penalty method does is introduce a function p from Rn to R, so it is defined on the entire space, such that the following holds: first, p is continuous; second, p(x) is non-negative for all x; and third, p(x) = 0 if and only if x is in S. So, when you satisfy the constraints, that is, when you are in the feasible region, p is 0, and everywhere else it is positive. But unlike the kind of function we had seen when we were talking of duality, where, if you recall, we had defined these indicator functions that jump from 0 to infinity outside the feasible region, the function p is a continuous function. It is not an ill-behaved function: it is continuous, it is 0 on the feasible region, and outside it is positive.
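In symbols, the setup just described can be restated in standard notation (this is only a summary of the conditions on p, nothing new):

```latex
\min_{x \in S} f(x), \qquad
p:\mathbb{R}^n \to \mathbb{R} \text{ such that }
\begin{cases}
  \text{(1) } p \text{ is continuous on } \mathbb{R}^n,\\
  \text{(2) } p(x) \ge 0 \text{ for all } x,\\
  \text{(3) } p(x) = 0 \iff x \in S.
\end{cases}
```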
But because it is continuous, it brings with it a little bit of baggage: it is not going to give you an exact reformulation of the constrained optimization problem, and you will need to do a little more work in order to solve the constrained problem using the penalty function. So, first let us look at a few examples of this sort of function. Suppose my feasible region S is the set of x such that h(x) = 0, that is, one or more equality constraints. In that case, an appropriate penalty function is simply p(x) = ||h(x)||, or ||h(x)||² if necessary. Why can we do this? The reason is that h(x) = 0 if and only if ||h(x)|| = 0, so property 3 holds for free: p(x) is 0 only on the set S, it is obviously greater than or equal to 0 everywhere, and if h is continuous then p is continuous. So, where h is continuous, this is a penalty function. How can one do this for an inequality constraint? Another example would be S given as the set of x such that g(x) ≤ 0. What would be an appropriate penalty function here? One can take, for example, p(x) = max(0, g(x)). What does this do? If g is again continuous, then p is the maximum of two continuous functions, one of which is just the constant function 0.
So, it is also a continuous function, and it satisfies property 1. It is the maximum of 0 and something, so it is always greater than or equal to 0, and it satisfies property 2. And when is it equal to 0? It is 0 exactly when g(x) is itself less than or equal to 0, so it satisfies property 3 as well. This gives us another kind of penalty function. Now obviously, this is not as nice as the previous one because it is non-differentiable, but you can make it a little smoother: for example, you can square the whole thing, and that will give you differentiability also at the points where p(x) = 0. So, what is the penalty method itself? You define a function q(x | c), defined as f(x) + c·p(x): your original function plus c times the penalty function, where c > 0 is a constant. So, for a given c you define this sort of function. What has this done? It has given you a new objective: the original function plus a constant times the penalty. When you are in the feasible region, the penalty function, by property 3, is 0, so in that case q is actually identical to f. In the feasible region, minimizing q is the same as minimizing f. But now we are not going to impose constraints anymore.
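The two example penalty functions and the penalized objective q can be written down directly. A minimal sketch (the particular constraints h and g here are my own toy choices, not from the lecture):

```python
# Equality constraint h(x) = 0; as a toy example, h(x) = x1 + x2 - 1.
def h(x):
    return x[0] + x[1] - 1.0

def p_eq(x):
    # p(x) = ||h(x)||^2: continuous, >= 0, and 0 exactly when h(x) = 0.
    return h(x) ** 2

# Inequality constraint g(x) <= 0; toy example: g(x) = x1^2 + x2^2 - 1 (unit disc).
def g(x):
    return x[0] ** 2 + x[1] ** 2 - 1.0

def p_ineq(x):
    # p(x) = max(0, g(x))^2: squaring smooths the kink where g(x) = 0.
    return max(0.0, g(x)) ** 2

def q(x, f, p, c):
    # Penalized objective q(x | c) = f(x) + c * p(x); equals f(x) on the feasible set.
    return f(x) + c * p(x)
```

Note that on feasible points the penalty term vanishes, so q reduces to f exactly, as described above.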
The whole idea of the penalty method was to convert a constrained problem to an unconstrained problem. So, what the method does is the following. Take an increasing sequence ck going to plus infinity, where by increasing I mean ck+1 has to be greater than or equal to ck. Then, at each step k, solve: minimize q(x | ck) over x in Rn. What happens as k increases? Your ck becomes increasingly large, and whenever ck is extremely large, if your x goes into the region where p is not 0, that is, where p is positive, the region outside your feasible region, the term ck·p(x) blows up. So, if your x wanders into the region where p is positive, the value gets blown up because you are multiplying by a large constant. When you are minimizing q, therefore, better values for x are found in the region where p is actually 0, that is, in the set S itself. So, as k becomes larger, this unconstrained minimization in effect gravitates more and more towards the feasible region S. It starts looking for solutions in the region S itself although you have not actually imposed the constraints; it implicitly looks for solutions in that region because outside that region the objective value itself becomes very large.
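The outer loop just described can be sketched on a one-dimensional toy problem. This is only an illustration under my own assumptions (a hand-rolled gradient-descent inner solver and a step size chosen for this specific problem), not a production implementation:

```python
def penalty_method(f_grad, p_grad, x0, c0=1.0, rho=10.0, n_outer=8, n_inner=50):
    """Minimize q(x | c_k) = f(x) + c_k * p(x) for an increasing sequence
    c_k -> infinity, warm-starting each inner solve at the previous iterate."""
    x, c = x0, c0
    for _ in range(n_outer):
        lr = 1.0 / (2.0 * (1.0 + c))   # step size small enough for this toy problem
        for _ in range(n_inner):       # inner loop: gradient descent on q(. | c)
            x -= lr * (f_grad(x) + c * p_grad(x))
        c *= rho                       # c_{k+1} >= c_k, with c_k -> +infinity
    return x

# Toy problem: minimize f(x) = x^2 subject to x >= 1 (true solution x* = 1).
f_grad = lambda x: 2.0 * x
# Penalty p(x) = max(0, 1 - x)^2, so p'(x) = -2 * max(0, 1 - x).
p_grad = lambda x: -2.0 * max(0.0, 1.0 - x)

x_star = penalty_method(f_grad, p_grad, x0=0.0)
```

For fixed c, the minimizer of q here is x = c/(1 + c), so the iterates sit slightly outside the feasible set and creep towards x* = 1 as c grows, exactly the gravitation towards S described above.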
So, the iteration will tend to look for those solutions there. The result we have for this, let me state as a theorem. Let xk be the sequence generated by the penalty method above; then any limit point x of the sequence xk is a solution of the minimization of f over x in S. What does this mean? You keep doing this over k, increasing your ck, and eventually you get a sequence such that, if it has a limit point, that limit point will be a solution of this particular problem. Now, the thing that is interesting here is that when you are minimizing f plus something else over the entire space, what you actually end up doing, which may seem counterintuitive, is approximating f from below. Your objective value actually increases to the optimal value rather than decreasing to it, which is what you would otherwise expect in an optimization problem. That happens because you are penalizing the constraint violation, and it is the nature of the penalty function that creates this effect. With this, let me also mention that there is a closely related type of method called a barrier method. Instead of penalizing violation of the constraint, a barrier method penalizes the approach towards the boundary: as you go towards the boundary, the barrier function blows up, and so it tends to keep you within the constraint set.
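The barrier idea just described can be sketched on the same toy problem. A minimal sketch under my own assumptions: a logarithmic barrier -log(x - 1) for the constraint x >= 1, with the inner minimization done by bisection on the derivative rather than by any particular solver:

```python
def barrier_step(mu, lo=1.0 + 1e-12, hi=10.0):
    """Minimize x^2 + mu * (-log(x - 1)) over x > 1 by bisection on the
    derivative 2x - mu / (x - 1), which is increasing on (1, infinity)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 2.0 * mid - mu / (mid - 1.0) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Shrink the barrier weight mu towards 0; every iterate stays strictly
# inside the region x > 1 and approaches the solution x* = 1 from within.
xs = [barrier_step(mu) for mu in (1.0, 0.1, 0.01, 0.001)]
```

In contrast to the penalty iterates, which approach the feasible set from outside, these barrier iterates stay strictly feasible throughout.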
So, in a penalty method you are searching over the entire space Rn without imposing the constraint, and eventually you end up inside the constraint set, whereas in a barrier method you again do not impose the constraint explicitly, but you penalize movement towards the boundary of the constraint set. So, that gives you a different type of method. With this, I think I will end today's lecture; we will continue with more optimization algorithms in the next lecture.