Welcome everyone. Today I will talk a little more about constraint qualifications, and then we will move on to the theory of duality for convex optimization. You will recall that I had told you, in an abstract sense, what is called a constraint qualification. To describe this properly, let S be the set {x : g_i(x) ≤ 0 for i = 1, …, m}; each g_i is a function from R^n to R, so S itself is a subset of R^n. Recall that A(x*) was what we called the active set: the set of those i in {1, …, m} such that g_i(x*) = 0. We say that a constraint qualification holds at x* if the tangent cone can be described in terms of the gradients of the constraints, that is, if the tangent cone of S at x* can be written as

T_S(x*) = { d : ∇g_i(x*)^T d ≤ 0 for all i ∈ A(x*) }.

If this equality holds, we say that a constraint qualification holds. I also alluded to one condition that ensures this: the linear independence constraint qualification. It simply says that the gradients ∇g_i(x*), for i in the active set, are linearly independent.
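To make the linear independence constraint qualification concrete, here is a small numerical illustration; the example problem is my own and not from the lecture. Take g_1(x) = x_1^2 + x_2^2 − 1 and g_2(x) = −x_2 in R^2, and x* = (1, 0), where both constraints are active. A short script can identify the active set and check that the active gradients are linearly independent:

```python
# Hypothetical example (not from the lecture): check LICQ at x* = (1, 0)
# for S = {x : g1(x) = x1^2 + x2^2 - 1 <= 0,  g2(x) = -x2 <= 0}.

def g1(x): return x[0]**2 + x[1]**2 - 1.0
def g2(x): return -x[1]

def grad_g1(x): return (2.0 * x[0], 2.0 * x[1])
def grad_g2(x): return (0.0, -1.0)

x_star = (1.0, 0.0)

# Active set A(x*): constraints holding with equality at x*.
active = [i for i, g in enumerate([g1, g2]) if abs(g(x_star)) < 1e-12]

# Gradients of the active constraints at x*.
grads = [[grad_g1, grad_g2][i](x_star) for i in active]

# Two vectors in R^2 are linearly independent iff their 2x2 determinant is nonzero.
det = grads[0][0] * grads[1][1] - grads[0][1] * grads[1][0]
licq_holds = abs(det) > 1e-12
print(active, licq_holds)  # [0, 1] True
```

Here the active gradients are (2, 0) and (0, −1), which are clearly independent, so LICQ holds at x* and the tangent-cone description above is exact there.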
If these gradients are linearly independent, that guarantees equality in this relation, so we can characterize the tangent cone by exactly this set of inequalities. What I will mention now are some more constraint qualifications. Some of these have names; others are simply referred to directly in the literature. Throughout, I will take S in the form {x : g_i(x) ≤ 0, i = 1, …, m}, so these are constraint qualifications for sets described completely by inequality constraints; you can always extend them to sets with equality constraints by posing each equality as two opposing inequalities. Here is the first one, call it CQ1. Suppose x* ∈ S, and suppose there exists a direction d ∈ R^n such that for each i in the active set A(x*) we have either ∇g_i(x*)^T d < 0 strictly, or ∇g_i(x*)^T d = 0 and g_i is affine. Then the constraint qualification holds at x*. In short, what I have written here is a sufficient condition for a constraint qualification to hold.
So if you can find a direction d such that, for every active constraint, moving in that direction either makes you strictly feasible to first order, or keeps you on the constraint while the constraint is affine, then it follows that equality holds here, that is, the constraint qualification holds. There is a somewhat long proof of this which I am going to skip; if you want to look it up, you can find it in the notes on the internet. Here is another constraint qualification; this one is actually very popular, and it comes out as a weakening of CQ1. Once again, suppose x* ∈ S, and suppose there exists a point x̂ ∈ R^n such that for each i in the active set A(x*), either g_i(x̂) < 0 strictly and g_i is convex, or g_i(x̂) ≤ 0 and g_i is affine. (Here, and also in CQ1, these functions are all assumed differentiable; otherwise I would not be able to take derivatives.) Then the constraint qualification holds at x*. So suppose you can find a point x̂ such that for each constraint in the active set, either the constraint holds strictly at x̂ and the constraint is convex, or the constraint simply holds, g_i(x̂) ≤ 0, and the constraint is affine.
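To make CQ1 concrete, here is a small check on a made-up example (the constraints, the point x*, and the trial direction d are my own, not from the lecture). We verify that for each active constraint, either the gradient direction strictly decreases it, or it stays flat and the constraint is affine:

```python
# Hypothetical example: verify the CQ1 condition at x* = (1, 0) for
# g1(x) = x1^2 + x2^2 - 1 (not affine) and g2(x) = -x2 (affine),
# both active at x*, with the trial direction d = (-1, 0).

grad_g = {
    "g1": (2.0, 0.0),   # gradient of g1 at x* = (1, 0)
    "g2": (0.0, -1.0),  # gradient of g2 (constant, since g2 is affine)
}
is_affine = {"g1": False, "g2": True}

d = (-1.0, 0.0)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def cq1_ok(direction):
    # CQ1: for every active i, either grad^T d < 0 strictly,
    # or grad^T d == 0 and g_i is affine.
    for name, g in grad_g.items():
        s = dot(g, direction)
        if not (s < 0 or (s == 0 and is_affine[name])):
            return False
    return True

print(cq1_ok(d))           # True:  g1 strictly decreases, g2 stays flat but is affine
print(cq1_ok((1.0, 0.0)))  # False: this direction moves into the boundary of g1
```

So d = (−1, 0) witnesses CQ1 at this x*, while d = (1, 0) does not.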
In other words, for affine constraints all you are asking for is feasibility, and for convex constraints you are asking for strict feasibility. Naturally, an affine constraint is also a convex constraint, so the strict-feasibility requirement is really applied only to those constraints that are convex but not affine. Such a point x̂ is what is called a Slater point, and I will make a mention of it again; it comes up in a very important way later also. The condition itself is sometimes referred to as the Slater condition: that there exists such a point. Now, the trouble with this way of writing constraint qualifications is that it asks us to check the condition for every i in the active set. A much easier thing to do is to not bother about the active set at all and check that the condition holds for all i. And so, sometimes when we refer to a Slater point, we mean a point where these conditions hold not just for i in a certain active set, but for all i. In that case, what the condition says is that the feasible region S contains a point that lies in the interior of all the convex constraints and is feasible for all the affine constraints.
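Here is a small sketch of checking this stronger, all-i version of the Slater condition on a toy feasible region of my own devising (not from the lecture): strict inequality for the convex non-affine constraint, plain feasibility for the affine one.

```python
# Hypothetical example: Slater point check for
# S = {x : x1^2 + x2^2 - 1 <= 0 (convex, non-affine),  x1 + x2 - 1 <= 0 (affine)}.

constraints = [
    (lambda x: x[0]**2 + x[1]**2 - 1.0, False),  # pairs (g_i, is_affine)
    (lambda x: x[0] + x[1] - 1.0,       True),
]

def is_slater_point(x):
    # Strict feasibility for convex non-affine constraints,
    # ordinary feasibility for affine constraints.
    for g, affine in constraints:
        v = g(x)
        if affine:
            if v > 0:
                return False
        else:
            if v >= 0:
                return False
    return True

print(is_slater_point((0.0, 0.0)))  # True:  g1 = -1 < 0 strictly, g2 = -1 <= 0
print(is_slater_point((1.0, 0.0)))  # False: g1 = 0 is feasible but not strict
```

The origin sits in the interior of the disc constraint and satisfies the affine constraint, so it is a Slater point; the boundary point (1, 0) is feasible but not a Slater point.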
Put together, these two requirements say that there is a point strictly in the interior of the convex constraints and satisfying the affine constraints. Now, what does this have to do with the constraint qualification? You can actually check this yourself; I will just mention a hint: try d = x̂ − x* in constraint qualification 1, use the fact that each g_i is convex and differentiable, and that should bring you back to something like this condition. I will not say more than that.

With these constraint qualifications done, we will now move on to the study of duality in convex optimization. Here is my optimization problem: minimize f(x) subject to g_i(x) ≤ 0 for i = 1, …, m, and h_j(x) = 0 for j = 1, …, p. Although we are talking of convex optimization, let me write this out in a general sort of way for the moment; in a convex optimization problem the h_j would obviously be affine, but let that issue come up when we actually need it. If you recall, I had defined the Lagrangian function

L(x, λ, θ) = f(x) + Σ_{i=1}^m λ_i g_i(x) + Σ_{j=1}^p θ_j h_j(x).

Now let me define the following other function, a function of just λ and θ:

d(λ, θ) = inf_x L(x, λ, θ),

the infimum of the Lagrangian over the entire space, for fixed λ and θ. There is a name for this function: it is called the dual function.
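As a worked example of the dual function (my own toy instance, not from the lecture): minimize f(x) = x^2 subject to g(x) = 1 − x ≤ 0. Then L(x, λ) = x^2 + λ(1 − x), which is minimized over x at x = λ/2, giving the closed form d(λ) = λ − λ^2/4. A quick numerical check against a brute-force infimum:

```python
# Hypothetical example: dual function of  min x^2  s.t.  1 - x <= 0.
# L(x, lam) = x^2 + lam*(1 - x); the inner infimum is attained at x = lam/2,
# so d(lam) = lam - lam^2/4 in closed form.

def lagrangian(x, lam):
    return x * x + lam * (1.0 - x)

def d_closed_form(lam):
    return lam - lam * lam / 4.0

def d_brute_force(lam, lo=-10.0, hi=10.0, n=20001):
    # Approximate inf_x L(x, lam) on a fine grid (the quadratic grows at
    # both ends, so the infimum lies inside [lo, hi]).
    step = (hi - lo) / (n - 1)
    return min(lagrangian(lo + k * step, lam) for k in range(n))

# The closed form and the brute-force infimum agree at several lam values.
for lam in (0.0, 1.0, 2.0, 4.0):
    assert abs(d_closed_form(lam) - d_brute_force(lam)) < 1e-6

print(d_closed_form(2.0))  # 1.0
```

Note that d(2) = 1, which equals the primal optimum of this problem; that connection is exactly where we are headed with duality.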
Now, there is a reason why it is called the dual function: you will soon see that it is very closely related to duality itself, and the dual problem comes up exactly from there. But before we go a little deeper, is there something that can be noticed directly? Here are a couple of things you should note. First, d is a pointwise infimum of affine functions. What does that mean? If you look at the Lagrangian as a function of just λ and θ, for fixed x, it is affine in λ and θ. What you are doing in the definition above is taking the infimum of this sort of function over the third variable, x. So you in fact have a family of affine functions of λ and θ, and for each (λ, θ) you are taking the least of their values. Now, what sort of function results from this? A pointwise infimum of affine functions is necessarily a concave function; this is something you can prove for yourself. So d is a concave function of λ and θ. Note that this fact does not require f, the g_i, or the h_j to be convex; the problem does not need to be a convex optimization problem. That d is always concave holds for any optimization problem of this form. As a consequence, suppose I pose the following other problem: maximize d(λ, θ) subject to λ ≥ 0, with θ unconstrained. This sort of problem is always a convex optimization problem. Why is that the case?
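To see that concavity of d really needs no convexity of the primal, here is a made-up nonconvex instance (mine, not from the lecture): a quartic double-well objective with one inequality constraint. We build d(λ) by brute force and check midpoint concavity numerically:

```python
# Hypothetical example: even for the nonconvex primal
#   min x^4 - 2 x^2   s.t.   1 - x <= 0,
# the dual function d(lam) = inf_x [x^4 - 2 x^2 + lam*(1 - x)] is concave.

def lagrangian(x, lam):
    return x**4 - 2.0 * x * x + lam * (1.0 - x)

def d(lam, lo=-3.0, hi=3.0, n=6001):
    # Brute-force approximation of inf_x L(x, lam) on a grid
    # (the quartic dominates at both ends, so the infimum lies in [lo, hi]).
    step = (hi - lo) / (n - 1)
    return min(lagrangian(lo + k * step, lam) for k in range(n))

# Midpoint-concavity check: d((a+b)/2) >= (d(a) + d(b))/2, up to grid error.
lams = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]
concave = all(
    d((a + b) / 2.0) >= 0.5 * (d(a) + d(b)) - 1e-5
    for a in lams for b in lams
)
print(concave)  # True
```

The small tolerance only absorbs the grid-discretization error; mathematically the inequality holds exactly, since each L(x, ·) is affine in λ and the infimum of affine functions is concave.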
The reason is that the objective is concave and you are maximizing it, which is equivalent to minimizing −d. And minimizing −d means minimizing a convex function over a convex feasible region, namely λ ≥ 0 with θ unconstrained, in short θ ∈ R^p. So this is always a convex optimization problem. Let me give this problem a name: it is what is called the dual problem, and you will soon see that it is in fact the same as the dual problem you encountered as part of your study of linear programming. Well, if this is supposed to be the dual problem, then where are the relations of duality, namely weak duality and strong duality? Let us first look at weak duality; strong duality is where we will spend most of our time subsequently.
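As a preview of weak duality on the earlier toy problem (again my own example, not the lecture's): for min x^2 subject to 1 − x ≤ 0, the primal optimum is p* = 1, attained at x = 1, and the dual function d(λ) = λ − λ^2/4 never exceeds it, attaining 1 at λ = 2:

```python
# Hypothetical example: weak duality  d(lam) <= p*  for
#   min x^2  s.t.  1 - x <= 0,
# where p* = 1 (attained at x = 1) and d(lam) = lam - lam^2/4.

p_star = 1.0

def d(lam):
    return lam - lam * lam / 4.0

lams = [k / 100.0 for k in range(1001)]  # sample lam in [0, 10]

# Weak duality: every dual value is a lower bound on the primal optimum.
weak_duality = all(d(lam) <= p_star + 1e-12 for lam in lams)
print(weak_duality)  # True

# The dual optimum: maximizing d over lam >= 0 gives lam* = 2 with d(2) = 1 = p*.
# That the bound is tight here (strong duality) is consistent with the Slater
# condition: x_hat = 2 is strictly feasible, 1 - 2 < 0.
best = max(d(lam) for lam in lams)
print(abs(best - p_star) < 1e-12)  # True
```

So every dual-feasible (λ ≥ 0) value of d lower-bounds the primal optimum, and in this convex problem with a Slater point the bound is tight, foreshadowing the strong duality discussion to come.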