In the last class we saw how to solve an optimization problem with inequality constraints graphically, and for an unconstrained optimization problem we know how to solve it analytically as well as by the numerical techniques we have discussed. In general, if we have an optimization problem — more specifically a minimization problem subject to equality and inequality constraints — we can first convert it into an unconstrained optimization problem by introducing unspecified variables known as Lagrange multipliers. Recall that there are two methods. One is the elimination method, which we discussed last class: suppose we have to minimize a function f of x1 and x2, given in quadratic form, subject to an equality constraint (here we considered only one equality constraint). We eliminate a variable: from the constraint, x2 is expressed in terms of x1, so f, originally a function of x1 and x2, becomes a function of x1 alone once x2 is replaced. The equality constraint is thus merged into the objective (cost) function, and the problem becomes an unconstrained optimization problem, which we can solve by our standard analytical or numerical methods. But this elimination process is not easy when there are more variables or more equality constraints; if several equality constraints are present it is not straightforward, and even for two equality constraints it becomes tedious and complex.
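The elimination method can be sketched in a few lines of code. Since the exact quadratic from the lecture is on the board and not reproduced here, this assumes a hypothetical example — f(x1, x2) = x1^2 + x2^2 subject to x1 + x2 - 1 = 0 — and the function names and golden-section routine are illustrative, not from the lecture:

```python
# Elimination method sketch (hypothetical example, since the lecture's exact
# quadratic is on the board): minimize f(x1, x2) = x1**2 + x2**2
# subject to the equality constraint x1 + x2 - 1 = 0.

def f(x1, x2):
    return x1**2 + x2**2

def reduced(x1):
    # From the constraint x1 + x2 - 1 = 0, eliminate x2 = 1 - x1,
    # turning the constrained problem into an unconstrained one in x1 alone.
    return f(x1, 1.0 - x1)

def minimize_1d(func, lo, hi, tol=1e-9):
    # Simple golden-section search: a standard unconstrained 1-D technique.
    inv_phi = (5**0.5 - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c = b - inv_phi * (b - a)
        d = a + inv_phi * (b - a)
        if func(c) < func(d):
            b = d
        else:
            a = c
    return (a + b) / 2

x1_star = minimize_1d(reduced, -10.0, 10.0)
x2_star = 1.0 - x1_star
```

For this example the minimizer is x1 = x2 = 0.5, which automatically satisfies the eliminated constraint.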
Elimination works only when x2, x3, and so on can be expressed explicitly in terms of x1; in most cases this is not possible, so we have to look for an alternative method. That method converts the inequality constraints into equality constraints, and then converts the whole constrained problem into an unconstrained optimization problem. This is the Lagrange multiplier approach. In general, the constrained optimization problem is described as: minimize f(x1, x2, ..., xn), subject to the equality constraints h_i(x1, x2, ..., xn) = 0 for i = 1, 2, ..., p (there are p equality constraints), and the inequality constraints g_j(x1, x2, ..., xn) <= 0 for j = 1, 2, ..., m. So our problem is to minimize this function subject to these equality and inequality constraints. First we convert it into an unconstrained optimization problem; once that is done we can solve it by our different techniques, either numerical or analytical. Let us see how to formulate this. Recall first that an inequality constraint can be converted into an equality constraint: take g(x) <= 0, and to g(x) we add a variable.
We add a nonnegative term so that the sum becomes exactly zero: g_j(x) + s_j^2 = 0. When s_j = 0 this reduces to the equality g_j(x) = 0, and when s_j^2 takes some positive value it indicates g_j(x) < 0. So the inequality constraint is converted into an equality constraint by introducing the slack variable s_j, where s_j^2 is always greater than or equal to zero. Now the problem is reformulated: all the inequality constraints are converted into equality constraints, and then the constraints are attached to the objective function. The reformulated problem is to minimize a new function L of (x1, x2, ..., xn; lambda_1, lambda_2, ..., lambda_p; mu_1, mu_2, ..., mu_m; s_1, s_2, ..., s_m) — I will explain shortly what the mu are. This function is formed from our original cost (objective) function f(x1, x2, ..., xn); adding the constraint terms does not change the objective value when we reach the optimum. So we write lambda_1 h_1(x1, ..., xn) + lambda_2 h_2(x1, ..., xn) + ..., and there are p such equality-constraint terms.
Continuing the sum up to lambda_p h_p(x1, ..., xn). In addition, the inequality constraints, converted into equality constraints, are also added to the new objective function: mu_1 (g_1(x1, ..., xn) + s_1^2) + mu_2 (g_2(x1, ..., xn) + s_2^2) + ... + mu_m (g_m(x1, ..., xn) + s_m^2), since there are m inequality constraints, each converted to an equality by its slack variable. The idea is that when the function reaches its optimum it must satisfy all these conditions: each h_i is zero (so lambda_i times it is zero) and each g_j + s_j^2 is zero (so mu_j times it is zero). Hence at the optimum the added terms vanish and L equals f. This L is called the Lagrangian function, so our new objective function is the Lagrangian, and lambda_i (i = 1, 2, ..., p) and mu_j (j = 1, 2, ..., m) are called the Lagrange multipliers. The lambda_i values are unspecified in sign: they may be positive, negative, or zero. The mu_j values are nonnegative (mu_j >= 0) when we are minimizing the function.
I will explain later why mu_j is nonnegative while lambda_i is unspecified (positive, negative, or zero). So now our constrained optimization problem has been converted into an unconstrained one, and we can write the Lagrangian function in matrix-vector notation: L(x, lambda, mu, s) = f(x) + lambda^T h(x) + mu^T (g(x) + s^2), where x is n x 1, lambda = [lambda_1, lambda_2, ..., lambda_p]^T is p x 1, mu = [mu_1, mu_2, ..., mu_m]^T is m x 1 (these are all Lagrange multipliers), and s^2 denotes the vector [s_1^2, s_2^2, ..., s_m^2]^T, which is again m x 1. Recall that each s_j^2 is a nonnegative quantity: when s_j^2 = 0 the equality g_j(x) = 0 is satisfied, and when s_j^2 > 0 the strict inequality g_j(x) < 0 is satisfied.
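The slack-variable conversion can be illustrated with a tiny sketch; the constraint g(x) = x - 2 <= 0 below is a made-up example, not from the lecture:

```python
# Slack-variable sketch: converting g(x) <= 0 into g(x) + s**2 == 0.
# The constraint g(x) = x - 2 <= 0 is a hypothetical example.

def g(x):
    return x - 2.0

def slack_squared(x):
    # At a feasible point (g(x) <= 0) we can take s**2 = -g(x) >= 0,
    # so that g(x) + s**2 = 0 holds exactly.
    return -g(x)

# At a strictly feasible point the slack is positive (inactive constraint);
# at a boundary point the slack is zero (active constraint).
s2_interior = slack_squared(1.0)   # g(1) = -1 < 0, so s**2 = 1
s2_boundary = slack_squared(2.0)   # g(2) = 0, so s**2 = 0
```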
Next, h(x) = [h_1(x), h_2(x), ..., h_p(x)]^T, which is p x 1, and g(x) = [g_1(x), g_2(x), ..., g_m(x)]^T, which is m x 1; s^2 = [s_1^2, s_2^2, ..., s_m^2]^T has the same dimension as g, namely m x 1, because each s_j^2 is added to g_j to convert the inequality constraint into an equality constraint. Keeping all this in mind: lambda and mu are the Lagrange multipliers, the lambda_i unspecified in sign and the mu_j nonnegative, as we have specified. Once we have the unconstrained optimization problem, instead of minimizing f we minimize L, with the inequality and equality constraints embedded in the objective function. Next we can write the KKT conditions — the necessary conditions, and then the sufficient conditions — in order to minimize this Lagrangian function; that is our next problem. Before we discuss that, recall once more what we have just seen: if g_j(x) <= 0, we convert it into an equality constraint using the slack variable s_j, with s_j^2 a nonnegative quantity. When s_j = 0 the equality condition g_j(x) = 0 is satisfied, and when s_j^2 > 0 the inequality g_j(x) < 0 is satisfied.
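A minimal sketch of the vector-form bookkeeping, assuming small made-up f, h, and g (n = 2, p = 1, m = 1). At a feasible point with h = 0 and g + s^2 = 0, the added terms vanish and L reduces to f:

```python
# Vector-form Lagrangian sketch: L = f(x) + lambda^T h(x) + mu^T (g(x) + s^2).
# f, h, g below are hypothetical examples, just to exercise the bookkeeping.

def lagrangian(x, lam, mu, s, f, h, g):
    # h(x) returns the p equality-constraint values, g(x) the m
    # inequality-constraint values; s holds the m slack variables.
    hx, gx = h(x), g(x)
    value = f(x)
    value += sum(l_i * h_i for l_i, h_i in zip(lam, hx))
    value += sum(m_j * (g_j + s_j**2) for m_j, g_j, s_j in zip(mu, gx, s))
    return value

# Example data: n = 2, p = 1, m = 1.
f = lambda x: x[0]**2 + x[1]**2
h = lambda x: [x[0] + x[1] - 1.0]     # one equality constraint
g = lambda x: [-x[0]]                 # one inequality constraint, -x1 <= 0

# Feasible point: h = 0, and g + s^2 = -0.5 + 0.5 = 0.
x = [0.5, 0.5]
L = lagrangian(x, lam=[3.0], mu=[2.0], s=[0.5**0.5], f=f, h=h, g=g)
# L equals f(x) = 0.5 here, whatever the multiplier values.
```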
Now suppose we have an inequality constraint like this: g_1 involves x1, and we are given x1 >= 0. Here x1 is a decision (design) variable, and the condition x1 >= 0 is a side constraint: after carrying out the minimization, x1 cannot be negative. We can convert this into our standard "less than or equal to zero" type: x1 >= 0 is the same as -x1 <= 0, so we take g_1(x) = -x1 <= 0. Any inequality constraint of this type can be converted into our standard form, in which equality constraints are written as = 0 and inequality constraints as <= 0. Similarly, suppose a variable x1 has both a lower and an upper bound: a <= x1 <= b. This is not a single constraint g_1(x) but a bound, and the range can be split into two inequality constraints. Since x1 >= a, we can write g_1(x) = a - x1 <= 0, and since x1 <= b, we can write g_2(x) = x1 - b <= 0.
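A small sketch of splitting a bound into two standard inequalities; the bound values a = 1, b = 4 are made up for illustration:

```python
# Splitting a bound a <= x1 <= b into two standard inequality constraints,
# g1(x1) = a - x1 <= 0 and g2(x1) = x1 - b <= 0 (a, b are example values).

def bound_to_inequalities(a, b):
    g1 = lambda x1: a - x1     # enforces x1 >= a
    g2 = lambda x1: x1 - b     # enforces x1 <= b
    return g1, g2

g1, g2 = bound_to_inequalities(1.0, 4.0)

def feasible(x1):
    # A point satisfies the bound exactly when both inequalities hold.
    return g1(x1) <= 0 and g2(x1) <= 0
```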
So the bound gives one inequality constraint from each side: if a decision variable (side variable) has a range, it can be decomposed into two inequality constraints of this standard type. Before we discuss the necessary conditions, there is an important point to cover for the KKT conditions (what KKT means we will discuss next): the notion of a regular point. Regular points arise when equality constraints are present in the optimization problem. A regular point x* has two features. The first feature is that x* must be feasible, in the sense that it satisfies the equality and inequality constraints. The second concerns the equality constraints that hold at x*: some come from the problem statement as given equality constraints, h_i(x) = 0 for i = 1, 2, ..., p, and some come from the inequality constraints. Recall that our basic inequality constraints are g_j(x) <= 0 for j = 1, 2, ..., m; some of them may hold with equality, g_j(x*) = 0.
Out of the m inequality constraints g_j, suppose l of them hold as equalities at x*, where l <= m. So looking at the original problem, we have p equality constraints, and out of the m inequality constraints there are l that are equalities at x*; altogether we have p + l equality constraints. If x* is a regular point, then the gradients of all these constraints — grad h_i for the equality constraints and grad g_k for the active inequality constraints — must be linearly independent; then that point is a regular point of the optimization problem. Let me now write down what I have just described. There are two essential features of a regular point x* (of dimension n x 1). First, the regular point must satisfy all design constraints: it must be feasible, satisfying all equality and inequality constraints. Second, the gradients of the equality constraints, grad h_i(x) for i = 1, 2, ..., p, as well as of the active inequality constraints, grad g_j(x) for j = 1, 2, ..., l where l <= m, must be linearly independent. What is an active constraint? A constraint is active at a value of x if it equals zero there — the constraint is at its most stringent, strict condition.
So for j = 1, 2, ..., l these inequality constraints hold with equality at x*; these are called the active inequality constraints. An inequality constraint with g_j(x*) < 0 is called an inactive constraint. Thus we have the gradients of the equality constraints (i = 1, 2, ..., p) as well as of the active inequality constraints (j = 1, 2, ..., l) — p + l gradients in all, for the constraints satisfied as equalities at x*. If these gradients are linearly independent, then we say the point x* is a regular point of the optimization problem. In other words: at x = x*, the gradients of the equality constraints and of the active inequality constraints must form a linearly independent set of vectors; if they do, then x* is a regular point for the optimization problem under consideration.
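The regularity test can be sketched as a rank check on the matrix whose rows are the p + l constraint gradients evaluated at x*. The rank routine below is a plain Gaussian-elimination sketch, and the gradient vectors are made-up examples:

```python
# Regularity check sketch: x* is regular iff the stacked constraint
# gradients (equality + active inequality) are linearly independent,
# i.e. the matrix of gradients has full row rank.

def rank(rows, tol=1e-10):
    m = [row[:] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        # Find a pivot row for this column among the remaining rows.
        pivot = next((i for i in range(r, len(m)) if abs(m[i][col]) > tol),
                     None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        # Eliminate the column below the pivot.
        for i in range(r + 1, len(m)):
            factor = m[i][col] / m[r][col]
            m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def is_regular_point(gradients):
    # gradients: list of gradient vectors evaluated at x*.
    return rank(gradients) == len(gradients)

# Hypothetical gradients at some x*: an independent set and a dependent set.
independent = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
dependent = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]   # second row = 2 * first
```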
So the KKT conditions start with the assumption that x* is a regular point: the gradients of all the equality constraints h_i, together with those of the active inequality constraints (the g_j with g_j(x*) = 0), form a linearly independent set of vectors at x*. Why this is required we will see later. KKT stands for Karush-Kuhn-Tucker. The KKT conditions first give the necessary conditions for the constrained optimization problem, which we have converted into an unconstrained one using the Lagrange multipliers. So let x* be a regular point — and we now understand what a regular point is. Note that you know how to find stationary points; not every stationary point is a regular point, but the regular points we consider are stationary points. A regular point must lie in the feasible region (the design space): x* must satisfy all the equality and inequality constraints, and out of the m inequality constraints there may be some number l that are satisfied as equalities, in addition to the p given equality constraints.
If we take the gradients of these constraint functions at x = x* — each constraint is a scalar, so differentiating it with respect to the vector x gives a vector — and all these vectors are linearly independent, then x* is a regular point of the optimization problem under consideration. That is why the statement begins: let x* be a regular point of the feasible set. Being in the feasible set means that if you substitute x* into the constraint equations, all of them are satisfied; if even one equality or inequality constraint is violated, the point does not lie in the design (feasible) space and cannot be a solution of the constrained problem. So: let x* be a regular point of the feasible set and a local minimum for f(x), with x of dimension n x 1, subject to the constraints h_i(x) = 0 for i = 1, 2, ..., p and g_j(x) <= 0 for j = 1, 2, ..., m. How do we solve this? We know how to form the Lagrangian function, and that is the first step of the KKT conditions. Step 1: form the Lagrangian L(x, lambda, mu, s) = f(x) + sum over i = 1 to p of lambda_i h_i(x) + sum over j = 1 to m of mu_j (g_j(x) + s_j^2). This is the Lagrangian function.
We have to minimize this function; lambda, mu, and s are as we have already defined, so we will not repeat them. Step 2 gives the necessary conditions, that is, the stationarity conditions. Just as in the unconstrained optimization case, we first set the gradient of the function to zero. L is a function of x, which has n components x1, x2, ..., xn; of lambda_1, lambda_2, ..., lambda_p; of mu_1, mu_2, ..., mu_m; and of s_1, s_2, ..., s_m — we must differentiate L with respect to all of these variables. First differentiate with respect to x_k for k = 1, 2, ..., n (k = 1 means differentiating the Lagrangian with respect to x1). Doing so gives dL/dx_k = df/dx_k + sum over i = 1 to p of lambda_i dh_i/dx_k + sum over j = 1 to m of mu_j dg_j/dx_k, where each lambda_i is a constant with respect to x (the other variables are held constant), each mu_j is a constant quantity we can take outside, and g_j is a function of x only.
The slack term s_j^2 does not involve x, so it does not come into the picture, and we set the whole expression to zero. This is our first necessary condition, but the necessary conditions are not complete yet, because L is a function of lambda also. Differentiating with respect to lambda_i — note that lambda_1 multiplies h_1, lambda_2 multiplies h_2, lambda_3 multiplies h_3, so differentiating with respect to lambda_1 leaves h_1, and so on — gives dL/dlambda_i = h_i(x) = 0 for i = 1, 2, ..., p. Call the stationarity condition equation (2) and this one equation (3). Having differentiated with respect to lambda, what is left is mu and s. With respect to mu_j: only the term mu_j (g_j(x) + s_j^2) involves mu_j, the other terms are not functions of mu, so dL/dmu_j = g_j(x) + s_j^2 = 0 for j = 1, 2, ..., m, again set to zero as a necessary condition. That completes step 2. Step 3 is the feasibility check: s_j^2 >= 0 implies g_j(x) <= 0 for j = 1, 2, ..., m, as I explained earlier — when s_j^2 = 0 the equality constraint g_j(x) = 0 holds, and when s_j^2 > 0 the strict inequality g_j(x) < 0 is satisfied. In general we can write it this way.
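The conditions derived so far can be checked numerically on a small made-up problem: minimize f(x) = (x - 3)^2 subject to g(x) = x - 1 <= 0, whose KKT point, worked out by hand, is x* = 1, mu* = 4, s* = 0. The finite-difference helper is illustrative, not from the lecture:

```python
# Numerical check of the necessary conditions (hypothetical problem:
# minimize f(x) = (x - 3)**2 subject to g(x) = x - 1 <= 0;
# by hand, the KKT point is x* = 1, mu* = 4, s* = 0).

def L(x, mu, s):
    # Lagrangian with one inequality constraint and its slack variable.
    return (x - 3.0)**2 + mu * ((x - 1.0) + s**2)

def partial(func, args, k, eps=1e-6):
    # Central finite difference with respect to the k-th argument.
    lo, hi = list(args), list(args)
    lo[k] -= eps
    hi[k] += eps
    return (func(*hi) - func(*lo)) / (2.0 * eps)

point = (1.0, 4.0, 0.0)            # (x*, mu*, s*)
dL_dx = partial(L, point, 0)       # stationarity in x: should vanish
dL_dmu = partial(L, point, 1)      # feasibility g + s**2 = 0: should vanish
```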
So we have equations (1), (2), (3); call the mu-derivative condition equation (4) and the feasibility check equation (5). Then step 4: we have still one set of variables left, namely the partial derivatives of L with respect to s_1, s_2, ..., s_m. These are called the switching conditions; why they are called switching conditions you will see shortly, by combining them with equation (4). Since L is a function of the s variables too, we must set these derivatives to zero as well. Look at the term mu_j (g_j(x) + s_j^2): when differentiating with respect to s_j, the other terms are constant and only this term contributes, and the square brings down a factor of two, so dL/ds_j = 2 mu_j s_j = 0 for j = 1, 2, ..., m. Call this equation (6). All of these — the equations from steps 2, 3, and 4 — are necessary conditions. Now equation (4), the partial derivative with respect to mu_j, and equation (6), the partial derivative of the Lagrangian with respect to s_j, can be combined to yield one resultant condition. Let us see how. Multiply both sides of equation (6), mu_j s_j = 0, by s_j, which gives mu_j s_j^2 = 0. Now recall equation (4), which involves g_j(x).
From the partial derivative of L with respect to mu_j we have g_j(x) + s_j^2 = 0; multiply both sides by the scalar quantity mu_j to get mu_j g_j(x) + mu_j s_j^2 = 0, and since mu_j s_j^2 = 0 from above, we are left with mu_j g_j(x) = 0 for j = 1, 2, ..., m. So we have merged the two equations (4) and (6): separately, dL/ds_j gives 2 mu_j s_j = 0 and dL/dmu_j gives g_j(x) + s_j^2 = 0, and merging them yields mu_j g_j(x) = 0. From now on, when solving, we write the necessary conditions as dL/dx = 0, dL/dlambda = 0, and this merged condition in place of the separate dL/ds and dL/dmu conditions. Writing out all these conditions — mu_1 g_1 = 0, mu_2 g_2 = 0, ..., mu_m g_m = 0, each component individually zero — we can collect them in vector form: [mu_1, mu_2, ..., mu_m]^T against [g_1, g_2, ..., g_m] gives mu^T g = 0, a scalar (1 x 1) quantity, since each term of the sum is zero. So the switching condition can be written compactly as mu^T g = 0, which is why it is also called the orthogonality condition.
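Each condition mu_j g_j = 0 forces either mu_j = 0 or g_j = 0, so with m constraints there are 2^m combinations to try. A sketch enumerating them for a made-up separable problem — minimize (x1 - 3)^2 + (x2 - 3)^2 subject to x1 - 1 <= 0 and x2 - 2 <= 0, so m = 2 and there are four cases:

```python
# Enumerating the switching cases for a hypothetical separable problem:
# minimize (x1 - 3)**2 + (x2 - 3)**2 with constraints x_j - bounds[j] <= 0.
from itertools import product

bounds = [1.0, 2.0]   # constraint j reads x_j - bounds[j] <= 0

candidates = []
for active in product([False, True], repeat=len(bounds)):
    # In each case, constraint j is either inactive (mu_j = 0, x_j free,
    # so stationarity gives x_j = 3) or active (x_j = bounds[j], and
    # stationarity 2*(x_j - 3) + mu_j = 0 gives mu_j = 2*(3 - x_j)).
    x = [b if on else 3.0 for on, b in zip(active, bounds)]
    mu = [2.0 * (3.0 - b) if on else 0.0 for on, b in zip(active, bounds)]
    ok_feasible = all(xj - bj <= 1e-12 for xj, bj in zip(x, bounds))
    ok_nonneg = all(m_j >= 0.0 for m_j in mu)
    if ok_feasible and ok_nonneg:
        candidates.append((tuple(x), tuple(mu)))
```

Only the case with both constraints active survives the feasibility and non-negativity checks, giving x* = (1, 2) with mu* = (4, 2).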
Now look carefully at one of these conditions, say mu_1 g_1 = 0. There are two possibilities: mu_1 = 0 while g_1 is not zero, which satisfies the condition, or g_1 = 0 while mu_1 is not zero. Suppose we have just two inequality constraints, so two such conditions. For the first, either mu_1 = 0 with g_1 nonzero, or g_1 = 0 with mu_1 nonzero. Similarly for the second: mu_2 = 0 with g_2 nonzero — g_2(x) < 0 indicates the constraint is inactive — or g_2 = 0 with mu_2 nonzero. So if there are m such conditions, we get 2^m cases to examine when we solve the necessary conditions. Next, the last part: step 5, non-negativity of the Lagrange multipliers for the inequality constraints. Look back at the beginning: lambda is the Lagrange multiplier associated with the equality constraints. In our first slide, when we formed the Lagrangian, lambda_1, ..., lambda_p were all associated with the equality constraints, whereas mu_1, ..., mu_m are also Lagrange multipliers but are associated with the inequality constraints g_j.
The condition is mu_j >= 0 for j = 1, 2, ..., m. This must hold when we are finding the minimum value of the function and the inequality constraints are of the type g_j(x) <= 0. (With the opposite convention — minimizing with inequality constraints of the type g_j(x) >= 0 — we would instead get non-positivity, mu_j <= 0; but we will always use the most general form, with constraints written as <= 0.) The last check, step 6, is the regularity check. Once you find the optimal point x*, check whether the gradients of the constraints involved in the optimization problem are linearly independent; if they are, you can say you have obtained the minimum value of the function. Specifically, the gradients of the active constraints must be linearly independent: these are the constraints with g_j(x*) = 0 for j = 1, 2, ..., l — a lowercase l, not m, since out of the m inequality constraints, l are active — together with the equality constraints. In short, recall how to test linear independence. If you have a set of vectors v_1, v_2, ..., v_n, each of dimension n x 1, and alpha_1 v_1 + alpha_2 v_2 + ... + alpha_n v_n = 0 (the null vector) only when all the coefficients alpha_1 = alpha_2 = ... = alpha_n = 0, then the set of vectors v_1, v_2, ..., v_n is linearly independent.
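The alpha-coefficient test can be sketched via the Gram matrix: alpha_1 v_1 + ... + alpha_k v_k = 0 has only the trivial solution exactly when the k x k Gram matrix of pairwise dot products is nonsingular. The vectors below are made-up examples:

```python
# Linear-independence test via the Gram matrix G[i][j] = v_i . v_j:
# the set is independent iff det(G) is nonzero.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def det(m):
    # Cofactor expansion along the first row; fine for small k.
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def linearly_independent(vectors, tol=1e-10):
    gram = [[dot(u, v) for v in vectors] for u in vectors]
    return abs(det(gram)) > tol

v1, v2 = [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]
v3 = [1.0, 1.0, 2.0]   # v3 = v1 + v2, so {v1, v2, v3} is dependent
```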
If any one of the coefficients is nonzero, then some vector in the set can be expressed as a linear combination of the others, and we call the set of vectors linearly dependent. Next class we will work out some problems, show how to apply these conditions, and also look at the sufficient conditions; we have covered the necessary conditions. Thank you.