In the previous lecture we discussed convex sets and convex functions. A set S is said to be convex if, for any two points x₁ and x₂ in S, every point on the line segment joining them also belongs to S. A function f defined on a convex set S in n-dimensional space is said to be convex if and only if the condition f(αx₁ + (1−α)x₂) ≤ αf(x₁) + (1−α)f(x₂) is satisfied for all x₁, x₂ in S and for every α from 0 to 1. The physical interpretation is this: any point between x₁ and x₂ can be written as αx₁ + (1−α)x₂ with α between 0 and 1, and the function value at any such point on the interval from x₁ to x₂ is always less than or equal to the value on the chord joining the points (x₁, f(x₁)) and (x₂, f(x₂)); any point on that chord lies at or above the function value. If this condition is satisfied, the function is called a convex function. We also discussed a test for convexity: a function is convex if and only if its Hessian matrix is positive semidefinite or positive definite over the convex set on which the function is defined. Now let us see what a convex programming problem is.
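The Hessian test just mentioned can be sketched numerically (using numpy, with illustrative matrices chosen here): compute the eigenvalues of the Hessian and check that all are non-negative.

```python
import numpy as np

def is_psd(H, tol=1e-9):
    """Return True if the symmetric matrix H is positive semidefinite."""
    return np.all(np.linalg.eigvalsh(H) >= -tol)

# f(x) = x' P x with symmetric P has constant Hessian 2P,
# so f is convex everywhere iff P is positive semidefinite.
P_convex = np.array([[2.0, 0.5],
                     [0.5, 1.0]])
P_nonconvex = np.array([[1.0, 0.0],
                        [0.0, -1.0]])

print(is_psd(2 * P_convex))      # True  -> f is convex
print(is_psd(2 * P_nonconvex))   # False -> f is not convex
```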
Suppose we have m functions g_j(x), j = 1, …, m, each of which is convex. If g_j(x) is a convex function, then the set {x : g_j(x) ≤ c_j} is a convex set, and the intersection of such convex sets — one per constraint — is again a convex set. Next are convex optimization problems. Any optimization problem, as we know, has an objective function as well as constraints. A convex optimization problem is one of the following form. The objective is: minimize f(x), where x is an n-dimensional variable and f must be a convex function — and how do you test convexity? Find the Hessian matrix of the function; if it is positive semidefinite or positive definite, the function is convex. So the function we are supposed to minimize must be convex. It is subject to equality constraints h_i(x) = a_iᵀx − b_i = 0; each equality constraint is an affine function, which means a linear function in x. So for a convex optimization problem, the objective (cost) function must be convex, the equality constraints must be affine (linear), and the inequality constraints g_j(x) ≤ 0 must also involve convex functions g_j.
In short: if the objective (cost) function is convex, each inequality constraint function is convex, and each equality constraint is affine, then the minimization problem is called a convex optimization problem. As mentioned earlier, the inequality constraints are indexed j = 1, …, m and the equality constraints i = 1, …, p. Next is the quadratic optimization problem, in short the QP problem. In a QP the cost function is still convex, but it must be in quadratic form: in general a convex function may have any form, but here, as a special case, the convex function must be quadratic, and the equality and inequality constraints are affine functions of x. Then we call it a quadratic optimization problem. According to our definition, the quadratic optimization problem is of the form: minimize f(x), where x has n variables and f is a convex function of the special quadratic form we discussed earlier, f(x) = xᵀPx + qᵀx + r, with a constant r. Since the objective is a scalar (dimension 1×1) and x is n×1, we immediately know the dimensions: P must be n×n and q must be n×1. So this is a convex function, but in quadratic form.
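A minimal sketch of this quadratic objective and its dimensions (the matrices and values below are illustrative):

```python
import numpy as np

def quad_obj(x, P, q, r):
    """Evaluate the QP objective f(x) = x' P x + q' x + r."""
    x = np.asarray(x, dtype=float)
    return float(x @ P @ x + q @ x + r)

n = 2
P = np.array([[2.0, 0.0], [0.0, 1.0]])   # n x n, PSD -> convex objective
q = np.array([-1.0, 0.0])                # n x 1
r = 3.0                                  # scalar
x = np.array([1.0, 2.0])

print(quad_obj(x, P, q, r))  # x'Px = 6, q'x = -1, r = 3 -> 8.0
```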
So we minimize this general quadratic form, and the quadratic function must be convex. We have already seen that a quadratic form is convex provided its Hessian matrix is positive semidefinite or positive definite. Given a convex quadratic objective, the problem is subject to equality constraints h_i(x) = a_iᵀx − b_i = 0 for i = 1, 2, …, p, and inequality constraints g_j(x) ≤ 0 for j = 1, 2, …, m — m such equations. Writing this in matrix form, the p equality constraints can be clubbed together as Ax = b, where A is p×n (since x is n×1) and b is p×1. Similarly, the m inequality constraints can be written Gx ≤ c: instead of ≤ 0 we may place constants c_j on the right-hand side, so c is m×1, x is n×1, and G is m×n. This is the matrix–vector form of the constraints.
Both forms are equivalent: we have p equality constraints and m inequality constraints, and all of the constraint functions must be affine in x, which means linear. Notice that since affine means linear, if you take the Hessian matrix of an affine function — that is, take the partial derivatives twice — the result is zero. So this is the quadratic optimization problem, and in it we minimize a convex function that is quadratic in nature over a convex set. Practically, in a QP we minimize a convex quadratic function over a feasible region formed by the intersection of a finite number of half-spaces and hyperplanes. The hyperplanes come from the equality constraints a_iᵀx − b_i = 0. Suppose a three-dimensional case with x₁, x₂, x₃: we can write a₁x₁ + a₂x₂ + a₃x₃ = b₁, a hyperplane in three dimensions. The half-spaces are formed from the inequality constraints: if the constraint is ≤ 0, the feasible points lie on one side of the surface; if it is ≥ 0, they lie on the other side.
An inequality constraint is generally expressed as g_j(x) ≤ c_j; more specifically we can write g_j(x) = d_jᵀx, and bringing c_j to the left side gives d_jᵀx − c_j ≤ 0. So an inequality constraint says the point lies in one half of the space or the other, depending on whether we have ≤ 0 or ≥ 0, and on the surface itself when the equality sign holds. The quadratic problem is therefore nothing but the minimization of a convex quadratic function over the region formed by the intersection of the equality constraints (hyperplanes) and the inequality constraints (half-spaces). This combination is called a polyhedron, so we optimize over a polyhedron. What is a polyhedron? A polyhedron is defined as the solution set of a finite number of linear equalities and inequalities. As I said, a linear equality constraint is a hyperplane and a linear inequality constraint is a half-space. Let us see this with an example: suppose the equality and inequality constraints are functions of two variables x₁ and x₂. The constraint x₁ ≥ 0 indicates one half-plane, and x₂ ≥ 0 indicates another.
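A small sketch (with illustrative data) of testing whether a point lies in the polyhedron defined by hyperplanes Ax = b and half-spaces Gx ≤ c:

```python
import numpy as np

def in_polyhedron(x, A, b, G, c, tol=1e-9):
    """Feasibility test: A x = b (hyperplanes) and G x <= c (half-spaces)."""
    x = np.asarray(x, dtype=float)
    eq_ok = np.allclose(A @ x, b, atol=tol)
    ineq_ok = np.all(G @ x <= c + tol)
    return eq_ok and ineq_ok

# Example: x1 + x2 = 1, with x1 >= 0 and x2 >= 0
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
G = np.array([[-1.0, 0.0], [0.0, -1.0]]); c = np.array([0.0, 0.0])

print(in_polyhedron([0.3, 0.7], A, b, G, c))   # True
print(in_polyhedron([1.5, -0.5], A, b, G, c))  # False (x2 < 0)
```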
The constraint x₂ ≥ 0 restricts us to the upper half-plane, and x₁ ≥ 0 to the right half-plane. In addition, suppose we have an equality constraint — note the notation: h_i(x) = a_iᵀx − b_i = 0 — and an inequality constraint g_j(x) ≤ c_j, which with the symbol d_j we write as d_jᵀx − c_j ≤ 0. If you plot these as functions of the two variables x₁ and x₂: the equality constraint h₁(x) = 0 indicates points on a line, while each of the two inequality constraints, g₁ − c₁ ≤ 0 and g₂ − c₂ ≤ 0, indicates a half-plane region. The feasible region of the optimization problem is the intersection of the equality constraint set and the inequality constraint sets. This intersection is a polyhedron, and the intersection points of the bounding constraints are called the vertices of the polyhedron. So if the problem is, say, to minimize a function subject to one equality constraint (which puts the solution on a line) and two inequality constraints g₁ ≤ c₁ and g₂ ≤ c₂ (each representing a half-plane), the feasible region is the portion where all of these overlap.
So the feasible region is formed by the intersection of the equality and inequality constraints. Each inequality constraint divides the plane into two halves: when g₁ − c₁ ≤ 0 we are in one portion, and when g₁ − c₁ > 0 we are in the other. In the n-dimensional case the inequality constraints divide the space into half-spaces, while an equality constraint restricts us, in this two-variable case, to a line — with more than two variables, to a hyperplane. The corner points of this region are the vertices of the polyhedron, and we will show a few lectures from now that the optimal value of the function we are optimizing is attained at one of these vertices — not at an arbitrary vertex, but at some particular one. All the vertices are feasible solutions; out of, say, five vertices, one vertex will give the optimum value of the function, whether maximum or minimum. Now suppose our objective function is convex, with level curves as drawn, and suppose the function value increases in a particular direction — moving the level curve in that direction increases the value of f(x).
As we slide the level curve of the objective, it passes through one vertex — a feasible solution — then through another feasible solution, and the function value keeps increasing in that direction. If our problem is maximization, we move the level curve in the increasing direction up to the last point where it still touches the feasible region: that point gives the maximum value of the function. Beyond it the function value is no doubt larger, but the solution is not feasible — in other words, points beyond that curve fail to satisfy at least one of the equality or inequality constraints, possibly more than one. Our conclusion: in a quadratic optimization problem the objective must be a convex function in quadratic form, and the equality and inequality constraints are affine functions. You then get a polyhedron, each vertex of the polyhedron is a feasible solution of the problem (whether maximization or minimization), and out of these vertices one vertex gives the optimum value of the function. So much for the polyhedron; we have just discussed the quadratic optimization problem, and next is the quadratically constrained quadratic problem.
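This vertex-optimality idea can be sketched in two dimensions (illustrative constraints, linear objective): enumerate candidate vertices as intersections of pairs of constraint boundary lines, keep the feasible ones, and pick the best objective value among them. This brute-force enumeration is only for intuition, not an efficient algorithm.

```python
import numpy as np
from itertools import combinations

# Feasible region: x1 >= 0, x2 >= 0, x1 + x2 <= 4, x1 <= 3  (rows of G x <= c)
G = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0], [1.0, 0.0]])
c = np.array([0.0, 0.0, 4.0, 3.0])

def vertices(G, c, tol=1e-9):
    """Intersect every pair of constraint boundaries; keep feasible points."""
    verts = []
    for i, j in combinations(range(len(c)), 2):
        M = G[[i, j]]
        if abs(np.linalg.det(M)) < tol:
            continue  # parallel boundaries: no unique intersection
        v = np.linalg.solve(M, c[[i, j]])
        if np.all(G @ v <= c + tol):
            verts.append(v)
    return verts

obj = np.array([2.0, 1.0])  # maximize 2*x1 + x2
best = max(vertices(G, c), key=lambda v: obj @ v)
print(best, obj @ best)  # the vertex (3, 1) attains the maximum, 7.0
```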
In the previous problem — the quadratic optimization problem — the objective function was a convex quadratic function, and the constraints, both equality and inequality, were affine functions. The next problem is the quadratically constrained quadratic problem, in short QCQP. It is of the form: minimize f(x), where x is n×1, and the function is not only in quadratic form but must also be convex: f(x) = xᵀP₀x + q₀ᵀx + r₀. Since the function is a scalar quantity and x is n×1, the dimensions follow immediately: P₀ is n×n, q₀ is n×1, and r₀ is 1×1. So the cost function is a convex quadratic function, the same as in the quadratic optimization problem. In addition, it is subject to equality constraints and inequality constraints, where the inequality constraints are now themselves quadratic: xᵀP_j x + q_jᵀx + r_j ≤ 0 for j = 1, 2, …, m.
There are m inequality constraints, each a nonlinear function in quadratic form. How do we know whether such a constraint function is a convex quadratic or not? Again, find its Hessian matrix: if the Hessian is positive semidefinite or positive definite, the function is a convex quadratic. In addition, we have equality constraints that are affine functions — linear functions in x. So: the objective function is a convex quadratic function; each of the m inequality constraints is a convex quadratic function; and the equality constraints are affine in x. Such a problem is called a quadratically constrained quadratic problem, the QCQP problem. Now notice how the constraints differ from before. Previously the equality and inequality constraints were linear affine functions, and their intersection formed a polyhedron: the inequality constraints formed half-spaces, the equality constraints formed hyperplanes, their intersection gave the polyhedron, and the intersection points were its vertices. Here the inequality constraints are no longer affine; each is a convex quadratic, whose feasible set has the form of an ellipsoid. So in a QCQP the feasible region is the intersection of ellipsoids and the affine equality constraints.
In a QCQP we minimize a convex quadratic function over a feasible region formed by the intersection of ellipsoids — that is the case when P_j > 0, i.e. each P_j is positive definite. In the previous case all equality and inequality constraints were affine and formed a region that was a polyhedron; here the region is formed by ellipsoids, because each quadratic inequality constraint describes an elliptical set. Some remarks. Look at the quadratic optimization problem: if P = 0, the term xᵀPx is no longer there, only the linear (affine) part remains. The objective function is then linear, and the inequality and equality constraints are already linear. So the quadratic optimization problem transforms into a linear optimization problem when the P matrix equals the null matrix — as a special case, when P = 0, the quadratic optimization problem boils down to a linear programming problem. Similarly, in the quadratically constrained problem the objective is a convex quadratic and the inequality constraints are convex quadratics; if we set P_j = 0 for all j = 1, …, m, the quadratic terms in the inequality constraints are no longer there, and only the linear terms q_jᵀx + r_j ≤ 0 remain.
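These special-case reductions can be sketched as a small classifier (an illustrative helper, assuming the problem data are given as matrices of quadratic terms):

```python
import numpy as np

def classify(P0, P_ineq):
    """Classify a problem by which quadratic terms are present.

    P0     : n x n matrix of the objective's quadratic term
    P_ineq : list of n x n matrices, one per inequality constraint
    """
    obj_quadratic = np.any(P0 != 0)
    con_quadratic = any(np.any(P != 0) for P in P_ineq)
    if con_quadratic:
        return "QCQP"        # quadratic inequality constraints
    if obj_quadratic:
        return "QP"          # quadratic objective, affine constraints
    return "LP"              # everything linear

Z = np.zeros((2, 2))
P = np.eye(2)
print(classify(P, [Z, Z]))   # QP   (P_j = 0 for all j)
print(classify(Z, [Z]))      # LP   (P = 0 as well)
print(classify(P, [P, Z]))   # QCQP
```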
The inequality constraints then become linear affine functions, while the objective remains a convex quadratic; with P_j = 0 for all j, both the equality and inequality constraints are affine, so the QCQP problem turns into a quadratic optimization problem. The QP is thus a special case of the QCQP. The next question is how to solve such optimization problems. One way is to use the KKT conditions and solve the resulting system; another way is to apply the KKT necessary conditions and then solve by linear programming — a quadratic optimization problem can be solved this way. So we must know linear programming, and the next topic will be linear programming: how to solve a linear optimization problem using numerical techniques. Before that, let me discuss one theorem about a convex function of a convex function, which is called a composite function — a function of a function, where each function is convex, but one additional property is required. The theorem: assume f(x) is a convex function on a convex set S (whenever we write "on the set S" here, S is a convex set), with its values lying in the range a ≤ f(x) ≤ b.
If h(y) is an increasing convex function over the range a ≤ y ≤ b, then g(x) = h(f(x)), where y = f(x), is also convex. That is: f is a convex function, and h(y) — hence h(f(x)) — is a convex function that is moreover increasing over the range a to b. Then g, called the composite function, is also a convex function. It is nothing but a convex function of a convex function: if one function is convex, and another function of that function is convex and increasing in nature on the same interval, the composite is also convex. Let us take an example: f(x) = x² is a convex function. If you plot x versus f(x), it is a parabola, convex over the whole range of x from −∞ to +∞ — you can say convex everywhere. So f(x) is a convex function. And h(y) = e^y is another function, which is increasing — it is exponentially increasing — and convex; if you plot it, you can verify it is convex by the definition of a convex function.
(Join any two points on the curve by a chord: both points belong to a convex set, and the function value at any point of the segment between them is always less than or equal to the corresponding value on the chord — so the definition holds.) Here y = f(x) = x², and h(y) = e^y is an increasing convex function everywhere: plotting h(y), it starts from 1 at y = 0 and goes on increasing, convex and increasing throughout. Then the composite is g(x) = h(f(x)) = h(x²) = e^{x²}, and this composite function is also convex. How do you check that it is convex? Take the second derivative of the function with respect to x; you will find that it is always greater than 0 for any value of x.
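A quick numerical sanity check of this composite-convexity claim (a sketch using a central finite-difference approximation of the second derivative):

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

g = lambda x: math.exp(x * x)                         # g(x) = e^(x^2)
exact = lambda x: math.exp(x * x) * (2 + 4 * x * x)   # closed-form g''(x)

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    approx = second_derivative(g, x)
    assert approx > 0                  # convexity: second derivative positive
    assert abs(approx - exact(x)) < 1e-2 * max(1.0, exact(x))
print("g(x) = e^(x^2) has positive second derivative at all test points")
```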
If the function had more than one variable, differentiating twice would, as I told you, give the Hessian matrix, which must be positive definite or positive semidefinite; since this is a single-variable case, just compute the second derivative directly: d²/dx² e^{x²} = e^{x²}(2 + 4x²), which is greater than 0 for any value of x in the range. Recall the example: f(x) = x² is convex everywhere in the domain, from −∞ to +∞; h(y) = e^y is defined for every y and is an increasing convex function; hence g(x) = h(f(x)) = e^{x²} is convex — the composite of two convex functions (with the outer one increasing) is a convex function. So this is our conclusion for this topic. Now, as I mentioned, we will need to solve the quadratic optimization problem using linear programming, so let us start with linear programming. The next topic is linear programming methods for optimum design.
First, recall what linear programming is, as we have already defined it: the objective function — the cost function f(x) in n variables — is a linear function, subject to constraints. We have equality constraints h_i(x) = 0, which can be written in the form a_iᵀx − b_i = 0 for i = 1, …, p; these must be linear. We also have inequality constraints g_j(x) ≤ 0, or g_j(x) ≤ c_j for some constant c_j, for j = 1, …, m; these are also linear. Such a problem is called a linear optimization problem: find the value of x that satisfies both sets of constraints while the cost function attains its minimum (or maximum) value. When the cost function as well as the equality and inequality constraints are all linear, the optimization problem is called a linear programming (LP) problem. To solve such problems, either analytically or numerically, we first have to convert them into a standard LP problem. So how do we convert to a standard linear programming problem? First we define what a standard LP problem is. The problem is: minimize the objective function f(x), a function of n variables, which is linear — written out in detail, f(x) = c₁x₁ + c₂x₂ + ⋯ + cₙxₙ.
We have to minimize this function, where the coefficients c₁, c₂, …, cₙ are known real numbers — they may be positive, negative, or zero. In vector form, with c = (c₁, c₂, …, cₙ)ᵀ a column vector, the objective is cᵀx: a row vector multiplied by a column vector, a scalar quantity. The standard LP problem — that is why it is called standard — is subject to equality constraints only; if you have an inequality constraint, you can always convert it into an equality constraint. So the standard LP problem is: minimize the linear function above, subject to the equality constraints

a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = b₁
a₂₁x₁ + a₂₂x₂ + ⋯ + a₂ₙxₙ = b₂
⋮
aₘ₁x₁ + aₘ₂x₂ + ⋯ + aₘₙxₙ = bₘ

— n variables and m equality constraints. Please remember the standard LP problem: minimize the linear function subject to these equality constraints, with the additional conditions that b₁, b₂, …, bₘ are all ≥ 0 and that xᵢ ≥ 0 for i = 1, …, n.
So: minimize the function subject to equality constraints whose right-hand sides — the constant terms — are non-negative numbers, with xᵢ ≥ 0 for all of x₁, …, xₙ. If your problem has inequality constraints, convert them into equality form, keeping the constant term on the right-hand side non-negative; then it is a standard LP problem. In matrix–vector notation, we can rewrite the optimization problem as: minimize f(x) = cᵀx subject to Ax = b, where x is n×1, b is m×1 (we have m equations), and A is therefore m×n, with b ≥ 0 and x ≥ 0 — each component of the vector x = (x₁, x₂, …, xₙ)ᵀ is a non-negative number, and likewise each element of the m×1 vector b is non-negative; the right-hand side of the matrix equality Ax = b must be non-negative. Now, what is unknown here? x is unknown. So the problem is to minimize the function subject to the equality constraint — in other words, solve Ax = b in such a way that the objective takes its minimum value: we are looking for values x₁, x₂, …, xₙ that satisfy the constraints and minimize the function. Basically it is nothing but the solution of the algebraic equation Ax = b, where A and b are known from the description of the optimization problem and x is unknown.
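The conversion of an inequality constraint Gx ≤ c into the standard equality form, by adding non-negative slack variables and keeping the right-hand side non-negative, can be sketched as follows (an illustrative helper, numpy assumed):

```python
import numpy as np

def to_standard_form(G, c):
    """Convert G x <= c into A [x; s] = b with slack variables s >= 0.

    Each row g_i' x <= c_i becomes g_i' x + s_i = c_i.  Rows with a
    negative right-hand side are first multiplied by -1 so that b >= 0
    (which flips <= to >=, i.e. the slack enters with a minus sign).
    """
    G = np.asarray(G, dtype=float).copy()
    c = np.asarray(c, dtype=float).copy()
    m, n = G.shape
    S = np.eye(m)
    flip = c < 0
    G[flip] *= -1
    c[flip] *= -1
    S[flip, flip] = -1          # surplus variable for flipped rows
    A = np.hstack([G, S])       # A is m x (n + m)
    return A, c

# x1 + 2 x2 <= 4  and  -x1 - x2 <= -1  (i.e. x1 + x2 >= 1)
A, b = to_standard_form([[1.0, 2.0], [-1.0, -1.0]], [4.0, -1.0])
print(A)   # [[ 1.  2.  1.  0.]
           #  [ 1.  1.  0. -1.]]
print(b)   # [4. 1.]
```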
In general, then, whenever the equality and inequality constraints are in linear form, we can always convert the problem into the form Ax = b with a non-negative right-hand side; any linear programming problem can be converted into a standard LP problem. I will stop here today; in the next class I will continue from this point.