What is the idea of a cutting plane method? Suppose you have an optimization problem whose feasible region is a convex set. A cutting plane method tries to approximate this convex feasible region using hyperplanes, and at each iteration it improves that description: it keeps adding hyperplanes until the approximation starts looking almost like the original constraint set. The beauty of this is that, because you are adding hyperplanes at each step, what you are minimizing over at each step is not the original convex region but just a polyhedron. That problem is potentially much easier, because you can use the techniques available for linear programming to address it.

Now, you would obviously ask how linear programming can be used here, since the objective is not necessarily linear. But there is a very simple trick to convert any optimization problem into one where the objective is linear. Suppose I have a problem of the form: minimize f(x) subject to x in S. Is there a way to convert this into a problem whose objective is linear? The answer is yes: introduce a new variable t, minimize over x as well as t, and add the constraint f(x) ≤ t in addition to x lying in S. You can check that the two problems are equivalent: minimizing t over (x, t) with this extra constraint is the same as minimizing f(x) over x. So the objective, which could have been any nonlinear function, has now become linear. This is one reason why, although in optimization people tend to think the difficulty always lies in the objective, in reality the difficulty is all in the constraints: any complication in the objective can always be pushed into the constraints. The geometry of the constraints is what makes a problem hard, not so much the geometry of the objective. So, without loss of generality, any optimization problem can be written as the minimization of a linear function over some constraint set, and that is what we have done here.

As a result, we can start by assuming the problem has been given to us in this form: minimize a linear function c^T x subject to m convex constraints g_i(x) ≤ 0, for i from 1 to m. With that in place, let me now describe the cutting plane method itself.
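To make the equivalence concrete, here is the epigraph trick written out as a small display; this is just a restatement of what was said above, with f and S as given there:

\[
\min_{x \in S} f(x)
\quad\Longleftrightarrow\quad
\min_{x,\,t}\; t
\quad\text{subject to}\quad f(x) \le t,\;\; x \in S .
\]

At any optimum of the right-hand problem the constraint f(x) ≤ t is tight, so its optimal value equals the optimal value of f over S, while the new objective t is linear in the variables (x, t).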
So, what does a cutting plane method do in its general, abstract form? Assume the original problem is to minimize c^T x subject to x in S. At each iteration k you have a polyhedron P_k that outer-approximates S. Instead of solving the original problem, you minimize c^T x subject to x in P_k, and this is now a linear program. If its solution x_k belongs to S, that is, if it is feasible for the original problem, you can stop: you have found a solution of a relaxed problem that is also feasible for your original problem, so you can declare it the solution. But what if x_k is not feasible for the original problem? Then you are in the following situation: S is your original feasible region, you have constructed an outer approximation P_k of it using hyperplanes, and the linear program has returned an optimal solution x_k at a corner point of P_k that lies outside S. Since S is convex and x_k lies outside it, there must exist a separating hyperplane, a hyperplane that separates x_k from S. What you then do is add this hyperplane to the definition of P_k; that tightens the polyhedron P_k and generates a new polyhedron P_{k+1}, over which you again minimize c^T x, and so on. In short: if x_k is not in S, find a hyperplane separating x_k and S, say a_k^T x ≤ b_k, and define P_{k+1} = P_k ∩ {x : a_k^T x ≤ b_k}. With each iteration the outer polyhedron continues to shrink and to better approximate the feasible region.

The advantage of this is that at every step you are only solving a linear program, and if solving that linear program is cheap and something you can do easily, you can effectively solve a convex optimization problem using linear programming. Now, why do we need the problem to be convex? Because we need the guarantee that a separating hyperplane exists, and that is something we have only in the case of a convex optimization problem. The other reason convexity works so neatly here is that generating and defining these new hyperplanes becomes very easy when the problem is convex.
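To make the loop concrete, here is a minimal Python sketch of one version of this method. It is not from the lecture: the example problem (minimizing c^T x over the unit disk), the initial box P_0, the tolerance, and the iteration budget are all my own illustrative choices, and it uses scipy.optimize.linprog for the linear program at each step. The cut it adds is the gradient-based one derived in the next part.

```python
# A minimal sketch of the cutting-plane loop, assuming a single smooth convex
# constraint g(x) <= 0; all problem data here are illustrative choices.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -1.0])            # linear objective c^T x

def g(x):                             # convex constraint: unit disk, g(x) <= 0
    return x @ x - 1.0

def grad_g(x):                        # gradient of g
    return 2.0 * x

A, b = [], []                         # cuts collected so far, as rows of A x <= b
bounds = [(-2.0, 2.0)] * 2            # P_0 is a box, so every LP below is bounded

for k in range(100):
    res = linprog(c,
                  A_ub=np.array(A) if A else None,
                  b_ub=np.array(b) if b else None,
                  bounds=bounds, method="highs")
    xk = res.x                        # x_k: minimizer of c^T x over the current P_k
    if g(xk) <= 1e-6:                 # x_k is (numerically) feasible for the original problem,
        break                         # so stop and declare it the solution
    grad = grad_g(xk)                 # otherwise add the cut g(x_k) + grad^T (x - x_k) <= 0,
    A.append(grad)                    # i.e. grad^T x <= grad^T x_k - g(x_k),
    b.append(grad @ xk - g(xk))       # which cuts off x_k but keeps the whole feasible region

print(k, xk, c @ xk)                  # iterates approach x* = (1/sqrt(2), 1/sqrt(2)), value -sqrt(2)
```

Each pass solves only a linear program; the list of cuts simply keeps growing, which is exactly the sense in which P_{k+1} is P_k intersected with one more half-space.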
So, for example, take the problem I wrote out earlier, where the constraints are g_i(x) ≤ 0 for i from 1 to m, and all the g_i are convex. Suppose x_k is infeasible, which means g_i(x_k) > 0 for some i, and suppose you choose the most violated constraint, that is, the index i for which g_i(x_k) ≥ g_j(x_k) for all j from 1 to m. In that case the new cutting plane, the separating hyperplane, is defined as the set {x : g_i(x_k) + ∇g_i(x_k)^T (x − x_k) ≤ 0}. This is a hyperplane in x: the expression is linear in x, and x_k is simply a parameter here.

Now, why is this a separating hyperplane? Take any feasible point y. It must satisfy g_i(y) ≤ 0, and by convexity g_i(y) ≥ g_i(x_k) + ∇g_i(x_k)^T (y − x_k). Putting these together, any feasible y satisfies g_i(x_k) + ∇g_i(x_k)^T (y − x_k) ≤ g_i(y) ≤ 0, so the entire feasible region of your problem is contained in this half-space; on the other hand, x_k itself violates the inequality, since plugging in x = x_k leaves just g_i(x_k) > 0. So, when you find such an infeasible point x_k, you simply add this inequality constraint to the definition of your polyhedron P_k, and that gives you an additional half-space that still contains the original feasible region.

In short, if you have a convex optimization problem, this simple tangent condition of convexity also gives us a way of generating separating hyperplanes of exactly the kind we require; a small sketch of this step is given below. With this I will wind up this lecture, and we will take up interior point methods in the next lecture.
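For reference, here is a small sketch of just this cut-generation step, under the same assumptions (the g_i convex and differentiable). The helper make_cut and the two example constraints are illustrative choices of mine, not from the lecture.

```python
# A sketch of the cut-generation step: pick the most violated constraint and
# return the half-space a^T x <= b implied by its tangent inequality at x_k.
import numpy as np

def make_cut(xk, gs, grads):
    """Return (a, b) so that a^T x <= b holds on {x : g_i(x) <= 0 for all i} but fails at xk."""
    vals = np.array([gi(xk) for gi in gs])
    i = int(np.argmax(vals))              # most violated: g_i(x_k) >= g_j(x_k) for all j
    a = grads[i](xk)                      # a = grad g_i(x_k)
    b = a @ xk - vals[i]                  # rearranged from g_i(x_k) + a^T (x - x_k) <= 0
    return a, b

# Two example convex constraints: a unit disk and the half-space x_1 <= 0.5.
gs    = [lambda x: x @ x - 1.0, lambda x: x[0] - 0.5]
grads = [lambda x: 2.0 * x,     lambda x: np.array([1.0, 0.0])]

a, b = make_cut(np.array([2.0, 2.0]), gs, grads)   # x_k = (2, 2) is infeasible
print(a, b)                                        # cut: 4*x1 + 4*x2 <= 9
```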