So, let us do a proof of this. One convenient way of writing this condition is simply λ*ᵀ(Ax* − b) = 0. Why can I write it like this? Because b − Ax* is always ≥ 0 for a feasible x*, and λ* is also ≥ 0. So their inner product will be 0 only in this sort of situation: wherever there is slack in one of them, the other one must be 0. This is a compact way of expressing the condition. Let me write it more neatly: for x* in Ω_p and λ* in Ω_d, the complementary slackness conditions are equivalent to λ*ᵀ(Ax* − b) = 0 and (Aᵀλ* − c)ᵀx* = 0. Now we want to show the necessity and sufficiency of complementary slackness for the optimality of x*. Part 1: suppose x* in Ω_p is optimal for the primal LP. Is it possible that the dual is infeasible? It is not, because the duality theorem of linear programming tells us that if the primal has a finite optimal solution then so does the dual. So Ω_d is not empty: there always exists a solution to the dual, and by the strong duality theorem the optimal values are equal. So the optimal value of the primal LP equals the optimal value of the dual. What does this mean?
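For reference, here is one standard-form primal–dual pairing consistent with the signs used in this lecture. The exact form written on the board may differ; this is my reconstruction from the inequalities b − Ax ≥ 0 and Aᵀλ − c ≥ 0 used in the argument:

```latex
% Primal (max form) and its dual, matching the sign conventions above:
\max_{x}\; c^\top x \quad \text{s.t.}\quad Ax \le b,\; x \ge 0
\qquad\qquad
\min_{\lambda}\; b^\top \lambda \quad \text{s.t.}\quad A^\top \lambda \ge c,\; \lambda \ge 0

% Complementary slackness, written compactly:
\lambda^{*\top}(Ax^* - b) = 0,
\qquad
(A^\top \lambda^* - c)^\top x^* = 0.
```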
It means that not only is Ω_d nonempty, there exists a λ* in Ω_d such that cᵀx*, which was my primal optimal value, equals bᵀλ*. Now compare this strong duality statement with the weak duality statement written here, which, remember, was for all x in Ω_p and all λ in Ω_d. Taking x as x* and λ as λ*, we get, by combining with weak duality, cᵀx* = λ*ᵀAx* = bᵀλ*. So what does this red statement say? There are actually two equations in it, one here and another here, so let me take each of them separately. It is effectively saying that λ*ᵀ(Ax* − b) = 0 and that (Aᵀλ* − c)ᵀx* = 0. And if these inner products have to equal 0, it must be that componentwise they are each 0. Remember: Ax* − b is always ≤ 0 while λ* is ≥ 0, and Aᵀλ* − c is always ≥ 0 while x* is also ≥ 0. So for the inner products of these vectors to turn out to be 0, the componentwise products must each be 0.
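The componentwise argument can be sanity-checked numerically. Below, u plays the role of Ax* − b (componentwise ≤ 0) and v plays the role of λ* (componentwise ≥ 0); the vectors are illustrative, not from any particular LP. Every term of the inner product is ≤ 0, so the sum is 0 exactly when each term is 0:

```python
# u plays the role of Ax* - b (componentwise <= 0),
# v plays the role of lambda* (componentwise >= 0).
# Every term u[i]*v[i] is <= 0, so their sum is 0
# exactly when each individual term is 0.
u = [-2.0, 0.0, -1.0]
v = [0.0, 3.0, 0.0]

inner = sum(ui * vi for ui, vi in zip(u, v))
assert inner == 0.0
# and indeed each componentwise product vanishes:
assert all(ui * vi == 0.0 for ui, vi in zip(u, v))

# If any component violates complementarity, the inner
# product comes out strictly negative, not zero:
v_bad = [1.0, 3.0, 0.0]   # v_bad[0] > 0 while u[0] < 0
assert sum(ui * vi for ui, vi in zip(u, v_bad)) < 0.0
```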
Otherwise you will not get that the inner product ends up 0: in the first case you would get that the inner product is strictly negative, and in the second case strictly positive. So what does this mean? It means λ*_i times the sum captured in this inequality must vanish: λ*_i (Σ_{j=1}^n a_ij x*_j − b_i) = 0. Now, Σ_j a_ij x*_j − b_i can either be equal to 0 or be strictly less than 0; these are the only two possibilities. If it is equal to 0, the product holds trivially; if it is strictly less than 0, then the only way the product can be 0 is that λ*_i itself is 0. So Σ_{j=1}^n a_ij x*_j − b_i < 0 must imply λ*_i = 0. Similarly we can show the other one: if the corresponding dual inequality holds strictly, then the corresponding x*_j must be 0. This is one direction of the complementary slackness condition: what we have concluded so far is that if x* is optimal, then these complementary slackness conditions must hold. Now let us look at the other direction, part 2. Assume that λ*ᵀ(Ax* − b) = 0 and (Aᵀλ* − c)ᵀx* = 0 for some x* in Ω_p and λ* in Ω_d. What does this say? Well, we can rearrange it a little. The first equation simply says bᵀλ* = λ*ᵀAx*.
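As a concrete illustration of this direction, here is a small sketch: given a primal-optimal x*, any constraint with strictly positive slack forces its multiplier to zero. The LP data below is made up for the example, not taken from the lecture:

```python
# Illustrative data: A, b and a feasible x_star (not from the lecture).
A = [[1, 0],
     [0, 2],
     [3, 2]]
b = [4, 12, 18]
x_star = [2, 6]

# Constraints with strict slack  sum_j a_ij x*_j < b_i
# force lambda*_i = 0 by complementary slackness.
forced_zero = []
for i, row in enumerate(A):
    slack = b[i] - sum(a * xj for a, xj in zip(row, x_star))
    if slack > 0:
        forced_zero.append(i)

print(forced_zero)  # constraint 0 has slack 4 - 2 = 2, so lambda*_0 = 0
```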
The second equation similarly says that λ*ᵀAx* = cᵀx*. So what does this mean? It means bᵀλ* = cᵀx*: the very fact that these two conditions hold for some x* in Ω_p and λ* in Ω_d lets us conclude that the dual and primal objective values agree. Now, we know from weak duality that an inequality in this direction must hold between the two objectives for every feasible pair; so if equality holds, it has to be that these two are optimal. It follows that x* is optimal for the primal and λ* is optimal for the dual. This completes the proof. And what have we learned from it? Basically, this theorem teaches us that, as far as optimality of linear programs is concerned, if you take a candidate feasible solution x* for the primal and a candidate feasible solution λ* for the dual, they are optimal if and only if they satisfy these two equations. Now, if you look at these two equations, they are actually not linear equations: one says that λ*ᵀ times something in x* must equal 0, and likewise something in λ*, transposed, times x* equals 0. This is where the hardness of linear programming actually creeps in. At first cut, simply checking that x* and λ* are feasible is a matter of checking that they satisfy a bunch of linear inequalities, and it is easy to generate solutions to linear inequalities.
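Written out, the sufficiency computation is just a chain of equalities followed by an appeal to weak duality. This uses the max-form primal with Ax ≤ b, x ≥ 0, which is my assumption about the convention on the board:

```latex
\lambda^{*\top}(Ax^* - b) = 0
  \;\Longrightarrow\; b^\top\lambda^* = \lambda^{*\top} A x^*,
\qquad
(A^\top\lambda^* - c)^\top x^* = 0
  \;\Longrightarrow\; \lambda^{*\top} A x^* = c^\top x^*,

% hence
b^\top \lambda^* = c^\top x^*.

% Weak duality gives  c^\top x \le b^\top \lambda^* = c^\top x^*
% for every primal-feasible x, so x^* is primal optimal;
% symmetrically, \lambda^* is dual optimal.
```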
But to find an optimal solution you effectively end up solving a nonlinear equation, even though the original problem was just a linear problem. This is the root of why linear programming is not trivial. But you will soon see that this kind of nonlinearity comes up in all types of problems involving inequality constraints. The nonlinearity is in the product equations: λ* is multiplied with the term in x*, and that product, this quadratic equation, must be 0; likewise the other quadratic equation must be 0. If you look at them as equations in your variables x* and λ*, these are not linear equations anymore. Another way of expressing the same thing: this condition simply says that if something is true, then something else is true. It is a conditional statement; you cannot write it merely as the solution of a linear equation. It says that if this inequality is strict then that must hold, and if it is not strict then it says nothing in particular. So this is why linear programming is actually harder than it appears: even though the original problem involves only linear formulations, at its root, to solve the problem, you end up solving some nonlinear equation. Having said that, if I gave you two candidate solutions, just verifying that they are optimal is very easy: all I need to do is check that they are feasible and check these two products. Finding one may be harder, but verifying is absolutely easy, because all I have to do is check this.
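This "verifying is easy" point can be sketched as a checker: feasibility is a set of linear inequalities, and optimality is the two complementary slackness products. The helper name and the small LP below are illustrative assumptions, not anything from the lecture:

```python
def certifies_optimality(A, b, c, x, lam, tol=1e-9):
    """Check that (x, lam) is a primal-dual optimal pair for
    max c^T x  s.t.  A x <= b, x >= 0   (and its dual)."""
    m, n = len(A), len(A[0])
    Ax  = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
    ATl = [sum(A[i][j] * lam[i] for i in range(m)) for j in range(n)]

    # Feasibility: just linear inequalities.
    primal_feasible = (all(Ax[i] <= b[i] + tol for i in range(m))
                       and all(xj >= -tol for xj in x))
    dual_feasible = (all(ATl[j] >= c[j] - tol for j in range(n))
                     and all(l >= -tol for l in lam))

    # Optimality: the two (nonlinear) complementary slackness equations.
    cs1 = sum(lam[i] * (Ax[i] - b[i]) for i in range(m))
    cs2 = sum((ATl[j] - c[j]) * x[j] for j in range(n))
    return (primal_feasible and dual_feasible
            and abs(cs1) < tol and abs(cs2) < tol)

# A small illustrative LP: max 3x1 + 5x2
# s.t. x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x >= 0.
A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]
c = [3, 5]

print(certifies_optimality(A, b, c, [2, 6], [0.0, 1.5, 1.0]))  # True
print(certifies_optimality(A, b, c, [4, 3], [0.0, 1.5, 1.0]))  # False: slack with lambda > 0
```

Note that the feasibility checks are linear in the candidate pair, while `cs1` and `cs2` are the bilinear (quadratic) equations the lecture is pointing at.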
So what this has done is taken linear programming and reduced it to simply checking for the satisfaction of some nonlinear equations. Now you will see why this is a significant simplification. We started off thinking of linear programming by saying that it must have a solution at an extreme point; but there were so many possible extreme points that it was not easy to characterize them, and we asked: even if I gave you one extreme point, how do I confirm that it is in fact optimal without comparing it with all the other extreme points of the linear program? All of that is extremely hard to do; this, in comparison, is something you can potentially do much more easily. The earliest algorithms for solving linear programs went about their business by searching over extreme points: they cycled over extreme points, making sure to get to a better extreme point at each step, and that is how they tried to find the solution. Modern algorithms for LPs try to attack this directly: they try to solve these nonlinear equations directly. So effectively, solving a linear program amounts to finding an x* in the feasible region of the primal, alongside a λ* in the feasible region of the dual; being feasible for the primal and the dual simply means that they must lie in these polyhedra, but in addition they must also satisfy these nonlinear equations. That is what it means to solve a linear program.
Let me also give you a bit of intuition on why this condition appears. The reason, as I mentioned once before and will mention again later, is that the dual variables are actually the same as what we were earlier calling Lagrange multipliers. When we looked at optimization problems with equality constraints, we had these additional variables, which we denoted λ, one for each constraint; they came out of applying the implicit function theorem on the constraints, and those additional variables are the Lagrange multipliers. Now, what the complementary slackness condition effectively says is that I need a Lagrange multiplier only for those constraints that hold with equality: although a constraint is stated as an inequality, if it holds with equality then there is an applicable Lagrange multiplier for it, and if the inequality is strict then the Lagrange multiplier for it is effectively 0. That is what these conditions are saying. And the situation is completely symmetric between the primal and the dual: the variables of the primal are the Lagrange multipliers of the constraints of the dual, and likewise the variables of the dual are the Lagrange multipliers of the constraints of the primal, so you can make the same claim about the dual as well. If there is a constraint in the dual that is strict, then the corresponding Lagrange multiplier, equivalently the primal variable, must be 0, because that constraint is effectively not active. Again, as I said, we will see some more of this as we go further into nonlinear optimization.