So, we started on this journey because we were talking about optimization with inequality constraints, and the simplest sort of problem of that kind is minimizing a linear function over linear inequality constraints: minimize c transpose x, where x is your variable, subject to the constraint A x less than or equal to b. What are you doing here? Your objective is a linear function, and the constraints are linear inequalities. The feasible set is the common region that lies in all of these inequalities, that is, the common region of all of these half spaces. So the feasible region is a polyhedron. We are optimizing a linear function over a polyhedron, and that is what is called a linear optimization problem or a linear program: a linear objective over a polyhedral feasible region. So let us try to see what we can say about solutions of a linear program. Firstly, the way this problem has been posed, we are optimizing a linear function over a polyhedral feasible region. Can you show me how this problem can be transformed into the equality-constrained form: minimize c transpose x subject to A x equal to b, x greater than or equal to 0? Does everyone understand what it means to transform? We would like to take a problem like this and apply some of the transformations we had discussed earlier, introduce variables, change the type of constraints, and so on, to bring it to this form. Now, when I say this form, I mean only the form of the problem: the A in the transformed problem does not have to be the same A as before.
Similarly, the two b's do not have to be the same, nor the two c's; in fact, the two x's do not have to be the same either. But I should be able to recover the solution of one problem from the other. So, can you tell me how to do this transformation? The first thing we have seen is the introduction of slack variables, which lets you convert inequality constraints into equality constraints. So first introduce a slack variable s, and that gives you this equivalent optimization: minimize c transpose x subject to A x plus s equal to b, s greater than or equal to 0, where the variables over which you are optimizing are now both x and s. The question now is: is this problem in the desired form? What is the difference between these two forms? The key difference is this: remember the role of x in the target form. There x represents all the decision variables together, and all of them are required to be greater than or equal to 0. Whereas in the problem we just obtained, the decision variables comprise both x and s together, but only some of them, namely the s component, are required to be greater than or equal to 0; on the rest we are not imposing non-negativity. So this is not yet in the same form: the target form has all decision variables non-negative, while ours has only some of them. We have to do more work to get it there. Remember, the focus is on form.
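The slack-variable step above can be sketched in code. This is a minimal illustration with a made-up instance (the matrices A, b, c below are hypothetical, not from the lecture): each inequality row of A x less than or equal to b gets one slack, so the equality constraint becomes [A | I][x; s] = b.

```python
import numpy as np

# A hypothetical small instance of  min c^T x  s.t.  A x <= b.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([1.0, 1.0])

# Introduce one slack variable per inequality:  A x + s = b,  s >= 0.
# In matrix form the equality constraint reads  [A | I] [x; s] = b.
m, n = A.shape
A_eq = np.hstack([A, np.eye(m)])          # coefficients of the stacked variable [x; s]
c_eq = np.concatenate([c, np.zeros(m)])   # slacks do not appear in the objective

# Sanity check: any x feasible for A x <= b yields a feasible (x, s) with s = b - A x.
x = np.array([0.5, 0.5])                  # a feasible point for this instance
s = b - A @ x
assert np.all(s >= 0)
assert np.allclose(A_eq @ np.concatenate([x, s]), b)
```

Note that only s carries a non-negativity constraint here; x itself is still free, which is exactly the gap discussed above.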
Of course, you cannot suddenly ask for x itself to be greater than or equal to 0, because that may not be valid in this particular problem. So you have to transform x using some other variables, in such a way that those variables end up having the required form. What would be the way to do that? One simple observation: x is just a real vector; it can have positive as well as negative components. Take any component of x: it is just a real number, positive, negative, or 0. But every real number can always be written as a difference of two non-negative numbers. So, component-wise, I can always write x as the difference of two non-negative vectors. So I can do this change of variables: replace x by u minus v, where u and v are both non-negative. Since u and v are non-negative but I am looking at their difference, the difference can take positive values, negative values, or 0; every possible value of x can be generated as u minus v with u and v non-negative. So this transformation is equivalent, and I can make this change: minimizing c transpose x now becomes minimizing c transpose (u minus v), and the constraint becomes A (u minus v) plus s equal to b, while I keep u and v non-negative. So I will have the inequalities s greater than or equal to 0, u greater than or equal to 0, v greater than or equal to 0. What are the decision variables of my optimization now? The s from before remains, and in addition I have u and v. So I am now optimizing over u, v and s.
So, I can write this whole thing in a neater form: minimize over (u, v, s) the objective with coefficient vector (c, minus c, 0), subject to the equality constraint A u minus A v plus s equal to b and u, v, s all non-negative. Now, is this in the form of the problem I was looking for? Yes, it is: all my variables are now non-negative, the remaining constraints are all equality constraints, and I have a linear objective. But you see that in the process of getting to this form, my x has completely disappeared. So when you are changing the form of a problem, you should be open to redefining your variables, redefining your constraints, introducing new variables, and so on. The focus should be on the form, not on retaining the identity of any particular variable. The conclusion from this is that if I give you any linear program, that is, optimizing a linear function over a polyhedron, then after some transformations I can bring it down to this form. Now, can you make this even more specific? I have brought the linear optimization problem to this form; can you say more about it? Remember the transformations we had discussed; we discussed things like eliminating constraints, for example. Are there some constraints you can trivially eliminate here? We have linear equality constraints. So, if there are rows of A that are linear combinations of some other rows of A, then you can eliminate those. So suppose there is a row of the matrix A that happens to be a linear combination of two other rows, with b on the right-hand side containing the corresponding entries.
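The full assembly described above can be sketched as follows; the data below is again a hypothetical instance. The split x = u minus v plus the slacks gives the block problem: minimize [c; minus c; 0] transpose [u; v; s] subject to [A | minus A | I][u; v; s] = b with all variables non-negative.

```python
import numpy as np

# Illustrative data (not from the lecture): min c^T x  s.t.  A x <= b, x free.
A = np.array([[1.0, -1.0],
              [2.0,  1.0]])
b = np.array([1.0, 4.0])
c = np.array([2.0, -1.0])
m, n = A.shape

# Split x = u - v (u, v >= 0) and add slacks s >= 0:
#   minimize   [c; -c; 0]^T [u; v; s]
#   subject to [A | -A | I] [u; v; s] = b,   u, v, s >= 0.
A_std = np.hstack([A, -A, np.eye(m)])
c_std = np.concatenate([c, -c, np.zeros(m)])

# Any free x with A x <= b maps to a non-negative (u, v, s):
x = np.array([-0.5, 1.0])                   # note the negative component
u, v = np.maximum(x, 0), np.maximum(-x, 0)  # x = u - v, both parts non-negative
s = b - A @ x
z = np.concatenate([u, v, s])
assert np.all(z >= 0)
assert np.allclose(A_std @ z, b)
assert np.isclose(c_std @ z, c @ x)         # objective value is preserved
```

The last assertion checks the key point of the lecture: the transformed problem computes the same objective values, even though the original variable x no longer appears.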
Now, when I do row transformations, what am I doing? I am multiplying rows by constants, adding them to other rows, and subtracting them from other rows. By doing that I can make the entire dependent row 0, so it is as good as not present. What should happen to the corresponding entry of b after those transformations? If these equations are consistent, it should also become 0; otherwise the equations are inconsistent, in which case the feasible region itself is empty. So if the equations are consistent, then after these row transformations not only does the dependent row disappear, the corresponding entry of b also becomes 0; in short, that entire constraint becomes irrelevant. So if the feasible region is non-empty, then without loss of generality I can do these transformations and get rid of all these linearly dependent constraints. Now, is this changing the geometry of the problem? Is it changing the feasible region? It is changing the number of constraints, but it is not changing the feasible region, because the feasible region is the set of x's that satisfy these constraints, and the same x's that satisfied your earlier set of constraints continue to satisfy the reduced set, because you have only removed linearly dependent constraints. So the feasible region does not change, but constraints get removed; the dimension of x remains the same, it is only the number of constraints which is changing.
So, remember: the feasible region is a region in space, and constraints are just an algebraic way of representing it. There are many redundant ways by which you can represent the same region, and we are removing this redundancy by getting rid of the dependent equations. So, if the problem is feasible, then by row transformations we can reduce the problem to this form: you are minimizing a linear objective c transpose x, again subject to A x equal to b and x greater than or equal to 0; the form of the constraints is still the same, but now I can say that the rank of A equals the number of rows of A. So A has full row rank. This is a further reduction: I had a general problem with an equality constraint A x equal to b and x greater than or equal to 0, and I have now said that I can be more specific and take A to be of full row rank. Of course, this A is no longer the same A that you started with; the A we started with may have had linearly dependent rows. So the question I asked was: is it possible to make this problem even more specific? Can you say something more? We still do not want to dive deep and begin actually solving the problem; still looking at the form itself, we want to be more specific about it. So instead of an arbitrary A we now have an A with full row rank. What about b? In all these transformations, b has actually been left untouched.
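One way to sketch the reduction to full row rank is below; the system and the rank-based row-dropping helper are my own illustration (the lecture uses row transformations, which achieve the same thing), and the data is hypothetical.

```python
import numpy as np

# Illustrative system with a redundant equation: row 3 = row 1 + row 2,
# and the right-hand side is consistent (b3 = b1 + b2).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
b = np.array([2.0, 3.0, 5.0])

def drop_dependent_rows(A, b, tol=1e-9):
    """Keep a maximal set of linearly independent rows of A (with matching b).

    If [A | b] has higher rank than A alone, the equations are
    inconsistent and the feasible region is empty.
    """
    Ab = np.hstack([A, b[:, None]])
    if np.linalg.matrix_rank(Ab, tol) > np.linalg.matrix_rank(A, tol):
        raise ValueError("inconsistent system: feasible region is empty")
    keep, rank = [], 0
    for i in range(A.shape[0]):
        if np.linalg.matrix_rank(A[keep + [i]], tol) > rank:
            keep.append(i)
            rank += 1
    return A[keep], b[keep]

A_red, b_red = drop_dependent_rows(A, b)
assert np.linalg.matrix_rank(A_red) == A_red.shape[0]   # full row rank
# The solution set is unchanged: any x solving the reduced system solves the original.
x = np.array([2.0, 3.0, 0.0])
assert np.allclose(A_red @ x, b_red) and np.allclose(A @ x, b)
```

The consistency check mirrors the lecture's point exactly: if eliminating a dependent row of A does not also zero out the corresponding entry of b, the system has no solution and the feasible region is empty.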
So, b can actually be taken to be greater than or equal to 0, and this is without loss of generality. Can you tell me why? If I try to do what you are suggesting, writing b as b1 minus b2, a difference of two non-negative vectors, and taking one of them to the left-hand side, that does not help, because b is not being multiplied by x. The point here is a very simple observation: these are equality constraints, so they remain valid even if I multiply any one of the rows by minus 1. So if some entry of b is negative, I can just multiply that particular row throughout by minus 1: the corresponding row of A gets multiplied by minus 1, that entry of b gets multiplied by minus 1, and I get back a non-negative entry. By doing this, without loss of generality I can assume b to be greater than or equal to 0. So this gives us the final form: minimize c transpose x subject to A x equal to b, x greater than or equal to 0, with b greater than or equal to 0 and A of full row rank. This is what is called the standard form of a linear program. Now, why care, why do all this? The reason is that standardization like this helps in the creation of technology on top of it. Once you have a standard form, people can create methods for solving problems of that particular form. If there are too many different forms, then there is no fixed form and you do not know which one to choose. So you will see that many solvers assume that you are stating your linear program in the standard form; they demand that you enter your problem that way. Of course, anything written in this form is, at the end of the day, a special case of the first definition that I mentioned.
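The sign-flip argument above is short enough to show directly; the system below is a hypothetical example with one negative right-hand-side entry.

```python
import numpy as np

# Illustrative equality system with one negative right-hand-side entry.
A = np.array([[ 1.0, 2.0],
              [-1.0, 1.0]])
b = np.array([3.0, -2.0])

# Multiplying an equation by -1 does not change its solution set, so we may
# flip every row whose b entry is negative and assume b >= 0 throughout.
flip = b < 0
A_pos = np.where(flip[:, None], -A, A)   # negate flipped rows of A
b_pos = np.where(flip, -b, b)            # negate flipped entries of b

assert np.all(b_pos >= 0)
# Same solution set: any x satisfying the original equalities satisfies the flipped ones.
x = np.linalg.solve(A, b)
assert np.allclose(A_pos @ x, b_pos)
```

Together with the earlier steps, this completes the standard form: equality constraints, non-negative variables, non-negative b, and A of full row rank.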
It is eventually the optimization of a linear objective over a polyhedral feasible region. So if the solver demands that you write it in the standard form, you can clearly always do so. So, we will end here and continue next time.