Let me show you another example of a problem. Consider problem 6, the case where the equality constraints are linear. That is, hj(x) = 0 for all j = 1 to p is equivalent to Ax = b. More precisely, the set of all x for which hj(x) = 0 for every j from 1 to p is exactly the solution set of the system of linear equations Ax = b. Of course, this has an interesting solution set only when p is less than the number of variables n: x is in R^n, and A is a p x n matrix, so the number of rows is p. Now, when you have linear equality constraints, or in other language a system of linear equations, we can always describe the solution set. The solution set of a system like this looks like Fz + x0, where z ranges over some R^k. How do we get it into this form? Remember, we are talking of p less than n. You need a particular solution, which is x0, and then for any solution x of Ax = b, you can always add to it any vector y in the null space of A: x + y is again a solution of the system. Now, the null space of A has its own basis. So let the basis of the null space be the columns of F.
So, if I take the basis of the null space as the columns of F, then taking any particular solution x0 and adding to it the entire null space of A gives exactly the solution set of my original system. (We are considering only the case where you have linear equality constraints.) In that case, your entire problem can be transformed: you can get rid of your equality constraints altogether by the substitution x = Fz + x0. What is my variable now? My variable is z, and z is just anything in R^k; there is no sign constraint or anything like that on z, and k is the dimension of my null space. What have we done as a result? Every x that satisfies Ax = b has been written equivalently as Fz + x0, and I have substituted Fz + x0 for all such x. Once I do that substitution, the new problem is equivalent to the original: all the x's that satisfy Ax = b can be written in the form Fz + x0, and likewise everything of the form Fz + x0 satisfies Ax = b. So my equality constraints can be removed altogether. In this way, if you have a way to solve for one set of variables using the equations or constraints that you have, you are welcome to do so; that gives you an equivalent optimization problem. So this problem, call it problem 6, is equivalent to problem star. In what sense? The optimal values are equal: the optimal value of star equals the optimal value of 6. How do I relate the feasible regions?
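As a rough numerical sketch of this elimination (the specific A, b, and objective below are my own illustrative choices, not from the lecture), one can compute a particular solution x0 and a null-space basis F with SciPy, then minimize over z with no constraints at all:

```python
import numpy as np
from scipy.linalg import lstsq, null_space
from scipy.optimize import minimize

# Illustrative data: p = 1 linear constraint in n = 3 variables.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

# A particular solution x0 of A x = b (least squares returns one).
x0, *_ = lstsq(A, b)

# Columns of F form a basis of the null space of A; here k = n - p = 2.
F = null_space(A)

# A stand-in objective f(x): squared distance to the point (3, 0, 0).
def f(x):
    return np.sum((x - np.array([3.0, 0.0, 0.0]))**2)

# Reduced, unconstrained problem in z: minimize f(F z + x0) over z in R^k.
res = minimize(lambda z: f(F @ z + x0), np.zeros(F.shape[1]))
x_star = F @ res.x + x0

# The recovered x satisfies A x = b automatically, by construction.
assert np.allclose(A @ x_star, b)
```

Any unconstrained solver can now be applied in the z variable; every candidate x = Fz + x0 it visits is feasible for the original equality constraints.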
Problem 6 is written in the z space, which is R^k, while star was written in R^n. But they can still be related through this relation: for every x that is feasible for star, there is a z that is feasible for 6 with x = Fz + x0, and conversely. Here x0 is what is called a particular solution of the linear system. Now let me show you a cautionary case to keep in mind while you are eliminating equations. This is not exactly elimination, but rather another form of transformation. Suppose I have an optimization problem, again with linear equality constraints for simplicity, some Ax = b. That is one problem. Now suppose I left-multiply these equality constraints by a matrix M. What can I say about this problem 7 compared to the original? If M is non-singular, then of course you can just multiply back by M inverse and you will get back the original problem. The reason I brought this up is that in many cases M is singular. For example, suppose M is just a row vector that you multiply by on the left-hand side and likewise on the right-hand side. What you are effectively doing is taking a weighted sum of all your constraints.
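To see concretely how left-multiplying by a singular M enlarges the solution set, here is a small sketch (the particular system is an assumed toy example):

```python
import numpy as np

# Two equations in two unknowns: x = 1 and y = 2, so the unique solution is (1, 2).
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.array([1.0, 2.0])

# "Adding the equations" is left-multiplying by the singular M = [1, 1]:
M = np.array([[1.0, 1.0]])
MA, Mb = M @ A, M @ b          # collapses to the single equation x + y = 3

# The original system pins down (1, 2), but the collapsed system admits
# many more solutions, e.g. (0, 3):
x_bad = np.array([0.0, 3.0])
assert np.allclose(MA @ x_bad, Mb)      # satisfies the summed equation...
assert not np.allclose(A @ x_bad, b)    # ...but not the original system
```

So replacing Ax = b by MAx = Mb with a singular M can only add spurious solutions, never remove genuine ones; that is exactly the danger when equations are combined carelessly.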
Now, one thing students often do when trying to solve optimization problems is to see these equality constraints as equations and start combining them: why don't I add these equations, let us see what happens, this cancels, that cancels, and so on. In the process, what they are doing is applying a linear transformation. They may get a solution of the resulting equations, but you really need to ask carefully: have you captured all the solutions? You may get one solution this way, but you may not get all of them, because generally you have more variables than equations, and what you will miss is the whole range of solutions in the null space. If M is singular, then this kind of elimination, whatever you do by adding equations, subtracting equations, and so on, effectively amounts to multiplying by some M. So be careful whether your transformations are actually giving you all the solutions or not. That is the reason for explaining this. Does the sign of M matter in all this? Can you subtract equations? Can you add equations freely? The sign is immaterial; you can always add and subtract equations, no problem. What matters is non-singularity. Now, what if I do the same thing with my inequality constraints? Suppose I had inequality constraints, written for simplicity as Cx ≤ d, and I multiplied both sides by another matrix, say P. Is that valid? What do I need on P? For this to be valid, each inequality has to be preserved when I multiply. What are you doing effectively? You are forming weighted combinations of the inequalities, and each of them needs to preserve the direction of the inequality, otherwise this whole operation is not going to be valid.
So you need P to have all entries non-negative, and of course you also need it to be non-singular, so that this is equivalent to the original. (Strictly speaking, for the feasible region to be exactly preserved, P inverse must also have non-negative entries.) So with inequality constraints there is the additional danger of flipping the direction of an inequality and changing the entire region over which you are optimizing. One last point: again, this is not exactly related to transformation, but it is an important trick. Suppose you have an optimization problem over two variables, x and y, and you have constraints also over x and y, say g(x, y) ≤ 0, and you are minimizing over both x and y. The question is: is this the same as optimizing first over y and then over x, or vice versa? That is, fixing an x, I optimize over y, and then optimize the result over x. Does the order matter, or when we are faced with an optimization problem can we optimize in any order we like? Note that you are eventually minimizing over both; the question is whether you can do it piecewise: minimize over one variable first, keeping the other fixed, then minimize the resulting thing over the other variable. It is incorrect to say that y will stay fixed, because I am minimizing over x after I have minimized over y. So let us understand what we are doing here. When we are minimizing over x and y, the goal is to pick a pair (x, y) jointly; you have to pick that combination correctly. When I say first minimize over y, then over x, what I am doing is fixing a value of x and finding the best y. So necessarily my y is going to be a function of x.
This is something to be remembered: when you do it analytically, of course you will get y as a function of x, but when you are doing numerical computation this is often not evident, because you will get back some number, you will just input that number, and forget that it depends on x. So y is a function of x; I put that y back as a function of x, I get some complicated function of x alone in which y has been eliminated, and then I optimize that over x. This can always be done; it does not matter what the order is. But there are some practical issues. Remember that whether a pair (x, y) is feasible or not is not evident until you test the pair itself against all the constraints. So it may not be that for every x you can get a feasible y. You fix an x, you search over y: you satisfy the first constraint, the second constraint, but the third you cannot satisfy. What does that mean? There is no y that satisfies the first, second, and third constraints together, which means the x itself needed a change. So you start with a different x, search again over y, and perhaps try yet another x, and so on. In the presence of constraints, this whole procedure gets messy. In the unconstrained case, if you simply have a problem with no constraints, there is absolutely no issue: you can always minimize first over y and then over x, or first over x and then over y.
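Here is a small sketch of this minimize-over-y-then-over-x procedure in the unconstrained case (the objective f is an assumed toy example, not from the lecture):

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# Unconstrained objective in two variables.
def f(x, y):
    return (x - 2.0)**2 + (y - x**2)**2

# Inner step: for each fixed x, minimize over y. Analytically y*(x) = x**2,
# but we do it numerically, mimicking the situation described in the lecture.
def g(x):
    return minimize_scalar(lambda y: f(x, y)).fun

# Outer step: minimize the resulting function of x alone.
res_x = minimize_scalar(g)

# Compare with minimizing jointly over the pair (x, y).
res_joint = minimize(lambda v: f(v[0], v[1]), np.array([0.0, 0.0]))

# The nested and joint minimizations give the same optimal value.
assert np.isclose(res_x.fun, res_joint.fun, atol=1e-4)
```

Note that inside the outer step, the inner minimizer is recomputed for every trial x, which is exactly the statement that y is a function of x rather than a fixed number.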
As I said, in the presence of constraints this is also valid, but it is messy. It is sometimes the easier thing to do when you notice that your objective has some structure, that it is simpler in one set of variables than in the other: target those variables first, optimize over them, get them in terms of the others, substitute back, and solve. So this is also, roughly speaking, another case of transforming optimization problems: you are getting rid of some variables by optimizing them out. We will stop here.