Hello and welcome. As I mentioned in the last lecture, we will first look at a methodology that extracts the optimal solution by adding one more unknown to the system of equations, so that we still have a square system and can obtain an exact solution. We will also see how this guarantees an exact solution despite the extra unknown. So, let us begin our discussion of an optimal solution strategy that exploits this idea. Let me first mention that the technique we are discussing goes by the name of the Lagrange multiplier method. As the name suggests, it was proposed by the famous mathematician Lagrange for solving mathematical optimization problems that have an objective function together with constraints. The Lagrange multiplier method adds one extra unknown, but its key feature is that it does so in a consistent manner for problems that have equality constraints. Here, let me also mention that many constrained optimization problems can have what we call inequality constraints, that is, constraints of the less-than or greater-than kind. The Lagrange multiplier method is specifically suited to problems where the constraints are in the form of an equation, that is, an equality. Now, in order to understand the philosophy, let us recall the basic fact that a solution is optimal at a point if and only if all the partial derivatives of the objective function are 0 at that single point simultaneously. This is the mathematical requirement: all the partial derivatives must go to 0 at a common point for that point to be an optimal solution.
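To make this requirement concrete before we come to the rocket problem, here is a minimal toy example of my own, not from the lecture: maximize f(x, y) = x·y subject to x + y = 10. Augmenting the objective with a weighted constraint term, L(x, y, λ) = x·y + λ(10 − x − y), and setting all three partial derivatives to 0 yields a square 3 × 3 system, which in this particular case happens to be linear:

```python
def solve_linear(A, b):
    # Tiny Gauss-Jordan elimination with partial pivoting (pure stdlib)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

# Maximize f(x, y) = x*y subject to x + y = 10.
# Augment: L(x, y, lam) = x*y + lam*(10 - x - y); set all partials to 0:
#   dL/dx   = y - lam     = 0
#   dL/dy   = x - lam     = 0
#   dL/dlam = 10 - x - y  = 0   (the constraint itself reappears)
x, y, lam = solve_linear([[0, 1, -1],
                          [1, 0, -1],
                          [1, 1,  0]],
                         [0, 0, 10])
```

The solution x = y = λ = 5 satisfies the constraint exactly at the very point where the partial derivatives vanish, and the multiplier λ is precisely the extra unknown that makes the system square.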
Now, we make use of this requirement and extend it: if I can also satisfy the constraint exactly at that same point, along with the equations obtained from the partial derivatives, then the solution at that point will also be exactly optimal for the constrained optimization problem. In other words, if I had no constraint and only n partial derivatives, my optimal solution would correspond to the one point where all the partial derivatives are driven to 0. The same logic extends to a system of n + 1 equations containing n partial derivatives and one equality constraint: if I can satisfy that equality constraint exactly at the same point, then the solution is also exact under the constraint. This is the basic philosophy of the Lagrange multiplier method. Now, in order to do that, we take recourse to the concept of constraint error, which means we define a quantity that represents the error in the constraint at any candidate point, such that this error becomes 0 only at the point where the constraint is exactly satisfied, and that point is also the point where all the partial derivatives are driven to 0. But in order to do that, we need a reformulation of the problem. The reason is that the solutions I am talking about must be sensitive to this constraint error, which means that while generating the partial derivatives, my formulation must respond to changes in the constraint error as the solution evolves. This can be done reasonably elegantly by augmenting the objective function: we add a term corresponding to the constraint error, multiplied by an additional unknown commonly called a weighting factor.
So, I define a constraint error (we will see how this can be done), I give a weight to that error, and I add this weighted error to the objective function. By doing that, you will immediately realize that the partial derivatives of the augmented objective function will contain not only the effect of the basic objective function but also that of the constraint error, and because of this it becomes possible for us to solve for n + 1 unknowns simultaneously, under the condition that the constraint error becomes 0 at the optimal point. We will see how this is done. The additional unknown, the weighting factor we use for weighting the constraint error, is called the Lagrange multiplier. It is a scalar quantity that multiplies the error term; it is treated as an unknown and acts as a weight on the error due to the constraint. Here it is worth noting that if the error actually goes to 0, then no matter what the value of this multiplier is, the objective function will be exact, and we can also clearly see that the exact optimal solution is obtained when all n + 1 equations, that is, the n partial derivatives and the one constraint equation, are satisfied exactly as equalities. So, let us now see what kind of formulation we get when we try to solve a constrained optimization problem. As you recall, we had posed two scenarios in the context of multi-stage rocket design: maximize the burnout velocity for a specified payload constraint, or maximize the payload for a given burnout velocity constraint. Let us now see what happens to these two scenarios from a constrained optimization point of view when we make use of the Lagrange multiplier and add the constraint error.
So, let us recall the two expressions we have seen earlier for the mission payload fraction π* and the mission ideal burnout velocity v*: ln π* = Σ_{i=1}^{n} ln π_i and v* = −g0 Σ_{i=1}^{n} Isp_i ln(ε_i + (1 − ε_i) π_i). As we can see, both π* and v* are functions of the stage payload ratios π_i, which are the design variables in the present context. Let me again mention, for the sake of completeness, that the number of stages n is a parameter. The structural ratios ε_i are based on selections made through a separate exercise, and similarly the specific impulses Isp_i of the rocket motors are selected from a set of available values, based on the overall purpose and the design agency's database of such propellants. So, π* is a function of the π_i's, and v* is also a function of the π_i's. These two relations will be used alternately, either as objective function or as constraint: if I use π* as the objective function, v* becomes the constraint, and if I use v* as the objective function, π* becomes the constraint. Let us now define what is meant by the constraint error. Going back to the previous expressions, if we are looking at either relation as a constraint, the equality relation itself tells me what the constraint is. If I take, for example, the ln π* relation and bring the summation from the right-hand side over to the left, what I get is ln π* − Σ ln π_i = 0; as an equality relation, the right-hand side becomes 0. Now, this particular condition holds good only at one specific point and not in a general sense, so the expression ln π* − Σ_{i=1}^{n} ln π_i is nothing but my constraint error when π* is the specified overall mission payload fraction.
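The two relations above can be sketched directly in code. This is a minimal pure-Python sketch; the function names are mine, and the stage values in the note below are purely illustrative, not mission data:

```python
import math

G0 = 9.81  # standard gravity, m/s^2 (illustrative constant)

def mission_payload_log(pis):
    # ln(pi*) = sum_i ln(pi_i): the overall payload fraction
    # is the product of the stage payload ratios pi_i
    return sum(math.log(p) for p in pis)

def burnout_velocity(pis, eps, isp):
    # v* = -g0 * sum_i Isp_i * ln(eps_i + (1 - eps_i) * pi_i)
    return -G0 * sum(s * math.log(e + (1 - e) * p)
                     for p, e, s in zip(pis, eps, isp))
```

For example, three identical stages with π_i = 0.2, ε_i = 0.1, and Isp_i = 300 s give π* = 0.2³ = 0.008 and v* of roughly 11.2 km/s (all values assumed for illustration only).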
Similarly, I can take the right-hand side of the velocity expression over to the left, giving v* + g0 times the summation, and that represents the error in the velocity constraint. This is an error term because, if it is nonzero, it obviously means that ln π* is not equal to the sum of the ln π_i's, or that v* is not equal to the right-hand side, which means there is an error. In such a simple manner, we can now create a formulation that uses these two expressions alternately as the constraint error. So, we define E_π = ln π* − Σ_{i=1}^{n} ln π_i, and E_v = v* + g0 Σ_{i=1}^{n} Isp_i ln(ε_i + (1 − ε_i) π_i). Everywhere else this will be a nonzero quantity, until you reach the optimal solution as defined by the n partial derivatives. So, please note: the n partial derivatives go to 0 at a single point, and that is also the point at which I would like this error to be driven to 0, which means I must now connect this error to those n partial derivatives. Let us see how this can be done, and for that, as I had mentioned, we first define the augmented objective functions. I define an objective function h_π, which addresses the objective π*, as the sum Σ_{i=1}^{n} ln π_i, to which I add the velocity constraint error multiplied by the Lagrange multiplier, whose symbol is λ: h_π = Σ ln π_i + λ E_v. Now, looking at this augmented objective function, you immediately note that if the constraint error goes to 0, the objective function is exactly that for π*. There is then no error in the objective function, so all my partial derivatives are exact, and my optimal solution is also exact.
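The two constraint errors E_π and E_v can be written down directly from the definitions above. A short sketch with my own function names, using illustrative numbers in the usage note:

```python
import math

G0 = 9.81  # m/s^2, illustrative

def e_pi(pi_star, pis):
    # E_pi = ln(pi*) - sum_i ln(pi_i): zero only when the
    # payload-fraction constraint is satisfied exactly
    return math.log(pi_star) - sum(math.log(p) for p in pis)

def e_v(v_star, pis, eps, isp):
    # E_v = v* + g0 * sum_i Isp_i * ln(eps_i + (1 - eps_i) * pi_i):
    # zero only when the burnout-velocity constraint is satisfied exactly
    return v_star + G0 * sum(s * math.log(e + (1 - e) * p)
                             for p, e, s in zip(pis, eps, isp))
```

For a consistent point (say three identical stages with π_i = 0.2, ε_i = 0.1, Isp_i = 300 s and π*, v* computed from those very values) both errors are 0; perturb any π_i and they become nonzero, which is exactly the sensitivity the augmented objective exploits.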
Similarly, for the second function, for v*, I take the velocity expression, −g0 times the summation, and to that I add the constraint error due to the mission payload fraction, again multiplied by the same symbol λ: h_v = −g0 Σ Isp_i ln(ε_i + (1 − ε_i) π_i) + λ E_π. I call h_v the augmented objective function for velocity, and similar to the previous one, we note that if the constraint error due to the payload fraction goes to 0, the velocity objective function will also be exact. Now, looking at these two augmented expressions, it is clear that the partial derivatives of the above functions contain not only the partial derivatives of the objective function but also the partial derivatives of the constraint error. So, the n equations corresponding to the n partial derivatives will contain not only the n design variables π_1 to π_n but also λ as the (n + 1)-th unknown. The n partial derivatives thus give us n + 1 unknowns in n equations; the (n + 1)-th equation is nothing but our constraint in the form of an equality relation. So, those n + 1 unknowns are related to each other through the constraint relation, which fixes the (n + 1)-th unknown. This brings us to an important point: the Lagrange multiplier λ, the weight, actually couples all the design variables, and this coupling is resolved under the condition that the constraint is exactly satisfied, so that we get a constraint-consistent solution. What this means is that when we satisfy the constraint exactly, the constraint error goes to 0.
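The claim that each partial derivative of h_π carries both an objective part and a constraint-error part can be checked numerically. Below is a small sketch (my own helper names, illustrative numbers) that compares a central finite difference of h_π against the analytic expression ∂h_π/∂π_i = 1/π_i + λ g0 Isp_i (1 − ε_i)/(ε_i + (1 − ε_i) π_i), whose second term comes entirely from the constraint error:

```python
import math

G0 = 9.81  # m/s^2, illustrative

def h_pi(pis, lam, v_star, eps, isp):
    # Augmented objective: sum of ln(pi_i) plus lambda times
    # the velocity constraint error E_v
    e_v = v_star + G0 * sum(s * math.log(e + (1 - e) * p)
                            for p, e, s in zip(pis, eps, isp))
    return sum(math.log(p) for p in pis) + lam * e_v

def dh_dpi_numeric(pis, lam, v_star, eps, isp, i, h=1e-7):
    # Central finite difference in the i-th design variable
    up, dn = list(pis), list(pis)
    up[i] += h
    dn[i] -= h
    return (h_pi(up, lam, v_star, eps, isp)
            - h_pi(dn, lam, v_star, eps, isp)) / (2 * h)

def dh_dpi_analytic(pis, lam, eps, isp, i):
    # 1/pi_i from the objective part, plus the lambda-weighted
    # contribution of the constraint error
    return (1.0 / pis[i]
            + lam * G0 * isp[i] * (1 - eps[i])
            / (eps[i] + (1 - eps[i]) * pis[i]))
```

Note that v* drops out of the derivative entirely, so it only shifts h_π by a constant; the π_i-dependence of the constraint error is what couples into the stationarity equations.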
The moment the constraint error goes to 0, the solution of the constraint equation directly gives us the value of λ for which the error vanishes. If I substitute that value of λ into the remaining n equations from the partial derivatives, which require that all the partial derivatives also go to 0, we can directly solve for the n values of π_i, which form the exact solution under that constraint. Once we have those n design variables, we use them to obtain the stage-wise mass configuration along with the total lift-off mass. Thus, to summarize, the constrained optimization technique based on Lagrange multipliers is found to be adequate for arriving at the best possible stage payload solutions. Of course, we note that we need to solve for one additional unknown in order to incorporate the constraint. So, in this lecture we have seen the basic philosophy as well as the formulation aspects of a constrained optimization problem using the Lagrange multiplier approach, in which, as you have noted, we make the constraint error part of our objective function and then create a set of n + 1 coupled algebraic equations, in which the coupling parameter is the weighting factor λ, evaluated from exact satisfaction of the constraint. Indirectly, we know that if we satisfy the constraint exactly, then we will have the exact optimal solution for the given problem. In the next lecture we will see, through an example, how this particular methodology works and what the nature of the solutions is, including the features that are important for us to consider. So, bye, see you in the next lecture, and thank you.
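The whole procedure can be sketched numerically. For the h_π formulation, one can check that setting ∂h_π/∂π_i = 0 gives each π_i in closed form in terms of λ, namely π_i = −ε_i / ((1 − ε_i)(1 + λ g0 Isp_i)), after which λ is fixed by driving the constraint error E_v to 0 (here by bisection). This is only a sketch under assumed illustrative values (function names and solver details are mine; the worked example in the next lecture is the authoritative treatment):

```python
import math

G0 = 9.81  # m/s^2, illustrative

def stage_ratios(lam, eps, isp):
    # Stationarity of h_pi gives each pi_i in closed form in lambda:
    #   pi_i = -eps_i / ((1 - eps_i) * (1 + lam * g0 * Isp_i))
    return [-e / ((1 - e) * (1 + lam * G0 * s)) for e, s in zip(eps, isp)]

def velocity_error(lam, v_star, eps, isp):
    # Constraint error E_v evaluated at the pi_i(lambda) from stationarity
    pis = stage_ratios(lam, eps, isp)
    return v_star + G0 * sum(s * math.log(e + (1 - e) * p)
                             for p, e, s in zip(pis, eps, isp))

def solve_staging(v_star, eps, isp):
    # Drive E_v(lambda) to 0 by bisection; E_v is monotone
    # increasing in lambda over this bracket.
    lam_hi = -(1 + 1e-6) / (G0 * min(isp))  # just below the pole: pi_i large, E_v > 0
    lam_lo = -1.0                           # very negative: pi_i near 0, E_v < 0
    for _ in range(200):
        mid = 0.5 * (lam_lo + lam_hi)
        if velocity_error(mid, v_star, eps, isp) > 0:
            lam_hi = mid
        else:
            lam_lo = mid
    lam = 0.5 * (lam_lo + lam_hi)
    return stage_ratios(lam, eps, isp), lam
```

As a sanity check, for three identical stages (ε_i = 0.1, Isp_i = 300 s) with v* chosen as the velocity achieved by π_i = 0.2 in every stage, the solver recovers π_i = 0.2 for all stages, with the constraint satisfied to machine precision.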