So last class we solved the problem of optimizing a functional without any constraint, that is, an unconstrained optimization problem, and we saw how to solve it using the calculus of variations technique. Then we considered the application of the calculus of variations to control problems, but we could not complete that whole exercise, so let us recollect what we discussed last class. Consider a dynamic system described by ẋ = f(x(t), u(t), t), a function of the state, the control input u(t) to the system, and time; this function may be linear or nonlinear. We saw that a dynamic system described by an nth-order differential equation can be converted into n first-order differential equations, coupled to each other, and that system of equations may be linear or nonlinear. Our problem is to optimize a performance index with two terms: one is the integral of a functional, and the other is the terminal cost, evaluated at the final time t = t_f where the state is x(t_f). We have to optimize this performance index, that is, find the control input u(t) such that the performance index is minimized while the dynamic equation is satisfied; the control input dictates the response of the state. So our objective is: find u(t) that minimizes the performance index.
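In symbols, the setup just described reads as follows; V denotes the integrand and S the terminal cost, the notation used in the rest of the lecture.

```latex
% System dynamics: n coupled first-order ODEs, linear or nonlinear
\dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \qquad x(t_0) = x_0
% Performance index: terminal cost plus integral cost
J = S\bigl(x(t_f), t_f\bigr) + \int_{t_0}^{t_f} V\bigl(x(t), u(t), t\bigr)\,dt
% Problem: find u^*(t) that minimizes J subject to the dynamics.
```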
This is what is called a constrained optimization problem; previously we discussed the minimization of a functional without any constraints, but here we are considering constraints, and this problem is also known as the Bolza problem. What we did was push the terminal cost into the integral part of the performance index: the terminal cost can be expressed as the integral of dS/dt from t_0 to t_f plus a constant term, the value of S at t = t_0, which is fixed and known. Using this expression in the original performance index, as we saw on the earlier slide, minimization of the original performance index is equivalent to minimization of the new index, equation (4), subject to the constraint ẋ = f, because the two indices differ only by a constant term. At whatever point, or along whatever trajectory u*, the one functional is minimized, the functional with the constant term added is also minimized. Up to this point is what we discussed last class.
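The identity used to absorb the terminal cost into the integral is:

```latex
% Terminal cost rewritten as an integral plus a known constant
S\bigl(x(t_f), t_f\bigr)
  = S\bigl(x(t_0), t_0\bigr)
  + \int_{t_0}^{t_f} \frac{d}{dt}\,S\bigl(x(t), t\bigr)\,dt
% S(x(t_0), t_0) is fixed, so it does not affect the minimizer u^*(t).
```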
So instead of equation (4) we can rewrite it here: optimization of equation (4) is equivalent to optimizing J = ∫_{t_0}^{t_f} V(x(t), u(t), t) dt + ∫_{t_0}^{t_f} (dS(x(t), t)/dt) dt, the same expression except for the constant term. Minimization of that functional is the same as minimization of this objective function. Before the further derivation, let us recollect the chain rule of differentiation. Say we have a function f of x, y and z, where x, y and z are each functions of the time parameter t. We want to differentiate f with respect to t; by the chain rule, df/dt = (∂f/∂x) ẋ(t) + (∂f/∂y) ẏ(t) + (∂f/∂z) ż(t). The partial derivative of f with respect to x is taken keeping y and z constant, because f is a function of x, y and z, and is then multiplied by ẋ; similarly for y and z. This chain rule I will apply here, because S is a function of x(t) and t. So I can write J = ∫_{t_0}^{t_f} V(x(t), u(t), t) dt plus a second term that we will write using the chain rule.
That second term is ∫_{t_0}^{t_f} [ (∂S/∂x(t))ᵀ ẋ(t) + ∂S/∂t ] dt, because S is a function of x(t) and t: differentiating S with respect to t by the chain rule gives ∂S/∂x times ẋ plus ∂S/∂t (the derivative of t with respect to t being one). Consider the resulting performance index to be equation (6). Minimization of equation (4) is equivalent to minimization of this index, because equation (4) contains only an extra constant term: whether or not we include the constant, we get the same stationary point u*(t). So the optimization of objective function (4) is equivalent to the optimization of objective function (6), subject to the same constraint ẋ(t) = f(x(t), u(t), t). This is what is called a constrained optimization problem. We know very well from static optimization how to convert a constrained optimization problem into an unconstrained one, and in the calculus of variations our first problem was how to optimize a functional of the form J = ∫ V(x(t), ẋ(t), t) dt. So our first job is to convert this constrained optimization problem into an unconstrained optimization problem.
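Explicitly, the chain-rule expansion and the resulting performance index, equation (6), are:

```latex
% dS/dt expanded by the chain rule (S depends on x(t) and t)
\frac{dS}{dt}
  = \left(\frac{\partial S}{\partial x(t)}\right)^{\!\top}\dot{x}(t)
  + \frac{\partial S}{\partial t}
% Equivalent performance index (6):
J = \int_{t_0}^{t_f}\!\left[
      V\bigl(x(t),u(t),t\bigr)
      + \left(\frac{\partial S}{\partial x(t)}\right)^{\!\top}\dot{x}(t)
      + \frac{\partial S}{\partial t}
    \right] dt
```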
Our problem is: find u, that is, find the optimum trajectory of u, such that this performance index is minimized subject to the constraint, and that constraint is the dynamic equation; let us call the constraint equation (7). Our job is to convert equations (6) and (7) into an unconstrained optimization problem using a Lagrange multiplier; that is what we discussed earlier, so that is the way we can do it. What is our modified performance index when we use the concept of the Lagrange multiplier? It is J = ∫_{t_0}^{t_f} [ V(x(t), u(t), t) + (∂S(x(t), t)/∂x(t))ᵀ ẋ(t) + ∂S(x(t), t)/∂t + λᵀ(t)( f(x(t), u(t), t) − ẋ(t) ) ] dt. Here you see that if x is a vector of dimension n × 1, then ∂S/∂x is also n × 1, so you have to take the transpose: a row vector multiplied by a column vector is a scalar quantity, so the whole integrand is a scalar. The Lagrange multiplier λ(t) is used because the constrained optimization problem is now being converted into an unconstrained one; its dimension must be n × 1, a column vector, because the product λᵀ(f − ẋ) must be a scalar. The term it multiplies, f(x(t), u(t), t) − ẋ(t), is the dynamic equation, the constraint equation, and this part is zero along any feasible trajectory. The whole integrand can now be written inside one bracket.
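The augmented (now unconstrained) performance index therefore reads:

```latex
% Augmented performance index: Lagrange multiplier lambda(t) (n x 1)
% adjoins the dynamic constraint, which vanishes on feasible trajectories
J = \int_{t_0}^{t_f}\!\Bigl[
      V\bigl(x(t),u(t),t\bigr)
      + \Bigl(\tfrac{\partial S}{\partial x(t)}\Bigr)^{\!\top}\dot{x}(t)
      + \tfrac{\partial S}{\partial t}
      + \lambda^{\top}(t)\bigl(f\bigl(x(t),u(t),t\bigr)-\dot{x}(t)\bigr)
    \Bigr]\,dt
```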
The bracket is completed there, followed by dt. Now see, this performance index is the same as before, because the added part is zero: the optimizing trajectory must satisfy our constraint, so that term must vanish. So this is equivalent to ∫_{t_0}^{t_f} L(x(t), u(t), λ(t), t) dt, where I use another function name: the whole integrand, all of those terms together, is denoted by L, and this L is called the Lagrange function. Let us call this equation (8). Now you see it is the same as our original problem: we have to optimize a functional without any constraint. So I can apply the same technique we discussed earlier to find the optimal value of the functional, there being no constraint, and we can easily derive the necessary and sufficient conditions. For your convenience I will derive them here, because the function L is now different from V. Writing out the integrand of equation (8): L(x(t), u(t), λ(t), t) = V(x(t), u(t), t) + λᵀ(t) f(x(t), u(t), t) plus the remaining terms, which are (∂S(x(t), t)/∂x(t))ᵀ ẋ(t) + ∂S(x(t), t)/∂t − λᵀ(t) ẋ(t). The first two terms, V plus λᵀ f, we denote by a function H.
This H, a function of x(t), u(t), λ(t) and t, is built as follows: take the integral part of the objective function, V, plus the right-hand side of the constraint ẋ = f multiplied by the multiplier vector λ. The function denoted by H is called the Hamiltonian function. We will see that splitting the Lagrange function into this form is convenient when we apply it to our control problems: when the control problem description is given in state-space form, it is convenient to express things in Hamiltonian form. The leftover terms are (∂S(x(t), t)/∂x(t))ᵀ ẋ(t) + ∂S(x(t), t)/∂t − λᵀ(t) ẋ(t). So the Lagrange function is nothing but the Hamiltonian function plus the derivative of the terminal cost S with respect to x, transposed, times ẋ, plus the derivative of the terminal cost with respect to t, minus λᵀ ẋ. Once we have this, I can write the value of J under a perturbed condition. Perturbed condition means x is perturbed to x(t) + δx(t) and u is perturbed to u(t) + δu(t). What does this trajectory mean? Our job, you see, is to look for an optimal trajectory u*, which in turn makes the performance index a minimum subject to the constraint. Around this optimal trajectory u*, in a neighborhood of u*, there is another trajectory; if you consider that trajectory, what is the corresponding value of the objective functional?
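The decomposition just described is:

```latex
% Hamiltonian: integrand cost plus multiplier times constraint RHS
H\bigl(x(t),u(t),\lambda(t),t\bigr)
  = V\bigl(x(t),u(t),t\bigr) + \lambda^{\top}(t)\,f\bigl(x(t),u(t),t\bigr)
% Lagrange function in terms of the Hamiltonian
L = H
  + \Bigl(\tfrac{\partial S}{\partial x(t)}\Bigr)^{\!\top}\dot{x}(t)
  + \tfrac{\partial S}{\partial t}
  - \lambda^{\top}(t)\,\dot{x}(t)
```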
For that perturbed trajectory, u is perturbed to u*(t) + δu(t): u* is the optimal trajectory, and it is perturbed within the neighborhood of u* by δu, which in turn changes x from the optimal trajectory x*(t) to x*(t) + δx(t). The final time t_f is also perturbed, by δt_f. So I am now writing the perturbed functional: the integral from t_0 to t_f + δt_f of V evaluated at (x*(t) + δx(t), u*(t) + δu(t), t), plus the ∂S/∂x term, differentiated along the perturbed trajectory, multiplied by ẋ*(t) + δẋ(t), plus the (∂S/∂t)* term, plus λᵀ(t) [ f(x*(t) + δx(t), u*(t) + δu(t), t) − (ẋ*(t) + δẋ(t)) ], all times dt. What I did is this: J is the value of the functional; now I perturb u to u + δu, and naturally x will be perturbed to x + δx, and we are finding the new objective function value when we perturb the trajectory u* to u* + δu and x* to x* + δx. The integration runs from t_0 to t_f + δt_f because our final time has also changed from t_f. So this is the perturbed objective function value, from which one can find the incremental functional value. I can write this perturbed functional as ∫_{t_0}^{t_f + δt_f} L_p(t) dt, where the subscript p stands for "perturbed": L_p is the perturbed Lagrange function.
This I can write as ∫_{t_0}^{t_f} L_p dt + ∫_{t_f}^{t_f + δt_f} L_p dt, splitting the perturbed Lagrange function integral at t_f. The first integral stays as it is; the second is the area under the curve L_p from t_f to t_f + δt_f. Picture it this way: plot L_p as a function of t; there is the point t_f, and the point t_f + δt_f, and the area under the curve between them is that second integral. One can approximate it by the ordinate of the function at t = t_f multiplied by δt_f. There is also another curve, the Lagrange function L without perturbation; because δt_f is small and the perturbed and unperturbed curves lie close together, the area I have to find from t_f to t_f + δt_f under L_p is nearly equal to the value of the unperturbed Lagrange function at t = t_f, that ordinate, multiplied by δt_f. That is why I am writing "nearly equal". If that is so, then what is the variation of the functional value? The variation ΔJ, the perturbed value minus the original, is ∫_{t_0}^{t_f + δt_f} L_p dt, the perturbed Lagrange function integral, minus ∫_{t_0}^{t_f} L dt, the original Lagrange function without perturbation.
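Putting the split and the small-δt_f approximation together:

```latex
% Increment of the functional under the perturbation
\Delta J = \int_{t_0}^{t_f+\delta t_f} L_p\,dt - \int_{t_0}^{t_f} L\,dt
% Split the first integral at t_f and approximate the tail strip:
\int_{t_f}^{t_f+\delta t_f} L_p\,dt \;\approx\; L\big|_{t=t_f}\,\delta t_f
% Hence
\Delta J \;\approx\; \int_{t_0}^{t_f} (L_p - L)\,dt + L\big|_{t=t_f}\,\delta t_f
```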
Using what we just derived, I can write ΔJ ≈ ∫_{t_0}^{t_f} L_p dt + L|_{t=t_f} δt_f − ∫_{t_0}^{t_f} L dt, and clubbing the two integrals together, ΔJ ≈ ∫_{t_0}^{t_f} (L_p − L) dt + L|_{t=t_f} δt_f. Writing this in more detail: L_p is the Lagrange function evaluated at the perturbed input and state, L(x*(t) + δx(t), u*(t) + δu(t), λ(t), t), and from it we subtract L(x*(t), u*(t), λ(t), t); the remaining term is L(x*(t), u*(t), λ(t), t) evaluated at t = t_f, multiplied by δt_f. We had given equation numbers up to (8), so let us call this equation (9). Now, if you do a Taylor series expansion of the first part, the integral, and use the chain rule and integration by parts as we did earlier, we can simplify it. I am just stating it: using the Taylor series expansion and the integration-by-parts rule, we obtain the first variation. I do not want to repeat those steps, because we already went through them when we considered the functional J = ∫_{t_0}^{t_f} V(x(t), ẋ(t), t) dt and derived the necessary condition for the functional to be optimized.
There we used exactly that operation; please refer to those derivations and you will get the Euler equations. If you do the Taylor series expansion and take the first variation of the functional, you get δJ, the first variation. The variation of the functional, again, means the difference between the functional with perturbed states, inputs and final time (t_0 to t_f + δt_f) and the functional without perturbation; that difference splits into the first variation, the second variation, the third variation, and so on. We consider just the first variation for the necessary condition for the functional to be optimized. After simplification it will be δJ = ∫_{t_0}^{t_f} [ (∂L/∂x(t) − (d/dt) ∂L/∂ẋ(t))ᵀ δx(t) + (∂L/∂u(t))ᵀ δu(t) ] dt plus boundary terms, where each bracket is computed along the optimal trajectory (that is what the star will denote). The ∂L/∂u term comes from the first-order terms of the Taylor series expansion. Here u is a vector of m inputs: if you look at the beginning, in our original function the input u is m × 1 and the state x is n × 1. So the partial derivative of L, a scalar, with respect to the vector u of dimension m × 1, is itself an m × 1 vector.
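Collected in one expression, the first variation just described is (starred quantities are evaluated along the optimal trajectory):

```latex
\delta J
 = \int_{t_0}^{t_f}\!\Bigl[
     \Bigl(\tfrac{\partial L}{\partial x(t)}
           - \tfrac{d}{dt}\tfrac{\partial L}{\partial \dot{x}(t)}\Bigr)^{\!*\top}\!\delta x(t)
     + \Bigl(\tfrac{\partial L}{\partial u(t)}\Bigr)^{\!*\top}\!\delta u(t)
   \Bigr]\,dt
 + \Bigl(\tfrac{\partial L}{\partial \dot{x}(t)}\Bigr)^{\!*\top}\!\delta x(t)\Big|_{t=t_f}
 + L^{*}\big|_{t=t_f}\,\delta t_f
```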
Being an m × 1 vector, it has to be transposed and multiplied by δu so that you get a scalar quantity. In addition there is the boundary term (∂L/∂ẋ(t))ᵀ* δx(t) evaluated at t = t_f. That is what we derive by Taylor series expansion and substitution. Now consider that popular lemma we used before; I am repeating it once again. If you have ∫_{t_0}^{t_f} gᵀ(t) δx(t) dt = 0, where g(t) is a continuous function, differentiable at each and every point in the interval, and δx is a small change in the variable, then the integral is zero if and only if g(t) = 0 at every point over the interval t_0 to t_f. If you consider the variation expression to be equation (10), then using this lemma I can say the first integral term will be zero provided its coefficient is zero, since δx is not identically zero. So our necessary condition, just as before, is ∂L/∂x(t) − (d/dt) ∂L/∂ẋ(t) = 0. What is the dimension of this? L is a scalar and I am differentiating with respect to x, whose dimension is n × 1, so the dimension will be n × 1. The star indicates that to solve this differential equation you may need some additional boundary conditions, and whatever trajectory you get by solving it is the optimal trajectory. Let us call this equation (11). So that part is zero.
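The first necessary condition, equation (11), is the Euler-Lagrange equation for this problem:

```latex
% Euler-Lagrange equation (11), n x 1, along the optimal trajectory
\left.\left(
  \frac{\partial L}{\partial x(t)}
  - \frac{d}{dt}\frac{\partial L}{\partial \dot{x}(t)}
\right)\right|^{*} = 0
```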
Now, when you make an increment of u by δu, δu is not equal to zero, and it is an independent control variable, independent of δx; so its coefficient must also be equal to zero. (In the last equation on the slide a dt was missed at the end; please correct it.) So the next condition is ∂L/∂u(t) = 0, computed along the optimal trajectory, whose dimension is m × 1, because u is m × 1 and L is the scalar I am differentiating. Let us call this equation (12). So you see, we have got two necessary conditions: this is zero and this is zero. The only terms left with us are the boundary terms; another term that was missed on the slide, please correct it, is + L*|_{t=t_f} δt_f, the value of L* at t = t_f multiplied by δt_f. If you recollect, that term came from the Taylor series expansion: the first, second and third terms all came from that expansion. So when we use the lemma we get the two necessary conditions, the coefficient of δx equal to zero and ∂L/∂u equal to zero; in addition to that, the first variation δJ is still not zero. What we are finally left with is δJ ≈ L*|_{t=t_f} δt_f + (∂L/∂ẋ(t))ᵀ*|_{t=t_f} δx(t_f), where x(t_f) means x evaluated at t = t_f.
To evaluate that, let us go back to our figure of the optimal trajectory, with the points a, b, d and c, the initial time t_0, the final time t_f, and the perturbed final time t_f + δt_f. The perturbed trajectory is x_a(t) = x*(t) + δx(t), where x*(t) is the optimal trajectory. Consider the values at the end: the distance δx_f, from the end of the optimal trajectory to the point c, is the total change in the final state, while δx(t_f) is the variation at t = t_f, as we discussed earlier. We can write δx_f = δx(t_f) plus the slope at the endpoint multiplied by the time increment: finding the slope of the perturbed trajectory at that point, it is ẋ evaluated at t = t_f times δt_f, that is, δx_f = δx(t_f) + (ẋ*(t) + δẋ(t))|_{t=t_f} δt_f. Now see, δẋ(t_f) is a very small quantity and δt_f is also very small, so their product can be neglected, and we can write δx_f ≈ δx(t_f) + ẋ*(t_f) δt_f. Let us call this equation (13). Now use this value: from (13), δx(t_f) = δx_f − ẋ*(t_f) δt_f, and substitute it into the leftover terms of the first variation.
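The endpoint geometry gives, to first order:

```latex
% Total change in the final state (equation (13)):
\delta x_f
  = \delta x(t_f)
  + \bigl(\dot{x}^{*}(t_f) + \delta\dot{x}(t_f)\bigr)\,\delta t_f
  \;\approx\; \delta x(t_f) + \dot{x}^{*}(t_f)\,\delta t_f
% (the product of the two small quantities is neglected)
```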
That gives δJ = L*|_{t=t_f} δt_f + (∂L/∂ẋ(t))ᵀ*|_{t=t_f} (δx_f − ẋ*(t_f) δt_f): in place of δx(t_f) I am writing that quantity from (13). Now, if you consider the δt_f terms together and simplify the equation, you can write δJ = [ L − (∂L/∂ẋ(t))ᵀ ẋ(t) ]*|_{t=t_f} δt_f + (∂L/∂ẋ(t))ᵀ*|_{t=t_f} δx_f; first evaluate the star, along the optimal trajectory, then put t = t_f. The basic necessary condition is that the first variation δJ must be zero. For the first variation to be zero, the integral coefficient of δx being zero is one necessary condition and the coefficient of δu being zero is the second; the third condition is that this leftover expression must be zero, and how it is satisfied depends on the conditions at the final time.
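The third condition, the general boundary condition, is therefore:

```latex
% General boundary condition (equation (14)):
\Bigl[L - \Bigl(\tfrac{\partial L}{\partial \dot{x}(t)}\Bigr)^{\!\top}\dot{x}(t)\Bigr]^{*}\Big|_{t=t_f}\,\delta t_f
 + \Bigl(\tfrac{\partial L}{\partial \dot{x}(t)}\Bigr)^{\!*\top}\Big|_{t=t_f}\,\delta x_f
 = 0
```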
Let us say the final time t_f is fixed: then δt_f = 0, so the first term is automatically zero; and if x(t_f) is free, meaning δx_f is not zero, then in order to make the variation zero, (∂L/∂ẋ)*|_{t=t_f} must be zero. In the other case, if t_f is free and x(t_f) is fixed, then δx_f = 0, so the second part is zero; t_f free means δt_f is not zero, so [ L − (∂L/∂ẋ)ᵀ ẋ ]*|_{t=t_f} must be zero. The last equation we numbered was (13), so let us call this equation (14); equation (14) is the general boundary condition in terms of the Lagrange function. Now let me summarize. To optimize our original problem, where there is a terminal cost and an integral term in the performance index, subject to the constraint ẋ = f(x, u(t), t): first form the Lagrange function, then solve the Euler-Lagrange equation, and also solve ∂L/∂u = 0, using the boundary conditions; and if both the final time and x(t_f) are free, you get two boundary conditions from (14), both bracketed terms equal to zero. This indicates that equations (11), (12) and (14) need to be solved to obtain the optimal solution u*(t), and hence x*(t), because u is the independent variable that will drive the state along the optimal trajectory. So this is what we got; I repeat once again, this was our problem from the very beginning. Next class we
will discuss how we can solve that problem using the Hamiltonian function.
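To make the summary concrete, here is a small numerical sketch (my own illustrative example, not from the lecture): a scalar problem, minimize J = ½x(t_f)² + ½∫₀¹ u² dt subject to ẋ = u, x(0) = 1, with t_f = 1 fixed and x(t_f) free. Equations (11), (12) and (14), written via the Hamiltonian H = ½u² + λu, reduce to u = −λ, λ̇ = 0, and the boundary condition λ(t_f) = x(t_f); the resulting two-point boundary-value problem is solved by simple shooting (the function names `simulate` and `shoot` are just illustrative).

```python
# Toy example (illustrative, not from the lecture):
#   minimize J = 0.5*x(tf)^2 + 0.5 * integral of u^2 dt
#   subject to x' = u, x(0) = 1, tf = 1 fixed, x(tf) free.
# Necessary conditions via H = 0.5*u^2 + lam*u:
#   dH/du = 0      ->  u = -lam       (eq. (12))
#   lam' = -dH/dx  ->  lam constant   (eq. (11))
#   lam(tf) = x(tf)                   (boundary condition, eq. (14))

def simulate(lam0, x0=1.0, tf=1.0, n=1000):
    """Euler-integrate x' = -lam with constant lam; return (x(tf), lam(tf))."""
    dt = tf / n
    x = x0
    for _ in range(n):
        x += dt * (-lam0)  # optimal control u = -lam
    return x, lam0         # lam' = 0, so lam(tf) = lam(0)

def shoot(lo=-10.0, hi=10.0, iters=60):
    """Bisect on lam(0) so that the residual lam(tf) - x(tf) vanishes."""
    def residual(lam0):
        x_tf, lam_tf = simulate(lam0)
        return lam_tf - x_tf
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam_star = shoot()
x_tf, _ = simulate(lam_star)
print(round(lam_star, 6), round(x_tf, 6))  # analytic solution: both 0.5
```

For this problem the conditions can also be solved by hand (λ = x(0)/2 = 0.5, u* = −0.5), which is a useful check that the shooting residual λ(t_f) − x(t_f) drives the guess to the right value.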