So, in the last class we discussed dynamic programming for a discrete-time system using the principle of optimality. We described the basic principle behind it and also took up one example which we could not complete, so let us recap that example and finish it now.

The problem: we are given the discrete-time system, a first-order difference equation

x(k+1) = 4x(k) − 6u(k),  x(0) = 8,

and we have to minimize the standard quadratic performance index with a terminal cost,

J = [x(2) − 20]² + (1/2) Σ_{k=0}^{1} [2x(k)² + 4u(k)²].

Using the principle of optimality, find the control sequence u(0), u(1), assuming there is no constraint on the control signal u(k).

We apply the principle of optimality starting with the backward pass. Recall that the terminal cost is J₂ = [x(2) − 20]². Moving back from stage 2 to stage 1, we have to find the control u(1) that minimizes the cost-to-go: J₁ is the terminal cost J₂ plus the stage cost at k = 1,

J₁ = [x(2) − 20]² + x(1)² + 2u(1)².

Now x(2) can be written in terms of x(1) from the dynamic equation: substituting x(2) = 4x(1) − 6u(1), J₁ becomes

J₁ = [4x(1) − 6u(1) − 20]² + x(1)² + 2u(1)²,

with the remaining terms the same.
Now, you see that minimizing this performance index means selecting u(1), and we have seen that no constraint is imposed on the control input u(1), so this is an unconstrained optimization problem. Setting the partial derivative of the cost with respect to u(1) to zero (u(1) appears in two terms),

∂J₁/∂u(1) = −12[4x(1) − 6u(1) − 20] + 4u(1) = 0,

we ultimately get u(1) in terms of x(1):

u*(1) = [12x(1) − 60] / 19.

You see that I cannot evaluate u(1) until and unless I know x(1); but I do know x(0). From the dynamic equation, x(1) = 4x(0) − 6u(0), where u(0) is unknown but x(0) is known to us. Substituting u*(1) back into J₁: the terminal term becomes x(2) − 20 = [4x(1) − 20]/19, and after simplification the optimal cost-to-go boils down to

J₁* = x(1)² + 2{[12x(1) − 60]/19}² + {[4x(1) − 20]/19}².

Next we go from J₁ to J₀, the total cost of the backward pass, which is a function of x(0):

J₀ = x(0)² + 2u(0)² + J₁*,

where x(0)² + 2u(0)² is the stage cost (1/2)[2x(0)² + 4u(0)²] at k = 0, and J₁* is the expression we just obtained. Wherever x(1) appears we replace it in terms of x(0) and u(0) using the dynamic equation of the discrete-time system. The first term of that expression is x(1)².
So, x(1)² is nothing but [4x(0) − 6u(0)]², again because our basic dynamic equation with k = 0 gives x(1) = 4x(0) − 6u(0), and in place of x(1) I have written that. Making the same substitution in the remaining terms, the total cost becomes

J₀ = x(0)² + 2u(0)² + [4x(0) − 6u(0)]² + 2{[12(4x(0) − 6u(0)) − 60]/19}² + {[4(4x(0) − 6u(0)) − 20]/19}².

This is the performance index we have to minimize by the proper choice of u(0), so we take its gradient with respect to u(0) and set it to zero:

∂J₀/∂u(0) = 0.

The control u(0) is involved in every term except the first, x(0)². Differentiating the 2u(0)² term, the 2's cancel and we get 4u(0).
So, just check the remaining terms: differentiating [4x(0) − 6u(0)]² gives 2[4x(0) − 6u(0)]·(−6); the next term gives (2/19²)·2·[12(4x(0) − 6u(0)) − 60]·(−72); and the last term gives (1/19²)·2·[4(4x(0) − 6u(0)) − 20]·(−24). Altogether,

∂J₀/∂u(0) = 4u(0) − 12[4x(0) − 6u(0)] − (288/361)[12(4x(0) − 6u(0)) − 60] − (48/361)[4(4x(0) − 6u(0)) − 20] = 0.

This is simply the differentiation of J₀ with respect to u(0) set equal to zero. If you solve it, you can write u(0) in terms of x(0), and since I know the initial state x(0), I can easily calculate u(0). Using x(0) = 8 in the above equation we get u(0) ≈ 4.81; please check this calculation.

Now that we know u(0) and x(0), we can immediately find x(1) using the dynamic state equation of the discrete-time system, x(k+1) = 4x(k) − 6u(k) with x(0) = 8. With k = 0,

x(1) = 4x(0) − 6u(0) = 4·8 − 6·4.81 ≈ 3.14.

Once this state value is known, we can find u(1). Look at the expression for u(1) we calculated in the last class (from equation (2) there): u(1) = [12x(1) − 60]/19, and we now know x(1).
So, u(1) = [12·3.14 − 60]/19 ≈ −1.175; check this value. Now that we know u(1) and x(1), we can immediately find x(2):

x(2) = 4x(1) − 6u(1) = 4·3.14 − 6·(−1.175) ≈ 19.61.

So, starting with the backward pass we have found the state trajectory x(0), x(1), x(2), and simultaneously we are getting the optimal control sequence u(0), u(1), which completes the forward pass. First we did the backward pass: the terminal cost J₂ (since N = 2) was given; from J₂ to J₁ we found the cost by the optimal choice of u(1); then from J₁ to J₀ by the optimal selection of u(0). Once the backward pass was completed, the forward pass went the other way: find x(1), then u(1), then x(2). This is the complete solution of the problem we started last class.

Now we will consider a new topic, called the time-optimal, or minimum-time, control problem of a linear time-invariant system. As the name itself suggests, the statement of the problem is: given the system dynamic equation, our job is to find the control in such a way that we can reach from our initial state to the final state in minimum time. So, consider the linear time-invariant system described by

ẋ(t) = Ax(t) + Bu(t),

with the initial state x(t₀) = x₀ given.
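The backward- and forward-pass numbers above can be verified numerically. Here is a minimal Python sketch (my own check, not part of the lecture); the closed form used for u(0), namely u(0) = (7980·x(0) − 4560)/12331, is what the differentiation above yields when solved, and the small differences from the lecture's intermediate values come from the lecture rounding u(0) to 4.81.

```python
# Numeric check of the worked dynamic-programming example.
# System: x(k+1) = 4*x(k) - 6*u(k), with x(0) = 8.
# Cost:   J = (x(2) - 20)**2 + sum over k = 0, 1 of (x(k)**2 + 2*u(k)**2).

def u1_opt(x1):
    # Backward pass, stage 1: dJ1/du1 = 0 gives u1 = (12*x1 - 60)/19.
    return (12 * x1 - 60) / 19

def u0_opt(x0):
    # Backward pass, stage 0: solving dJ0/du0 = 0 with x1 = 4*x0 - 6*u0
    # gives u0 = (7980*x0 - 4560)/12331 (closed form from the algebra above).
    return (7980 * x0 - 4560) / 12331

# Forward pass: roll the state forward with the optimal controls.
x0 = 8.0
u0 = u0_opt(x0)        # ~ 4.81
x1 = 4 * x0 - 6 * u0   # ~ 3.16  (lecture gets 3.14 after rounding u0)
u1 = u1_opt(x1)        # ~ -1.16 (lecture: -1.175)
x2 = 4 * x1 - 6 * u1   # ~ 19.61
print(u0, x1, u1, x2)
```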
So, let us assume the number of inputs is m and the number of states is n; then the dimensions of the system matrix A (n × n) and the input matrix B (n × m) follow immediately. Our job is to transfer the state x(0) to a final state x(t_f) = x_f with an optimal control law in minimum time, that is all.

Before that we make two assumptions. First, the system must be completely controllable, that is,

rank[B  AB  A²B  …  A^{n−1}B] = n,

where n is the dimension of the system, i.e. of the state vector. The second assumption concerns admissible control: the input must satisfy a constraint. In what we have discussed so far, when we solved the optimal control problem using calculus of variations, we made no constraint on the inputs or on the states. Here we first consider a constraint on the input. From the physical point of view, we cannot apply whatever control signal comes from the controller output directly to the system, because every physical system has a limit on its control input; in general a saturation element is used. So we have to restrict the control signal coming from the controller, and by admissible control we mean the inputs must satisfy the following constraint.
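The rank test above is straightforward to carry out numerically. A minimal sketch, with an assumed example system (the matrices are my illustration, not from the lecture):

```python
import numpy as np

def controllability_matrix(A, B):
    # Stack [B, AB, A^2 B, ..., A^(n-1) B] column-wise.
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Assumed example: a double integrator, x1' = x2, x2' = u.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

C = controllability_matrix(A, B)
print(np.linalg.matrix_rank(C) == A.shape[0])  # True: completely controllable
```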
So, the constraint imposed on the control input is that the u(t) coming from the controller design must be restricted to u_min(t) ≤ u(t) ≤ u_max(t). If the lower and upper limits have the same magnitude, say u_max = 5 V and u_min = −5 V, then we can write each component in absolute value: |u_i(t)| ≤ α_i, i = 1, 2, …, m, for all t. Suppose the upper bound of an input is 10 V and the lower bound is −10 V; then |u_i(t)| ≤ α_i with α_i = 10 V. Even when each input's upper and lower bounds have the same magnitude, the α_i may differ from input to input: input 1 may be bounded by ±10 V, input 2 by ±5 V, and so on. Here we restrict α_i to 1: the upper bound of each control input is +1 and the lower bound is −1, i.e. ±1 for all i.

Problem statement (though we have already explained it, we now write it formally): it is desired to apply an optimal control input u*(t) that satisfies the constraints |u_i(t)| ≤ 1, i = 1, 2, …, m (with the limits made the same for all inputs), and drives the system ẋ(t) = Ax(t) + Bu(t) from the initial state x(t₀) = x₀ to the desired final state
x(t_f) = x_f in minimum time. In other words, the optimization problem is described by the performance index

J = ∫_{t₀}^{t_f} 1 dt = t_f − t₀;

if t₀ = 0, minimizing J means minimizing t_f. That is our problem.

Now, you see that the integrand of the performance index is 1; it is not a function of u. In other words, our problem is: starting from the initial state x(t₀) = x₀, find the optimal path such that the performance index is minimized, in the sense that the state is transferred by the optimal control signal to the desired state x(t_f) = x_f in minimum time. We call such a controller the minimum-time control, and this the minimum-time control problem. Note that if the final state is the origin of the state space, x(t_f) = 0, we will be dealing with the time-optimal regulator problem.

Now, how do we solve this? If you recollect our earlier optimization problems: we were given the description of the system and a corresponding performance index, and our problem was to find the optimal control law u(t) such that the performance index is minimized. We used the technique of calculus of variations to solve that problem.
Please recall all the steps we considered earlier: first we formed what is called the Hamiltonian function. But in that problem we had not considered any constraint; u was unconstrained, and the state also. Here a constraint on u is coming, so how do we solve it? Basically the procedure is similar; only one step, which differs from the earlier procedure, has to be looked at very carefully.

First, we define the Hamiltonian function. Recall that we form the Hamiltonian by taking the integrand of the performance index, here simply 1, plus the costate vector λ(t) transposed multiplied by the right-hand side of the dynamic equation ẋ = Ax + Bu:

H(x(t), u(t), λ(t), t) = 1 + λᵀ(t)[Ax(t) + Bu(t)].

Since this is a time-optimal problem, the first part, the integrand of the performance index, is 1. Let us call this equation (1). Now, once we have formed the Hamiltonian, the resulting conditions for optimality can be derived using calculus of variations, similar to the earlier case.
So, what do we get from the calculus of variations? The state equation,

ẋ(t) = ∂H/∂λ = Ax(t) + Bu(t),

which we derived earlier; let us call this equation (2). Then the costate equation,

λ̇(t) = −∂H/∂x = −Aᵀλ(t);

note the transpose, because we are differentiating λᵀ(t)Ax(t) with respect to x. Let us call this equation (3). If you solve it, the costate solution is

λ(t) = e^{−Aᵀt} λ(0).

Now look at the more important condition, ∂H/∂u(t) = 0. From the Hamiltonian, ∂H/∂u = Bᵀλ(t), so the condition reads Bᵀλ(t) = 0. But you see this does not contain u(t), so you cannot find u from it. Why does it not contain u? Because the Hamiltonian is a linear function of u; there is no quadratic term in u. So we cannot handle this condition in the usual way.
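The costate solution λ(t) = e^{−Aᵀt}λ(0) can be evaluated numerically; a minimal sketch, using a truncated power series for the matrix exponential (a simple stand-in for a library routine) and an assumed example A and λ(0) of my choosing for illustration:

```python
import numpy as np

def expm_series(M, terms=30):
    # e^M approximated by sum over k = 0..terms-1 of M^k / k!.
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Assumed example: A of a double integrator; A^T is nilpotent here,
# so e^{-A^T t} = I - A^T t exactly and the series terminates early.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
lam0 = np.array([1.0, 2.0])  # assumed initial costate
t = 3.0

lam_t = expm_series(-A.T * t) @ lam0
print(lam_t)  # [1.0, 2.0 - 3.0*1.0] = [1.0, -1.0]
```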
So, the solution of the costate equation is λ(t) = e^{−Aᵀt}λ(0), and since H(t) is linear in u(t), it clearly follows that to minimize H(t) we have to select u(t) such that the term λᵀ(t)Bu(t) is as small as possible. That is our requirement, and this is Pontryagin's minimum principle. We have to minimize H, but we cannot minimize it by setting ∂H/∂u = 0, because no u term survives there; the Hamiltonian is a linear function of u, not a quadratic or otherwise nonlinear function. That is why we must use Pontryagin's minimum principle here: in order to minimize H, select u so that λᵀ(t)Bu(t) is as small as possible.

It may be noted that the control signal u(t) affects H only through this term, λᵀ(t)Bu(t). Taking the transpose of the costate solution on both sides, λᵀ(t) = λᵀ(0)e^{−At}, so this term is equal to λᵀ(0)e^{−At}Bu(t).
So, the important result is Pontryagin's minimum principle (PMP), which states that the Hamiltonian function H(t) should be minimized over all possible control inputs. What does this mean? It permits including the constraint. H is a function of x(t), u(t), λ(t), and t, and the optimal control law u*(t) (the star indicates optimal quantities) must satisfy

H(x*(t), u*(t), λ*(t), t) ≤ H(x*(t), u(t), λ*(t), t)

for all admissible inputs, u(t) ∈ U_a, where U_a stands for the set of admissible controls, that is, control inputs restricted by the constraint, and for all t ∈ [t₀, t_f] (or [0, t_f] if t₀ = 0). Let us call the costate solution equation (4) and this inequality equation (5). Now note what our Hamiltonian function becomes when we write it in terms of the initial costate: in place of λᵀ(t) we substitute λᵀ(0)e^{−At}, and the rest is as it is,

H = 1 + λᵀ(0)e^{−At}[Ax(t) + Bu(t)].

Let us call this equation (6).
So, now from this equation you can see when the Hamiltonian function will be minimized. I write it in two parts:

H = 1 + λᵀ(0)e^{−At}Ax(t) + λᵀ(0)e^{−At}Bu(t),

and we have a choice only with u(t). Check the dimensions: λᵀ(0) is 1 row and n columns, e^{−At} is n × n, and B is n × m, so λᵀ(0)e^{−At}B is a row vector with 1 row and m columns, while u(t) has m rows and 1 column; their product is a scalar. So from (5) and (6) we can write each component of the optimal control:

u_i*(t) = +1  when  λᵀ(0)e^{−At}b_i < 0,
u_i*(t) = −1  when  λᵀ(0)e^{−At}b_i > 0,

where b_i is the i-th column of B (B has n rows and m columns; the first column is b₁, the second b₂, and the i-th is b_i). Note that λᵀ(0)e^{−At}b_i is (1 × n)(n × n)(n × 1), a scalar quantity. Considering each element: if the i-th element of the row vector λᵀ(0)e^{−At}B is less than zero, the corresponding i-th element of u must be positive, and if it is greater than zero, the corresponding element of u must be negative.
So, ultimately the value of the Hamiltonian is reduced, minimized, in this way: each u_i switches to either +1 or −1. Look at the product: λᵀ(0)e^{−At}B is one row with m columns, and u is m rows (u₁, u₂, …, u_m), so the i-th element of the row vector multiplies the i-th element of u. If that i-th scalar is negative, the i-th element of u must be positive; if it is positive, u_i must be negative; the resulting product is then always negative and the Hamiltonian is as small as possible. We could make this term arbitrarily large in magnitude, but we have the restriction that the input must lie within ±1; that is why the control sits at the limits.

In short, we can write both cases together using the signum function:

u_i*(t) = −sgn(λᵀ(0)e^{−At}b_i),

and because this is a scalar we can equally write it as −sgn(b_iᵀe^{−Aᵀt}λ(0)), where e^{−Aᵀt}λ(0) is nothing but λ(t); see the solution of the costate equation we just found. If the argument of the signum is greater than zero, the function is +1, and since it is preceded by a minus sign, u_i is negative.
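The lecture defers a worked example to the next class, but the switching law u_i*(t) = −sgn(b_iᵀλ(t)) is easy to sketch now. Below, the double integrator with b = [0, 1]ᵀ and the initial costate values are my own assumptions for illustration; for that system λ₂(t) = λ₂(0) − t·λ₁(0) is linear in t, so the control switches sign at most once.

```python
def sgn(v):
    # Signum: +1, 0, or -1.
    return (v > 0) - (v < 0)

# Assumed example: double integrator x1' = x2, x2' = u, |u| <= 1.
# A^T is nilpotent, so lambda(t) = e^{-A^T t} lambda(0) gives
# lambda2(t) = l2 - t*l1.
def u_star(t, l1, l2):
    # b = [0, 1]^T, so b^T lambda(t) = lambda2(t), and u* = -sgn(b^T lambda(t)).
    return -sgn(l2 - t * l1)

l1, l2 = 1.0, 2.0   # assumed lambda(0); lambda2(t) crosses zero at t = 2
print([u_star(t, l1, l2) for t in (0.0, 1.0, 3.0)])  # [-1, -1, 1]
```

The single switch from −1 to +1 near t = 2 is exactly the bang-bang behaviour described below.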
So, u_i is negative when this quantity is positive; if this quantity is negative, the signum function gives −1 and u_i is positive. So ultimately we can write

u_i*(t) = −sgn(b_iᵀλ(t));

let us call this equation (7). The form of control defined by equation (7) is referred to as bang-bang control, since the control switches between its two limits. In the next class we will take one simple example and explain how this control action is used for solving the time-optimal control problem. We will stop here now.