So, this is the form of the equation. In addition we have this B q̇ term, and we are ignoring gravity in this case, so the dynamics are D q̈ + (B + C) q̇ = τ, where I am no longer writing out the dependence of D and C on q and q̇. If you recall, the controller has the form τ = D a + (B + C) v − K_d r, where I have combined the B and C terms together. Here v = q̇_d − Λ e and a = v̇. Remember that e, r, and v are vectors, while Λ is a matrix, typically diagonal and positive definite. So a = q̈_d − Λ ė. We will use these forms of a and v while substituting. Let us start with the D term, so that we are consistent: substituting the controller into the dynamics gives D q̈ + (B + C) q̇ = D (q̈_d − Λ ė) + (B + C)(q̇_d − Λ e) − K_d r. If I bring these terms to the other side, this becomes D (q̈ − q̈_d + Λ ė) + (B + C)(q̇ − q̇_d + Λ e) + K_d r = 0. Now I can write this in terms of the error. Remember e is defined as e = q − q_d, and its derivatives follow accordingly: ė = q̇ − q̇_d and ë = q̈ − q̈_d. So this becomes D (ë + Λ ė) + (B + C)(ė + Λ e) + K_d r = 0. You can see that ë + Λ ė is nothing but ṙ: if you look at the controller, where I want you to see this for clarity, r is defined as r = ė + Λ e. So let me just write that here again: r = ė + Λ e.
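The substitution above can be checked numerically. Here is a minimal sketch of my own (not from the lecture): a single-joint plant with constant inertia D and damping b, so that C = 0 and Ḋ = 0, driven by the controller τ = D a + b v − K_d r. All numeric values for D, b, Λ, and K_d are assumed for illustration.

```python
import numpy as np

# Assumed 1-DOF plant: D*qdd + b*qd = tau (constant inertia, no gravity).
# Controller: tau = D*a + b*v - kd*r, with
#   e = q - q_d,  r = edot + lam*e,  v = qd_d - lam*e,  a = qdd_d - lam*edot.
D, b = 2.0, 0.5          # inertia and damping (assumed values)
lam, kd = 3.0, 4.0       # gains Lambda and K_d (assumed values)
dt, T = 1e-3, 10.0

q, qd = 1.0, 0.0         # start away from the desired trajectory q_d(t) = sin(t)
for k in range(int(T / dt)):
    t = k * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
    e, edot = q - q_des, qd - qd_des
    r = edot + lam * e
    v = qd_des - lam * e
    a = qdd_des - lam * edot
    tau = D * a + b * v - kd * r      # feedforward + feedback control law
    qdd = (tau - b * qd) / D          # plant dynamics
    qd += qdd * dt                    # semi-implicit Euler integration
    q += qd * dt

print(abs(q - np.sin(T)))            # tracking error after the transient
```

After the initial transient dies out, the actual trajectory sits on the desired sinusoid, which is exactly the closed-loop behavior D ṙ + (b + K_d) r = 0 predicts.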
So, this becomes D ṙ + (B + C) r + K_d r = 0, and for this we need to propose a Lyapunov function. Can you think of one for such a system? It is not very intuitive, because we do not see this equation derived from an energy perspective: we do not have the form D q̈ + ... that we started off with, so there is no direct energy-based connection we can use here. Therefore we directly propose the candidate V = (1/2) rᵀ D r. Notice that when I take the derivative I will need D ṙ to be substituted, and D we know to be the inertia matrix, so I use D here, but with the state expressed through r instead of through e and ė; and note the rᵀ on the left. If you expand this candidate in terms of the error, you can see it is a quadratic form in e and ė, and it is positive definite because D is positive definite. Or, directly in terms of r: since D is positive definite, V is positive definite, and r contains all my system states, because the closed-loop system has only a first-order form in r. So I do not need ṙ to appear in this V expression at all; if the closed loop had involved r̈, then I would have needed ṙ here as well for positive definiteness. Is that part clear? That is the important concept here, based on our understanding of positive definite functions: we propose a function candidate V which is positive definite.
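A quick numerical illustration of the positive-definiteness claim, under my own assumptions: V(r) = (1/2) rᵀ D r is positive for every nonzero r and zero at r = 0 whenever D is symmetric positive definite. The matrix below is an arbitrary SPD matrix built for illustration, not a real robot inertia matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
D = A @ A.T + 0.1 * np.eye(3)   # symmetric positive definite by construction

def V(r):
    # Lyapunov candidate V(r) = 0.5 * r' D r
    return 0.5 * r @ D @ r

# V > 0 for every nonzero sample, and V(0) = 0
samples = rng.standard_normal((1000, 3))
print(all(V(r) > 0 for r in samples), V(np.zeros(3)))
```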
Okay, so now you can work out further from here. Take the derivative: remember V = (1/2) rᵀ D r, so V̇ has a factor of half on each term, but by using the symmetry property of D I can combine two of the terms. Because D is symmetric, ṙᵀ D r is the same as rᵀ D ṙ, so those two halves combine and I get V̇ = (1/2) rᵀ Ḋ r + rᵀ D ṙ. That is the property I am using here. Now this D ṙ is exactly the form that was there in my previous equation: the closed loop gives D ṙ = −(B + C) r − K_d r, and this is what I substitute. So, let me do that here where I have more space: V̇ = rᵀ(−(B + C) r − K_d r) + (1/2) rᵀ Ḋ r. Now I can regroup these terms in a way where Ḋ − 2C comes up: taking the C term over to the other part, V̇ = −rᵀ(B + K_d) r + (1/2) rᵀ(Ḋ − 2C) r. And I know that Ḋ − 2C has the skew-symmetric property, so that term is zero. So this becomes simply V̇ = −rᵀ(B + K_d) r. With B positive, since B is actually our damping matrix and so has to be positive, and K_d chosen as a positive definite matrix, you see that V̇ < 0 for all nonzero r.
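The skew-symmetry step can be verified on a concrete model. This sketch assumes the standard two-link planar arm with lumped inertia parameters a1, a2, a3 (my own illustrative values) and the Christoffel-symbol form of C; it checks numerically that Ḋ − 2C is skew symmetric, which is what kills the (1/2) rᵀ(Ḋ − 2C) r term in V̇.

```python
import numpy as np

a1, a2, a3 = 3.0, 1.0, 0.5           # lumped inertia parameters (assumed)

def Dmat(q2):
    # Standard two-link planar arm inertia matrix
    c2 = np.cos(q2)
    return np.array([[a1 + 2*a3*c2, a2 + a3*c2],
                     [a2 + a3*c2,   a2]])

def Cmat(q2, qd1, qd2):
    # Coriolis/centrifugal matrix from Christoffel symbols
    s2 = np.sin(q2)
    return np.array([[-a3*s2*qd2, -a3*s2*(qd1 + qd2)],
                     [ a3*s2*qd1,  0.0]])

q2, qd1, qd2 = 0.7, 1.3, -0.4        # arbitrary test state
s2 = np.sin(q2)
# Ddot by the chain rule: (dD/dq2) * qd2
Ddot = np.array([[-2*a3*s2*qd2, -a3*s2*qd2],
                 [-a3*s2*qd2,    0.0]])
N = Ddot - 2*Cmat(q2, qd1, qd2)
print(np.allclose(N, -N.T))          # skew-symmetric, so r' N r = 0 for any r
```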
Okay, so see the r vector: it contains both ė and Λ e, so both my e and ė are represented in r, but as a kind of linear combination. Now the following argument is important. By the Lyapunov theorem I can say that r is driven to zero as t tends to infinity, asymptotic stability for r, because so far I have represented the system in terms of r only. Given the system in r, the equilibrium r = 0, if I consider that equilibrium, has asymptotic stability based on this proof; you can go through the details of the statement of the theorem and see that this follows. So r → 0 as t → ∞. Now what does that mean? r = ė + Λ e tends to zero as t tends to infinity, which in the limit gives ė = −Λ e. If Λ is chosen to be a positive definite diagonal matrix, then ė = −Λ e decouples, and each component e_1, e_2, e_3, and so on has the same form: ė_i = −λ_i e_i, where λ_i is the i-th diagonal element of Λ. If I now write the solution for each element, it is e_i(t) = e_i(0) exp(−λ_i t), which is the solution of the differential equation ė_i = −λ_i e_i. So all the errors are then driven to zero based on this.
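To see the last step concretely, this small check (illustrative values of my own) integrates one decoupled component ė_i = −λ_i e_i with forward Euler and compares it against the closed-form solution e_i(t) = e_i(0) exp(−λ_i t).

```python
import numpy as np

lam_i, e0 = 2.0, 1.5        # diagonal gain and initial error (assumed values)
dt, T = 1e-4, 3.0

e = e0
for _ in range(int(T / dt)):
    e += -lam_i * e * dt    # forward Euler on e' = -lam_i * e

# Numerical solution should match e0 * exp(-lam_i * T) closely
print(abs(e - e0 * np.exp(-lam_i * T)))
```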
So these are the arguments one can use, and then one can say that the error is driven to zero and the error derivative will then also be driven to zero. One can see this in several ways: directly from the solution, or by saying that if r tends to zero and e tends to zero, then ė also tends to zero, from the definition of r. So with these arguments one can establish the proof that the error indeed asymptotically goes to zero. And this is happening for a controller with a tracking nature: q̇_d appears here and is not zero, since q_d is in general a function of time. So when the desired trajectory is a function of time, you can apply this controller, and the error, the difference between the actual trajectory q and the desired time-varying trajectory q_d, which is not a constant but a time-changing quantity, is driven to zero. What it means is this: if I draw the desired trajectory q_d, the actual trajectory may originate at the same point or at some other point, we do not know where the starting point will be, but it will go and settle down and follow the desired trajectory exactly. There is an initial transient which dies down, and after that the actual trajectory follows the desired one perfectly. That is what it means for the error to be driven to zero while tracking a trajectory.
Okay, so for any given trajectory which is a smooth, differentiable function of time, the controller is going to follow it: there will be a small transient which dies down, and then it will follow that trajectory very nicely. The other point I am reiterating is that this controller has feedforward terms, D a and (B + C) v, and a feedback term, K_d r. So we are using a combination of feedforward and feedback here. The feedforward terms take care of most of the control action needed to get onto the trajectory; during the initial transient the feedback term contributes more to the control action, and once the transient dies down this term also goes down. So you can afford to use a very high gain for K_d, and when you use a high gain for K_d, your trajectory will be maintained along the desired trajectory in a very nice fashion.
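The feedforward/feedback split can be observed in simulation. This sketch reuses the same assumed 1-DOF plant as before (constant inertia D, damping b, values my own) and logs the magnitude of the feedback contribution K_d r: it is large during the transient and shrinks once the transient dies down.

```python
import numpy as np

D, b, lam, kd = 2.0, 0.5, 3.0, 4.0   # assumed plant and gain values
dt = 1e-3
q, qd = 1.0, 0.0                     # start off the desired trajectory sin(t)
fb = []                              # log of the |feedback| contribution
for k in range(int(10.0 / dt)):
    t = k * dt
    e, edot = q - np.sin(t), qd - np.cos(t)
    r = edot + lam * e
    v = np.cos(t) - lam * e
    a = -np.sin(t) - lam * edot
    tau = D * a + b * v - kd * r     # feedforward terms + feedback term
    fb.append(abs(kd * r))
    qdd = (tau - b * qd) / D
    qd += qdd * dt
    q += qd * dt

# Feedback effort during the first second vs. the last second
print(max(fb[:1000]), max(fb[-1000:]))
```

During the transient the feedback carries a significant share of the torque; afterwards the feedforward terms D a + b v supply essentially all of it, which is why a high K_d gain is affordable.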
Okay, so this is how it works: because most of the input needed for the system to stay along the trajectory is already supplied by the feedforward, the system will anyway try to move along the trajectory, and whatever small errors arise from disturbances and other uncertainties in the system will be taken care of by the feedback part. That is how you are driven nicely along the trajectory. Now, if you look at the applicability of this controller, it is applicable to all kinds of rigid-body mechanical systems, and also to some flexible-body systems under certain conditions, so it gives you a very powerful tool for control implementation for mechanical systems. This is where we will stop, but there are other directions. Suppose I do not know D very well; I have some uncertainty, so the true inertia is D + ΔD, or B and C are also not completely known. How do I handle that uncertainty and make the controller robust? There are ways to do that; the basic idea is to keep track of the previous torque input and use it in some way to update things such that the tracking is achieved robustly. Then one can go further and say, look, I want to adapt to the parameters, because the parameters are changing; again there are ways to deal with that situation, where the B and C parameters change slowly in time, of course much more slowly than your system and control bandwidth. Those other controller developments are possible, but the basis of all that analysis is what we have
covered right now. As you add the small changes for robustness or adaptiveness, you will find some additional terms defined and some update law for the system parameters coming up in the process of control implementation. Of course, all those things come at additional computation cost: you can see that computing this controller will be much costlier than just computing a feedback term, in terms of your microcontroller implementation. So that is how we should think about implementation and theoretical development together, in some sense: do the theoretical development and then see whether it is really viable for your implementation. Those kinds of things need to be thought about, and that is how we make conclusions. Here we have made this conclusion for r alone first: r → 0 as t → ∞, which is what we get by applying the Lyapunov asymptotic stability theorem. Beyond that we use some arguments of our own; these are not arguments from Lyapunov. From that point onwards, the reasoning is our own: when r tends to zero, ė + Λ e tends to zero, which means ė will tend to −Λ e, and based on the solution one can conclude that each component of the error, or the entire error considered as a vector, proceeds to zero as t → ∞. So that is the idea that we have built in here.