Hello everyone. Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. For a while now we have been motivated by the desire to drive uncertain autonomous systems, such as the SpaceX satellite you see in the background, and in that quest we have already seen how to analyze, and lately how to design, controllers for uncertain dynamical systems. I hope these adaptive controllers will help you design and develop algorithms for the practical dynamical systems in your own research areas. What we were doing until last time was starting to look at the tuning function method, and to understand the tuning function method we had to talk about the notion of control Lyapunov functions. So that is what we did. We started the last session with a control affine system, which is a system that is linear in the control: it may have multiple inputs, but it is linear in those inputs, with a drift vector field f0 and control vector fields fi. We then defined a control Lyapunov function, which is similar to a Lyapunov function except that the control term appears in the derivative. Therefore, when we talk about negative definiteness, we take the infimum over all possible control values (values, in fact, not signals), compute the derivative, and require it to be strictly negative for nonzero values of the state in a local domain B_r. There were two alternative ways of stating this: one used the infimum; the other said that whenever the control terms contribute nothing, the drift term has to contribute negative values. Along with this came the small control property, which is essentially required to ensure continuity of the control near the equilibrium.
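To fix notation for this recap, the control affine system and the two equivalent CLF conditions can be written out as follows; this is my rendering of the spoken description, with B_r the local domain mentioned above.

```latex
% Control affine system: drift f_0, control vector fields f_i
\dot{x} \;=\; f_0(x) + \sum_{i=1}^{m} f_i(x)\,u_i .

% V (smooth, positive definite) is a control Lyapunov function on B_r if
\inf_{u \in \mathbb{R}^m}\; \frac{\partial V}{\partial x}\Big(f_0(x) + \sum_{i=1}^{m} f_i(x)\,u_i\Big) \;<\; 0
\qquad \forall\, x \in B_r \setminus \{0\} ,

% equivalently: wherever the control terms contribute nothing,
% the drift must contribute negative values:
\frac{\partial V}{\partial x} f_i(x) = 0 \;\;\forall\, i
\;\;\Longrightarrow\;\;
\frac{\partial V}{\partial x} f_0(x) < 0 ,
\qquad x \in B_r \setminus \{0\} .
```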
With these, we were able to state a necessary and sufficient condition, the Artstein-Sontag theorem, which says that having a control Lyapunov function is equivalent to having an almost C-infinity stabilizer: if V is a control Lyapunov function, then the system satisfies the small control property if and only if there is an almost C-infinity stabilizer. The "if and only if" is what was rather important for us. In one direction, if you know that a function V is a control Lyapunov function, then the Artstein-Sontag universal formula actually gives us a way of constructing an almost C-infinity stabilizer, meaning that the control so constructed is smooth everywhere except at the origin, the equilibrium, where it is at least continuous. So there is a nice explicit expression for such a stabilizer. The other direction is also very important for us to understand, because it is what we have been doing in the past: if you choose a V which is positive definite and smooth, take its derivative, and are able to come up with a control law such that V dot becomes negative definite, then this automatically also means that the V you chose to begin with was a control Lyapunov function. This is what we have been doing: designing control laws, and update laws for the estimates, using a suitable candidate Lyapunov function, and those candidates were in fact control Lyapunov functions. So instead of choosing whichever control our intuition produced, just as in the example where we took the system x dot = -x^3 + u and V = x^2/2, where after taking the derivative we somehow have an intuitive idea of what the
control should be, so we just choose u = -x. This is in fact a smooth controller everywhere, including the origin, so it confirms that this V is a control Lyapunov function. Instead of making this particular choice of controller, u = -x, we could have used the information that V is a CLF to construct a controller via the Artstein-Sontag formula. So the idea is that we can either use the Artstein-Sontag formula to construct a controller, or use our own intuition with a suitable choice of control Lyapunov function; these are essentially equivalent methods. That is really the idea of what we did last time.

What we want to do now is continue this discussion toward adaptive control Lyapunov functions, because, if you notice, until now we did not actually talk about any uncertainties in the system at all; we only talked about a standard nonlinear control system. In this course, of course, we are in a situation where there are uncertainties. So let me first mark what we did last time: that was lecture 9.3, control Lyapunov functions. Now we are on lecture 9.4, and this is where we start with the definition of an adaptive control Lyapunov function. We already saw what it means for a system to be globally adaptively asymptotically stabilizable; I want to remind you of this because it is going to show up in our discussion soon. A system of the form (1.1) is said to be globally adaptively asymptotically stabilizable if you can find a control law, that is, a feedback law which depends on the state and the parameter estimate.
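As a quick numerical aside on the recap above (this is my own sketch, not part of the lecture): for the example x dot = -x^3 + u with V = x^2/2, we can compare the hand-picked stabilizer u = -x against the control produced by Sontag's universal formula, u = -(a + sqrt(a^2 + b^4))/b with a = (dV/dx) f and b = (dV/dx) g, and u = 0 where b = 0. The Euler step and horizon are arbitrary choices of mine.

```python
import math

def f(x):            # drift of the example system: x_dot = -x**3 + u
    return -x**3

def u_intuitive(x):  # the hand-picked stabilizer from the lecture
    return -x

def u_sontag(x):
    # Sontag's universal formula for x_dot = f(x) + g(x)*u with g = 1:
    # a = (dV/dx)*f = -x**4,  b = (dV/dx)*g = x,  for V = x**2 / 2
    a, b = x * f(x), x
    if b == 0.0:
        return 0.0
    return -(a + math.sqrt(a**2 + b**4)) / b

def simulate(controller, x0=2.0, dt=1e-3, T=10.0):
    x = x0
    for _ in range(int(T / dt)):   # forward-Euler integration
        x += dt * (f(x) + controller(x))
    return x

x_int = simulate(u_intuitive)
x_son = simulate(u_sontag)
print(x_int, x_son)
```

Both closed loops drive x toward the origin; the Sontag control is smooth away from x = 0 and continuous at 0, matching the "almost C-infinity" property discussed above.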
You also need a tuning function tau, again depending on the state and the parameter estimate, and an adaptation gain, so that you have an adaptation law and a control law such that both the x and theta hat states are globally bounded and x goes to 0 as t goes to infinity. That is what it means for the system (1.1) to be globally adaptively asymptotically stabilizable.

So, after this rather long-ish introduction, let us begin with Definition 2, which is that of an adaptive CLF. What is an adaptive CLF, or aCLF? It is a smooth function, now of both x and theta, because there is a parameter involved. Remember that theta is a constant; we are talking about theta and not theta hat, so theta is just a constant parameter. A function Va is called an adaptive CLF for the system if there exists some positive definite gain Gamma such that for every theta, a vector in R^p, Va is a CLF for a modified system (1.2). What is the modification? It is one extra term, and we will see why this particular modification and not any other. So: Va, a function of the state and the constant parameter, is an adaptive CLF if there exists a positive definite gain Gamma such that for every value of the parameter theta, Va is in fact a CLF for the modified system (1.2). That is it; CLF and aCLF are just related through this modified system. Like I said, we will soon see why this particular modification and nothing else.

Now, what does it imply for Va to be a CLF of this modified system? By our definition (we have assumed u is a real number for now), we need the infimum over the control values of Va dot, that is, of del Va / del x times the modified dynamics, to be negative. This has to be made more
precise: it needs to hold for all x not equal to zero, that is, for all non-equilibrium values. At x = 0 we do not need it to hold, because Va dot may be zero at x = 0. You might also ask why the del Va / del theta term is missing; that is because del Va / del theta times theta dot is zero, since theta is a constant. Just remember that.

Then we have a nice little equivalence result, which essentially states the equivalence of an adaptive CLF for the original system and a CLF for the modified system. Statement 1 says that there exist a feedback alpha, a function Va, and a gain Gamma such that alpha globally asymptotically stabilizes (1.2) at the origin for all theta in R^p with respect to the Lyapunov function Va. We call Va a Lyapunov function here rather than a control Lyapunov function because the feedback alpha is already specified; there is no longer a free control. Statement 2 is simply that there exists an aCLF Va for (1.1). We are saying that statements 1 and 2 are equivalent. The rest of the theorem statement is separate; let us not worry about it yet. The equivalence is between the existence of an aCLF Va for (1.1), which we have already defined, and the existence of a feedback alpha, a Va, and a Gamma such that alpha globally asymptotically stabilizes the equilibrium at zero for all theta with respect to the Lyapunov function Va. It is the same function Va; in one statement it is an aCLF, and in the other it is a Lyapunov function, because there the feedback is already being implemented. So why the equivalence between 1 and 2? Again, like I said, the
equivalence is only up to this point; the remainder is a separate statement. It is not part of the equivalence, but it is important too, and we will get to it soon. Now, 1 implies 2 is fairly obvious, because the existence of an (alpha, Va, Gamma) combination implies that an inequality of the form (1.3) holds. What sort of inequality is this? The left-hand side is just the definition of Va dot along the system trajectories. Notice again that theta is still a constant (we argue for all values of theta), so there is no del Va / del theta term; the left-hand side is just del Va / del x times x dot, the only difference being that in x dot the control is substituted with alpha, and the Gamma term appears in the modified dynamics. What do we know? If an (alpha, Va, Gamma) triple is available which globally asymptotically stabilizes the origin with respect to the Lyapunov function Va, it essentially means that Va dot along the system trajectories, with alpha substituted for the control, is less than or equal to minus W, where W is some positive definite function of x, for all theta. This "for all theta" is important: just by changing theta I cannot lose asymptotic stability.

So what does (1.3) imply? It immediately implies that Va is an aCLF. Why, you might ask? For an aCLF you need that the infimum over all possible values of the control is negative for nonzero x, and here I am saying that there exists one value of the control, namely alpha, for which the derivative is negative. If there exists one value of the control for which it is negative, then it is done.
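For reference, here is the aCLF requirement being invoked, written out. The notation is my inference from the spoken description: system (1.1) is taken to be the parametric-affine form with drift f, regressor F, and input field g.

```latex
% System (1.1):
\dot{x} \;=\; f(x) + F(x)\,\theta + g(x)\,u,
\qquad \theta \in \mathbb{R}^p \ \text{constant, unknown} .

% V_a(x,\theta) smooth is an adaptive CLF for (1.1) if there exists
% \Gamma = \Gamma^{\top} > 0 such that, for every fixed \theta,
% V_a(\cdot,\theta) is a CLF for the modified system (1.2):
\dot{x} \;=\; f(x) + F(x)\Big(\theta + \Gamma\,\big(\tfrac{\partial V_a}{\partial \theta}\big)^{\!\top}\Big) + g(x)\,u ,

% i.e. (no \partial V_a/\partial\theta \cdot \dot\theta term, since \dot\theta = 0):
\inf_{u}\;\frac{\partial V_a}{\partial x}\Big[f(x) + F(x)\Big(\theta + \Gamma\big(\tfrac{\partial V_a}{\partial \theta}\big)^{\!\top}\Big) + g(x)\,u\Big] \;<\; 0
\qquad \forall\, x \neq 0 .
```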
I am already satisfying the infimum condition, because I can find one choice, alpha, for which the inequality holds. I am taking the infimum, the smallest value over all possible choices of control, and there exists the choice alpha such that the inequality holds (it is the same expression with alpha substituted for u), so the infimum cannot be larger than what you get for alpha. It is as simple as that. If you want me to write it out: the infimum over u of del Va / del x times [f + F(theta + Gamma (del Va / del theta) transpose) + g u] is less than or equal to what you get at u = alpha, which is less than or equal to minus W, which means you have what you want. Therefore Va is an aCLF for (1.1), and that is exactly statement 2: this Va is exactly the adaptive CLF.

Now, showing that 2 implies 1, that is, that the existence of an aCLF implies being able to find a stabilizing feedback alpha and a gain Gamma, is pretty obvious too, because the aCLF definition in fact contains the Gamma itself. And because Va is an aCLF for (1.1), Va is, by definition, a CLF for the modified system (1.2). Now, if you have a control Lyapunov function for a system, what do I know? I know that I can use the Artstein-Sontag formula to construct a feedback alpha. So: Va being an aCLF for (1.1) means, by definition, that it is a CLF for (1.2), and having a control Lyapunov function for any system is equivalent to being able to construct an almost smooth stabilizing feedback
alpha. So you have constructed an alpha, and the Gamma comes from the definition of the aCLF itself; the Gamma is right there in the modified system. So that is it: the two statements are equivalent. This is a very straightforward idea, not a big deal. We are saying that having an aCLF is essentially the same as having a CLF for a modified system, which should be more or less obvious from the definition, and that is what we have used, apart from the Artstein-Sontag universal formula.

Now, there is an additional statement here which is important, and we do want to try and prove it: if an aCLF exists for (1.1), then (1.1) is globally adaptively asymptotically stabilizable. In other words, the existence of an aCLF gives you an adaptive control law for the system, and note that this is an adaptive control law for the original system (1.1), not for the modified system (1.2). So having an aCLF for (1.1) means I can construct an adaptive controller for the system, and that is what we want to see now: how, and why, that is the case.

Let us look at this proof a little. If I am given this Va, then there exist a positive definite Gamma and an alpha, by Theorem 1: the existence of an aCLF implies all of that, which means you already have this sort of relationship with this Gamma and alpha. Now what we do is consider a Lyapunov function for the original system, constructed in a specific way: we take Va, but now with theta hat in place of theta. Because we are talking about adaptive control, theta is unknown, so we replace all the thetas by theta hats; all the thetas get replaced
by the theta hats. Then we add a quadratic term corresponding to the parameter error theta tilde, scaled, of course, by Gamma inverse; there is a typo on the slide here, this should be Gamma to the power minus one, the inverse of the same Gamma. All we have done is take the same Va, but in place of the unknown theta we have used the estimate theta hat, and of course we are going to prescribe an estimation law; we will do that shortly. Now we take the derivative carefully. Theta hat is no longer a constant, so we differentiate with respect to both arguments. I get del Va / del x times x dot, for the original system, not the modified one, plus del Va / del theta hat times Gamma times the tuning function tau; we do not know what the tuning function is yet, we will actually compute it. Then there is a term minus theta tilde transpose tau: it comes from theta tilde transpose Gamma inverse theta tilde dot, which equals minus theta tilde transpose Gamma inverse theta hat dot, and theta hat dot is Gamma times tau. That is what you have here.

Once you make this substitution, what I want to do is rewrite things in terms of the modified system, because the inequality I have is for the modified system. So I take F(x) theta and replace it in terms of the modified system with all the thetas becoming theta hats, that is, theta hat plus Gamma times (del Va / del theta hat) transpose, with everything else exactly the same. But because I have added this term, I also have to subtract it. So I take del Va / del x common; inside I have F(x) theta, and then I have
the subtraction of F(x) times (theta hat plus Gamma (del Va / del theta hat) transpose). The F(x) theta and minus F(x) theta hat pieces combine to give me F(x) theta tilde, and the remaining piece is written as it is. Then, of course, I still have the two terms from before, del Va / del theta hat times Gamma tau and minus theta tilde transpose tau, and those remain as they are. Now I know that the modified-system bracket, with u = alpha, is bounded by minus W(x, theta hat); again, remember that all the thetas were replaced by theta hat, so it is W(x, theta hat) and not W(x, theta), that is the only difference. Then I deal with the remaining terms. I collect the theta tilde terms; since everything is a scalar I can take transposes as I please, so I get theta tilde transpose times the bracket [(del Va / del x times F(x)) transpose minus tau]. Then I take del Va / del theta hat times Gamma common from the other pair: one term gives a plus tau (in fact, this should be a plus tau here), and the other gives (del Va / del x times F(x)) transpose with a negative sign; there is a sign error on the slide, this should be a negative sign. But notice one interesting thing: the two curly brackets contain exactly the same terms, tau and (del Va / del x F(x)) transpose. So if I choose my tuning function tau to be exactly this quantity, (del Va / del x F(x)) transpose, both of these indefinite terms vanish: both brackets become zero, and I am left with V dot less than or equal to minus W(x, theta hat), which is now only negative semidefinite. Notice why: earlier it was negative definite because theta was not a state, just a constant parameter, but now theta hat is in fact a state.
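Collecting the computation above in one place (my reconstruction from the spoken steps, in the same inferred notation; theta tilde = theta - theta hat, the update law is theta hat dot = Gamma tau, and u = alpha(x, theta hat) is the stabilizing feedback for the modified system):

```latex
V(x,\hat\theta) \;=\; V_a(x,\hat\theta) + \tfrac{1}{2}\,\tilde\theta^{\top}\Gamma^{-1}\tilde\theta,
\qquad \tilde\theta = \theta - \hat\theta .

\dot V
\;=\; \frac{\partial V_a}{\partial x}\big[f + F\theta + g\,\alpha\big]
  \;+\; \frac{\partial V_a}{\partial \hat\theta}\,\Gamma\tau
  \;-\; \tilde\theta^{\top}\tau
% add and subtract F(\hat\theta + \Gamma(\partial V_a/\partial\hat\theta)^{\top}):
\;=\; \underbrace{\frac{\partial V_a}{\partial x}\Big[f + F\Big(\hat\theta + \Gamma\big(\tfrac{\partial V_a}{\partial \hat\theta}\big)^{\!\top}\Big) + g\,\alpha\Big]}_{\le\; -\,W(x,\hat\theta)}
  \;+\; \tilde\theta^{\top}\Big[\Big(\tfrac{\partial V_a}{\partial x}F\Big)^{\!\top} - \tau\Big]
  \;+\; \frac{\partial V_a}{\partial \hat\theta}\,\Gamma\Big[\tau - \Big(\tfrac{\partial V_a}{\partial x}F\Big)^{\!\top}\Big] .

% Choosing the tuning function annihilates both brackets:
\tau(x,\hat\theta) \;=\; \Big(\frac{\partial V_a}{\partial x}\,F(x)\Big)^{\!\top}
\quad\Longrightarrow\quad
\dot V \;\le\; -\,W(x,\hat\theta) .
```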
It appears in the Lyapunov function as well, and therefore the bound is no longer negative definite in all the states; it is negative semidefinite. But W(x, theta hat) is certainly positive definite in x; otherwise I could not claim negative definiteness in x at all, and that would be ridiculous, because that is part of what it means to have an aCLF. So this W is positive definite in x for all theta hat, which means the right-hand side is certainly negative definite in x, and I will be able to show that x goes to zero by LaSalle invariance, or by Barbalat's lemma with signal-chasing arguments. You will also have bounded x and theta hat: because V dot is at least negative semidefinite, V is non-increasing over time, so the states appearing in it have to remain bounded; and because Va is positive definite (it is a CLF, so it is positive definite in x), both x and theta tilde have to remain bounded. Further, x will certainly go to zero as t goes to infinity, by the signal-chasing type of analysis.

The important thing to remember is that the additional term we introduced is a quadratic term in the parameter error, and we have essentially proved that the original system (1.1), by virtue of having an aCLF, is in fact adaptively asymptotically stabilizable. That is also why we had that funny-looking additional term in the modified system: it is there precisely so that this nice cancellation happens. Excellent.

So what have we discussed today? We started our discussion of the aCLF, the adaptive control Lyapunov function, and how it is equivalent to having a control Lyapunov function for a modified system.
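As a concrete illustration of this construction (my own hypothetical example, not from the lecture): take the scalar system x dot = theta x + u, that is, f = 0, F(x) = x, g = 1, with Va = x^2/2. Here del Va / del theta = 0, so the modified system coincides with the original; alpha(x, theta hat) = -theta hat x - k x is one stabilizing certainty-equivalence choice, the tuning function is tau = (del Va / del x) F(x) = x^2, and the update law is theta hat dot = gamma tau. The gains and integration parameters below are arbitrary choices.

```python
# Adaptive stabilization of x_dot = theta*x + u with unknown constant theta.
# V = x**2/2 + (theta - theta_hat)**2 / (2*gamma); with tau = x**2 one gets
# V_dot = -k*x**2, negative semidefinite in the (x, theta_hat) state.

theta = 2.0          # true parameter -- unknown to the controller
k, gamma = 1.0, 1.0  # feedback gain and adaptation gain
dt, T = 1e-3, 20.0   # forward-Euler step and horizon

x, theta_hat = 1.0, 0.0
for _ in range(int(T / dt)):
    u = -theta_hat * x - k * x       # alpha(x, theta_hat)
    tau = x * x                      # tuning function (dVa/dx * F)^T
    x += dt * (theta * x + u)        # plant integration (uses true theta)
    theta_hat += dt * (gamma * tau)  # adaptation law: theta_hat_dot = gamma*tau

print(x, theta_hat)
```

The state x converges to zero while theta hat stays bounded (it need not converge to the true theta), exactly the boundedness-plus-convergence conclusion of the proof above.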
We also proved that having this control Lyapunov function for the modified system means that for the original system we have an adaptive asymptotic stabilizability property: we can design an adaptive controller, that is, a feedback and an adaptation law, such that the original system has bounded states, and the x states, the system states (not the parameter error states), go to zero as t goes to infinity. Excellent. We will continue in this vein, talking more about the tuning function method, in the subsequent session. This is where we stop. Thank you.