Hello everyone. Welcome to another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. In week number seven, we are already underway with learning algorithms that are expected to drive systems such as the one we see in our background, a spacecraft orbiting the earth. What we did until last time was to look first at the matched uncertainty case, where the uncertain parameter is matched with the control input, and we learned how to use a backstepping-based Lyapunov design to construct a control law and an update law, and therefore an adaptive controller, for such a system. Alongside the stability analysis, we also spoke briefly about persistence of excitation. Having done all of that for the matched case, we moved on to the case where the parameter is unmatched with the control, and we first looked at the known-parameter case in order to understand how the backstepping design works, so we got a fair idea of what the control would be. For the known-parameter case we first looked at the model, where the parameter now appears in the first state equation rather than in the second one, where the control appears. The aim is the same: to drive x1 and x2 to zero. This is slightly different in that we are not looking at a tracking-error problem, but as I said last time, even if we were considering tracking, the methods we are using would apply almost identically.
So, to make x1 go to zero, we designed a controller assuming that x2 acts as the control for the x1 state; this is standard in the backstepping-based method. Since the parameter is known, we can apply this control to get a nice ideal x1 subsystem, and we choose the Lyapunov piece V1 accordingly. Because x2 is not identically equal to x2 desired, we do the next best thing: we drive x2 to x2 desired, and this motivates the second piece of our Lyapunov candidate function, a quadratic constructed from the error between x2 and x2 desired. We know that V2 dot then has the structure we saw, and if we choose u to be x2 desired dot minus k2 times (x2 minus x2 desired), the cross term cancels and we are left with a nice negative term: V2 dot equals minus k2 times (x2 minus x2 desired) squared. This is where we stopped, so I will mark this part as lecture 7.3. As always in backstepping, we add up the two pieces to get a candidate Lyapunov function for the consolidated (x1, x2) system, and we take the derivative carefully. V1 was just x1 squared over 2, so V1 dot is x1 times x1 dot, and V2 dot, by the analysis above, is minus k2 times (x2 minus x2 desired) squared. Now we plug in for x1 dot: since x1 dot is x2 plus theta f(x1), we get x1 x2 plus x1 theta f(x1), and the next step is to write x2 in terms of the backstepping error variable.
Let me check this step carefully; I will reproduce the expression right here. x2 desired was defined to be minus k1 x1 minus theta f(x1). Rather than substituting for x2, we simply solve this definition for the uncertain term: theta f(x1) equals minus k1 x1 minus x2 desired. Substituting that in, the first piece is the nice term minus k1 x1 squared, and combining the remaining pieces gives x1 times (x2 minus x2 desired). So, writing out the step that was missing: V dot equals minus k1 x1 squared, minus k2 (x2 minus x2 desired) squared, plus x1 times (x2 minus x2 desired). This is very similar to what we had in the previous backstepping analysis: two nice negative terms and one mixed term, which is entirely standard in backstepping. To demonstrate the similarity: earlier we had minus k1 e1 squared, minus k2 psi2 squared, where psi2 was the backstepping error variable, and then a mixed term in e1 and the backstepping error variable. Here we have the exact same scenario: one term minus k1 x1 squared, one quadratic term in the backstepping error variable, and a third, mixed term in x1 and the backstepping error variable. From this point we can apply what is called the sum-of-squares step, which just uses the inequality, written once more: x1 times (x2 minus x2 desired) is less than or equal to one half x1 squared plus one half (x2 minus x2 desired) squared. It is called sum of squares because we bound the mixed term, of the form a times b, by the squares a squared and b squared, which are exactly the terms we already have. Using this inequality, we get an expression very similar to what we had even in the matched case; so with this backstepping-type procedure you arrive at essentially the same kind of V dot as in the matched case, which is why backstepping is such a universal control design method, usable in many different contexts. With this, if we choose k1 and k2 greater than one half, V dot is negative definite, and the Lyapunov theorem immediately gives that x1 goes to zero and x2 goes to x2 desired; in fact we obtain global uniform asymptotic stability, essentially the best property we can ask for. Looking even more carefully, we can conclude global exponential stability if we so desire, since V dot is then bounded above by a negative multiple of V itself. But since we are mostly dealing with nonlinear systems, we don't usually emphasize this.
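To make the known-parameter design concrete, here is a minimal simulation sketch. The particular choices below, f(x1) = x1 squared, theta = 2, k1 = k2 = 1, the initial condition, and the step size, are my own illustrative assumptions, not values from the lecture.

```python
# Minimal simulation sketch of the known-parameter backstepping design.
# All numerical choices here are illustrative assumptions.
k1, k2 = 1.0, 1.0            # both > 1/2, as the sum-of-squares bound requires
theta = 2.0                   # parameter, known in this part of the lecture
f  = lambda x1: x1**2         # satisfies f(0) = 0, so (0, 0) is an equilibrium
df = lambda x1: 2.0 * x1      # partial derivative of f with respect to x1

x1, x2 = 0.5, 0.5
dt, T = 1e-3, 15.0
for _ in range(int(T / dt)):
    x2_des = -k1 * x1 - theta * f(x1)
    x1_dot = x2 + theta * f(x1)
    # x2 desired dot by the chain rule: -(k1 + theta * df/dx1) * x1_dot
    x2_des_dot = -(k1 + theta * df(x1)) * x1_dot
    u = x2_des_dot - k2 * (x2 - x2_des)   # cancels the cross term in V2 dot
    x1 += dt * x1_dot                      # explicit Euler step for x1
    x2 += dt * u                           # x2 dot = u
print(abs(x1), abs(x2))   # both should be close to zero
```

With k1 and k2 greater than one half, the analysis above guarantees convergence, and the simulated states indeed settle at the origin.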
Exponential convergence is, more often than not, simply not achievable for general nonlinear systems, so we leave it at that. Now, recall what we started off looking for: x1 to go to zero and x2 to go to zero. So the question always remains: did the construction of the backstepping error variable mess with our stabilization (or tracking) objective? In this case too the answer is no, under a certain assumption. What is it? That x1 goes to zero is obvious. Next, x2 goes to x2 desired, where x2 desired is minus k1 x1 minus theta f(x1). Since x1 goes to zero, x2 desired, and hence x2, goes to zero provided f evaluated at zero is zero. So we make precisely this assumption, f(0) = 0. You should think of it as an assumption, because it is a condition beyond the system data; with it, x2 goes to zero and we are done. Now, one might ask about the reasonableness of this assumption, and it is in fact reasonable. If we want x1 and x2 equal to zero to be an equilibrium of the system, how do we find equilibria? Typically we set the control to zero and look for where the right-hand side vanishes. With the control zero, the x2 equation is already zero; but for x1 dot to vanish at the origin, with x2 equal to zero, we need theta f(x1) to be zero at x1 equal to zero, and theta is a constant, nonzero unknown (if theta were zero, there would be nothing to estimate in the first place). So if f(0) is not equal to zero, then (x1, x2) = (0, 0) is not an equilibrium. And remember, we always analyze the stability of an equilibrium; it makes no sense to analyze the stability of a point that is not an equilibrium. Therefore, if we are interested in converging to (0, 0), it had better be an equilibrium of the system, and f(0) = 0 is a very reasonable assumption that ensures exactly that. So we have proved the result for the known-parameter case with this control law. One question that also arises is what this control law looks like explicitly, because it contains x2 desired dot, so we must take a derivative. Doing so, x2 desired dot is minus k1 x1 dot minus theta times the partial of f with respect to x1 times x1 dot; since x1 dot appears in both pieces, this is minus (k1 plus theta partial f by partial x1) times x1 dot, and for x1 dot we substitute x2 plus theta f(x1). The control is then u equals minus (k1 plus theta partial f by partial x1) times (x2 plus theta f(x1)), minus k2 (x2 minus x2 desired). This is a perfectly implementable controller: it is just a function of the states, no issues. But you notice that it contains the parameter, which is again something you would expect.
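As a sanity check on the derivation of this explicit control law, here is a small symbolic sketch (the use of sympy and the generic function f are my own choices, not from the lecture) verifying that substituting u into V dot reproduces exactly minus k1 x1 squared minus k2 (x2 minus x2 desired) squared plus x1 (x2 minus x2 desired):

```python
import sympy as sp

t = sp.symbols('t')
k1, k2, theta = sp.symbols('k1 k2 theta', positive=True)
x1 = sp.Function('x1')(t)
x2 = sp.Function('x2')(t)
f = sp.Function('f')(x1)                 # generic smooth f(x1)

x2_des = -k1 * x1 - theta * f            # known-parameter design
x1_dot = x2 + theta * f                  # plant: x1 dot = x2 + theta f(x1)
x2_des_dot = sp.diff(x2_des, t).subs(sp.Derivative(x1, t), x1_dot)
u = x2_des_dot - k2 * (x2 - x2_des)      # the known-parameter control

V = x1**2 / 2 + (x2 - x2_des)**2 / 2     # V = V1 + V2
V_dot = sp.diff(V, t).subs({sp.Derivative(x1, t): x1_dot,
                            sp.Derivative(x2, t): u})
target = -k1 * x1**2 - k2 * (x2 - x2_des)**2 + x1 * (x2 - x2_des)
print(sp.simplify(V_dot - target))       # prints 0: the expressions agree
```

The check is independent of the particular f, which matches the claim that the structure of V dot comes from the backstepping construction itself, not from the specific nonlinearity.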
In fact, x2 desired also contains the parameter, so the parameter appears in several different locations: in x2 desired, in the feedback gain of the explicit control, and through the x1 dot substitution. Unlike the matched case, where the parameter appeared in only one place, here it shows up all over, and as you will see subsequently, this is what complicates the adaptive controller design. So let us start on the adaptive control design for this same system. What happens in the unknown-parameter case? Of course, I can no longer use the previous x2 desired: I think of x2 as the control here, and the old x2 desired involved theta, which I do not know, so there is no question of implementing it. The first step is therefore the standard certainty-equivalence one. Looking at the first subsystem, x1 dot equals x2 plus theta f(x1), with x2 playing the role of what one would call a pseudo-control, we set x2 desired equal to minus k1 x1 minus theta hat f(x1), where theta hat is an estimate of theta, keeping the nice negative feedback term. When x2 is in fact equal to x2 desired, we get x1 dot equals minus k1 x1 plus theta tilde f(x1), where theta tilde is the parameter error. This is unusual: we already pick up a theta tilde in the first piece itself, and this is where things start getting complicated. So what do we do? Earlier we took V1 as one half x1 squared, but that is no longer sufficient, because we have a new state. We do what we typically do in adaptive control: we add a quadratic term in the parameter error, taking V1 as one half x1 squared plus one over (2 gamma) theta tilde squared, and we use it to construct an update law. Taking the derivative, V1 dot is x1 x1 dot plus (1 over gamma) theta tilde theta tilde dot, and since theta is constant, theta tilde dot is determined entirely by theta hat dot. Plugging in for x1 dot gives minus k1 x1 squared plus theta tilde x1 f(x1), and we retain the parameter-error term as it is. These theta tilde terms can be combined nicely, and if we choose theta hat dot equal to gamma x1 f(x1), which cancels the theta tilde term out (the best we can do, as we have discussed several times before), we are left with V1 dot equals minus k1 x1 squared, which is negative semidefinite. Try to remember the difference from the matched case. This is the ideal circumstance, because x2 is assumed to be exactly x2 desired; yet even in the ideal circumstance, V1 dot comes out only negative semidefinite, whereas in the matched case the expression looked the same but there was no theta tilde state, so V1 dot came out negative definite. So things are more complicated in the unmatched case. Now, of course, x2 is not identically equal to x2 desired, so we do the best we can: we try to have x2 track x2 desired, and for that we introduce a quadratic term in the backstepping error, in the form of V2, just as before, since we are trying to drive the backstepping error to zero. What we would want to choose for the control u is again the same as before, cancelling the appropriate terms. But remember that x2 desired now contains theta hat, which is another state, no longer the constant theta whose derivative is zero. So when we take the derivative of x2 desired, that is, compute x2 desired dot for its implementation, the first term is minus k1 x1 dot, which is fine.
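Before finishing the computation of x2 desired dot, it may help to see this first adaptive step in isolation. The sketch below simulates the ideal circumstance x2 = x2 desired, so x1 dot = minus k1 x1 plus theta tilde f(x1), together with the update law theta hat dot = gamma x1 f(x1); the choices of f, gains, and initial values are my own illustrative assumptions. Consistent with V1 dot being only negative semidefinite, x1 converges to zero while theta hat need not converge to the true theta.

```python
# Sketch of the first adaptive step under the ideal assumption x2 = x2_des.
# All numerical choices here are illustrative assumptions.
k1, gamma = 1.0, 1.0
theta = 2.0                          # true parameter, unknown to the update law
f = lambda x1: x1**2                 # f(0) = 0

x1, theta_hat = 1.0, 0.0
V0 = 0.5 * x1**2 + (theta - theta_hat)**2 / (2 * gamma)   # initial V1
dt, T = 1e-3, 20.0
for _ in range(int(T / dt)):
    x1_dot = -k1 * x1 + (theta - theta_hat) * f(x1)   # ideal x1 dynamics
    theta_hat_dot = gamma * x1 * f(x1)                # the update law
    x1 += dt * x1_dot
    theta_hat += dt * theta_hat_dot
V = 0.5 * x1**2 + (theta - theta_hat)**2 / (2 * gamma)    # final V1
print(x1, theta_hat, V <= V0)   # x1 near zero; V1 has not increased
```

Notice in the printout that theta hat settles at some value that generally differs from theta: the Lyapunov argument only guarantees boundedness of the parameter error, not its convergence.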
The next term, by the product rule, is minus theta hat dot times f(x1); plugging in the update law, theta hat dot equals gamma x1 f(x1), this term becomes minus gamma x1 f squared (x1), coming purely from the derivative of the estimate. The remaining product-rule term is minus theta hat times f dot (x1), and f dot (x1) is the partial of f with respect to x1 times x1 dot, so there is an x1 dot here in addition to the one in the first term. Collecting terms, x2 desired dot equals minus (k1 plus theta hat partial f by partial x1) times x1 dot, minus gamma x1 f squared (x1). Compare with the known-parameter control law, where we had minus (k1 plus theta partial f by partial x1) times x1 dot: theta is simply replaced by theta hat, and the extra term comes from the derivative of the estimate. Now we of course substitute for x1 dot as well, because we want to write out the whole expression. And this is where we must ask whether the control is implementable or not. In the adaptive case, x2 desired itself is certainly implementable; that is exactly why we introduced theta hat into x2 desired, since we typically assume all the states are known and only theta is unknown. So in the control, the x2 minus x2 desired part is implementable, but there is a problem with x2 desired dot. Why? Because when we took the derivative carefully, everything was implementable up to the point where we substituted for x1 dot: the derivative of x1 shows up in the control law, and x1 dot equals x2 plus theta f(x1), so through it we reintroduce the unknown theta. This is where the implementation of the controller runs into an issue, and we highlight it: due to this, the control u is not implementable, because it contains theta, which is unknown. So what do we need to do? The obvious thing: every time an unknown appears, the certainty-equivalence principle tells us to replace it with an estimate. But the key problem to remember is that we cannot use theta hat for this, since theta hat dot is already specified. In the Lyapunov analysis, the theta tilde squared term in V1, together with the update law we chose, cancels only the original theta tilde term; it will not cancel the additional theta tilde term that arises from this change in the control. In the ideal circumstance that extra term would go away, but in the adaptive circumstance the way we cancel parameter-error terms is precisely through the update law for theta hat dot, and that law is already committed. Now, if we are unable to cancel this extra parameter-error term, we are left with a term of indefinite sign in the Lyapunov function derivative.
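To see the difficulty symbolically, here is a small sympy sketch (with f(x1) = x1 squared assumed for illustration; the tool choice is mine, not the lecture's) confirming that once the update law and the true plant dynamics are substituted into x2 desired dot, the unknown theta reappears in the expression:

```python
import sympy as sp

t = sp.symbols('t')
k1, gamma, theta = sp.symbols('k1 gamma theta', positive=True)
x1 = sp.Function('x1')(t)
x2 = sp.Function('x2')(t)
th_hat = sp.Function('th_hat')(t)        # the estimate theta hat
f = x1**2                                # illustrative f with f(0) = 0

x2_des = -k1 * x1 - th_hat * f           # certainty-equivalence x2 desired
x2_des_dot = sp.diff(x2_des, t)

# substitute the update law and the true plant dynamics for x1 dot
expr = x2_des_dot.subs({sp.Derivative(th_hat, t): gamma * x1 * f,
                        sp.Derivative(x1, t): x2 + theta * f})
print(sp.expand(expr))
print(theta in expr.free_symbols)        # True: the unknown reappears
```

Every term of x2 desired dot was implementable until x1 dot was substituted; the expanded expression makes the reintroduced theta explicit.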
That is a problem, and therefore we need to introduce a new estimate, mu hat, for the very same quantity theta; this mu hat is what replaces the reappearing theta. We then construct an appropriate Lyapunov function, and we will look at that construction and the subsequent analysis in the upcoming session. So, what did we look at today? We continued our analysis of the unmatched case, which, as we can see, is turning out to be much more involved than the matched case. We saw what the differences are: x2 desired starts to contain the unknown parameter; as we move to the adaptive case, where theta is of course unknown, we land in some trouble in defining the update laws; and in the end we reach a situation where the control, which contains x2 desired dot, contains theta again. The parameter seems to reappear, and the only way we are able to resolve this issue at this stage is by having two estimates for the same unknown parameter theta. This is in fact quite standard in adaptive control; in many scenarios we do what is called overparameterization. We will talk about this, and about how this unmatched-case backstepping adaptive control will happen, in the subsequent sessions. See you folks again. Thank you.