Hello, and welcome to another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sakuma from Systems and Control, IIT Bombay. We are now well into the seventh week of the course, and we have this very nice background image of a SpaceX satellite orbiting the Earth; we are well underway in learning how to design algorithms that drive systems such as these. Last time we started discussing the backstepping approach for the unmatched parameter case, in which the unknown parameter appears in the state equation that carries no control input. We began, as always, with the known-parameter case, and to simplify the treatment we took stabilization as the objective, that is, we simply want both states to go to zero. As we noted last time, nothing essential changes if you instead want the states to track a reference trajectory. In the known-parameter case we constructed two Lyapunov candidate functions, one for the first state and one for the second, where x2-desired is the desired value of the second state corresponding to the first. Taking derivatives, we showed that V2 dot looks something like this, and for the known case we can prescribe this kind of control law. We also computed the derivative of x2-desired, and the control expression looks something like this: because x1 dot appears in it, theta appears in the control expression as well.
Since theta is known in this case, that is not a problem: we implement this control and carry out the Lyapunov analysis with V = V1 + V2. Once the control is applied, we are left with two nice negative quadratic terms and one mixed term, and then we use the standard sum-of-squares bound, which I hope by now we all understand well, to go from the equality here to the inequality here. We can then claim that V dot is negative definite provided k1 and k2 are greater than one half. With this analysis, a standard Lyapunov theorem gives us that the states are stable at the origin and reach the origin asymptotically; that is global uniform asymptotic stability, and in this specific situation in fact global exponential stability. So we can prove that the backstepping states converge: x1 goes to 0 and x2 goes to x2-desired. From the expression for x2-desired, together with the assumption f(0) = 0, which we already argued is reasonable, x2-desired also goes to 0, and hence x2 goes to 0 as well. Now, when we moved to the unknown case, this expression for the control created a problem, because theta appears in it, so the entire approach is no longer viable. We therefore had to start differently: we specified the desired x2 using theta hat. This is still certainty equivalence, but applied starting at the pseudo-control itself; x2 plays the role of a pseudo-control, and at the pseudo-control level I already have to use an estimate, because that is where the unknown parameter appears.
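As an aside (my own minimal sketch, not from the lecture): for the scalar system x1 dot = x2 + theta f(x1), x2 dot = u, the known-parameter backstepping law above can be simulated directly. The choice f(x1) = x1^2, the gains, and the initial conditions here are illustrative assumptions, with k1, k2 > 1/2 as the analysis requires and f(0) = 0.

```python
# Known-parameter backstepping for
#   x1' = x2 + theta*f(x1),  x2' = u
# with the illustrative choice f(x1) = x1**2 (any smooth f with f(0)=0 works).
theta = 2.0          # known parameter
k1, k2 = 2.0, 2.0    # gains; the Lyapunov analysis needs k1, k2 > 1/2

f  = lambda x1: x1**2
df = lambda x1: 2.0*x1   # derivative of f w.r.t. x1

def simulate(x1=0.5, x2=0.0, dt=1e-3, T=10.0):
    """Forward-Euler simulation of the closed loop."""
    for _ in range(int(T/dt)):
        x2d     = -k1*x1 - theta*f(x1)          # pseudo-control (desired x2)
        x1dot   = x2 + theta*f(x1)
        x2d_dot = -(k1 + theta*df(x1))*x1dot    # exact derivative: theta is known
        u       = x2d_dot - k2*(x2 - x2d)       # backstepping control
        x1 += dt*x1dot
        x2 += dt*u
    return x1, x2

x1f, x2f = simulate()
print(abs(x1f), abs(x2f))  # both should decay toward zero
```

In the closed loop, the error z2 = x2 - x2d obeys z2' = -k2 z2 exactly, and x1' = -k1 x1 + z2, which is why the convergence here is exponential.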
If x2 were in fact x2-desired, then x1 dot would satisfy this equation, with theta tilde being the parameter error. We then proposed this Lyapunov function for the first state, which now contains the theta tilde squared term, and we carried out the analysis, again assuming that x2 is identically equal to x2-desired. Let me stress that this is a big assumption: x2-desired is just a pseudo-control, not the real control. Under it, everything goes through, because we choose theta hat dot to cancel the theta tilde term, and we obtain a V1 dot that is negative semidefinite. In the known case this derivative was actually negative definite; here, because there is an unknown parameter, V1 dot is only negative semidefinite, although the expression is otherwise the same, all still assuming that the pseudo-control is the real control. We then went to the second state, where V2 is the same as before; V1 changed, but V2 has not changed yet, and I emphasize the word "yet". We get the same V2 dot, and we want to prescribe the same control. But now the expression is slightly different, because x2-desired contains theta hat and not theta itself. So let us look at x2-desired dot: since x2-desired is minus k1 x1 minus theta hat f(x1), its derivative has the derivative of minus k1 x1, which is minus k1 x1 dot, and the derivative of theta hat f(x1) by the product rule, which itself gives two terms: the derivative of the first factor times the second, and the first factor times the derivative of the second.
When we differentiate the second factor we again get an x1 dot; we already had an x1 dot from the first term, and we get another one here, and these get clubbed together to give this term, while this other term stands on its own. Of these two distinct terms, this one is implementable, no problem. But x1 dot contains the unknown theta, so the other term is not implementable, and hence the control u as written, the actual control, cannot be implemented. Of course we want to replace theta with an estimate, but we cannot simply replace it with theta hat again. I encourage you to try this erroneous step: put theta hat into u here, take V = V1 + V2, and carry out the Lyapunov analysis. Remember that V1 already contains the theta tilde term. The problem you will run into is this: you have already chosen theta hat dot, and replacing theta by theta hat in this term creates a new theta tilde term which you can no longer cancel, because the theta tilde term you had earlier was already cancelled by that choice of theta hat dot. There are no free terms left to cancel this error. So how do we resolve this issue? That is where today's lecture, lecture 7.4, comes in.
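To make the obstruction concrete, here is my compact restatement of the argument in LaTeX, using the same symbols as the slides:

```latex
% Naive certainty equivalence: replace \theta by \hat\theta inside
% \dot{x}_{2,\mathrm{des}}, i.e. use \dot{x}_1 \approx x_2 + \hat\theta f(x_1).
% With u = \dot{\hat{x}}_{2,\mathrm{des}} - k_2\,(x_2 - x_{2,\mathrm{des}}),
% the Lyapunov derivative \dot V picks up the leftover cross term
\bigl(x_2 - x_{2,\mathrm{des}}\bigr)\,\tilde\theta\, f(x_1)
   \Bigl(k_1 + \hat\theta \frac{\partial f}{\partial x_1}\Bigr),
% which cannot be cancelled: the update law
% \dot{\hat\theta} = \gamma\, x_1 f(x_1)
% has already been spent cancelling the \tilde\theta-term coming from \dot V_1.
```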
I know that was a rather long recap of the previous lecture, but since this part is slightly more involved, I wanted to repeat what we did. The steps are simple; they may look confusing at first, but if you revise them you will see they are simple steps with a little more careful bookkeeping. So let us look at this. The expression x2-desired dot contains theta, so of course we cannot implement it. We replace theta with a new estimate, because, as I said, the old estimate always leaves one theta tilde term that cannot be cancelled anymore; no terms from V1 dot remain to cancel it, since theta hat dot has already been chosen (and again, please do try this yourself). So we use a new estimate, mu hat, in place of theta. If you rewrite the result in terms of the earlier x2-desired dot, the implementable quantity, call it x2-desired-hat dot, becomes x2-desired dot plus mu tilde times this term, where mu tilde is defined as theta minus mu hat: wherever there was a theta, we have put in mu hat, and the difference is exactly mu tilde multiplying that term. We then choose a different V2: instead of just this term, we add a term corresponding to the new parameter error. So for the same parameter, we are creating another Lyapunov candidate function; for this special unknown-parameter situation, we construct a different Lyapunov function.
Now, this part was already there; the new term is here. Why? Because we introduced a new estimate for the same parameter theta. That is what is mentioned here: the 1/(2 sigma) mu tilde squared term in V2 corresponds to overestimation, also known as overparameterization, meaning that we use multiple estimates for the same parameter. So now let us carefully do the analysis: take V = V1 + V2 and compute V dot. Remember, earlier V1 dot was computed assuming that x2 is exactly x2-desired; we cannot do that anymore. So let me write out what x1 dot is: x1 dot is x2 plus theta f(x1), and x2 dot is u. And what did we choose as u? Careful: u is not x2-desired dot itself; u is built from x2-desired-hat dot. Let me carefully write all the terms here so that we have everything at our disposal. We have x2-desired equal to minus k1 x1 minus theta hat f(x1), and u equal to x2-desired-hat dot minus k2 times (x2 minus x2-desired). And x2-desired-hat dot is nothing but x2-desired dot plus mu tilde times f(x1) times (k1 plus theta hat del f by del x1); I am simply copying it over here. So we have all the ingredients: u is written in terms of x2-desired-hat dot, x2-desired-hat dot in terms of x2-desired dot, and x2-desired itself is given here. Now I take the derivatives diligently and complete the analysis.
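For reference, the ingredients just listed can be collected in one place; this is my own LaTeX transcription, with theta tilde = theta minus theta hat and mu tilde = theta minus mu hat:

```latex
\begin{aligned}
x_{2,\mathrm{des}} &= -k_1 x_1 - \hat\theta\, f(x_1),\\
\dot{\hat{x}}_{2,\mathrm{des}}
  &= -\Bigl(k_1 + \hat\theta\frac{\partial f}{\partial x_1}\Bigr)
     \bigl(x_2 + \hat\mu\, f(x_1)\bigr) - \dot{\hat\theta}\, f(x_1)
   \;=\; \dot{x}_{2,\mathrm{des}}
     + \tilde\mu\, f(x_1)\Bigl(k_1 + \hat\theta\frac{\partial f}{\partial x_1}\Bigr),\\
u &= \dot{\hat{x}}_{2,\mathrm{des}} - k_2\,\bigl(x_2 - x_{2,\mathrm{des}}\bigr),\\
V &= \tfrac12 x_1^2 + \tfrac{1}{2\gamma}\tilde\theta^{\,2}
   + \tfrac12\bigl(x_2 - x_{2,\mathrm{des}}\bigr)^2
   + \tfrac{1}{2\sigma}\tilde\mu^{\,2}.
\end{aligned}
```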
Let us go term by term. From V1, the term x1 x1 dot gives x1 x2 plus x1 theta f(x1), and the theta tilde squared term gives theta tilde theta tilde dot over gamma. Since theta hat dot equals minus theta tilde dot equals gamma x1 f(x1), this contribution is minus theta tilde x1 f(x1). On the slide I briefly worried about a sign here, but the two pieces are consistent: the minus theta tilde x1 f(x1) from the update law will shortly combine with the dynamics term, so everything is absolutely fine, just written in a more complicated way than I expected. From V2, the quadratic term gives (x2 minus x2-desired) times (u minus x2-desired dot), using x2 dot equal to u, and the last contribution is the mu tilde term, the only piece we have not selected yet. Now, going to the next step, I substitute the expression for u, which is in terms of x2-desired-hat dot, that entire expression, and the first thing I do is rewrite x1 x2; so what do I get?
To be precise, I do not rewrite x2 in terms of x2-desired directly, but in terms of the error: I write x1 x2 as x1 (x2 minus x2-desired) plus x1 x2-desired. The x1 theta f(x1) piece is just from the dynamics, and the minus theta tilde x1 f(x1) piece is from the update law. The term minus k2 (x2 minus x2-desired) squared comes from the minus k2 (x2 minus x2-desired) part of u, and everything else comes from x2-desired-hat dot. Inside u minus x2-desired dot, the x2-desired dot cancels out, so what survives is the factor mu tilde f(x1) (k1 plus theta hat del f by del x1) multiplied by (x2 minus x2-desired), together with the mu tilde term from the Lyapunov function derivative. Now look at x1 times x2-desired: since x2-desired is minus k1 x1 minus theta hat f(x1), expanding gives minus k1 x1 squared minus x1 theta hat f(x1). That second piece cancels against the dynamics and update-law pair, because theta minus theta tilde equals theta hat, so x1 theta f(x1) minus theta tilde x1 f(x1) equals x1 theta hat f(x1). So from here we are left with just minus k1 x1 squared, plus the nice mixed term x1 (x2 minus x2-desired), plus the nice negative quadratic term minus k2 (x2 minus x2-desired) squared. Finally, if I choose mu hat dot as sigma times (x2 minus x2-desired) times f(x1) times (k1 plus theta hat del f by del x1), then this term and this term cancel out, and the remaining mu tilde pieces disappear. That is exactly what is written here: this is the update law, sigma (x2 minus x2-desired) f(x1) (k1 plus theta hat del f by del x1), and this is exactly what is left, as you would expect. From here to here is the usual sum-of-squares step: the mixed term is at most one half x1 squared plus one half (x2 minus x2-desired) squared, using a b less than or equal to one half a squared plus one half b squared; we do this again and again, so I hope you are used to it. Once you have this, you know that if k1 and k2 are greater than one half, V dot is negative semidefinite, and only negative semidefinite, because theta tilde and mu tilde are also states now, two more states, so V dot cannot be negative definite. But you know very well that we can use signal chasing and Barbalat's lemma to prove convergence. Let me be very careful here about what exactly we can prove: we can prove that x1 goes to 0, we can prove that x2 goes to x2-desired (not that x2 goes to 0, not yet), and we can also prove that x2 dot goes to x2-desired dot.
So: we already have x1 going to 0 directly from here, by signal chasing and Barbalat's lemma, and we have x2 going to x2-desired, with the derivatives converging as well; this is standard. Now look at x2-desired more carefully. As t goes to infinity, x1 goes to 0, and if we further assume f(0) = 0, a reasonable assumption as before, then the theta hat f(x1) term also goes to 0, and hence the whole of x2-desired goes to 0, so x2 goes to 0 too. This is also possible via another route: you can show x1 goes to 0, x2 goes to x2-desired, and also x1 dot goes to 0; then from x1 dot equals x2 plus theta f(x1), since x1 goes to 0 and theta f(x1) goes to 0 with it, x2 also has to go to 0. Both routes are essentially identical. One caveat here is that the function f was deliberately chosen to be a function of the first state only; if that is not the case, the stability analysis becomes very difficult. You can also see that we are using f(0) = 0; if f were a function of both states, then you would have to argue about x2 being 0 as well, and the situation becomes rather complicated. That would be much more of a research exercise, and it may or may not work out in all cases. Excellent. So, what did we see today?
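Before the summary, a brief aside of my own (not from the lecture): the full adaptive design, with both update laws, the hatted derivative, and the control, can be sketched in simulation. The choice f(x1) = x1^2, the gains, the adaptation gains, and the initial conditions are arbitrary illustrative assumptions satisfying k1, k2 > 1/2 and f(0) = 0; the true theta is hidden from the controller and used only by the plant.

```python
# Adaptive backstepping with overparameterization for
#   x1' = x2 + theta*f(x1),  x2' = u,   theta unknown to the controller.
# Two estimates of the same theta: th (theta hat, from the x1-stage)
# and mu (mu hat, introduced to implement the derivative of x2-desired).
theta = 2.0                    # true parameter, used only by the plant
k1, k2 = 2.0, 2.0              # gains; the analysis needs k1, k2 > 1/2
gamma, sigma = 1.0, 1.0        # adaptation gains

f  = lambda x1: x1**2          # illustrative f with f(0) = 0
df = lambda x1: 2.0*x1

def simulate(x1=0.5, x2=0.0, th=0.0, mu=0.0, dt=1e-3, T=30.0):
    """Forward-Euler simulation of the adaptive closed loop."""
    for _ in range(int(T/dt)):
        x2d = -k1*x1 - th*f(x1)                    # pseudo-control with theta hat
        z2  = x2 - x2d                             # backstepping error
        th_dot = gamma*x1*f(x1)                    # update law for theta hat
        mu_dot = sigma*z2*f(x1)*(k1 + th*df(x1))   # update law for mu hat
        # implementable derivative of x2d: unknown theta replaced by mu hat
        x2d_hat_dot = -(k1 + th*df(x1))*(x2 + mu*f(x1)) - th_dot*f(x1)
        u = x2d_hat_dot - k2*z2                    # actual control
        x1dot = x2 + theta*f(x1)                   # plant uses the true theta
        x1 += dt*x1dot
        x2 += dt*u
        th += dt*th_dot
        mu += dt*mu_dot
    x2d = -k1*x1 - th*f(x1)
    return x1, x2, x2d

x1f, x2f, x2df = simulate()
print(abs(x1f), abs(x2f - x2df))  # x1 and x2 - x2-desired should approach zero
```

Note that, exactly as in the analysis, only x1 -> 0 and x2 -> x2-desired are guaranteed; the estimates th and mu need not converge to the true theta.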
We have essentially seen how to complete the stability proof, and in fact the adaptive control design, for the unmatched parameter case. This turned out to be considerably more involved: we had to overparameterize, introducing a second estimate, and you will notice that the Lyapunov function carries two additional terms, one due to theta tilde and one due to mu tilde. The analysis itself was also more involved than before, but since the problem is more involved, that is reasonable. You can also see, and start to think about the fact, that the Ortega-type construction that worked for the matched case cannot be used in the unmatched case; the Lyapunov function now is significantly more complicated, and backstepping is the way you will have to go for these more complicated cases. Excellent. This is where we stop today; I will see you again in the next session. Thanks.