Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are well into the eighth week of the course, and I think it is evident to all of us that we are now in a very good position to design control algorithms that drive autonomous systems such as the SpaceX satellite that you see in the background. We have looked at several different adaptive control designs by now, and I think all of you have a good feel for how these designs proceed. In the last couple of lectures specifically, we looked at the vector version of the adaptive integrator backstepping design. This is a rather powerful method: the idea is that you use over-parameterization to compensate for the fact that the unknowns are unmatched with the control dynamics. That, I would say, is a strong positive of this adaptive control method. We examined it in some detail over the past two lectures. We showed how to construct the adaptive laws and, most importantly, how the second-level Lyapunov function is constructed. I really hope that all of you got a good feel for the design of adaptive controllers for vector systems. We had looked a lot at the scalar case, but of course we also want to treat the vector case carefully. The vector case differs only in the sense that the Lyapunov function now contains norms and norm squares instead of the usual scalar squared quantities, and we had to do a little careful bookkeeping in order to prove the nice stability properties.
The differences are really rather minor; there is no significant novelty in the design method, and at every stage you can find a parallel with the scalar case. We did, however, point out the big issue: the requirement for, in this case, two parameter estimates to identify the same parameter theta. We certainly want to get rid of this issue, and this is where the extended matching design method comes in. So I am going to mark the lecture here. We talked a little bit about the extended matching design and the fact that we start again with the scalar-case system, just like before; I simply want to mark the lecture here to make a new beginning on a new page. If you look at the system, it is still the single-integrator system, now augmented with the control appearing one stage later. We want to do an extended matching design; the name itself does not mean anything special. The idea is that we do not want to over-parameterize. So let us look at the desired ideal control. If you remember, alpha one, the desired ideal control, is no different from week seven: in the week-seven unknown-parameter case the desired x2 looks exactly like this, just with a phi instead of an f. We do not know the parameter, so we replace it with an estimate multiplied by phi, and then we introduce a stabilizing term in x1. This alpha one is precisely x2-desired in our notation. We then redefine states: we call the first state z1, equal to x1, and z2 is just the backstepping error again. Why?
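To fix notation, here is a compact restatement of the system and the ideal control as they appear on the slide. The gain c1 > 0, the estimate theta hat, and the regressor phi follow the week-seven notation; the sign convention is my reading of the slide, chosen to be consistent with the rest of this lecture:

```latex
\dot{x}_1 = x_2 + \theta\,\varphi(x_1), \qquad \dot{x}_2 = u,
\qquad
\alpha_1 = x_{2,\mathrm{des}} = -c_1 x_1 - \hat\theta\,\varphi(x_1),
\qquad
z_1 = x_1, \quad z_2 = x_2 - \alpha_1 .
```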
Because x2 is not really the control, x2 cannot simply be set equal to x2-desired. So we do the next best thing: we make x2 chase x2-desired. We also define theta tilde as theta minus theta hat, as we do in all adaptive control problems. Now we write the dynamics in the new states. Since z1 is just x1, z1 dot is x1 dot, which is x2 plus theta phi of x1, and x2 can be written as z2 plus alpha one. Let me carefully write a few intermediate steps: this equals z2 plus alpha one plus theta phi of x1, and replacing alpha one by minus c1 x1 minus theta hat phi of x1 (with x1 the same as z1), the theta hat term and the theta term combine to give theta tilde phi of z1. So z1 dot equals z2 minus c1 z1 plus theta tilde phi of z1. Next, z2 dot is x2 dot minus alpha one dot. Here x2 dot is just the control u, and alpha one dot has two pieces, because alpha one is a function of x1 (equivalently z1) and of theta hat: it is del alpha one by del x1 times x1 dot, which we plug in from the plant equation, plus del alpha one by del theta hat times theta hat dot. We have not yet specified what this theta hat dot is, and we are deliberately not going to specify it now. So we have the dynamics in the new variables z1 and z2, and the important thing to note is that theta hat dot is not yet assigned. In the earlier version, as soon as we wrote a z1 dot or an x1 dot, we immediately defined a V1 with a theta tilde term and came up with an update law for theta hat.
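As a small sanity check of this algebra, here is a sketch in Python. It is my own illustration, not from the lecture: phi(x) = x squared and all numerical values are arbitrary choices. It verifies that z1 dot computed directly from the plant equals z2 minus c1 z1 plus theta tilde phi of z1:

```python
# Numerical sanity check of the z-coordinate dynamics.
# Assumptions (illustrative only): phi(x) = x**2 and arbitrary
# numerical values for theta, theta_hat, c1, x1, x2.

def phi(x):
    return x**2  # hypothetical regressor; any smooth phi would do

theta, theta_hat, c1 = 1.7, 0.4, 2.0
x1, x2 = 0.8, -1.3

# Backstepping variables
alpha1 = -c1 * x1 - theta_hat * phi(x1)   # the ideal "x2 desired"
z1 = x1
z2 = x2 - alpha1
theta_tilde = theta - theta_hat

# z1_dot straight from the plant: x1_dot = x2 + theta*phi(x1)
z1_dot_plant = x2 + theta * phi(x1)

# z1_dot from the derived z-dynamics: z2 - c1*z1 + theta_tilde*phi(z1)
z1_dot_z = z2 - c1 * z1 + theta_tilde * phi(z1)

print(abs(z1_dot_plant - z1_dot_z) < 1e-12)  # True
```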
Why did we do that? Because this is how we understand backstepping: handle the first state first, all aspects of the first state. So we defined the Lyapunov candidate V1, ensured that the derivative of the Lyapunov function along the first-state dynamics was negative definite, or at least negative semi-definite, and then moved on to the next state. We were completely guided by the backstepping recipe: deal with the first state, then go to the next, and so on. But now we resist that urge to finish everything for the first state immediately. We do not define theta hat dot at this stage at all. So the important points are: theta hat dot is not assigned, and V1 is not defined. Unlike in week seven, V1 is not defined, and because V1 is not defined, theta hat dot is also not assigned. What do we do instead? Like I said, we hold back, and we define a combined Lyapunov function directly. And what is that? The same simple idea: I take z1 squared over two, then z2 squared over two, and just one theta tilde squared over two gamma. There is no longer a second estimate, because I never created a first estimate or a first V1; I just go straight to the final V with a single estimate. At least, that is my hope. Now I take the derivative and see whether I can define a theta hat dot. So I have z1 z1 dot, which brings in z2, and z2 z2 dot, where z2 dot is x2 dot minus alpha one dot, that is, u minus del alpha one by del x1 times x1 dot minus del alpha one by del theta hat times theta hat dot. You see there is already a theta hat dot appearing here. But the good thing is that theta hat dot is in fact a known quantity.
I, as the user, am going to specify it, so even if theta hat dot appears in my V dot, it is not a big concern at all. This is the idea; this is what helps me. Of course we also have the last term, the usual theta tilde theta hat dot over gamma term in V dot. Now, I have the nice negative term typical of backstepping, minus c1 z1 squared. Then I can club with z2 every term that does not contain a theta tilde: u, of course; the z1 coming from the first state; the known pieces from alpha one dot; and the del alpha one by del theta hat times theta hat dot term. Remember, theta hat dot cannot contain theta tilde. If it did, you would have a problem, because theta tilde is unknown, and if your parameter update law theta hat dot contains the unknown, you have not designed an adaptive controller at all; you have just designed a controller. So in the z2 bracket I take all the quantities that do not have a theta tilde in them; only the theta tilde piece gets left out. Then I club the terms in theta tilde separately: one term from the first state, one term from the second state, and of course the theta tilde theta hat dot over gamma term. Now something really nice and neat has happened. I already have a nice negative quadratic in z1 from the first state, and in the second-state bracket I have the control, so I can introduce a negative quadratic there by cancelling all the known quantities, because none of them depends on the unknown.
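Collecting this bookkeeping in symbols (my reconstruction in the notation of this lecture, with the candidate V as stated above), the derivative groups as:

```latex
V = \tfrac{1}{2}z_1^2 + \tfrac{1}{2}z_2^2 + \frac{\tilde\theta^2}{2\gamma},
\qquad
\dot V = -c_1 z_1^2
+ z_2\Big(u + z_1 - \frac{\partial\alpha_1}{\partial x_1}\big(x_2 + \hat\theta\,\varphi(x_1)\big)
        - \frac{\partial\alpha_1}{\partial\hat\theta}\,\dot{\hat\theta}\Big)
+ \tilde\theta\Big(z_1\,\varphi(x_1) - z_2\,\frac{\partial\alpha_1}{\partial x_1}\,\varphi(x_1)
        - \frac{\dot{\hat\theta}}{\gamma}\Big).
```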
So I cancel them comfortably and introduce a nice negative term, minus c2 z2 squared. The third term, the theta tilde bracket, is of uncertain sign: it is not a sign-definite term, and I really do not know what sign it will have. So I do not concern myself with trying to make it negative definite or anything like that. All I do is push it to zero, and this is always the goal in typical nonlinear control: whichever terms are not sign-definite, you usually try to push them to zero. So that is what I do: I use my theta hat dot to cancel those terms, and that is the update law I get. Now look at this very carefully; the update law has two terms in it. Earlier you had two separate estimates. The first update law was gamma x1 f of x1, and notice that it is here too: gamma z1 phi of x1, just in different notation. The second update law, the mu hat dot, contained something a little more complicated: sigma times z2 times f times a certain bracket. If you look at this expression, you have z2 times gamma times that same quantity. It is actually the same quantity. It just seems, at first glance, that there is an additional piece, the k1 term, which we do not appear to have here. Let me look at why we get the k1 term. It comes in from the x2-desired dot term in the control, and I believe that if we look at it carefully, this term is exactly the same as what we had earlier.
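To see the cancellation concretely, here is a numerical sketch. This is my own illustration, not the lecturer's code: phi(x) = x squared, and the gains, state, and parameter values are arbitrary. It checks that with these choices of u and theta hat dot, V dot collapses to exactly minus c1 z1 squared minus c2 z2 squared:

```python
# Check that the extended-matching choices of u and theta_hat_dot
# make V_dot = -c1*z1**2 - c2*z2**2 exactly.
# Assumptions (illustrative only): phi(x) = x**2, phi'(x) = 2x,
# and arbitrary numerical values everywhere.

def phi(x):  return x**2
def dphi(x): return 2.0 * x

theta, theta_hat = 1.5, -0.3      # true parameter and current estimate
c1, c2, gamma = 2.0, 3.0, 0.5     # design gains
x1, x2 = 0.7, -1.1

alpha1 = -c1 * x1 - theta_hat * phi(x1)
z1, z2 = x1, x2 - alpha1
da_dx1 = -c1 - theta_hat * dphi(x1)   # d(alpha1)/d(x1)
da_dth = -phi(x1)                     # d(alpha1)/d(theta_hat)

# Update law: cancels the theta_tilde bracket in V_dot
theta_hat_dot = gamma * phi(x1) * (z1 - z2 * da_dx1)

# Control law: cancels every known term in the z2 bracket
u = (-z1 - c2 * z2
     + da_dx1 * (x2 + theta_hat * phi(x1))
     + da_dth * theta_hat_dot)

# Evaluate V_dot the long way, from the raw dynamics
x1_dot = x2 + theta * phi(x1)
z2_dot = u - da_dx1 * x1_dot - da_dth * theta_hat_dot
theta_tilde = theta - theta_hat
V_dot = z1 * x1_dot + z2 * z2_dot - theta_tilde * theta_hat_dot / gamma

print(abs(V_dot - (-c1 * z1**2 - c2 * z2**2)) < 1e-10)  # True
```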
This term is exactly the same, because look at what alpha one was. Let me mark it and evaluate it in that sense. In week seven, alpha one was minus k1 x1 minus theta hat f of x1. We have already seen that the first term is identical to the first update law, so I want to evaluate the second term and show that it is identical to the second update law. Let me compute del alpha one by del x1: that is minus k1 minus theta hat del f by del x1. So this term becomes gamma z1 phi of x1 plus gamma z2 times k1 plus theta hat del phi by del x1, all times phi of x1. Remember, our f is actually equal to phi, so I am writing phi instead of f; that is fine. That is what you would get for the week-seven-type control. And if you look at the second update law, the mu hat dot, it is the same: sigma, some gain, which is gamma in our case; then the z2; then exactly phi of x1; and then the k1 plus theta hat del phi by del x1 factor. So it is exactly the same. Do not worry too much about the signs; what you see is that you have exactly the same term as the second update law. So that is really the idea of what has happened.
By not choosing a V1 and a theta hat dot in the first step, we have obtained a theta hat dot which is essentially just the sum of the two update laws. Earlier we had two parameter estimates and two update laws; here we have obtained the same two update laws added together into a single theta hat dot. This is rather cool: we did not have to design two different estimates. And a posteriori, once you look at this, you almost wonder why we even did the previous method. Now, once we have implemented this theta hat dot and this u, we get a nice negative term from the first state, a nice negative term from the second piece, and the theta tilde bracket goes to zero. So V dot is negative semi-definite, just as you expect in all adaptive control, and from standard signal-chasing arguments you can show that z1, that is x1, goes to zero, and z2, the backstepping error, also goes to zero. And if you remember what alpha one is, z2 is x2 minus alpha one, which is x2 plus theta hat phi of x1 plus c1 x1, and this goes to zero. I already know that x1 goes to zero. If I also assume that phi of zero is zero, then when x1 goes to zero this phi term vanishes, and therefore you can claim that x2 goes to zero as t goes to infinity. That is essentially what you want: all the states go to zero. Again, we are looking at the stabilization problem; the tracking problem would be no different. In this case we are concerned with the states x1 and x2 actually going to zero, and that is what you obtain.
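To illustrate the closed loop, here is a minimal forward-Euler simulation sketch. Everything here is my own choice, not from the lecture: phi(x) = x squared (so phi(0) = 0 holds), gains c1 = c2 = 2, gamma = 1, a true parameter value the controller never sees, and a small fixed step size. It shows x1 and x2 being driven toward zero:

```python
# Closed-loop simulation of the extended matching design (sketch).
# Assumptions (illustrative only): phi(x) = x**2, c1 = c2 = 2,
# gamma = 1, dt = 1e-3, and arbitrary initial conditions.

def phi(x):  return x**2
def dphi(x): return 2.0 * x

theta = 1.5                 # true parameter (unknown to the controller)
c1 = c2 = 2.0
gamma = 1.0
x1, x2, theta_hat = 1.0, -0.5, 0.0
dt, steps = 1e-3, 20000     # 20 seconds of simulated time

for _ in range(steps):
    alpha1 = -c1 * x1 - theta_hat * phi(x1)
    z1, z2 = x1, x2 - alpha1
    da_dx1 = -c1 - theta_hat * dphi(x1)
    theta_hat_dot = gamma * phi(x1) * (z1 - z2 * da_dx1)
    u = (-z1 - c2 * z2
         + da_dx1 * (x2 + theta_hat * phi(x1))
         - phi(x1) * theta_hat_dot)   # note: theta_hat_dot enters u
    # plant (uses the true theta) and estimator updates
    x1 += dt * (x2 + theta * phi(x1))
    x2 += dt * u
    theta_hat += dt * theta_hat_dot

print(abs(x1) < 1e-2, abs(x2) < 1e-2)
```

Note that theta hat need not converge to theta; only the states are driven to zero, which is all the Lyapunov argument promises.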
So this is the extended matching method, and the idea is rather simple: you avoid designing the first-level parameter estimate and the first-level candidate Lyapunov function, go straight to the next step, and directly design a Lyapunov candidate for the complete system. This removes one of the parameter estimates, and you also find that, structurally, the parameter update law you get is essentially the sum of the two parameter update laws from last time. So that is something cool. This is the closed-loop dynamics you would get if you plug everything in and put everything in place. Now, one of the issues is that the control law gets hidden inside this closed-loop form; if you look at the control law itself, you can see that a theta hat dot appears in it. This theta hat dot appears because the unknown parameter sits one integrator above the control. It is not difficult to see, just by looking at the pattern, that if the control appears two steps below the unknown parameter, then a theta hat double dot will appear, and so on. And usually it is not considered very healthy to have derivatives of estimated quantities in the control law, since these can amplify noise. Notice that this does not happen in the week-seven method: there, no theta hat dot appears in the control law, which is a good thing about the non-extended case. In the extended matching design, this is a drawback. So the comparison is not exactly apples to apples.
There is a slight difference: the control law, which gets hidden in the closed-loop dynamics, contains the theta hat dot, and this can lead to problems in implementation. And as the control sits lower and lower below the unknown parameter in the dynamics, theta hat double dot, triple dot, and higher and higher derivatives of theta hat will start to appear. So keep this in mind. Here is a nice exercise that I recommend you do: this is an aerodynamic model for a wing, and you should try both the typical integrator backstepping and the extended matching method to stabilize this model. This is the reference for this set of lectures, and it is very useful; I strongly recommend that you look at these notes. So what did we do in this lecture? We looked at the extended matching design method. What it does is reduce the number of parameters you are trying to estimate: you still have just one estimate per parameter, or per set of parameters, so you do not have more estimates than parameters. The problem we end up with is that derivatives of the estimate appear in the control law, which may be inconvenient in implementation because of noise issues. So keep in mind that nothing is free: we hid some issues in the control law itself. But other than that, yes, we reduced the number of states in the controller, and that is a significant advantage when you are looking at implementations. Great, so this is where we will stop now. We will continue in the subsequent session. Thank you.