Hello everyone. Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are now quite close to the end of the ninth week of lectures for this course, and I hope that all of you have, along with me, learned several analysis and design techniques for algorithms that can fly autonomous systems such as the one we see in our background. What would be rather interesting and useful for me is to get feedback from all of you on the kinds of applications you envision using these methods for, or have even started to use them in. I would really like to see some testbed results, at least on the kind of improvement you could achieve using adaptive design methods. So, what we were doing last time, in these week 10 lecture notes, was looking at the notion of the adaptive CLF, the adaptive control Lyapunov function. We talked about the equivalence between an adaptive control Lyapunov function for the original system and a control Lyapunov function for the modified system. And finally, we showed that if an adaptive CLF exists for a system of the form 1.1, then the system is globally adaptively asymptotically stabilizable, which means that I can design an adaptive controller for system 1.1 which ensures that the x and theta tilde states remain bounded, and further that the x states converge to 0 as t goes to infinity. So that is rather nice, and I hope the power of the ACLF is now well understood. If one is able to construct an ACLF, then you have in some sense already achieved the big goal, because just from the ACLF you can design a controller. And it is quite evident how: the feedback law alpha in this case is obtained simply by a Sontag-type formula, which gives a stabilizing controller, and the tuning function tau is obtained from an explicit expression.
In fact, there is a clear expression for the tuning function. That is the nice thing: the expression for the tuning function is known explicitly, and the expression for alpha can be obtained from a Sontag-type universal formula. Of course, you can also do all of this by intuition, which might even be better and easier analysis-wise, but even if you do not, you can almost automate it: you can have a computer compute these expressions symbolically and simply implement them. So having an adaptive control Lyapunov function for a system, or for a class of systems, is a rather powerful tool. Great. Now what we want to do is move on to the backstepping version of things. What does backstepping mean? It means that I add some integrator layers and can still apply the same method. Adaptive integrator backstepping essentially meant that I design a control for the first stage, and if there is an integrator layer, I move this controller to the second stage using a smart construction of a Lyapunov function. Here we do something similar. Extended matching was slightly different, of course, because there we did not declare a Lyapunov function for the first stage separately; we directly constructed a Lyapunov function for the second stage. So that is a little bit different, and this is a little bit different too. The point, however, is that you can do backstepping, and that is what we want to do: adaptive backstepping via the CLF, starting this session. I am going to mark this as lecture 9.5; this is where we are. So what is the setup? You have a system which again looks like system 1.1, where x is the state in R^n, theta is the parameter in R^p, and the control is still a scalar.
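The symbolic automation mentioned above can be sketched in a few lines of sympy. This is an illustrative scalar example of my own choosing, not the system from the slides: x_dot = theta*x + u with candidate ACLF V_a(x) = x^2/2 (here independent of theta), a Sontag-type formula for the feedback, and the explicit tuning function.

```python
import sympy as sp

# Assumed illustrative example (not from the lecture):
#   x_dot = theta*x + u, with candidate ACLF V_a(x) = x**2/2.
x, theta, gamma = sp.symbols('x theta gamma', real=True)

Va = x**2 / 2
f = sp.Integer(0)   # drift f(x)
F = x               # F(x), the term multiplying the unknown parameter
g = sp.Integer(1)   # control direction g(x)

# Modified drift uses theta + gamma*dVa/dtheta; since Va does not
# depend on theta here, this reduces to F(x)*theta.
a = sp.diff(Va, x) * (f + F * (theta + gamma * sp.diff(Va, theta)))
b = sp.diff(Va, x) * g

# Sontag-type universal formula for a stabilizing feedback (valid for b != 0)
alpha = sp.simplify(-(a + sp.sqrt(a**2 + b**4)) / b)

# Explicit tuning function: tau = gamma * F(x) * dVa/dx
tau = sp.simplify(gamma * F * sp.diff(Va, x))

print(alpha)
print(tau)   # gamma*x**2
```

Both expressions come out in closed form, which is exactly the "compute symbolically and implement" workflow described above.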
If this system is globally adaptively asymptotically stabilizable, then the claim is that the system with an added integrator is also globally adaptively asymptotically stabilizable. That is what we will prove, or try to prove. System 2.2, as you can see, is essentially the integrator version: the control is replaced by the state psi, and then psi dot equals u. It is the standard backstepping form that you are used to seeing; there is nothing new in this structure. So this is what we want to prove: given that system 2.1 is globally adaptively asymptotically stabilizable, I want to claim that system 2.2 is as well. So let us look at the proof. If you know that system 2.1 is in fact globally adaptively asymptotically stabilizable, then you know that there exists a feedback alpha, a function V_a, and a positive definite Gamma such that the ACLF-type relation holds. Now, what do we do? We use our backstepping idea. We know that alpha cannot be applied directly, because psi is not actually a control input; psi is a new state. So we try the next best thing: we make psi chase alpha. And how do we do that? We add a quadratic term to the Lyapunov candidate. So we consider a candidate function V1, which is V_a plus this quadratic term. Notice again that although I keep saying adaptive, I am not putting theta hat anywhere; this alpha, though I have not specified it yet, is just a function of x and theta.
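Since the transcript refers to equations 2.1 and 2.2 on the slides, here is the structure being described, reconstructed as a sketch from the lecture's own description (the exact slide notation may differ):

```latex
\begin{aligned}
&\text{System 2.1:} && \dot{x} = f(x) + F(x)\,\theta + g(x)\,u,
  \qquad x \in \mathbb{R}^n,\ \theta \in \mathbb{R}^p,\ u \in \mathbb{R},\\
&\text{System 2.2:} && \dot{x} = f(x) + F(x)\,\theta + g(x)\,\psi,
  \qquad \dot{\psi} = u,\\
&\text{Claim:} && V_a(x,\theta)\ \text{an ACLF for 2.1}
  \;\Longrightarrow\;
  V_1(x,\psi,\theta) = V_a(x,\theta) + \tfrac12\big(\psi - \alpha(x,\theta)\big)^2
  \ \text{is an ACLF for 2.2.}
\end{aligned}
```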
So what we are going to try to prove is this: given that V_a(x, theta) is an ACLF for 2.1, we claim that V1(x, psi, theta) is an ACLF for 2.2. That is it. We do not need to worry about unknown theta at all right now, because what do we know? We know that once I have an ACLF, I can design controllers, adaptive laws, and all that jazz. So I just worry about finding an ACLF. Now, because the system is globally adaptively asymptotically stabilizable, the existence of such a V_a is guaranteed; that is exactly what the equivalence from last time tells us. So what we are claiming is that from this V_a I can construct V1, which is just the standard backstepping construction: I take V_a and add to it a quadratic in the backstepping error. The error is psi minus alpha, defined as z, and the added term is just half z squared. And we claim that V1 is in fact an ACLF for the new system. How do we show that? We compute the ACLF inequality; that is, we will demonstrate that the appropriate inequality holds. And how do I demonstrate it? I take V1, which is a function of x, psi, and theta, and take its derivative along equations 2.2. More precisely, not along 2.2 itself, but along the modified version of 2.2; let us be careful here. We have taken psi dot equal to u equal to some alpha 1. So what is the left-hand side? The left-hand side is the V1 dot for the modified system, which is del V1 del x
times f(x) plus F(x) theta plus the modification term plus g(x) psi, plus del V1 del psi times psi dot, which is alpha 1. Let me verify this is correct by writing the system out carefully. The combined state is (x, psi), and its derivative is the drift term (f(x) + g(x) psi, 0), plus the parameter term (F(x), 0) times theta, plus the control term (0, 1) times u. So this is my f prime, if you may, this is the term multiplying theta, and this is my g prime. For the modified system, everything remains the same except the parameter term: theta gets replaced by theta plus Gamma del V1 del theta transpose. Note that it is now del V1 del theta, not del V_a del theta. And one must be a little careful: the gradient multiplying all of this is not just del V1 del x, but the full row (del V1 del x, del V1 del psi). So, written carefully, the left-hand side is (del V1 del x, del V1 del psi) times the sum of (f(x) + g(x) psi, 0), plus (F(x), 0) times (theta + Gamma del V1 del theta transpose), plus (0, 1) times alpha 1. Yes, this is the correct expression; it comes out to be the same as before, but I wanted to make sure it is precise. So how do I get the modified system? I take the original system, make this change only in the term connected to the parameter, and multiply everything by the full gradient of V1.
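In display form, the computation being dictated here reads as follows (a reconstruction consistent with the transcript's description):

```latex
\frac{d}{dt}\begin{pmatrix} x \\ \psi \end{pmatrix}
= \underbrace{\begin{pmatrix} f(x) + g(x)\psi \\ 0 \end{pmatrix}}_{\text{drift}}
+ \underbrace{\begin{pmatrix} F(x) \\ 0 \end{pmatrix}}_{\text{parameter term}}\theta
+ \underbrace{\begin{pmatrix} 0 \\ 1 \end{pmatrix}}_{\text{control term}} u,
\qquad
\dot{V}_1 =
\begin{pmatrix} \dfrac{\partial V_1}{\partial x} & \dfrac{\partial V_1}{\partial \psi} \end{pmatrix}
\left[
\begin{pmatrix} f(x) + g(x)\psi \\ 0 \end{pmatrix}
+ \begin{pmatrix} F(x) \\ 0 \end{pmatrix}
  \Big(\theta + \Gamma\, \dfrac{\partial V_1}{\partial \theta}^{\!\top}\Big)
+ \begin{pmatrix} 0 \\ 1 \end{pmatrix}\alpha_1
\right].
```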
So in this case there are two states that V1 depends on, x and psi, and therefore the gradient has both del V1 del x and del V1 del psi components. Again, only the term in theta is modified; everything else remains exactly the same. I hope you can all see this. And this boils down to exactly the expression we had before, because the first component gives del V1 del x times f(x) plus g(x) psi, and so on. So it is the same thing; I just wanted to make precise how it is obtained. I hope this much is clear. Great. So we want to show that V1 is an ACLF, and now we evaluate this left-hand side more carefully. How do we do that? What I can do is write the partials of V1 with respect to x and psi in terms of their components, using the definition of V1. From that definition, del V1 del x is del V_a del x minus z times del alpha del x. Similarly, del V1 del psi is just z; I think that is correct. These are what we substitute into the expression. We also need del V1 del theta.
So, del V1 del theta is equal to del V_a del theta minus z times del alpha del theta, very similar in structure, and del V1 del psi is just z; all of this is computed directly from the definition of V1. Then we substitute all of that into the left-hand side. Once we make this substitution, we can use what we know about V_a: we know that del V_a del x times f(x) plus F(x) times (theta + Gamma del V_a del theta transpose) plus g(x) alpha satisfies the ACLF inequality. To use it, we write g(x) psi as g(x) z plus g(x) alpha and group the terms accordingly. The V_a terms grouped with alpha reproduce exactly the ACLF relation, and by our ACLF result that quantity is of course bounded by minus W(x, theta). And then, taking z common, I have z times (alpha 1 minus all of this mess). The good thing you can see is that, apart from the minus W term, every other remaining term contains a factor of z: there is a z at the end here, a z at the beginning there, a z here. So I take z common in all those terms and get this one big expression. Now, if I choose my alpha 1 to cancel all of it, I can do that, because remember, at this stage we are not treating theta as unknown; theta is completely known, so we can use it and every term in that collection is known. So I cancel all of this mess and, in addition, introduce a good damping term in z. Once I do that, I am left with V1 dot equal to minus W(x, theta) minus z squared.
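The partial-derivative identities just used can be checked symbolically. This sketch uses generic placeholder functions for V_a and alpha (the sympy names are my choices, not from the slides):

```python
import sympy as sp

x, psi, th = sp.symbols('x psi theta', real=True)

# Generic placeholders for the ACLF of 2.1 and its stabilizing feedback
Va = sp.Function('V_a')(x, th)
alpha = sp.Function('alpha')(x, th)

z = psi - alpha          # backstepping error
V1 = Va + z**2 / 2       # candidate ACLF for the augmented system

# The three identities used in the proof:
dV1_dx = sp.diff(V1, x)      # dVa/dx  - z * dalpha/dx
dV1_dpsi = sp.diff(V1, psi)  # z
dV1_dth = sp.diff(V1, th)    # dVa/dth - z * dalpha/dth

assert sp.simplify(dV1_dx - (sp.diff(Va, x) - z * sp.diff(alpha, x))) == 0
assert sp.simplify(dV1_dpsi - z) == 0
assert sp.simplify(dV1_dth - (sp.diff(Va, th) - z * sp.diff(alpha, th))) == 0
print("identities hold")
```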
Which means what? The derivative turns out to be negative definite in (x, z) for every theta. So V1 is an ACLF, because it is a CLF for the modified system: if the derivative can be made negative definite by choosing an appropriate alpha 1, then when you take the infimum over all possible u, it is certainly not larger than this. And therefore V1 is a CLF for the modified system, and if V1 is a CLF for the modified system, it is an ACLF for the original system. And what was the original system? It was system 2.2. So that is pretty great. We have essentially proved that if system 2.1 is globally adaptively asymptotically stabilizable, then so is system 2.2. Why? Because from the ACLF for system 2.1, I could construct an ACLF for system 2.2 by choosing V1 in a smart way from V_a and modifying the control to alpha 1. And that is essentially what you want; it is what you need in typical backstepping adaptive control as well, just that we now have a different sort of expression. So what would be the control law? The control law is alpha 1, which we just computed; it is complicated. And what would be the tuning function? This is interesting: the tuning function has an explicit expression, because we already have a formula for the tuning part. It is the partial of the ACLF times the term multiplying the unknown parameter in the dynamics. In this case, the term multiplying the unknown in the dynamics is (F(x), 0), and the ACLF is V1. So what do you have for the tuning function? It is the partial of V1 with respect to both states times (F(x), 0).
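The cancellation claimed here, V1 dot = minus W minus z squared, can be verified symbolically on a concrete example. The example below is assumed for illustration (x_dot = theta*x + psi, psi_dot = u, V_a = x^2/2, alpha = -theta*x - x, so W = x^2); it is not the example from the slides. We solve for the alpha 1 that achieves the cancellation and then check it:

```python
import sympy as sp

x, psi, th, gam = sp.symbols('x psi theta gamma', real=True)
a1 = sp.symbols('alpha1')   # psi_dot = alpha1, to be solved for

# Assumed example: x_dot = theta*x + psi, psi_dot = u
f, F, g = sp.Integer(0), x, sp.Integer(1)
Va = x**2 / 2
alpha = -th*x - x           # gives Va_dot = -x**2 =: -W on the x-subsystem
z = psi - alpha
V1 = Va + z**2 / 2
W = x**2

# In the modified system, theta is replaced by theta + Gamma*dV1/dtheta
th_mod = th + gam * sp.diff(V1, th)
V1dot = (sp.diff(V1, x) * (f + F*th_mod + g*psi)
         + sp.diff(V1, psi) * a1)

# Choose alpha1 so that V1dot = -W - z**2; every term is known here
# because theta is treated as known at this stage of the argument.
alpha1 = sp.solve(sp.Eq(V1dot, -W - z**2), a1)[0]

# Verify the cancellation claimed in the lecture
assert sp.simplify(V1dot.subs(a1, alpha1) + W + z**2) == 0
print("V1dot = -W - z**2 achieved")
```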
And if you actually compute this, the second term contributes nothing, so I just have del V1 del x times F(x), and again I can expand del V1 del x in the form above. So this is essentially my tuning function for the integrator system. If you want to look at extensions to set-point regulation and tracking problems, you have to look at the KKK book by Krstic, Kanellakopoulos and Kokotovic; that is the reference, sections 4.2 and 4.3. So this is essentially the tuning function design method. What we want to do next, of course, is look at some examples in the subsequent sessions; that is the idea. But just to summarize what we did in this session: we had already looked at ACLF ideas in the last session, and in this session we looked at the extension of the ACLF idea to backstepping, that is, the question of how to apply the ACLF idea when I add an integrator layer. And we did: we actually proved that if you have an adaptive CLF for the original system and you add an integrator, you can construct an adaptive CLF for the integrator system from the original ACLF itself. We also showed how this is an ACLF with a slightly modified feedback law, and of course we can then construct a nice tuning function, which helps us build a parameter estimator as well. What we intend to do in the subsequent session is take up the same example as before. If you remember, we had taken up an example with an unmatched uncertainty, and we solved it using adaptive integrator backstepping and also the extended matching method. Now we want to solve the same problem using the tuning function method too, and we will try to get the ACLF motivation from the earlier examples and so on. That is essentially the problem we want to work out.
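As a preview of what such a design looks like in closed loop, here is a numerical sketch on the same assumed scalar example (x_dot = theta*x + psi, psi_dot = u). The gains, initial conditions, and the certainty-equivalence substitution of theta hat for theta in alpha and alpha 1 are my choices for illustration, not from the lecture; the tuning function is gamma * F(x) * dV1/dx as derived above:

```python
import numpy as np
from scipy.integrate import solve_ivp

theta_true = 2.0            # unknown constant, used only by the plant
c1, c2, gamma = 2.0, 2.0, 1.0   # assumed design gains

def closed_loop(t, s):
    x, psi, th_hat = s
    z1 = x
    alpha = -c1*x - th_hat*x        # virtual control for the x-subsystem
    z2 = psi - alpha
    dalpha_dx = -c1 - th_hat
    # Tuning function: tau = gamma * F(x) * dV1/dx, with F(x) = x and
    # dV1/dx = z1 - z2 * dalpha/dx
    tau = gamma * x * (z1 - z2*dalpha_dx)
    # Backstepping control: cancel the known terms, add damping in z2,
    # and compensate dalpha/dth_hat * th_hat_dot (= -x * tau)
    u = -z1 - c2*z2 + dalpha_dx*(th_hat*x + psi) - x*tau
    return [theta_true*x + psi, u, tau]

sol = solve_ivp(closed_loop, (0.0, 40.0), [1.0, 0.0, 0.0],
                rtol=1e-8, atol=1e-10)
x_end, psi_end, th_hat_end = sol.y[:, -1]
print(abs(x_end))       # should be close to 0
print(th_hat_end)       # bounded, need not converge to theta_true
```

With this update law, the Lyapunov function z1^2/2 + z2^2/2 + (theta - th_hat)^2/(2*gamma) has derivative -c1*z1^2 - c2*z2^2, so x converges to zero while the estimate stays bounded, as the theory above promises.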
And because it is a vector problem, we are again hopeful that we can do a good job. So that is what we will try: solve that particular problem in the vector context, design a tuning-function-based adaptive control law, and compare it with our other two adaptive control laws. All right. I hope all of you will join me again in the next session, and I hope you enjoyed what we discussed today. These are among the more advanced methods of adaptive control, and rather useful methods too. Great. So this is where we stop. Thank you.