Hello everyone. Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikant Sukumar from Systems and Control, IIT Bombay. We are into the last leg of this course, in the middle of week number 11. In the background you see a bunch of different applications that adaptive control has found: things like drones, fighter aircraft, spacecraft, neural nets. This is one set of applications that I am hoping the algorithms you have learned in this course will help you work with. I am of course always fascinated by newer applications; I have seen quite a bit of automobile system applications also, and I am always open to hearing what you are working on. So please ping me and let me know what sort of things you are using these adaptive control ideas for; I can say for sure it will be very interesting for me to learn. Now, what we have been doing since the beginning of this week is initial excitation based adaptive control. We already know that standard adaptive control requires persistent excitation for parameter learning. We also know that this is really a condition for identification, not something intrinsic to adaptive control itself, and not something that can easily be gotten rid of. So, in order to make things a little more realistic, several researchers have worked on relaxing this condition, and one such relaxation is what we are looking at right now: using only initial excitation instead of persistent excitation. The idea is that we have sufficient excitation only over an initial phase of time, and beyond that we do not. The question is then: can you still achieve parameter convergence and parameter learning in those cases? That is the general idea.
Now, we started with looking at a single integrator system, and this was rather interesting, I hope. We do not write it as a differential equation; we write it in this regressor-parameter form. We then go on to develop multiple layers of filters. So we have two filter layers here. We have one filter layer, which gives YF; we talk about the implementability of this filter, and the filtered variables are then connected just like the original variables Y and U are connected. Then, finally, we design a second filter layer. Once we did the second filter layer design, we started to address the tracking problem; until that point, we did not really care about the tracking question itself. We designed this parameter update without any Lyapunov analysis at that point, because we were rather comfortable with the fact that nice negative terms appear here, which is unlike what you get in standard adaptive control laws. Once you have this sort of situation, we then choose our control as well, so the control is also already chosen, and all we are left with is the analysis. This, we remember, is reminiscent of the other non-certainty-equivalence method we looked at, in connection with parameter projection, so I hope you can connect those dots. In the analysis we have this dummy variable lambda which we use to dominate terms, again similar to what we had earlier. And finally we showed that if this initial excitation condition holds, which is of course much weaker than the persistent excitation condition, we can show negative definiteness. So we had this rather nice property. And whenever you see something nice, I hope that, just like I do, you try to critique it and figure out where the issues are, what is good and what is bad.
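To keep the recap concrete, the regressor-parameter form, the two filter layers and the update law described above can be summarized as follows (notation reconstructed from the discussion; sigma, mu_F and mu_IF are the positive filter and adaptation gains):

```latex
\[
Y\theta = U
\]
\[
\dot Y_F = -\sigma Y_F + Y,\qquad \dot U_F = -\sigma U_F + U,\qquad
Y_F(0)=0,\; U_F(0)=0 \;\Longrightarrow\; Y_F\theta = U_F
\]
\[
\dot Y_{IF} = Y_F^{\top} Y_F,\qquad \dot U_{IF} = Y_F^{\top} U_F,\qquad
Y_{IF}(0)=0,\; U_{IF}(0)=0 \;\Longrightarrow\; Y_{IF}\theta = U_{IF}
\]
\[
\dot{\hat\theta} = \mu_F Y_F^{\top}\left(U_F - Y_F\hat\theta\right)
 + \mu_{IF}\left(U_{IF} - Y_{IF}\hat\theta\right)
\]
```

Note how the same regressor-parameter structure repeats at each filter layer; this is what the update law exploits.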
Not just bad, of course; this is a very powerful method. I have advocated it myself, and I strongly recommend that you look at these authors' other works, because although there are other composite adaptive controllers which use similar ideas, in my opinion these authors seem to have done it in a rather simple, intuitive way. So, of course, we said that you do not have to verify the initial excitation for YF itself; it is sufficient to verify it for the original regressor Y. Great. Now, a few comments are in order. The first thing we saw is that if there is no excitation, robustness is still an issue: if you have not handled the robustness issue, you still need projection or sigma modification and things like that. Second, we showed that the IE condition is a significantly weaker requirement. We also talked about the fact that the initial excitation condition is not uniform, because the regressors almost always depend on the state and hence on the initial conditions, so the excitation condition inherits this dependence. We looked at this in an earlier discussion in connection with persistent excitation, and in this context the same idea holds. We also looked at what a possible way of choosing lambda would be, and we saw that this is not super easy; it is a nonlinear question. The good thing for us is that the existence of such a lambda is significantly easier to show, and we do not care about actually choosing lambda because it does not appear in the control. The final thing we mentioned and discussed a little is that the behavior in the absence of excitation is not easy to characterize.
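As a side note, the initial excitation condition on the original regressor Y can be checked numerically. The sketch below is a minimal illustration with made-up regressor signals and a hypothetical threshold alpha; it simply tests whether the Gram integral of Y over an initial window is uniformly positive definite:

```python
import numpy as np

# Illustrative check of the initial-excitation (IE) condition: the regressor
# Y(t) should satisfy  int_0^T Y(t)^T Y(t) dt >= alpha * I  for some T > 0
# and alpha > 0. The signals and threshold below are hypothetical choices.

def ie_holds(Y_samples, dt, alpha=1e-3):
    """Y_samples: array of shape (N, m). Returns True if the Gram integral
    over the window has smallest eigenvalue at least alpha."""
    gram = sum(np.outer(y, y) for y in Y_samples) * dt
    return np.linalg.eigvalsh(gram)[0] >= alpha

t = np.arange(0.0, 2.0, 0.01)
Y_rich = np.column_stack([np.sin(t), np.cos(t)])  # two independent directions
Y_poor = np.column_stack([np.sin(t), np.sin(t)])  # rank-1 regressor

print(ie_holds(Y_rich, 0.01))  # excitation present on the initial window
print(ie_holds(Y_poor, 0.01))  # Gram matrix is singular, IE fails
```

The same routine applied to a window starting at any later time is what the (stronger) persistent excitation condition would require.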
So this was one of the key points, I would say, because if you look at the update law, you see that both of these terms are very much excitation dependent. The first term depends on YF transpose YF, so it is only negative semi-definite; its definiteness is not guaranteed. The definiteness of the second term is guaranteed if there is initial excitation, but if there is not, it too is not guaranteed to be definite. So neither of these is actually negative definite. And so how will the scheme perform? Because if you look at the Lyapunov analysis, the entire argument is based on these terms dominating the mixed term: you break things into squares and then use the definite term to dominate the mixed one. So our entire argument may be in jeopardy if you do not have initial excitation. Now, how things are different in standard adaptive control is that neither of these features is present: there is no negative definite term, which is a bit sad, but there is also no mixed term in the unknowns, because this mixed term essentially gets cancelled by the standard adaptive term. So there is one easy way to deal with this. Let me first look at the order of things. e here is a scalar, Z belongs to R 1 cross 2, and theta tilde belongs to R 2 cross 1, or just R 2. Now, one of the things that I could have done, if I look at the analysis in V dot, I will put as an aside; let me draw it a little more cleanly using my tool. If you look at my V dot, it is e e dot minus lambda theta tilde transpose theta hat dot. And I already know what e dot is.
So I substitute: minus e squared plus e Z theta tilde is what I get from the first term. From the second term I get minus lambda theta tilde transpose times theta hat dot, and theta hat dot was all of this: mu F YF transpose YF theta tilde plus mu IF YIF theta tilde. And suppose I also have some additional term nu at my disposal. If I do have this, what I can show is that if I choose my nu as one over lambda times Z transpose e, then I am fine. That is the idea. So what happens if you make this kind of choice of nu? If I now write the expression for V dot, all I have done is plug in for this nu here. Writing out the entire expression for V dot, I have the minus e squared from here, the two negative terms from here, then this term contributing here, and then the mixed term comes back here. Now, notice that all terms in V dot are scalars, so we can take as many transposes as we want and nothing changes, because the transpose of a scalar is the same scalar. So you can very easily take a transpose of this term; e is of course a scalar, so e transpose is just e, and you will see that the transpose of this exactly matches that term. So these two terms cancel out. And the nice thing we obtain because of this is that there is no mixed term anymore; there are only non-positive terms in V dot, because this is non-positive, this is non-positive and this is non-positive. So there is nothing to dominate. What is the outcome of this? Theta hat dot now has an additional term, and if you notice, this additional term is essentially the certainty equivalence adaptive update law.
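Since the cancellation argument above is purely algebraic, it can be sanity-checked numerically. In the sketch below all values are random, the dimensions follow the lecture (e scalar, Z in R 1 cross 2, theta tilde in R 2 cross 1), and the gains lambda, mu_F, mu_IF are arbitrary positive numbers of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numeric values just to check the algebra.
e = rng.standard_normal()                 # scalar tracking error
Z = rng.standard_normal((1, 2))           # 1x2 regressor row
theta_t = rng.standard_normal((2, 1))     # theta tilde, 2x1
YF = rng.standard_normal((1, 2))
YIF = YF.T @ YF                           # positive semi-definite, as the filter builds it
lam, muF, muIF = 2.0, 1.5, 0.7

# Update law with the extra term nu = (1/lambda) * Z^T * e
nu = (1.0 / lam) * Z.T * e
theta_hat_dot = muF * YF.T @ YF @ theta_t + muIF * YIF @ theta_t + nu

# V_dot = e*e_dot - lambda * theta_tilde^T * theta_hat_dot,
# with e_dot = -e + Z * theta_tilde
V_dot = e * (-e + float(Z @ theta_t)) - lam * float(theta_t.T @ theta_hat_dot)

# After the cancellation only the three non-positive terms should remain:
V_dot_expected = (-e**2
                  - lam * muF * float(theta_t.T @ YF.T @ YF @ theta_t)
                  - lam * muIF * float(theta_t.T @ YIF @ theta_t))

print(abs(V_dot - V_dot_expected))  # the mixed term e*Z*theta_tilde has cancelled
```

The mixed term e Z theta tilde and the transpose of lambda theta tilde transpose nu are the same scalar, which is exactly why the choice nu = (1/lambda) Z transpose e removes it.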
So this is exactly what you would have done if you were not using the initial excitation based law. The initial excitation based law now has terms in addition to what you would do in the certainty equivalence based adaptive update law. Now, suppose there is no initial excitation either; suppose our application is unfortunately such that there is no possibility of initial excitation, forget about persistent excitation. What happens is that these terms are then not necessarily definite. What that means is that they do not really contribute to the Lyapunov analysis, so I might as well ignore them, because they are only negative semi-definite at best. You can choose to ignore them, or you can keep them and use them in Barbalat's lemma. But essentially what you will be able to obtain in the absence of excitation is that e goes to zero, because of this term. Again, we do not repeat all the signal-chasing analysis anymore; we have not done that for a while, because I assume all of you have seen enough of these same steps to understand which terms will actually go to zero. So I can very easily prove that this term in V dot goes to zero, and you can of course prove that YF theta tilde goes to zero; in fact, YIF theta tilde also goes to zero. You can prove both of these. So these may be some kind of attractive sets, attractive invariant sets. If you remember, we talked about attractive invariant sets when we discussed projection based adaptive control. Similarly, you will also get some kind of attractive invariant set in the absence of excitation. But of course, we still end up proving that e goes to zero, and this is what we do in standard adaptive control anyway: we end up proving that the tracking errors go to zero.
But of course, we cannot guarantee anything about the parameter estimation errors going to zero. So learning of the parameters is not guaranteed in the absence of any kind of excitation. But if you have even something as basic as initial excitation, then you are guaranteed parameter convergence. And in the absence of initial excitation, things do not go bad: you still have bounded results, that is, e and theta tilde will be bounded, because V dot is negative semi-definite, and on top of that the tracking errors go to zero. So the last point we made last time, that the behavior in the absence of excitation is not evident, can be resolved very easily by making this simple modification to the adaptive law. This new adaptive law, as you notice, contains a mix of the initial excitation based adaptation law and the standard certainty equivalence adaptation law. So I hope you see that we can combine the power of these two methods, the initial excitation based law with its filters and the standard certainty equivalence adaptive controller, to get the best of both worlds. In the absence of excitation, I still get nice tracking performance and bounded parameter errors; in the presence of initial excitation, that is, some excitation for some initial time, I get parameter convergence as well. So this last point is something we have resolved. Great. Sorry for the long initial preamble, but it was rather essential for understanding what initial excitation based adaptive laws are all about. Whenever there is a new paradigm, it is important to understand it well, to see what good things it has and what its limitations are. What we want to do now is to look at the combination of the backstepping based ideas we have seen before with this initial excitation based adaptive law.
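A minimal simulation sketch of this combined law, under assumptions of my own choosing (the error dynamics e dot = -e + Z theta tilde from the analysis above, a hypothetical regressor that is excited only on an initial window and identically zero afterwards, and arbitrary gains), illustrates both claims: the tracking error converges, and with only initial excitation the parameter error converges too:

```python
import numpy as np

# Euler simulation of the combined update law: IE-based filter terms plus the
# certainty-equivalence term (1/lambda) * Z^T * e. All signals and gains are
# hypothetical; the regressor is rich only on [0, 3] (IE but not PE).

dt, T = 1e-3, 12.0
sigma, lamb, muF, muIF = 5.0, 1.0, 10.0, 100.0

e = 1.0                                   # tracking error
theta_t = np.array([1.0, -1.0])           # parameter error theta_tilde
YF = np.zeros(2)                          # first-layer filtered regressor
YIF = np.zeros((2, 2))                    # second-layer filter (PSD by construction)

for k in range(int(T / dt)):
    t = k * dt
    # regressor row Z(t): rich on the initial window, then identically zero
    Z = np.array([np.sin(3 * t), np.cos(2 * t)]) if t < 3.0 else np.zeros(2)
    # filter layers (zero initial conditions, as in the lecture)
    YF = YF + dt * (-sigma * YF + Z)
    YIF = YIF + dt * np.outer(YF, YF)
    # combined update: IE terms + certainty-equivalence term
    theta_t_dot = -(muF * np.outer(YF, YF) + muIF * YIF) @ theta_t \
                  - (1.0 / lamb) * Z * e
    e_dot = -e + Z @ theta_t
    theta_t = theta_t + dt * theta_t_dot
    e = e + dt * e_dot

print(abs(e), np.linalg.norm(theta_t))  # both small: tracking and learning
```

If the excitation window is removed entirely (Z identically zero), the same code still keeps e and theta tilde bounded with e going to zero, matching the boundedness discussion above, but theta tilde no longer converges.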
Now, I really hope you remember that this initial excitation based adaptive control design did not really rely on the dynamics very much. So one would hope that the dynamics itself should not really impact the initial excitation based design; but let us see how these two get combined, that is, the backstepping based ideas and the initial excitation based adaptive controllers. The first thing to notice is that we are still looking at the matched uncertainty case only; we are not considering the unmatched case here. So here we have the standard double integrator with unknowns a1 and a2, and the control in the same equation; that is the matched case. The objective, as usual, is to drive the errors e1 and e2 to zero, where e1 is x1 minus r and e2 is x2 minus r dot. Before we go forward with any control design, just like before, we write the system in regressor-parameter form. And we only look at the second equation, because it is the one with the uncertainty; we do not care about writing the first one in any particular form. So we write this piece of the dynamics as Y beta equal to u: if I take these terms to the left, I have Y beta equal to u. As usual, there is over-parameterization here, because of the additional one that I put in. And Y is of course x2 dot and minus x1 and minus x2. Now remember, as usual, that this quantity, let me mark it properly, x2 dot, is not measured. This is pretty standard: if you think of this as a mechanical system, x2 dot would be like an acceleration term, and the usual assumption is that only the states are measured and not their derivatives.
Of course, you do have mechanical systems with accelerometers where acceleration is directly measured, but that is not very common. So x2 dot is not measured; remember that. This is exactly like the single integrator case, where you also had one unmeasured quantity, but we showed that even with this unmeasured quantity you can do an integration by parts and still implement the filter. The same holds here. So I define one layer of filters just like before: YF dot is minus sigma YF plus Y, and I also define a filter on the control, because that is the other known quantity in this equation; so we define filters on both Y and U. The filter gain, or bandwidth, is exactly the same: some positive gain sigma, and the initial conditions are assumed to be zero. The important thing to remember, and we are not showing it here again, is that these filters are implementable, with the same logic as before. There are two pieces in Y: one piece contains the two terms which are available through measurements, and then there is x2 dot, which is not. So we just do an integration by parts to remove it, and once we do that, the filter becomes implementable. The other thing to remember is that, just like in the previous case, only the definition of Y has changed; the structure of the regressor-parameter form has not changed. And this is the whole reason why this design is decoupled from the dynamics itself: the dynamics has changed, and that change is encoded in the regressor Y, but the structure Y theta equal to U has not changed. So I would say the structure is dynamics agnostic.
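The integration-by-parts trick can be made concrete with a small sketch. Here y_F denotes the filtered version of the unmeasured x2 dot; the implementable form propagates a filter z driven only by the measured x2 (with z(0) = x2(0)) and recovers y_F as x2 minus z. The test signal x2 = sin t is a hypothetical choice so that the "unmeasured" derivative cos t is available to build the ground truth for comparison:

```python
import numpy as np

# For y_F with y_F_dot = -sigma*y_F + x2_dot, y_F(0) = 0, integration by
# parts gives the equivalent implementable form:
#   z_dot = -sigma*z + sigma*x2,  z(0) = x2(0),  y_F = x2 - z,
# which uses only the measured state x2, never x2_dot.

dt, T, sigma = 1e-3, 5.0, 2.0

yF_direct = 0.0          # non-implementable form, driven by x2_dot (reference)
z = np.sin(0.0)          # implementable form: z(0) = x2(0)
max_err = 0.0

for k in range(int(T / dt)):
    t = k * dt
    x2, x2_dot = np.sin(t), np.cos(t)
    yF_direct += dt * (-sigma * yF_direct + x2_dot)   # uses x2_dot (not available in practice)
    z += dt * (-sigma * z + sigma * x2)               # uses only measured x2
    yF_implementable = np.sin(t + dt) - z             # y_F = x2 - z at the updated time
    max_err = max(max_err, abs(yF_implementable - yF_direct))

print(max_err)  # the two forms agree up to Euler discretization error
```

Differentiating y_F = x2 - z and substituting z dot reproduces y_F dot = -sigma y_F + x2 dot exactly, which is why the two trajectories coincide in continuous time.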
Yes, it does not really depend on the dynamics of the system; the structure is Y theta equal to U, the standard regressor-parameter form. And because the structure is not modified in the filtered variables, you still have the equation UF equals YF theta. Nothing has changed here, and this is nice. So those are the important points: I still write the system in the regressor-parameter form Y theta equal to U, because of which I have UF equals YF theta, and I design the filters identically, even though the dynamics has changed, indeed even the order of the dynamics has changed. The only thing is that, similar to the previous case, there is still one unmeasured quantity, but we deal with it using integration by parts, and so YF is still an implementable filter. Excellent. Now, if you look at what we did in the single integrator example, we add another filter layer. The first filter layer helps improve performance; it comes from Slotine's original work in the 80s, but it does not alleviate the need for persistence of excitation. In order to do that, we need a second layer of filters. That is what we did in the single integrator case, and for the double integrator, or a hundred integrators, or any kind of dynamics, you would do the exact same thing; nothing changes. So the second layer filter is of this form: YIF dot is YF transpose YF with initial condition zero, and UIF dot is YF transpose UF, with initial condition zero again. And as before, YIF is positive semi-definite. Why? Because it starts at the zero matrix, and its derivative YF transpose YF is a positive semi-definite matrix.
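Both claims here, that YIF stays positive semi-definite and that the regressor-parameter structure Y theta = U survives both filter layers as U_IF = Y_IF theta, can be checked with a short numerical sketch (theta and the regressor signal below are made-up values for illustration):

```python
import numpy as np

# (1) Y_IF(0) = 0 and Y_IF_dot = Y_F^T Y_F >= 0, so Y_IF stays PSD.
# (2) From Y*theta = U, linearity of the filters gives U_F = Y_F*theta
#     and then U_IF = Y_IF*theta; we verify both numerically.

dt, T, sigma = 1e-3, 4.0, 5.0
theta = np.array([1.0, 0.5, -0.3])        # hypothetical true parameter

YF, UF = np.zeros(3), 0.0
YIF, UIF = np.zeros((3, 3)), np.zeros(3)

for k in range(int(T / dt)):
    t = k * dt
    Y = np.array([np.sin(2 * t), np.cos(3 * t), 1.0])  # 1x3 regressor row
    U = Y @ theta                                      # Y*theta = U by construction
    # first filter layer (zero initial conditions)
    YF = YF + dt * (-sigma * YF + Y)
    UF = UF + dt * (-sigma * UF + U)
    # second filter layer
    YIF = YIF + dt * np.outer(YF, YF)
    UIF = UIF + dt * YF * UF

min_eig = np.linalg.eigvalsh(YIF)[0]
residual = np.linalg.norm(UIF - YIF @ theta)
print(min_eig, residual)  # min_eig non-negative, residual essentially zero
```

Because every ingredient is linear in theta, the identity U_IF = Y_IF theta holds exactly even in the discretized filters, not just in the limit.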
Therefore YIF remains a positive semi-definite matrix as it propagates. Excellent. For UIF you have YF transpose UF, and the structures are again chosen carefully; in what sense? In the sense that I get a similar looking filter equation: the original regressor-parameter form translates to the first layer filtered regressor-parameter form, and then to the second layer filtered regressor-parameter form. The form does not change at all, although the dynamics have changed; as is evident, the entire change in dynamics is encapsulated inside this quantity YF, and of course also inside UF. Now again, we choose our update law, that is, theta tilde dot is minus theta hat dot, as minus mu F YF transpose times (UF minus YF theta hat) minus mu IF times (UIF minus YIF theta hat). The update law is also exactly the same. Remember, we are not yet talking about the modification we proposed, with the certainty equivalence adaptive law added to it; this is the standard initial excitation based adaptive law. If you take it this way, you know that UF minus YF theta hat is going to be equal to YF theta tilde, and, by our standard regressor-parameter forms in the filtered and double filtered systems, UIF minus YIF theta hat becomes equal to YIF theta tilde. Once you substitute these, you get minus mu F YF transpose YF theta tilde minus mu IF YIF theta tilde. And as usual, mu F and mu IF are some positive constants; that mu F and mu IF are positive is already mentioned here, so I will not repeat it. Excellent. Then we of course write the usual error dynamics, which gives me e1 dot is e2, and e2 dot is x2 dot minus r double dot, that is, a1 x1 plus a2 x2 plus u minus r double dot.
Then we again write this as Z times theta; this is where Z shows up, and you have the control and the r double dot terms as well. So this is what you have for your dynamics. Excellent. So what did we look at today? We spent a decent bit of time trying to really understand the implications of initial excitation based adaptive control. More specifically, we looked at a small modification to the IE adaptive controller, adding the certainty equivalence adaptive control term to it, which ensures that even in the absence of excitation, the performance of the IE adaptive controller does not deteriorate. That was nice. After that, we started looking at the backstepping based extension of our initial excitation adaptive controller, and we will continue with the same in the subsequent session. I hope you are enjoying our discussion on initial excitation based adaptive control, and I hope to see you again in the next session. Thank you.