Hello, everyone. Welcome to yet another session of our NPTEL course on Nonlinear and Adaptive Control. We are in the last leg of this course, close to the end of week number 11, and we hope that whatever we have learned here will help you design algorithms for systems such as the ones you see in the background. Last time we had started to discuss a double integrator problem using the initial excitation method. This method helps us avoid the need for persistent excitation in parameter learning. Remember that learning is one of the key paradigms or requirements in neural network based adaptive controllers, like the one you see in this picture as well. Other systems that you see here, like drones, aircraft, and spacecraft, are real applications of adaptive control and have in fact been tried at a large scale at the research level, and at some implementation level also. Now, in this double integrator type of system, what we were starting to emphasize is that the construction of the adaptive law is not impacted significantly. The reason is that the regressor-parameter structure we use to design our update law is agnostic to the dynamics of the system. It doesn't matter what the order is, or what kind of dynamical system it is. As long as you can write the system in a regressor-parameter form, that is, Y theta equal to u, you can design your adaptive controller for any system. In fact, even if there were nonlinearities here, those nonlinearities would simply get absorbed into the regressor Y, and that does not really affect how you design your adaptive update law. So of course we design these two layers of filters, yf, uf and yif, uif, and the structures are chosen in such a way that the regressor-parameter equation structure remains identical for both the first-layer and the second-layer filters.
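For reference, here is one standard way such a two-layer filter construction is written out. The transcript does not show the slides, so the filter pole k_f and the exact placement of the gains are my assumptions; only the variable names yf, uf, yif, uif and the zero-initial-condition requirement come from the lecture.

```latex
% Regressor-parameter form of the plant: Y\,\theta = u_p,
% with \theta the unknown parameter vector.
% First layer (assumed stable first-order filters, pole k_f > 0,
% zero initial conditions):
\dot{y}_f = -k_f\, y_f + Y, \qquad \dot{u}_f = -k_f\, u_f + u_p,
\qquad y_f(0) = 0,\; u_f(0) = 0
\;\Rightarrow\; y_f\,\theta = u_f .
% Second layer (integrals of the first-layer signals,
% again with zero initial conditions):
\dot{y}_{if} = y_f^{\top} y_f, \qquad \dot{u}_{if} = y_f^{\top} u_f,
\qquad y_{if}(0) = 0,\; u_{if}(0) = 0
\;\Rightarrow\; y_{if}\,\theta = u_{if} .
```

Both identities follow because each pair of signals obeys the same linear filter from the same (zero) initial condition, which is exactly why the regressor-parameter structure survives both layers.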
And so this is a rather important feature, which is what helps in the subsequent analysis. We have already seen this kind of analysis for the single integrator system. So the adaptive law is of course chosen again in the standard way, and you get theta tilde dot with nice non-positive terms: minus mu_f yf transpose yf theta tilde and minus mu_if yif theta tilde. We know for a fact that yif is positive semi-definite, and so is yf transpose yf. But you also know that in the presence of initial excitation, yif is positive definite, so that becomes a legitimately negative term. Yeah. Now, what we are not doing at this stage is introducing the certainty equivalence adaptive law yet. We can talk about that, but because you have seen it for the single integrator, it should be possible for you to implement it for the double integrator as well, so we will look at that later as the need arises. For now we are just looking at the standard initial excitation based adaptive law. So we construct the error dynamics, of course, using e1 and e2, and the dynamics turn out to be e1 dot equals e2 and e2 dot equals z theta plus u minus r double dot. All right. Great. So now we start our standard backstepping design. How do we do it? We look at the first subsystem. I'm sorry, let me mark the lecture first. We are on lecture 11.5. So how do we do the backstepping design? We start by looking at the first subsystem and pretend that e2 is our control. If e2 were the control, what would be a good control to drive e1 to 0? Simply minus k1 e1, for some positive gain k1. All right, so that would be the desired value of e2. Now, it is evident to us that e2 cannot simply be made equal to a desired value, but we can make it track the desired value. And that is how we define the backstepping error, e2 bar, which is e2 minus e2 desired.
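Written out, the update law and error dynamics being described look like this. The sign convention theta tilde = theta minus theta hat is my assumption (the signs flip under the opposite convention); the structure of the terms follows the lecture.

```latex
% Update law built from the two filter layers:
\dot{\hat\theta} = \mu_f\, y_f^{\top}\,(u_f - y_f\,\hat\theta)
                 \;+\; \mu_{if}\,(u_{if} - y_{if}\,\hat\theta)
\;\Rightarrow\;
\dot{\tilde\theta} = -\,\mu_f\, y_f^{\top} y_f\, \tilde\theta
                     \;-\; \mu_{if}\, y_{if}\, \tilde\theta .
% Tracking errors for the double integrator
% \dot x_1 = x_2,\ \dot x_2 = z\,\theta + u, with reference r(t):
e_1 = x_1 - r, \qquad e_2 = x_2 - \dot r
\;\Rightarrow\; \dot e_1 = e_2, \qquad \dot e_2 = z\,\theta + u - \ddot r .
```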
And that becomes e2 plus k1 e1 in this case. And again, we have seen this backstepping design before, so this should not be something new for you. It is just a couple of weeks old, this material, so I hope it is not coming to you as any kind of surprise. If it is, then I would strongly encourage you to go back and look at our backstepping design lectures. So our new system is essentially going to be written in terms of e1 and e2 bar, where e1 is, of course, the usual error variable, and e2 bar is the backstepping error variable. What we ultimately want is to drive e1 and e2 to 0, so our aim will now be to drive e1 and e2 bar to 0. And remember that e2 bar going to 0 implies e2 goes to 0. Yeah, why? Because if you look at e2 bar and it is going to 0, then k1 e1 is already going to 0 by the previous fact that e1 is going to 0, and we are left with the conclusion that e2 is also going to 0. So making the backstepping error variable go to 0 is equivalent to making the original variable track as well, which is what we wanted. Great. So that is the backstepping design. Now we, of course, want to do the analysis. We do the control design using a candidate Lyapunov function, because it is not evident, without doing this analysis, what the control should be. This is how backstepping works. The first piece of the Lyapunov candidate is e1 squared over 2, from the first piece of the dynamics. The second term is the quadratic in the backstepping error. And the third term is the usual initial excitation based term: there we introduce the lambda, and then a theta tilde norm squared. Remember that e1 and e2 are scalars, but theta tilde has three components; it is a vector in R3, so there is a norm. So now we, of course, expand this. I get e1 e1 dot.
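In symbols, the backstepping error and the Lyapunov candidate being described are:

```latex
\bar e_2 = e_2 - e_2^{\mathrm{des}} = e_2 + k_1 e_1, \qquad
V = \tfrac{1}{2}\, e_1^2 \;+\; \tfrac{1}{2}\, \bar e_2^{\,2}
    \;+\; \tfrac{\lambda}{2}\, \|\tilde\theta\|^2 .
```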
And e1 dot is just e2, which I write in terms of the new variables: e2 is e2 bar minus k1 e1. And then e2 bar dot is just e2 dot plus k1 e1 dot, which is this expression. And then, of course, I am left with this nice set of terms from my parameter update. All right? Great. Now what? I have this nice negative term in e1, which I keep. Then I have this term in e1 e2 bar, which I club with this one. That's what I do: write the nice term, club this one in here. So I get z theta plus u minus r double dot plus k1 e2, as before. Then I get this additional term e1, which is of course coming from here. And these terms are the same as before; I have not made any change here, except for writing this term as a norm squared term. All right? That is the only difference. Now, what would be a logical value of the control? As always, try to cancel all the funny-looking terms and introduce a good term. This is the basic logic. So what are the funny terms? This one I cannot completely cancel, because theta is unknown, so I introduce the parameter estimate here and try to cancel it with my estimate. This one I can cancel, so I do. These two I can cancel, so I do. And then I introduce a nice negative term with a positive gain k2. Yeah? And once I do this, I get V dot as minus k1 e1 squared minus k2 e2 bar squared, and since I could not completely cancel that term, I am left with this e2 bar z theta tilde term. All right? I hope that is evident. Now, we of course have these two nice negative terms here, which is what we try to use. So we use the usual sum-of-squares trick to split this mixed term into two pieces, this one and this one, using ab less than or equal to a squared plus b squared over 2. Yeah? And now it is evident that this piece can be combined with this term, so I do that, and this piece can be combined with that term. So notice that, as usual, when I came from here to here, I ignored this.
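Since "this term" and "that term" refer to the slides, it may help to see the control choice and the resulting V dot written out. This is a reconstruction consistent with the error dynamics above; the ordering of terms inside u is my choice.

```latex
% Control: cancel what can be cancelled, use the estimate for z\theta,
% and add the stabilizing terms -e_1 - k_2 \bar e_2:
u = -\,z\,\hat\theta \;+\; \ddot r \;-\; k_1 e_2 \;-\; e_1 \;-\; k_2\,\bar e_2,
% which gives
\dot V = -\,k_1 e_1^2 \;-\; k_2 \bar e_2^{\,2}
         \;+\; \bar e_2\, z\,\tilde\theta
         \;-\; \lambda\mu_f\, \|y_f\,\tilde\theta\|^2
         \;-\; \lambda\mu_{if}\, \tilde\theta^{\top} y_{if}\, \tilde\theta .
% The mixed term \bar e_2 z \tilde\theta is then split by Young's
% inequality, ab \le (a^2 + b^2)/2, and each piece is dominated by
% the negative terms when y_f is initially exciting and \lambda is
% chosen large enough.
```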
I removed this term because it is not definite; it is at most negative semi-definite. Yeah? And so it is not very clear if I can use it to dominate a term. Therefore, it is not really useful for me in the Lyapunov analysis. The important thing to remember is that dropping it does not harm my analysis. Therefore, I can drop this term. And because I dropped a negative semi-definite term, I am guaranteed that the expression is less than or equal to what remains after dropping. OK? All right? Great. So if I now look at what is left with me, I know that this term gets combined with this term to give me that, and this term gets combined with that one to give me that. OK? Now, as usual, there is this question of choosing lambda. So remember that mu_if and mu_f and all these things are usually fixed beforehand. At this stage, I am only left with the ability to choose lambda. Sigma_1i is also already chosen. So I can go from here to here if yf is initially exciting. Remember, I can replace yif by sigma_1 times the identity only if yf was initially exciting. So this is where I have used the assumption of initial excitation, again, similar to before. Yeah? And now, because this z is a function of the state also, the choice is complicated, right? It is not completely straightforward to say that this particular value of lambda will work. But the important thing for us is the existence of such a large lambda. Yeah? I just have to choose lambda large enough, and if I do, I am fine. And most importantly, I get these nice negative terms here. And of course, in the presence of initial excitation, I can show convergence of e1, e2 bar, and theta tilde. All right? So in the presence of initial excitation, everything is great. Yeah? Everything is great. No problem there, just like before: I have convergence of all three quantities.
I only require initial excitation and not persistent excitation, so my performance is really nice. Yeah? I can dominate this term with a large lambda, a lambda which does not go into my control implementation, so it doesn't matter to me. Now, the issue that we are left with, as before, is: what happens if you don't even have initial excitation? We are still left with that same question. In order to answer it, I would again go back and try to modify my adaptive law. So what I would do is add another term here: minus lambda theta tilde transpose v. And this assumes that theta tilde dot is minus mu_f yf transpose yf theta tilde, minus mu_if yif theta tilde, minus v. So this is what we assume: we think of introducing another additional term, v, in the parameter update law. So why am I trying to do this? Because we already saw that everything is nice when there is initial excitation. But if there is no initial excitation, both of these negative terms are gone; I cannot really use them to dominate anything. And I still have this mixed term, which I do not know how to dominate in the absence of excitation, because those two terms are gone. So then the only solution I would have is to somehow get rid of this mixed term, and that is what we are trying to do. If there is excitation, everything was nice, an excellent result. But in the absence of excitation, you want to make a modification to the adaptive law so that you don't have to deal with this mixed term, which you can no longer cancel or dominate. So I propose this sort of additional term and try to see what it should be. The additional term has to come from the Lyapunov analysis. So this term remains as it is here. This term remains as it is.
And here, too, this term remains as it is. Now at this stage, if you notice, I can look at these two terms together, because if I take a transpose of this term, it is theta tilde transpose z transpose e2 bar. The e2 bar can be moved around freely because it is a scalar, but I do have to take the transpose. So this theta tilde transpose and that theta tilde transpose match. So if we choose v to be z transpose e2 bar divided by lambda, then this term and that term cancel out, so the mixed term no longer exists. All right, I hope this makes sense. Essentially, I am able to choose a v that cancels these two terms, and once I cancel them, I have no more mixed terms. So this term is not there, and this one is also missing. So beyond this point, my V dot with this new adaptive law becomes minus k1 e1 squared, minus k2 e2 bar squared, minus lambda mu_f norm of yf theta tilde squared, minus lambda mu_if theta tilde transpose yif theta tilde. All right, so there is no more mixed term. Yeah, so this is already less than or equal to 0. So even if there is no excitation, I can still prove that yif theta tilde and yf theta tilde go to 0, and you will also be able to prove that e1 and e2 bar go to 0, while theta tilde remains bounded, even in the absence of initial excitation. So all these quantities remain bounded, which is rather nice. So with this small change, my adaptive law, as you can see, becomes this. If I write it out here, theta tilde dot is minus mu_f yf transpose yf theta tilde, minus mu_if yif theta tilde, and then minus 1 over lambda z transpose e2 bar. Yeah, so if you see, this last piece is like the certainty equivalence adaptive law, and the rest is the usual initial excitation part.
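Putting the modification together, the final update law and the resulting V dot can be written as follows. I am using the convention theta tilde = theta minus theta hat, so the extra term enters theta hat dot with a plus sign and theta tilde dot with a minus sign, matching the lecture.

```latex
% Modified update law: two initial-excitation layers plus the
% certainty-equivalence-style term:
\dot{\hat\theta} = \mu_f\, y_f^{\top}\,(u_f - y_f\,\hat\theta)
                 \;+\; \mu_{if}\,(u_{if} - y_{if}\,\hat\theta)
                 \;+\; \tfrac{1}{\lambda}\, z^{\top}\, \bar e_2,
% equivalently \dot{\tilde\theta} picks up -\tfrac{1}{\lambda} z^{\top}\bar e_2.
% The mixed term then cancels exactly, leaving
\dot V = -\,k_1 e_1^2 \;-\; k_2 \bar e_2^{\,2}
         \;-\; \lambda\mu_f\, \|y_f\,\tilde\theta\|^2
         \;-\; \lambda\mu_{if}\, \tilde\theta^{\top} y_{if}\, \tilde\theta
\;\le\; 0 .
```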
If I make this modification with this additional term here, then I do get nice performance even in the absence of initial excitation. So, great. You have now seen that it is very much possible to carry out this adaptive control design with initial excitation for double integrators also. In fact, I would strongly encourage you to try it for the unmatched case and so on, to see that it is, in fact, applicable to a large variety of dynamical systems. Of course, you would also see the certainty equivalence modification there, and I would encourage you to work that out. So if you do have initial excitation, which is a significantly weaker requirement, you are in very good shape. You can actually deal with many, many different dynamical systems, and your update law construction is decoupled from the details of the dynamics. Not decoupled in the sense of being independent of the dynamics: it depends on the dynamics through the regressor, the filtered regressor, the filtered control, and so on. But you do not see those details in the expressions, and that is rather nice in terms of the construction. The construction has a standard structure; only your regressor keeps changing. So you can pretty much use the same update law and plug it into a different system just by modifying the regressor. And this is, of course, very useful in implementations. All right? Excellent. So what have we seen this week? We started with the idea of initial excitation based adaptive control. We understood that persistence of excitation is a very stringent requirement; we had, of course, seen how persistent excitation is used to prove parameter learning. And now we wanted to do something better, so we looked at initial excitation based adaptive control. The idea was relatively nice, simple, straightforward: you write your system in a regressor-parameter form.
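As a plug-and-play check of the whole design, here is a minimal simulation sketch. Everything specific in it is a hypothetical choice for illustration, not the lecture's exact setup: the regressor z(x) = [x1, x2, x1*x2], the true parameters, all gains, and the simplification that the identifier can access the signal z*theta = x2dot - u directly (in practice one filters the plant equation, as in the two-layer construction, to avoid differentiation).

```python
import math

# Hypothetical plant: x1' = x2, x2' = z(x) @ theta + u, theta unknown.
THETA = [1.0, -0.5, 0.8]          # true parameters (hidden from the controller)
K1, K2 = 2.0, 2.0                 # backstepping gains
MU_F, MU_IF, LAM = 2.0, 0.5, 1.0  # adaptation gains and lambda
KF = 1.0                          # assumed first-layer filter pole
DT, T_END = 1e-3, 20.0

def z_of(x1, x2):
    # Hypothetical regressor; any z(x) with x2' = z theta + u would do.
    return [x1, x2, x1 * x2]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def simulate():
    x1, x2 = 0.5, 0.0
    th_hat = [0.0, 0.0, 0.0]
    yf = [0.0, 0.0, 0.0]                 # filtered regressor, zero ICs
    uf = 0.0                             # filtered "output", zero ICs
    yif = [[0.0] * 3 for _ in range(3)]  # integral of yf^T yf
    uif = [0.0, 0.0, 0.0]                # integral of yf^T uf
    t = 0.0
    while t < T_END:
        r, rd, rdd = math.sin(t), math.cos(t), -math.sin(t)
        e1, e2 = x1 - r, x2 - rd
        e2b = e2 + K1 * e1               # backstepping error
        z = z_of(x1, x2)
        # control: cancel z th_hat, r'' and k1 e2, add -e1 - k2 e2bar
        u = -dot(z, th_hat) + rdd - K1 * e2 - e1 - K2 * e2b
        f = dot(z, THETA)                # = x2' - u (assumed measurable here)
        # update: two initial-excitation layers + 1/lambda z^T e2bar term
        pred1 = uf - dot(yf, th_hat)
        dth = [MU_F * yf[i] * pred1
               + MU_IF * (uif[i] - dot(yif[i], th_hat))
               + (1.0 / LAM) * z[i] * e2b
               for i in range(3)]
        # forward-Euler integration of plant, filters, and estimate
        x1 += DT * x2
        x2 += DT * (f + u)
        yf = [yf[i] + DT * (-KF * yf[i] + z[i]) for i in range(3)]
        uf += DT * (-KF * uf + f)
        for i in range(3):
            uif[i] += DT * yf[i] * uf
            for j in range(3):
                yif[i][j] += DT * yf[i] * yf[j]
        th_hat = [th_hat[i] + DT * dth[i] for i in range(3)]
        t += DT
    return e1, e2b, th_hat
```

Swapping in a different plant only means changing `z_of` and the error definitions; the update law itself is untouched, which is exactly the plug-and-play feature being described.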
That's the first thing. Then you design two levels of filters, with zero initial conditions. Once you do that, you have the standard regressor-parameter form in the filtered variables also, and because of this, you can construct a very straightforward parameter update law. With this, you then go on to design your controller like you normally would, and your parameter update law is, of course, already designed. We also talked about the parameter dependent version of initial excitation, just like we have for persistent excitation. Because more often than not, your regressor depends on the state, which means it depends on the solutions, which means it depends on the initial conditions, which effectively act like parameters. Therefore, you do need to define parameter dependent notions of initial excitation also. But once you do that, the proof of convergence goes through, and it is significantly simpler in this case, because in the Lyapunov analysis you directly start to see negative terms if you have initial excitation or parameter dependent initial excitation. So things are significantly nicer in terms of the proof; you do not need more complicated results like the integral lemma that was required in the parameter dependent version of persistent excitation. We then, of course, realized a couple of things. One is that choosing lambda, the lambda used to dominate those mixed terms, is not easy. And we also saw that if there is no initial excitation either, so if you don't even satisfy this weak requirement, then things may not work very well for this kind of adaptive controller. What we saw was that a simple modification of this adaptive controller, where we add the certainty equivalence adaptive law to this update, alleviates this issue. One, you don't need to choose a lambda anymore, because there are no mixed terms; there is nothing to dominate.
So there's no need to choose a lambda. And two, in the absence of excitation also, you get bounded trajectories and convergence of tracking errors, which is what adaptive controllers fundamentally promise. So you don't go back on that fundamental promise of adaptive control, and on top of that, you add this nice feature of requiring excitation only at the initial time, and not over infinite time like you do when you talk about persistent excitation. So I hope you found this new method rather interesting and impressive. If you've already designed some adaptive laws for your dynamical systems, I would recommend that you do the same with this initial excitation based method, and I would also recommend that you compare the performance. That would be interesting for you to report and for me to see, so do let me know if you see any difference, one way or another. In the subsequent week, we are going to focus a little bit on neural network adaptive control, so learning ideas and how we can get some provable results in learning. We will essentially follow an article by Frank Lewis in that week, in order to get a feel for how adaptive control and learning are intrinsically connected. So I really hope you will join me in the last week, which is sort of an excursion into more modern areas. Of course, the paper is not very modern, but because computation has become significantly cheaper now, learning and deep learning have become more popular tools for many systems engineers. So we will look at some of that in this upcoming final week of our NPTEL course. I hope you all enjoyed these sessions, and I hope to see you again next week. Thank you.