Hello. Welcome to yet another session of our NPTEL course on nonlinear adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are into the sixth week of our lectures on adaptive control, and in this week we have already started looking at our first adaptive control problem. Before we move on, of course, we want to remind ourselves that the algorithms we develop and the systems we analyze with the tools we learn are meant to design algorithms that drive systems such as the spacecraft that you see in the background and other autonomous systems. That is what we are targeting to do with the tools that we learn in this course. Now, a little more detail on what we have been looking at. We started with our first adaptive control problem, and this was a first order scalar system. It is a first order differential equation describing the system, with a scalar-valued state, and therefore we are dealing with a first order scalar system where the unknowns appear linearly and are assumed to be constant. The control objective was to track a desired trajectory r of t. Therefore we defined an error variable; this is the standard step. We defined an error variable e of t, which is essentially just x minus r, and we also wrote down the dynamics for this error variable. Since we know the dynamics for the state, writing the dynamics for the error variable is rather easy. Then the first step was to do a control design for the case when the parameter theta star is assumed to be known. We did that by choosing some kind of a nice target system. We talked about the properties of a target system: it should be viable, that is, it has to have some kind of a matching with the original system, and secondly, it has to ensure that the error has nice asymptotic stability properties.
So, this is the target system we chose in this case, and it satisfies both our requirements. In order to get to this target system, we require a control which looks like this, and it is evident that the parameter appears in this control, because we essentially cancel the nonlinearity. Once we are able to do this in the known-parameter case, we choose a radially unbounded Lyapunov function and compute its derivative, which turns out to be minus k e squared, giving asymptotic stability and all the nice things, and so on and so forth. Excellent. Beyond this, it was obvious that just by using the Lyapunov theorem, the error goes to zero and all the nice properties follow. Now, how do we deal with the uncertain or unknown parameter case? Through the application of the certainty equivalence principle, which essentially says that you retain the same structure of the controller, but you replace the true parameter value, which is unknown to you, by its estimate. This estimate is going to be designed subsequently. With this altered controller, we get altered error dynamics which now contain the parameter error, denoted theta tilde. So what is the idea? The idea is that we use a Lyapunov candidate to come up with the theta hat dot dynamics. How do we do that? We take the earlier Lyapunov candidate we had and simply add to it a quadratic term in the parameter error, scaled by this gamma factor, which is called the adaptation gain. You will see why: because it appears in the adaptation law. We then simply start taking the derivative. We of course do not know what theta hat dot is, we are yet to prescribe it, but we know what e dot is, and that is substituted in here. And it so happens, magically, like I said, but actually not magically but by construction, that you have theta tilde appearing in both these terms, and therefore theta tilde can be taken out as a common factor.
And so you have this sort of an equation. And what do you do? You simply ensure that this quantity is zero. Why do you have to ensure this quantity is zero? Because it is a mixed term. This term is a nice negative quadratic term, but there is no sign definiteness in that term, which means I cannot guarantee it is positive or negative definite. In fact, we want negative definite terms in V dot. Therefore, the best thing we can do is to cancel this term and make it go away. So this is what we choose: you choose theta hat dot from here, so that the term just goes away, and this is essentially your adaptation law. You see that this gamma appears here, which is what we call the adaptation gain, because by changing gamma we change how fast the adaptation happens. If I make gamma really small, then the adaptation is really fast, and so on. The important thing to remember, and a mistake that a lot of folks make, is that your update law or your control law cannot depend on theta tilde, because theta tilde is unknown. Theta tilde is unknown because it contains theta star. Therefore, theta tilde cannot appear in the control law or the update law, because if it does, then your controller or your update law is not implementable, and this is, of course, no longer adaptive control. It is actually not something that can be used at all. Great. So then what do we do? That was the final stage: we got V dot equal to minus k e squared. What we notice is that the V dot we get in the adaptive case, that is, in the unknown-parameter case, is the same as the V dot we end up with in the known-parameter case. But there is one major difference: in the known case, there is no theta tilde squared term in V, which is obvious because there is no parameter error; the parameter was assumed to be known.
But in the unknown case, there is in fact a theta tilde squared term in V, which means that in the known case, V dot was negative definite because there was in fact only one state. In this case, we have actually introduced an additional state, theta tilde, because otherwise we do not know how to update our parameter. So the fact that we get the same V dot does not mean we also get negative definiteness. In this case, we get only negative semi-definiteness. Why? Because, as we emphasized many times during the earlier analysis sections, if a function does not contain some of the states of the system, then it cannot be definite; it can only be semi-definite. That is why this V dot is only negative semi-definite. And therefore, by using the Lyapunov theorems, you can only claim that the e and theta tilde states are uniformly stable at zero. So you can only claim uniform stability and nothing more. This is critical, and this is where we got to last time. Today we will see how we go on from here and, in fact, claim all the properties we need, because we need asymptotic convergence, not just uniform stability. Uniform stability is not enough, because it just says that if you start small, you remain small: if you start with small errors, you remain with small errors, and things like that. But that is not enough. We want the errors to actually go to zero. We want to do actual tracking. That is the whole point of this adaptive control theory. So how do we do that? That is what we see in today's lecture, marked as Lecture 6.0. The first thing to remember is what we typically do with mixed terms, that is, the terms that we cannot keep negative definite: we try to cancel them. And that is what we did when we chose our adaptation law. It is as simple as that. Okay. So now what do we do? We only get uniform stability. So we carry out the signal chasing and Barbalat's lemma analysis.
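Before diving into the analysis, it may help to see the closed loop in simulation. Below is a minimal sketch, assuming a plant of the form x_dot = theta_star*f(x, t) + u; the regressor f(x, t) = sin(x), the reference r(t) = sin(t), the gains, and the initial conditions are all illustrative choices, not prescribed by the lecture. The update law theta_hat_dot = (1/gamma)*e*f follows the convention used here, where a small gamma means fast adaptation.

```python
import math

# Minimal sketch (assumed plant form): x_dot = theta_star*f(x, t) + u,
# with theta_star unknown to the controller. Certainty-equivalence control:
#   e = x - r,  u = r_dot - k*e - theta_hat*f(x, t)
# Adaptation law (small gamma means fast adaptation):
#   theta_hat_dot = (1/gamma)*e*f(x, t)
# f, r, gains and initial conditions are illustrative assumptions.

def simulate(k=2.0, gamma=0.5, theta_star=3.0, dt=1e-3, T=20.0):
    f = lambda x, t: math.sin(x)       # example regressor (unknown coefficient)
    r = lambda t: math.sin(t)          # desired trajectory
    r_dot = lambda t: math.cos(t)
    x, theta_hat = 1.5, 0.0            # plant state and parameter estimate
    for i in range(int(round(T / dt))):
        t = i * dt
        e, fx = x - r(t), f(x, t)
        u = r_dot(t) - k * e - theta_hat * fx     # certainty-equivalence control
        x += dt * (theta_star * fx + u)           # forward-Euler plant step
        theta_hat += dt * (1.0 / gamma) * e * fx  # forward-Euler update law
    return x - r(T)                               # final tracking error

final_error = simulate()   # much smaller than the initial error e(0) = 1.5
```

Running this, the tracking error shrinks toward zero even though theta_star is never revealed to the controller, which is exactly the behavior the analysis below establishes.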
Now remember that we already saw a sample of signal chasing and Barbalat's lemma when we were looking at the analysis part of the course, so we should already have a little bit of an idea of what is about to come. If you remember, I had spoken at that time extensively about there being these very standard steps. These are steps that I had said all of you just need to master, in fact memorize, so that you never change the sequence of the steps. Once you just follow these steps, the same set of steps works in almost all cases. In fact, in a lot of new research articles on adaptive control, the authors do not even mention these steps anymore, because it is so standard: once you have a V dot, they pretty much conclude whatever is expected, and they expect that the reader can already follow these steps on their own. Great. So this is the signal chasing analysis. Remember where we are; I will again write out some key points. V was one-half e squared plus gamma over two theta tilde squared. This is of course positive definite, radially unbounded, and all the nice things. And V dot came out to be minus k e squared, which is negative semi-definite. So this is what V and V dot were, and this is where we start. The first thing we claim is that V is lower bounded and non-increasing. That is obvious: V is lower bounded because it is strictly positive definite, and it is non-increasing because V dot is less than or equal to zero. If the derivative of a function of time is less than or equal to zero, then of course the function cannot increase over time. It can do anything but increase; it is a non-increasing function. And we know from these two properties that the limit of V of t as t goes to infinity exists and is finite. We denote this by V infinity.
We are going to use this V infinity in our subsequent calculations. So that is step one. Remember, you should just compare these steps with what we did earlier; I believe we did this for the spring-mass-damper example using Barbalat's lemma. You should compare these steps and you should be able to see that they are rather standard; they are almost the same steps that we did even then, and in the same sequence too. All right. Now, the second point: since V is finite. Why is V finite? Well, V at time zero is of course finite; it does not make sense if you start at an infinite state or infinite parameter or something like that, that is just not realistic. So V of zero is finite. And what do I know? I know that V of t is less than or equal to V of zero. Now, notice that I am using this slightly funny notation: earlier, V was a function of e and theta tilde. Although I had not written it explicitly, it was something like V as a function of e and theta tilde. But now suddenly I am writing V as a function of time. Please do not get confused. Whenever I say V of t, it is actually defined as V of e of t and theta tilde of t. And what are e of t and theta tilde of t? These are solutions, solution trajectories. So there are a lot of things happening here that are implicit, which I am not saying, but that should occur to you and be very clear to you. The first thing is that when I write V of t, that is, V as a function of time, I mean that I am now plugging in the solutions here, not just e and theta tilde as some variables, but the solutions. And though I have written these as e of t and theta tilde of t, they implicitly depend on e of zero and theta tilde of zero. So e of zero is here and theta tilde of zero is here; in fact, each of them depends on both initial conditions. So these are actually also dependent on e of zero and theta tilde of zero.
Please do not forget this. Whatever we get as an outcome of this analysis is not uniform with respect to initial conditions, because once you choose an initial condition, you get some outcome. The good thing is that we do not have to specify which initial condition, because this entire analysis goes through for any initial condition. So we are not that worried, but it should still be there at the back of the mind that this expression V of t that I write so nonchalantly actually contains the initial condition, because V of t is obtained by plugging in the solutions of e and theta tilde as functions of time. That is how you get V of t, and that is how you compute V dot; otherwise there is no question of computing a time derivative, if there is no time dependence. And when we use this notation, remember that this is basically just the directional derivative along the dynamics, which is exactly consistent with V being a function of time. dV by dt is exactly how I would compute it: I would take dV by de times e dot, plus dV by d theta tilde times theta tilde dot. Therefore the two definitions are consistent, and taking the derivative with respect to time is actually a valid operation. So V of t has a lot of hidden meaning; please remember this. Excellent. Now that we know that V dot is less than or equal to zero, V of t is less than or equal to V of zero. What does that mean? It means that V of t is also finite, because V of zero is finite. And if V of t is finite, it means that e of t and theta tilde of t are also finite. And what do we know about signals that are bounded for all time? We know that they belong to the class L infinity, because the L infinity norm is finite; it is just a supremum.
If e of t and theta tilde of t are bounded for all t, then their supremum norm, that is, the L infinity norm, also has to be finite. Right. Now, the next thing we do is integrate both sides of this inequality. It is claimed here that e is L2, from integrating both sides of the V dot inequality. Now, how do we get that? It is not really explained here, so let us try to integrate. If I integrate V dot dt from 0 to t, this is less than or equal to minus k times the integral from 0 to t of e squared of tau d tau. The left-hand side is actually just the integral of dV, which is V of t minus V of zero, and this is less than or equal to minus k times the integral from 0 to t of e squared of tau d tau. So what can I say now? If I take the limit as t goes to infinity on both sides, I get that V infinity minus V of zero is less than or equal to minus k times the integral from 0 to infinity of e squared of tau d tau, which means that the integral from 0 to infinity of e squared of tau d tau is less than or equal to V of zero minus V infinity, divided by k. And what is this integral? It is just the square of the L2 norm of e, and we have just shown that it is finite. So that is what we are saying: by integrating both sides, I get that the 2-norm of e is finite, which means that e is an L2 signal. That is what we have proved: the 2-norm squared is a finite quantity, therefore the 2-norm itself is finite, and if a signal has a finite L2 norm, then the signal belongs to L2. And now, are we done? No, we are not yet done. There is a step in between here, and I am going to mark it as step 3.5.
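Before moving to that in-between step, a quick numerical sanity check of the integration above: along the closed-loop solutions, V of zero minus V of t equals k times the running integral of e squared (since V dot equals minus k e squared exactly), so that integral stays bounded by V of zero over k. The sketch below assumes an illustrative plant x_dot = theta_star*sin(x) + u with reference r(t) = sin(t) and update law theta_hat_dot = (1/gamma)*e*f; all specific choices are assumptions for illustration, not from the lecture.

```python
import math

# Check that V(0) - V(T) matches k * integral_0^T e^2 dt along solutions,
# with V = 0.5*e^2 + 0.5*gamma*theta_tilde^2 and V_dot = -k*e^2 by construction.
# Plant, regressor, reference and gains are illustrative assumptions.

def l2_check(k=2.0, gamma=0.5, theta_star=3.0, dt=1e-4, T=20.0):
    f = lambda x, t: math.sin(x)
    r, r_dot = (lambda t: math.sin(t)), (lambda t: math.cos(t))
    x, theta_hat = 1.5, 0.0
    V0 = 0.5 * (x - r(0.0))**2 + 0.5 * gamma * (theta_star - theta_hat)**2
    int_e2 = 0.0                                  # running integral of e^2
    for i in range(int(round(T / dt))):
        t = i * dt
        e, fx = x - r(t), f(x, t)
        int_e2 += dt * e * e
        u = r_dot(t) - k * e - theta_hat * fx     # certainty-equivalence control
        x += dt * (theta_star * fx + u)           # forward-Euler plant step
        theta_hat += dt * (1.0 / gamma) * e * fx  # forward-Euler update law
    eT = x - r(T)
    VT = 0.5 * eT**2 + 0.5 * gamma * (theta_star - theta_hat)**2
    return k * int_e2, V0 - VT

ke2, dV = l2_check()   # the two agree up to integration error, and ke2 <= V(0)
```

The running integral of e squared can never exceed V of zero over k, no matter how long we simulate, which is precisely the L2 property being claimed.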
And what is it? I can also claim that e dot belongs to L infinity. How? What is e dot? E dot is this quantity: minus k e plus theta tilde f. Let me copy it here, a bit bigger: e dot equals minus k e plus theta tilde times f of x and t. Now, suppose I assume f of x and t is bounded for bounded x and for all t. With this assumption, and since I already know that e and theta tilde are bounded, I know that k e is bounded and theta tilde is bounded. Also, if e is bounded, then x has to be bounded. Why? Because e is just x minus r, and r is a bounded signal; therefore, if e is bounded, x is also bounded. And if x is bounded, I am assuming this function is bounded for bounded x and for all time. Therefore, the entire right-hand side is bounded, which is to say e dot is L infinity. And then we can directly use the corollary of Barbalat's lemma. I really hope all of you remember it; if not, you should go back and revise it. Well, I will state it. Essentially, the Barbalat corollary says that if a signal is L infinity and Lp for some p, and if the derivative of the signal is L infinity, then the signal goes to zero as t goes to infinity. And that is exactly what we have for e: we can claim that e is both L2 and L infinity, and I can also claim that e dot is L infinity. Therefore, e goes to zero as t goes to infinity. Now notice, we already have the uniform stability result, that e and theta tilde are uniformly stable at the origin, because of the standard Lyapunov theorems. But now we can also claim that e converges to zero as t goes to infinity. This is very good for us. Why?
We had an unknown parameter. And what did we do? We designed an adaptive controller: we had a control law, and we also had an update law. With these two elements, I can claim that the tracking error actually goes to zero as t goes to infinity and also remains uniformly stable, which is rather nice. I cannot claim asymptotic stability of the entire system, like I would have loved to, but I can still claim something rather nice: the tracking error remains stable at the origin and converges to zero as t goes to infinity. This means I get exact tracking in spite of an unknown parameter. So this is the power of adaptive control, and I want you to absorb it. Note that I am not saying anything about theta tilde yet. So let us look at what happens there. The important thing to note is that the tracking error goes to zero in spite of there being unknown parameters. Now we want to know whether the unknown parameters converge to their true values or not. How do we do that? First, we claim that e dot is integrable, since e goes to zero as t goes to infinity. How does e going to zero give integrability of e dot? Because if I integrate e dot from 0 to infinity, I essentially get the limit as t goes to infinity of e of t minus e of zero, which is just minus e of zero, because the limit is zero; we just proved that. So the integral is finite, and e dot is an integrable signal. That is the first claim here. Second, you can compute e double dot to verify that e double dot is also bounded. How do we do that? It is not too difficult. What is e double dot? We already saw what e dot is, so I just have to take its derivative. So e double dot is equal to minus k e dot, where I am not writing the arguments explicitly yet, plus theta tilde dot times f, plus theta tilde times f dot.
I have removed the arguments from f. The minus k e dot term is fine, because I know e dot is already bounded; we already proved that. Next, theta tilde dot: what is theta tilde dot? Theta tilde dot is in fact minus one over gamma times e f, so theta tilde dot times f is minus one over gamma times e f squared; plus we have theta tilde times f dot. Now what do I know? I know that e dot is bounded, which we already proved. We know that e and f are bounded: f is bounded by assumption, and e is bounded by the Lyapunov analysis. Now if I further make a similar assumption on f dot as we did on f, namely that f dot is bounded for bounded x and for all time, then I can claim that this entire quantity, e double dot, is also bounded. So that is what we do: we verify very easily that e double dot is bounded. Now, one of the results we had looked at said that if the derivative of a signal is bounded, then the signal itself is uniformly continuous. Since e double dot is bounded, e dot is uniformly continuous. That is what we claim here. And then we apply the original Barbalat's lemma. What does the original Barbalat's lemma say? It says that if a signal is integrable and uniformly continuous, then it goes to zero as t goes to infinity. We know that e dot is integrable, and we know that e dot is uniformly continuous; therefore, e dot goes to zero as t goes to infinity. So we have not just proved that e goes to zero as t goes to infinity; we have also proved that its derivative goes to zero as t goes to infinity. And we did this very carefully, without falling into the usual pitfalls in adaptive control or in nonlinear analysis, where one assumes that because a function goes to zero or to a constant, its derivative goes to zero, and things like that. No, we did this very carefully and formally.
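For reference, the two forms of Barbalat's result used in today's argument, restated here from the earlier analysis lectures, are:

```latex
% Original Barbalat's lemma (applied above to e dot):
% integrable + uniformly continuous => converges to zero.
\[
  \int_0^{\infty} \dot e(\tau)\,d\tau \ \text{exists and is finite},
  \quad \dot e \ \text{uniformly continuous}
  \;\Longrightarrow\;
  \lim_{t \to \infty} \dot e(t) = 0 .
\]
% Corollary (applied earlier to e itself, with p = 2):
\[
  e \in \mathcal{L}_p \cap \mathcal{L}_\infty \ \text{for some } p \in [1,\infty),
  \quad \dot e \in \mathcal{L}_\infty
  \;\Longrightarrow\;
  \lim_{t \to \infty} e(t) = 0 .
\]
```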
So e dot goes to zero as t goes to infinity. What was e dot? E dot was this expression. If I know that the left-hand side goes to zero as t goes to infinity, and I know this term, minus k e, goes to zero as t goes to infinity, then what am I left with? I know that this product, theta tilde times f, also has to go to zero as t goes to infinity. And that is what I write here. But this does not tell us anything about the convergence of theta tilde itself, and that is what is standard in adaptive control. Excellent. So what did we look at today? We completed our analysis for this first order scalar system; we are essentially at the end of this analysis. We required the use of Barbalat's lemma and signal chasing in order to complete it. And what we were able to show is that, in spite of a parameter error, the tracking error goes to zero. So we can do exact tracking: if it is a robot, we can exactly track its trajectory. But we cannot guarantee that the parameter estimate converges to its true value; we can only show that something like theta tilde times f goes to zero, which does not necessarily mean the parameter error goes to zero. Excellent. So this is where we stop today, and we will look at more details of these kinds of problems in the subsequent sessions. Thank you.
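As a supplementary illustration of this final point: with a non-exciting reference, the tracking error still converges while the parameter estimate stalls. The sketch below assumes an illustrative plant x_dot = theta_star*sin(x) + u with constant reference r = 0, so that e = x and u = -k*e - theta_hat*sin(x), with the update law theta_hat_dot = (1/gamma)*e*sin(x) in the convention used in this lecture. Since f(x) = sin(x) vanishes along with e, the product theta tilde times f goes to zero without theta hat ever reaching theta star; all specific numbers are assumptions for illustration.

```python
import math

# Non-exciting reference r = 0: e -> 0, but theta_hat need not reach theta_star.
# Plant (assumed): x_dot = theta_star*sin(x) + u; control u = -k*e - theta_hat*sin(x);
# update: theta_hat_dot = (1/gamma)*e*sin(x). All numerical choices illustrative.

def run(k=2.0, gamma=5.0, theta_star=3.0, dt=1e-3, T=40.0):
    x, theta_hat = 1.5, 0.0
    for i in range(int(round(T / dt))):
        e, fx = x, math.sin(x)                    # r = 0, so e = x
        u = -k * e - theta_hat * fx               # certainty-equivalence control
        x += dt * (theta_star * fx + u)           # forward-Euler plant step
        theta_hat += dt * (1.0 / gamma) * e * fx  # forward-Euler update law
    return x, theta_hat

xT, thT = run()   # xT is essentially zero, yet thT stays well below theta_star = 3
```

The tracking error is driven to zero, but the estimate freezes as soon as e, and hence the regressor sin(x), dies out: exactly the "theta tilde times f goes to zero" conclusion, with no parameter convergence.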