Hello. Welcome to yet another session of our NPTEL course on adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. As always, we have this nice image of a SpaceX satellite in our background, and we are hopeful that it motivates us to develop design algorithms that help systems such as these operate autonomously. In the sixth week, we started our exploration into adaptive control problems and adaptive control design, and we have already covered adaptive control design for a first-order scalar system. We analyzed the stability and could show that all closed-loop signals are uniformly stable with respect to the origin equilibrium, and that the tracking errors converge asymptotically to zero. What we could not guarantee was that the parameter estimation errors also converge to zero. For that, we said there is a requirement of a persistence of excitation condition, which is what we had looked at in the preceding week, and we used a little bit of that theory to claim parameter convergence in certain special-case scenarios. So that was the first-order system. At the end of the last session, we started looking at a second-order scalar system, which is not significantly more complicated: we just add an integrator to the standard first-order dynamical system. As usual, x1 and x2 evolve in the real numbers, the control also takes values in the real numbers, and the parameter is real-valued as well. So this is essentially a scalar second-order system. We still want the control to be such that x1 tracks a desired trajectory.
We only state a tracking objective for x1 and not for x2 because, subject to the matching condition, the desired trajectory for x2 is dictated precisely by the desired trajectory for x1. So that is not a free choice; that is how it works. Writing the objective in mathematical terms, it comes out to: the limit as t goes to infinity of (x1 − r) is zero, and the limit as t goes to infinity of (x2 − ṙ) is zero. As we already mentioned, this double-integrator system is in fact a standard model for several mechanical systems on Euclidean spaces, so there is already a lot of motivation for looking at systems like this. Subsequently, when you look at robotic systems and spacecraft dynamics: the spacecraft dynamics is not exactly like this, but it is like a nonlinear double integrator, and the robot dynamics looks pretty much like this except that x1 and x2 belong to vector spaces such as R^3, R^4, R^k, and so on, not just the scalars. That is essentially all that changes when you start looking at real mechanical systems. All right. So this is where we stopped last time when we introduced this problem, and today we are going to start designing the adaptive control for this system. We are on lecture 6.5, the fifth lecture of the sixth week. And this is the error model, just like before, so all the steps will start to seem rather similar to you. Just like before, you design an error model; in this case there are two errors because, of course, there are two states, x1 and x2. So I construct an error corresponding to each state and its desired value: the desired value for x1 is r, and the desired value for x2 is ṙ.
Again, thinking from a mechanical systems perspective, x1 is the position state and x2 is the velocity state. Simple. Once we have the error definitions, we want to construct the dynamics of these errors, so we carefully take derivatives. ė1 is simply ẋ1 − ṙ, and ẋ1 is x2; therefore ė1 = x2 − ṙ, which is exactly e2. That is what we write here. This equation (3.11) looks exactly like the original system only because of the matching condition. If, instead of r and ṙ here, I had 2ṙ or ṙ/k, this would not happen; and we already discussed at some length last time that such a trajectory is infeasible. The trajectory has to satisfy the matching condition: it has to be consistent with the dynamics. I cannot make the system follow a trajectory that is not consistent with the dynamics, and the dynamics dictates that ẋ1 has to be x2. So that is what we have: ė1 turns out to be e2. Then ė2 is just ẋ2 − r̈, with r̈ the second derivative of r. This now becomes the error dynamics that we are going to work with. Now, how would we do the control design for the known-parameter case? As usual, we try to find a target system. What target system do we choose here? We keep ė1 = e2; I cannot change that, because there is no control in that equation, so even in the target system ė1 has to remain e2. But in the ė2 equation, there is a control.
So I can more or less dictate whatever structure I want for ė2, and I choose ė2 = −k2 e2 − k1 e1. Now, why do we choose this? This is a system that we have analyzed before, using Barbalat's lemma, a strict Lyapunov construction, and so on. And just by computing the eigenvalues, since it is a linear system, I can very easily conclude that this system is in fact globally exponentially stable at (e1, e2) = (0, 0). It is globally exponentially stable at the origin, and that is why we choose it as the target system. There are two requirements, right? First, it has to have nice properties for the states, which it does: it is in fact globally exponentially stable. And second, it has to be consistent with the dynamics, which it is, since ė1 continues to remain e2, which I cannot change, while the right-hand side of ė2 I can change to almost anything I want, so I choose a nice stabilizing right-hand side. Great. Now, to get from the error dynamics to the target system, what control do I choose? It is very simple: I obtain it by subtracting the two. The control cancels the two terms θ* f and r̈ and introduces the two terms −k1 e1 and −k2 e2. That is it: two pieces, cancel these two, introduce these two. Since I have already assumed that I know the parameter value θ*, I can in fact implement this controller in (3.14); it is implementable, and I am good. What is the next thing I do? A Lyapunov analysis, so I have to choose a Lyapunov candidate. I choose V = ½ k1 e1² + ½ e2². We have already seen this kind of Lyapunov candidate before for this system; it is what we used for the Barbalat's-lemma-type analysis.
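The eigenvalue argument for the target system can be sketched numerically. This is a minimal check, with illustrative gains k1 = 2, k2 = 3 (not values from the lecture): the target error dynamics in matrix form is ė = A e with A = [[0, 1], [−k1, −k2]], and all eigenvalues of A having negative real part gives global exponential stability of the origin.

```python
import numpy as np

# Target error dynamics: e_dot = A e, with
#   e1_dot = e2
#   e2_dot = -k1*e1 - k2*e2
# Gains are illustrative choices, not values from the lecture.
k1, k2 = 2.0, 3.0
A = np.array([[0.0, 1.0],
              [-k1, -k2]])

# Eigenvalues are the roots of s^2 + k2*s + k1 = 0.
eigvals = np.linalg.eigvals(A)

# Negative real parts for all eigenvalues => global exponential
# stability of the origin for this linear system.
assert all(ev.real < 0 for ev in eigvals)
```

For any k1, k2 > 0 the characteristic polynomial s² + k2 s + k1 has both roots in the open left half plane, which is why the lecture can pick the gains freely.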
And we know, by taking the derivative — I am not going to carry it out here, because we have done this a couple of times — that if you take the derivative and plug in the dynamics (3.13), that is, the directional derivative of V along this dynamics, you get V̇ = −k2 e2². You do not get any term in e1. So V̇ is only negative semi-definite. But we have already dealt with this spring-mass damper problem with exactly this Lyapunov candidate and exactly this negative semi-definite V̇. If you do not remember this, I strongly urge you to go back and revise our lecture on the example of how to use Barbalat's lemma. That is what it says here: V̇ is only negative semi-definite, so we can at most claim uniform stability from the standard Lyapunov theorems. But then we can use signal chasing with Barbalat's lemma, or LaSalle's invariance principle, either one of them, to show that both errors converge to zero. So on top of uniform stability, we also obtain convergence. Again, for those who do not remember, I strongly urge you to go back and revise this material. Now, there is some nice terminology here which gets used often in nonlinear control theory. The spring-mass damper is in fact asymptotically stable; that is well known. But the Lyapunov function that we chose to analyze the system gives only a negative semi-definite V̇; such a function is called a non-strict Lyapunov function.
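The derivative computation the lecture skips can be verified symbolically. This is a small sketch with sympy: along the target dynamics, V = ½ k1 e1² + ½ e2² has directional derivative exactly −k2 e2², with no e1 term, which is why V̇ is only negative semi-definite.

```python
import sympy as sp

e1, e2 = sp.symbols('e1 e2', real=True)
k1, k2 = sp.symbols('k1 k2', positive=True)

# Lyapunov candidate from the lecture: V = (1/2) k1 e1^2 + (1/2) e2^2
V = sp.Rational(1, 2) * k1 * e1**2 + sp.Rational(1, 2) * e2**2

# Target (known-parameter) dynamics: e1_dot = e2, e2_dot = -k1 e1 - k2 e2
e1_dot = e2
e2_dot = -k1 * e1 - k2 * e2

# Directional derivative of V along the dynamics
V_dot = sp.diff(V, e1) * e1_dot + sp.diff(V, e2) * e2_dot

# V_dot = -k2 e2^2: the k1 e1 e2 cross terms cancel, no -e1^2 term appears
assert sp.simplify(V_dot + k2 * e2**2) == 0
```

The absence of a −e1² term is precisely the "non-strict" feature discussed next: V̇ vanishes on the whole set {e2 = 0}, not just at the origin.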
The spring-mass damper is in fact exponentially stable; you can even obtain this result by simply computing the eigenvalues of this linear system. However, the Lyapunov candidate that we have chosen yields only a negative semi-definite V̇, which does not allow us to conclude asymptotic stability from the Lyapunov theorems. Lyapunov functions of this kind are called non-strict Lyapunov functions. And in the case of a non-strict Lyapunov function, as has been mentioned, you have to use signal chasing plus Barbalat's lemma, or LaSalle's invariance principle in the special case of time-invariant systems, to conclude asymptotic stability. So it is a sort of weakness, I would say, of our analysis method: the system is known to have the nice property of exponential or asymptotic stability, but because of our not-so-good choice of Lyapunov candidate, we cannot conclude that from Lyapunov theory alone. There is a lot of research in the nonlinear control literature on identifying good strict Lyapunov functions for different kinds of nonlinear systems; that is one of the problems nonlinear control theorists solve for us. Excellent. Now that we understand that, let us look at the unknown-parameter case. We use the certainty equivalence (CE) principle, as is mentioned here. What does it say? Keep the same control structure and replace the unknown parameter with its estimate. That is exactly what we have done: all other terms are the same, and the only change is that where we had θ* earlier, we now have θ̂. We will prescribe an update law for it subsequently. Now, if I substitute this altered control, because it uses θ̂ and not θ*, the term with θ does not exactly cancel out, and you are left with a θ̃ term. Earlier this term was not there.
In the known case, there was only this much. But because now the parameter is not known, we replace the true value by its estimate, and therefore you get a θ̃ term, and this is what we use to compute the update law for θ̂. How do we do that? We take the Lyapunov candidate we already had — remember, this is what we always do; in the previous first-order scalar case we had just the one term ½ e² — so here we take the same two terms, ½ k1 e1² + ½ e2², and add to them a quadratic in θ̃, namely θ̃²/(2γ). As usual, γ > 0 is the adaptation gain, just like in the first-order case. So: take the same Lyapunov candidate as the known case and add a quadratic term in the parameter estimation error. Now, if I take the derivative carefully and substitute, I get the first terms, which I would have gotten if there were no parameter, and then an additional term due to the parameter adaptation. Not too difficult; if you are not convinced, you can just go ahead and do this computation quickly. V̇ = k1 e1 ė1 + e2 ė2 + θ̃ θ̃̇ / γ, and we use the fact that θ̃̇ = −θ̂̇ because θ* is constant, so V̇ = k1 e1 ė1 + e2 ė2 − θ̃ θ̂̇ / γ. Substituting, I have k1 e1 e2, plus e2 times (−k1 e1 − k2 e2 + θ̃ f), minus θ̃ θ̂̇ / γ. I have substituted for ė1 and ė2, but θ̂̇ has not yet been decided, so it is retained as it is. Now it is not difficult to see that the two k1 e1 e2 terms cancel out. And what am I left with? V̇ = −k2 e2² + θ̃ e2 f − θ̃ θ̂̇ / γ.
Let us make sure the signs have come out correctly. With θ̃ = θ* − θ̂, the parameter mismatch enters ė2 as +θ̃ f, so indeed V̇ = −k2 e2² + θ̃ e2 f − θ̃ θ̂̇ / γ; that is what you get. Now what do I do? I do the best I can, as always. I cannot make the θ̃ term negative definite, because that would require me to introduce a −θ̃ into the design, and this is very clear to you: θ̃ is unknown, because θ̃ contains θ*, the true parameter value, which is unknown. Therefore, we cannot implement a −θ̃, and there is no way of making this term negative definite. So I do the next best thing: I try to remove the term, since it is a sign-indefinite term. And in order to remove it, I choose an appropriate θ̂̇ to cancel it; the appropriate choice is θ̂̇ = γ f e2. Once we make this choice, V̇ = −k2 e2², which is negative semi-definite. And notice, again, the pattern: although V is different — V now has the added θ̃²/(2γ) in the adaptive case — the V̇ expression, −k2 e2², turns out to be exactly the same expression as in the non-adaptive case. This is always the case: because of how we have chosen the Lyapunov candidate for the unknown case, V̇ ends up with the same expression as the known case, even though V is different. So in the known second-order case, V̇ was negative semi-definite, and here also it is negative semi-definite. So we have no choice but to apply the same machinery.
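The whole certainty-equivalence loop above can be sketched in a short Euler simulation. Everything concrete here is an illustrative assumption, not from the lecture: f(x1) = x1², true parameter θ* = 1.5 (hidden from the controller), reference r(t) = sin t, gains k1 = k2 = 2, γ = 5. The control is u = r̈ − k1 e1 − k2 e2 − θ̂ f, and the update law is the one just derived, θ̂̇ = γ f e2.

```python
import numpy as np

# Plant: x1_dot = x2,  x2_dot = u + theta_star * f(x1)
# All numbers below are illustrative assumptions, not lecture values.
theta_star = 1.5              # true parameter (unknown to the controller)
k1, k2, gamma = 2.0, 2.0, 5.0
dt, T = 1e-3, 20.0
n = int(T / dt)

x1, x2, theta_hat = 0.5, 0.0, 0.0
V_hist, e2 = [], 0.0
for i in range(n):
    t = i * dt
    r, r_dot, r_ddot = np.sin(t), np.cos(t), -np.sin(t)
    e1, e2 = x1 - r, x2 - r_dot
    f = x1**2
    # Certainty equivalence: same structure as the known case, theta* -> theta_hat
    u = r_ddot - k1 * e1 - k2 * e2 - theta_hat * f
    theta_tilde = theta_star - theta_hat
    V_hist.append(0.5 * k1 * e1**2 + 0.5 * e2**2
                  + theta_tilde**2 / (2 * gamma))
    # Forward-Euler integration of plant and update law theta_hat_dot = gamma*f*e2
    x1 += dt * x2
    x2 += dt * (u + theta_star * f)
    theta_hat += dt * gamma * f * e2

assert V_hist[-1] < V_hist[0]   # V decreases along the closed loop
assert abs(e2) < 0.1            # velocity tracking: e2 -> 0
```

Note that the simulation only certifies what the analysis below certifies: V decreases and e2 becomes small; nothing here is claimed about e1 in general.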
Well, of course, we can claim at best uniform stability using the Lyapunov theorem, and now we apply signal chasing. We do the same steps, so I am not going to elaborate too much, because we have already seen them. First, V is lower bounded and non-increasing: V is ½ k1 e1² + ½ e2² + θ̃²/(2γ), which is obviously nonnegative — radially unbounded, in fact — so it is lower bounded by zero, and it is non-increasing because V̇ ≤ 0. Hence the limit V∞ exists and is finite. Because V is non-increasing and bounded, all of e1, e2, and θ̃ are bounded functions, that is, L∞ signals. Then, just like in the previous case, we can integrate both sides of V̇ = −k2 e2² and obtain that e2 is an L2 signal, because V̇ is integrable from zero to infinity — again, the same step as in the first-order case. Further, ė2 is also in L∞, because ė2 = −k1 e1 − k2 e2 + θ̃ f, and everything on the right-hand side — e1, e2, θ̃ — is bounded; if we assume, just like in the scalar case, that f is bounded for bounded arguments, then ė2 is also bounded. Now, if ė2 is bounded and e2 is in L∞ and L2, then from the corollary of Barbalat's lemma we can claim immediately that e2 goes to zero. So, in the context of mechanical systems, we obtain velocity tracking. Of course, e2 is the same as ė1, so the derivative of e1 also goes to zero. This is what velocity matching means in a robotics setting. Now we do similar steps as before.
Now that we have proved e2 goes to zero, we will try to prove that ė2 also goes to zero. These are again standard steps; the sequence of steps is identical. In the first-order case, too, we proved that e goes to zero and then went on to prove that ė also goes to zero, which we did. So that is what we do here too. We know that ė2 is integrable: since e2 converges to zero as t goes to infinity, the integral of ė2 from zero to infinity equals −e2(0), which is finite. And with appropriate assumptions, again on f, you can also prove that the derivative of ė2 — the second derivative of e2 — is bounded, which means ė2 is uniformly continuous. So by the original Barbalat's lemma, ė2 is an integrable, uniformly continuous signal, and therefore it goes to zero as t goes to infinity. So, as before, we have proved that not only e2 but also ė2 goes to zero. However, in the first-order case, ė contained only e and θ̃; now ė2 contains e1, e2, and θ̃. So if we know that ė2 goes to zero and e2 goes to zero, we are left only with the fact that the remaining sum goes to zero. That is all we can say, and that is what we write here: −k1 e1 + θ̃ f goes to zero. But unfortunately, this does not guarantee that the position error goes to zero. Look at it: it is a rather nonlinear expression. I have the position error, sure, but I also have θ̃ times f added to it, and the sum going to zero does not really tell me anything about e1 alone going to zero. Therefore, in this case, I cannot even guarantee that position tracking happens. And this is what is called the detectability obstacle in adaptive control.
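The gap in the last step can be made concrete with a hedged numeric example. Suppose, hypothetically, that the regressor f settles to a constant c and the parameter error θ̃ settles to a nonzero constant (neither is guaranteed to converge to zero). Then e1 can sit at the nonzero value θ̃ c / k1 while the sum −k1 e1 + θ̃ f is exactly zero:

```python
# Why "(-k1*e1 + theta_tilde*f) -> 0" alone does not force e1 -> 0:
# hypothetical limiting values where the sum vanishes with e1 != 0.
k1 = 1.0
theta_tilde = 0.5   # parameter error need not converge to zero
f = 2.0             # hypothetical limiting regressor value

e1 = theta_tilde * f / k1            # nonzero "stuck" position error
residual = -k1 * e1 + theta_tilde * f
assert residual == 0.0 and e1 == 1.0  # sum is zero although e1 = 1 != 0
```

So the limit of the sum constrains only a combination of e1 and θ̃, which is exactly the detectability obstacle named above.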
And this happens because we started with a non-strict Lyapunov function for the original, known-parameter system. If you start with a non-strict Lyapunov function for the known case, then you will end up with a detectability obstacle. And what does it mean? It means that you may not be able to prove convergence, or tracking, of all the states. And that is rather disappointing. As adaptive control theorists, we said we do not care so much about parameter convergence per se, but we definitely want the system to track the trajectory. As of now, we have only been able to prove nice boundedness and stability properties, and that the velocity matches the desired velocity; we have not been able to say anything about the position. This is called the detectability obstacle, and we know why it happens: exactly because I chose a non-strict Lyapunov function for the original known system. So how do we fix it? Choose a strict Lyapunov function, obviously. Great. One alternate solution was proposed by Romeo Ortega in the 90s. He is still a relatively active researcher; he has retired from CNRS in Paris, but he continues to publish articles. One of the interesting things he proposed was a Lyapunov-like function, which helps us avoid the detectability obstacle. So what is it? Consider a spring-mass damper system — not the system we had, but a simple spring-mass damper — and suppose that we want to show that x1 and x2 both converge to 0 as t goes to infinity. What did he say? He said: choose a Lyapunov-like function — because this is not a Lyapunov function anymore, I hope that is evident; it is only positive semi-definite. Why is it only positive semi-definite?
Because V(−αk, k) is identically 0 for every k: there are values of the state away from the origin where V becomes 0, and that is not allowed for positive definiteness. So it is only positive semi-definite; it is not even a Lyapunov candidate, since a Lyapunov candidate requires a C¹ function that is at least positive definite. We call functions such as these Lyapunov-like functions. And what was his claim? That we can use a function like this to prove that both states go to zero. So he did not even try to come up with a strict Lyapunov function; he in fact came up with a function that is not even a Lyapunov function — in some sense he went further down — but he could still show that both states x1 and x2 converge to 0 as t goes to infinity using this kind of choice of V. How he does that is what we will see in the upcoming session. So what did we see today? Adaptive control for a second-order system. And by the end of today's session, we could only arrive at the detectability obstacle. What is it? Because of the choice of a non-strict Lyapunov function for the original known-parameter case, we could not prove that the position state converges to the desired trajectory. And this is very bad; we are definitely not happy with that outcome. We do not want just bounded trajectories, with the position state not matching the desired position — that is something we cannot live with. So we started to look at this Ortega construction, which hopefully will help us alleviate this issue. That is what we will see in the upcoming session. All right. Excellent.
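The positive semi-definiteness can be illustrated with a small symbolic check. The exact function from the lecture slides is not shown in the transcript, so this sketch assumes, purely for illustration, a cross-term function of the form V(x1, x2) = ½ (x1 + α x2)²: it is C¹ and nonnegative, but it vanishes on the entire line of points (−αk, k), so it is only positive semi-definite.

```python
import sympy as sp

x1, x2, k = sp.symbols('x1 x2 k', real=True)
alpha = sp.Rational(1, 2)   # illustrative positive constant

# A hypothetical cross-term "Lyapunov-like" function of the kind described;
# the exact function used by Ortega is not reproduced here.
V = sp.Rational(1, 2) * (x1 + alpha * x2)**2

# V vanishes along the whole line of points (-alpha*k, k), k arbitrary,
# i.e. at states away from the origin => only positive semi-definite.
assert sp.simplify(V.subs({x1: -alpha * k, x2: k})) == 0

# ...but V is still nonnegative, and positive off that line.
assert V.subs({x1: 1, x2: 0}) > 0
```

Vanishing on a line through nonzero states is exactly the failure of positive definiteness the lecture points out, and it is what disqualifies V as a Lyapunov candidate.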
So this is where we stop now. And I'll see you again soon.