Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. We are into the second week of this course. I am Srikanth Sukumar from Systems and Control, IIT Bombay. As always, we have our nice motivational image in front of us, a rover on Mars, and these are the sort of autonomous systems that we hope to drive with the algorithms developed through this course. Alright, so without delaying any further, let me go into the course material. If you remember, last time the first thing we did was discuss a corollary of the very famous Barbalat's lemma. We also have an exercise which requires you to prove the corollary, in some sense, using the original lemma itself. Then we started to look at how we can use Barbalat's lemma in the convergence analysis of typical adaptive systems. To do that, we had this nice setup of a spring-mass-damper system moving on the horizontal plane. We quickly derived the Newton's second law based equation of motion for this system, which gives a dynamical system of the form 3.1, reduced to the state space form in equation 3.2. After that, we constructed error variables. The aim in most of what we do is to drive systems towards some optimal or nicely behaved trajectories, trajectories which are predefined in some sense. One thing I forgot to mention is that these desired trajectories are typically assumed to be C-infinity, that is, infinitely continuously differentiable. Separately from being C-infinity, these signals are also assumed to be bounded with bounded derivatives. These are standard assumptions on all desired trajectories we choose to work with; otherwise your system gets driven to infinity, which is something we want to avoid.
So, once we had these desired trajectories, we defined our error variables using them, and then we computed the dynamics of these error variables. Now, to do control design, what we did last time was to create a target system. The idea behind the target system is to have a stable system, asymptotically stable at least, because we want these errors to go to zero. In this case it is in fact exponentially stable, because it is linear. We create this target system so that it is compatible with the original system, in the sense that the original system also had e1 dot equal to e2, and we keep the same in the target system. However, the second piece of the dynamics, which contained the control in the original system, was prescribed to be nice negative feedback terms. We could of course prove that this is in fact an exponentially stable system using the Routh-Hurwitz criterion, even root locus, or typical eigenvalue computations. Once we understand that it is exponentially stable, we want the original system to follow this target system. To do that, we compute a control law f of this form. If this 3.5 gets plugged in here, you get the target system 3.6. So, this is where we were last time. Equation 3.6 is what is called the closed-loop error dynamics. Why closed loop? Because we have closed the loop using the control; this is very standard terminology in control theory. What is the idea? The idea is that the control depends on the states themselves, so these terms are all state-dependent terms. Therefore, you need some kind of feedback from the system, which is why these are called feedback controllers. That is, you need sensors mounted on the system which will give you this information.
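As a quick aside not in the lecture, this feedback construction can be sketched numerically. The plant parameters m, c, k, the gains k1, k2, the initial condition, and the sinusoidal desired trajectory below are all illustrative assumptions, since equations 3.1 and 3.5 are not reproduced here; the point is only the mechanism of cancelling the plant terms and imposing the target dynamics.

```python
import math

# Illustrative plant parameters (assumed; the lecture's 3.1 has its own symbols)
m, c, k = 2.0, 0.5, 3.0     # mass, damping, spring constant
k1, k2 = 4.0, 3.0           # target-system gains, both positive

# Desired trajectory: smooth (C-infinity), bounded, with bounded derivatives
xd    = lambda t: math.sin(t)
xd_d  = lambda t: math.cos(t)
xd_dd = lambda t: -math.sin(t)

def control(t, x, v):
    """Feedback law in the spirit of eq. 3.5: cancel the spring and damper
    terms, then impose the target error dynamics e1' = e2, e2' = -k1 e1 - k2 e2."""
    e1, e2 = x - xd(t), v - xd_d(t)
    return m * (xd_dd(t) - k1 * e1 - k2 * e2) + k * x + c * v

# Forward-Euler simulation of the plant m x'' = -k x - c x' + f
t, x, v, dt = 0.0, 1.0, 0.0, 1e-3
for _ in range(20000):                  # simulate 20 seconds
    f = control(t, x, v)
    a = (-k * x - c * v + f) / m        # with this f, a = xd'' - k1 e1 - k2 e2
    x, v, t = x + dt * v, v + dt * a, t + dt

e1, e2 = x - xd(t), v - xd_d(t)
print(abs(e1) < 1e-2, abs(e2) < 1e-2)   # both tracking errors have decayed
```

Note how the controller needs x and v at every instant: that is the sensor feedback the lecture refers to.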
Without any sensor mounted on the system, it is impossible to actually implement 3.5. Then what do we do? We take the information from the system and feed it back into the system via some kind of control loop, and hence this is called closing the loop: take information from the system, process it in a controller, and send it back into the system as a new control signal or actuation command. In this case, as you can see, the control signal is simply this f of t, the force that I have the ability to exert on this mass. Excellent. Now that we understand this quite well, that is why this is called the closed-loop error dynamics, and e1 and e2 are the quantities that we want to drive to zero. Now what do we want to do? We talked about this last time. Let me actually label our lecture: this is lecture three of week two. What we want to do is prove that this system is in fact asymptotically stable, using potential functions. Why did we say we were interested in doing this? Because it is obvious that in this case we have other tools, like eigenvalue analysis and the Routh-Hurwitz criterion; you have so many methods to conclude exponential convergence. So why do we need potential functions? The point is, in a lot of cases our target system turns out to be nonlinear. You may have to make your target system nonlinear; you have no choice. And if that happens to be the case, then you are forced to use some kind of potential function analysis, because in those cases eigenvalue analysis and Routh-Hurwitz are not available; they are tools that can be used only for linear systems.
And since that is the case, we of course want to see how to do potential function analysis for convergence in the more general scenario. Notice that Barbalat's lemma is only a tool for proving asymptotic convergence. We have not yet looked at this distinction in detail, but we will very soon, so don't worry if you are confused. The idea is that Barbalat's lemma proves only convergence. That is, as you can remind yourself from the theorem, it claims that some function goes to zero as t goes to infinity. It does not say anything about what happens before infinity. It might very well be the case that the function grows very large, has some funny excursion, and only then comes back and converges to zero as t goes to infinity. This behaviour before infinity is not predicted by Barbalat's lemma. So I want you to remember that Barbalat's lemma only helps you prove asymptotic convergence; it tells you something about the behaviour as time grows large. Excellent. So once we understand that, let's see what we want to do. We want to prove asymptotic convergence, which is mathematically defined as: the limit as t goes to infinity of e1(t), and the limit as t goes to infinity of e2(t), both have to go to zero. This is important.
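The point about transients can be made concrete with a small numerical illustration (my own example, not from the lecture): the function below converges to zero as t grows, yet its transient excursion is enormous, which is exactly the behaviour a pure limit statement cannot rule out.

```python
import math

# f(t) -> 0 as t -> infinity, but it first climbs far above its initial value:
# asymptotic convergence says nothing about the transient peak.
f = lambda t: t**10 * math.exp(-t)      # illustrative choice

peak = max(f(0.01 * i) for i in range(10001))   # scan t in [0, 100]
tail = f(100.0)

print(peak > 1e3)     # huge transient (the peak is near t = 10)
print(tail < 1e-20)   # yet the tail is essentially zero
```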
So what we do, as we had planned, is use an energy functional, or a potential function, whatever you want to call it. This one is very standard for spring-mass-dampers: V(t) = ½ k1 e1² + ½ e2². These are, of course, functions of time. It should be obvious to you that this is non-negative, that is, lower bounded by zero, because I am simply taking squares of quantities. So it is obviously lower bounded; no problem. Now that we understand that this function V is lower bounded, what do we do? We very, very carefully take its derivative. Let me repeat the function, avoiding writing the time argument: V = ½ k1 e1² + ½ e2². Then the derivative is V dot = k1 e1 e1-dot + e2 e2-dot; the halves cancel against the twos coming from differentiating the squares. And here, I now substitute from the dynamics of e1 and e2, which is already written here: this is my closed-loop error dynamics. Notice, we have assumed all the parameters are known, and therefore we can actually reach this point without any issues. If these parameters were not known, this would not be possible, but we will worry about that later; that is where adaptive control comes in. Excellent. So I substitute e1-dot as e2, and e2-dot as minus k1 e1 minus k2 e2. Then it should be very clear to you that, by virtue of the construction, I did something neat which may not have been obvious to you: I put a scaling k1 on the e1 squared term. Because of that scaling, I get a k1 e1 e2 term here, and here, of course, I already have a minus k1 e1 e2.
And so what happens is that these two terms cancel out, by virtue of this smart construction. This is where experience, or good intuition, in constructing these potential functions comes into play. By the way, it is not really my smart choice; this choice of potential function for a spring-mass-damper has existed forever. I am calling it a smart choice just to tell you that if I gave you a new problem, which is not a spring-mass-damper, something more unusual, then you would have to make such a smart choice yourself. So due to this rather nice choice of V, we get cancellation of these two terms, and I am left with minus k2 e2 squared. Then it is essentially obvious that V dot is now less than or equal to zero; it can never be positive, because I have a square term with a negative sign, and k2 is of course a positive quantity; both k1 and k2 are positive. Great. So I know that V dot ≤ 0. Now, we are claiming that as t goes to infinity, both e1 and e2 will go to zero. This is our claim, and of course we want to see how to prove it. This is where Barbalat's lemma will come to the fore. The analysis that we do subsequently is called signal-chasing analysis, in which we chase through a lot of signals. These are very, very standard steps, so the important thing for all of you taking this course is to almost memorize them, because every time we try to use Barbalat's lemma, these will be the standard steps that need to be implemented.
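The cancellation of the cross terms can be verified mechanically; the sketch below (my addition, with arbitrary positive values for k1 and k2) differentiates V along the closed-loop error dynamics at random states and checks that everything except the −k2 e2² term cancels.

```python
import random

# Verify V_dot = -k2*e2**2 along the closed-loop error dynamics (3.6).
k1, k2 = 4.0, 3.0   # any positive gains

for _ in range(1000):
    e1, e2 = random.uniform(-10, 10), random.uniform(-10, 10)
    e1_dot, e2_dot = e2, -k1 * e1 - k2 * e2      # closed-loop error dynamics
    V_dot = k1 * e1 * e1_dot + e2 * e2_dot        # chain rule on V = ½k1 e1² + ½ e2²
    # the +k1*e1*e2 and -k1*e1*e2 cross terms cancel exactly
    assert abs(V_dot - (-k2 * e2**2)) < 1e-9
print("V_dot equals -k2*e2**2 at every sampled state")
```

This is exactly why the k1 scaling was placed on the e1² term: without it, the cross terms would not cancel.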
So I want all of you to almost memorize what these steps are. Excellent. Let's see. The first step is to note that V is bounded. What does that mean? It means, first, that V is lower bounded; we already said that, because it is composed of squares, so it is a non-negative quantity, lower bounded by zero. That's obvious. Further, it is non-increasing. Why? Because the derivative is less than or equal to zero, and you know very well that for any function of time, if the derivative is less than or equal to zero, then the function cannot increase; it can stay constant or go down. So the function is lower bounded and non-increasing, and this immediately lets us use Lemma 1.1, one of the first lemmas we did. It said that if a function is bounded and non-increasing, then the function has a finite limit as t goes to infinity. That is what we use; that was the point of all these very nice lemmas. So since V is lower bounded and non-increasing, using Lemma 1.1, there exists a finite limit; let us call it V-infinity. The second point is that both e1 and e2 are bounded. Why? Because again V dot is less than or equal to zero, and this implies that V(t), for any time t ≥ 0, is less than or equal to V(0). V is a nice continuous, differentiable function of time because of how it is constructed, and whenever the derivative is less than or equal to zero, V(t) has to be less than or equal to V(0). What does that mean?
It means that V itself is bounded, because whatever value I started it at, it always remains below that value. So V is bounded. Now, notice that V is composed of quadratics in e1 and e2; V is actually built from e1 squared and e2 squared. So it should be obvious to you that if V is bounded, then each of these terms has to be bounded, because there is no subtraction happening anywhere. It is easy to argue: if e1 were unbounded, V would become unbounded, and if e2 were unbounded, V would again become unbounded. So the only way for V to remain bounded is that both e1 and e2 remain bounded. This is rather critical, and it holds purely because of the nice quadratic construction: just squares, all getting added, nothing subtracted. Instead, if V were something ridiculous like (e1 minus e2) squared, and I said V is bounded (bounded is the same as L-infinity, remember), then I cannot guarantee that e1 and e2 are bounded. Why? Because e1 can become really large and e2 can become really large while staying close to each other; they can go to infinity together, and their difference will always remain bounded. So this is a counterexample; remember it. V bounded implies e1, e2 bounded because of the nice construction of V. If I make some arbitrary, what I would call ridiculous, choice for V, this will not be true. So always be very careful: it is not that just because I wrote down some V, V bounded implies e1 and e2 bounded. That is not a guarantee. And I gave you the counterexample: if I choose V as (e1 − e2)², this is still greater than or equal to zero, it still has a lower bound, but V being bounded in this case will not imply that the errors are bounded. Excellent.
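The counterexample can be made concrete with a couple of lines of code (my addition, using arbitrary numbers): with V = (e1 − e2)², both errors blow up together while V stays pinned at a constant.

```python
# Counterexample from the lecture: V = (e1 - e2)**2 stays bounded even
# though e1 and e2 both grow without bound, because they grow together.
V = lambda e1, e2: (e1 - e2) ** 2

for t in [1.0, 1e3, 1e6, 1e9]:
    e1, e2 = t, t + 0.5        # both unbounded as t grows; difference fixed
    assert V(e1, e2) == 0.25   # V never exceeds 0.25
print("V bounded, errors unbounded")
```

With the quadratic-sum choice ½k1 e1² + ½e2², no such collusion between e1 and e2 is possible.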
Right. And of course we are using boundedness and L-infinity interchangeably; we have already proved in class that being L-infinity is the same as boundedness of the function. Excellent. The next, and rather critical, step is that e2 is an L2 function. So we are coming back to everything we have learned until now about signal norms: function norms, vector norms, signal norms. Keep those definitions in mind. We are claiming that e2 is an L2 function. How do we show that? We integrate V dot from zero to infinity, on both sides. The right-hand side becomes minus k2 times the integral from zero to infinity of e2 squared dt. On the left-hand side, the integrand is just dV/dt times dt, so the dt's cancel and this becomes simply an integral of dV, which is V at infinity minus V(0). Now, the amazing thing: this can be evaluated only because we have step one. If V-infinity were not defined, or were not finite, this could not be evaluated. So, because of step one, I can evaluate the left-hand side as V-infinity minus V(0). The right-hand side I don't touch; it is simply the integral of e2 squared dt. Now, what do I know? I know that the square of the L2 norm of the signal e2 is exactly this kind of integral. Because e2 is a scalar quantity, I could put an absolute value inside, but it is irrelevant; I don't have to. That should be clear to you. So from 3.12 and 3.13, what can I say? I can say that V-infinity minus V(0) is nothing but minus k2 times the square of the 2-norm of e2.
And I also know that V-infinity minus V(0) is bounded, because V-infinity is a finite quantity and V(0) is a finite constant, since I initialize at a finite value; it would be ridiculous otherwise. So what do I have here? The 2-norm of e2 is the square root of V(0) minus V-infinity, divided by k2. Now notice very carefully that the quantity on top is in fact greater than or equal to zero, so we don't have to worry about imaginary numbers coming out of the square root. Why? Because V is a non-increasing function, by step one. Therefore V-infinity, which is the value of V when time becomes really large, has to be less than or equal to its value at time zero. So this is non-negative, and you have a real, non-imaginary outcome; it is a sanity check. So what do we have? Because of this expression, I know that the right-hand side is finite, and this is precisely what it means for a signal to be an L2 signal. Therefore, as per our definition, e2 is an L2 signal. Excellent. So I want to go ahead and summarize what we did today; we will continue with more of this discussion next time. What we looked at today: we started off our signal-chasing analysis. We saw that Barbalat's lemma can be used only to prove asymptotic convergence and cannot tell us anything about the behaviour of the function at times less than infinity; nothing about the transient, only about the steady state, in typical first-level control systems language. Then we looked at what asymptotic convergence is, and we went through a few steps of the signal-chasing analysis.
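The L2 claim can also be checked numerically (again my addition, with assumed gains and initial errors): simulate the closed-loop error dynamics, accumulate the integral of e2², and compare it with (V(0) − V∞)/k2, which is what integrating V dot = −k2 e2² predicts.

```python
# Numerical check that e2 is an L2 signal along the closed loop:
# integrating V_dot = -k2*e2**2 gives  ∫ e2² dt = (V(0) - V_inf)/k2.
k1, k2 = 4.0, 3.0          # assumed positive gains
e1, e2 = 1.0, -1.0         # assumed initial errors
dt = 1e-4

V0 = 0.5 * k1 * e1**2 + 0.5 * e2**2
integral = 0.0
for _ in range(300000):    # 30 s, by which time the errors have decayed
    integral += e2**2 * dt
    e1, e2 = e1 + dt * e2, e2 + dt * (-k1 * e1 - k2 * e2)  # forward Euler

V_end = 0.5 * k1 * e1**2 + 0.5 * e2**2   # ≈ V_inf (here essentially zero)
print(abs(integral - (V0 - V_end) / k2) < 1e-2)   # finite, and matches theory
```

Because this integral stays finite as the horizon grows, the 2-norm of e2 is finite, which is exactly the definition of an L2 signal.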
This analysis, in which we prove that signals go to zero, is called signal-chasing analysis. In order to do it, we chose a very nice, smart potential function, and this helped us show that V is greater than or equal to zero and non-increasing. This is where we started our analysis, and this is how we proved a few of the steps of the signal-chasing analysis. What we will do next time is, of course, complete the remaining pieces of the signal-chasing analysis, and we will see how far we get into the rest of the proof of asymptotic convergence for the signals e1 and e2. The important things to remember here: first, the very nice, smart choice of potential function was critical; we saw that an unusual choice would not let you prove the asymptotic convergence you require. And second, all of our knowledge of L2 functions, Lp spaces, L-infinity spaces, boundedness, norms, and all the lemmas that we learned starts to get used here. This sequence of steps remains almost identical for the entire course, whenever we do Barbalat's lemma based analysis, and this is why it is rather critical that we almost memorize this set of steps. So I would strongly urge you to commit these steps to memory, because they remain more or less identical; of course, the expressions are different from problem to problem, but the steps themselves remain the same. All right. This is where we stop today. Thank you very much.