Hello everyone, welcome to yet another session of our NPTEL course on non-linear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are back to our motivating image; we are in v2, and we aim to develop and analyze algorithms that will help us drive autonomous systems such as this one. Before moving ahead, let us quickly recap what we did last time.

Last time we started by studying a few important lemmas. The first one was on convergence of certain functions: if you have a scalar-valued function whose derivative is bounded, then the function is uniformly continuous. The other lemma, which we in fact saw earlier as Lemma 1.1, said that if you have a scalar-valued function that is bounded below and non-increasing, then the function has a finite limit as t goes to infinity. We also saw examples of such functions.

We then studied one version of Barbalat's lemma, which is one of the very key lemmas for the analysis of adaptive systems; without Barbalat's lemma there is almost no hope of doing any adaptive systems analysis. The lemma says that if a function is integrable, and further the function is uniformly continuous, then the function converges to 0 as t goes to infinity. Here we can talk about both scalar- and vector-valued functions; for vector-valued functions we look at component-wise integrals.

Note that Barbalat's lemma is a sufficiency condition, a one-way result: it says that if the two conditions, integrability and uniform continuity, are satisfied, then you have convergence to 0, and not the other way round. Still, we saw an example showing that it is a rather tight sufficiency condition: we exhibited a function that is integrable but not uniformly continuous, and that function does not converge to 0; in fact, it has no limit at all.

Moving on, we will look at a corollary of this lemma, which gives a slightly simpler-looking and more concise condition, but remember, it is still a corollary. What does it say? It says that if the function is both L-infinity and L-p for some p with 1 <= p < infinity (p = infinity is excluded, since the L-infinity condition is already assumed separately), and further f-dot is L-infinity, then the function converges to 0 as t goes to infinity. So this is, if I may, a different characterization of Barbalat's lemma.
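For reference, here is a compact restatement of the lemma and its corollary as I have described them; this is my transcription into standard notation, assuming the usual definitions of the L-p spaces:

```latex
% Barbalat's lemma (Lemma 2.1), as stated in the lecture:
\textbf{Lemma.} If $\lim_{t\to\infty}\int_0^t f(\tau)\,d\tau$ exists and is finite
(component-wise for vector-valued $f$), and $f$ is uniformly continuous on
$[0,\infty)$, then $\lim_{t\to\infty} f(t) = 0$.

% Corollary discussed above:
\textbf{Corollary.} If $f \in L_\infty \cap L_p$ for some $p \in [1,\infty)$ and
$\dot f \in L_\infty$, then $\lim_{t\to\infty} f(t) = 0$.
```

And for the tightness claim, one standard construction (not necessarily the exact example from last lecture) is a train of triangular spikes of unit height centred at t = n with width 2^{-n}: the integral converges because the spike areas sum to a finite value, but f(n) = 1 for every n, so f has no limit and is not uniformly continuous.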
Now, since this is a corollary, it makes sense to check that the original lemma implies it. So here is the exercise: suppose the function is L-infinity and L-2, and suppose additionally that its derivative is L-infinity; we want to prove that the limit of f(t) as t goes to infinity is 0. In other words, we want to prove that the corollary holds for p equal to 2. This is a specific case of the corollary, but if you can prove it for p equal to 2, you can prove it for any p, no problem. The point is to prove the corollary using the original Barbalat's lemma statement, which is essentially Lemma 2.1. I leave this to you; it will be part of the homework exercises, so I want you to give it a shot. If you remember, I already said that the integrability condition has some similarity with an L-1-type condition, and you have to use a similar idea here. That is the hint for this exercise.

Okay, now let us see how one might use Barbalat's lemma. I want to give you a glimpse of convergence analysis using this very useful tool that we are claiming can do so much; in particular, we are claiming it will help us analyze convergence of adaptive systems. So suppose we consider this typical spring-mass-damper system; you have all seen something like this before in your mechanics and dynamics courses. The dynamics of this system is given by equation (3.1), which is a very standard application of Newton's second law. What you would typically do is treat the mass M as a free body and write all the forces acting on it. One note first: there is an inconsistency on the slide, and the damper coefficient should be labelled c, not b.

So what are the forces? The motion is purely horizontal. Vertically, we have the weight Mg and the normal reaction from the ground, which is not marked here but acts as a constraint preventing the mass from falling through. Horizontally, we have the force c x-dot due to the damper and k x due to the spring, both depending on the displacement x, and of course the user-applied force F(t). Newton's law in the horizontal (x) direction then states that M x-double-dot equals the sum of the external forces: the only positive force is F(t), and then you have minus c x-dot and minus k x. Once you have written this concise form of Newton's law, you can see that it is exactly the equation we have here, just with the terms rearranged.
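To have everything in one place, let me write out equation (3.1) as I have just described it from the free-body diagram:

```latex
% Newton's second law in the horizontal direction (equation (3.1)):
M\ddot{x} = F(t) - c\dot{x} - kx
\quad\Longleftrightarrow\quad
M\ddot{x} + c\dot{x} + kx = F(t).
```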
Excellent. So now that I have this equation, I am going to write it in our standard state-space form. And how do I do that? By choosing states. I choose my states as x1 = x and x2 = x-dot; this is the standard way to convert such an equation into state-space form. (Note, by the way, a small error on the slide: the states should indeed read x1 = x and x2 = x-dot.) Then x1-dot = x-dot, which is exactly x2; that is the first equation. And x2-dot is the second derivative of x, which can immediately be read off from the equation of motion: M x-double-dot is M x2-dot, so x2-dot = -(k/M) x1 - (c/M) x2 + F/M.

Now, in general, as is mentioned here, in adaptive control as well as in non-linear control we typically want our states to follow a desired trajectory. It may seem rather unrealistic for this particular problem, but this is what we always want to do: we want our robots to follow a trajectory, we want our spacecraft to follow a trajectory, we want our voltage signals to follow a trajectory, and so on. Whatever system you consider, you generally want its states to follow a trajectory. So we define desired states x1-desired and x2-desired, and we require x2-desired to be the derivative of x1-desired, so that the desired trajectory is consistent with the dynamics. This is sort of a matching condition. Why? Because for this dynamical system, x1-dot is exactly x2: x1 is the position and x2 is the velocity, so the derivative of the position is the velocity. Therefore, for a consistent desired trajectory, the derivative of the desired position must be the desired velocity; otherwise the trajectories are not compatible and cannot be tracked. If I give you a trajectory where x1-dot-desired is not equal to x2-desired, then that trajectory simply cannot be followed by the system. That should be obvious to you, and that is why this is called a matching condition: if your system has position-derivative equal to velocity, then your desired trajectory must have position-derivative equal to velocity too. There are no two ways about it.
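Writing out what we just derived, the state-space form and the matching condition read:

```latex
% State-space form with x_1 = x and x_2 = \dot{x}:
\dot{x}_1 = x_2, \qquad
\dot{x}_2 = -\frac{k}{M}\,x_1 - \frac{c}{M}\,x_2 + \frac{F}{M},
% Matching condition on the desired trajectory:
\qquad \dot{x}_{1d} = x_{2d}.
```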
Great. Now that I have these desired trajectories, what do I do? As control engineers, we are always interested in quantities going to zero. Remember, we always assumed our equilibrium to be at the origin, and if you look at Barbalat's lemma, it too says that a signal goes to zero. So, because we want to deal with objects that go to zero, we create error variables. I do not want to compare x1 with x1-desired directly, since both are quantities that depend on time. Instead, I construct an error e1, which is the difference of x1 and x1-desired, and similarly an error e2, which is the difference of x2 and x2-desired. This is very standard: we create one error corresponding to each system state. Here we have two states, so we create two errors; if we had a hundred states, we would create a hundred errors. Why does this make sense? Because if both these errors go to zero, I have exactly what I desire: e1 going to 0 and e2 going to 0 imply x1 goes to x1d and x2 goes to x2d, which is precisely my tracking objective. This has to make sense to you, because it is what we do throughout linear and adaptive control: instead of looking at the original variables x1 and x2, we look at x1 - x1d and x2 - x2d as the new system variables.

Excellent. So once I have these error variables, what is the deal? I want to identify the dynamics, the evolution, of these error variables. Until now I had the dynamics of x1 and x2; now I want the dynamics of e1 and e2. And how do I get them? Just take derivatives, because I already have the dynamics of x1 and x2. So e1-dot = x1-dot - x1d-dot, and this is exactly the same as our definition of e2. This is not magic, not by any distance: it is by virtue of the matching condition that the error dynamics come out this way. So e1-dot turns out to be exactly e2, and the first equation of our error dynamics looks identical to our original state dynamics. It is an integrator, and it is very standard, at least in mechanical and aeromechanical systems, to have an integrator as your first state equation.

Then what about the second error state? You have e2-dot = x2-dot - x2d-dot. For x2-dot I can plug in the entire right-hand side that we had from before, and x2d-dot is just the second derivative of x1-desired.
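Collecting the steps, the error variables and their open-loop dynamics are:

```latex
% Error variables and open-loop error dynamics:
e_1 = x_1 - x_{1d}, \qquad e_2 = x_2 - x_{2d},
\qquad
\dot{e}_1 = e_2, \qquad
\dot{e}_2 = -\frac{k}{M}\,x_1 - \frac{c}{M}\,x_2 + \frac{F}{M} - \dot{x}_{2d}.
```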
As of now, mind you, we are still not in the purview of adaptive control; we have not even defined what adaptive control is. We are simply trying to understand what Barbalat's lemma can do for us, just to give you a taste of how analysis with Barbalat's lemma goes. So we assume for the moment that all the parameters of the system, M, c, and k, are known. Obviously, the desired trajectories also have to be known, have to be given to us; it does not make sense otherwise, because if I do not know what I am following, then what am I following?

Okay, great. So what do I do? I start off by choosing an appropriate control. I know for a fact that a system like (3.6), this target error system, is globally exponentially stable; you already know what that means. Why do I know this for a fact? Well, because I have been doing control engineering for a long time. For those of you who do not know it, such a system can be verified to be globally exponentially stable in many different ways. One obvious way: first identify that this is a linear system. Writing E = (e1, e2)-transpose, we have E-dot = A E with A = [0, 1; -k1, -k2]. Once you have this form, you can do many things: you can write the characteristic polynomial and use the Routh-Hurwitz criterion, which many of you might have done in your first basic control systems course, or, in this case, you can also compute the eigenvalues directly. What is the characteristic polynomial? It is simply det(lambda I - A) equated to 0. Computing this determinant, det [lambda, -1; k1, lambda + k2], gives lambda squared plus k2 lambda plus k1 equal to 0. (On the slide I first wrote the coefficients the other way around, lambda squared plus k1 lambda plus k2; the corrected version has k2 multiplying lambda.)

The eigenvalues are simply the solutions of this polynomial: lambda = (-k2 plus-or-minus the square root of (k2 squared minus 4 k1)) divided by 2. Depending on what k1 and k2 are, you will get different roots, but the key thing to remember is that the real part of lambda is always negative here. Why? For positive gains: when the quantity under the square root is non-negative, it is strictly less than k2 squared, so its square root is less than k2 and both roots are negative; when it is negative, the roots are complex with real part -k2/2, which is again negative.
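If you want to check this eigenvalue claim numerically, here is a minimal sketch in Python; the gain values are hypothetical, chosen only for illustration:

```python
import numpy as np

# Target error system E_dot = A E with positive gains k1, k2
# (the values below are hypothetical, for illustration only).
k1, k2 = 2.0, 3.0
A = np.array([[0.0, 1.0],
              [-k1, -k2]])

# Eigenvalues are the roots of lambda^2 + k2*lambda + k1 = 0.
eigvals = np.linalg.eigvals(A)
print("eigenvalues:", eigvals)

# Both real parts are negative for any k1, k2 > 0.
assert np.all(eigvals.real < 0), "expected a Hurwitz matrix"
```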
Again, this is something you know from your standard first-level course in control systems, so I am not covering it in too much detail. But you should know it: if the quantity under the square root is less than k2, the real part has to be negative, and once I know the real part is negative for both eigenvalues, I know the system is globally exponentially stable. Or, of course, as I said, you can simply use the Routh criterion together with the fact that k1 and k2 are both positive, and it is very easy to conclude that the system is globally exponentially stable.

Cool. So now I know that this equation in blue, the target system (3.6), is globally exponentially stable. What do I want? I want my error system (3.4) to become this target system, because if I can make (3.4) and (3.6) the same, then I am done: my error dynamics are globally exponentially stable. Why do I use this particular target system? Because I can already see that the first equations match. If the first equations did not match, I would have no hope, because my control, the force F (entering the dynamics as F/M), appears only in the second equation; I have control authority in the second equation and nothing in the first. So the first equations definitely had to match, and they do. So what do I do? I choose F/M in such a way that the right-hand side of the second error equation becomes exactly -k1 e1 - k2 e2. That is exactly what this is: I have simply chosen to cancel these terms, with some new definitions of course (I am calling k/M as k1 and c/M as k2), and then I introduce the nice stabilizing terms -k1 e1 and -k2 e2. Basically, I get from here to here by choosing F; that is it. What results is what I would call not just the error dynamics but the closed-loop error dynamics, because I have plugged in the control.

Now, of course, we know from the Routh-Hurwitz method or from the eigenvalue analysis that this system is exponentially stable. But what if I want to prove this using Lyapunov methods, using a potential function? This, as we will see next time, can be done using Barbalat's lemma. Why do we want to do this? Because typical non-linear systems may not have such an easy, nice structure: you may have something very non-linear appearing here, and then you cannot use the Routh criterion and so on. Your target system may not be linear, and in such cases you are forced to use non-linear analysis methods. That is what we will see next time: how to prove exponential convergence using Barbalat's lemma and potential-function methods.
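As a sanity check on the whole construction, here is a minimal simulation sketch in Python. The control law F below is my reconstruction of the cancellation argument just described (cancel the plant terms and the feedforward, then inject -k1 e1 - k2 e2); the plant parameters, gains, and desired trajectory are hypothetical choices for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical plant parameters and gains (for illustration only).
M, c, k = 1.0, 0.5, 2.0
k1, k2 = 4.0, 4.0

# Desired trajectory satisfying the matching condition x1d_dot = x2d.
x1d = np.sin
x2d = np.cos                      # derivative of x1d
x2d_dot = lambda t: -np.sin(t)    # derivative of x2d

def rhs(t, x):
    x1, x2 = x
    e1, e2 = x1 - x1d(t), x2 - x2d(t)
    # Reconstructed control: cancel the plant terms and the feedforward,
    # then add the stabilizing terms, so that e2_dot = -k1*e1 - k2*e2.
    F = M * x2d_dot(t) + k * x1 + c * x2 - M * (k1 * e1 + k2 * e2)
    return [x2, (-k * x1 - c * x2 + F) / M]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0])
e1_final = sol.y[0, -1] - x1d(sol.t[-1])
e2_final = sol.y[1, -1] - x2d(sol.t[-1])
print("final tracking errors:", e1_final, e2_final)  # both close to 0
```

With this choice the closed-loop error dynamics are exactly the linear target system, so the printed errors decay exponentially, consistent with the eigenvalue analysis above.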
Great. So what did we do this time? We essentially started to look at how to use Barbalat's lemma. We are not there yet; we will continue next time. But we have looked at a model, chosen a control, and derived an error equation. Before that, we saw an alternative version, a corollary, of Barbalat's lemma, which may also be useful to us at some junctures. Excellent. So this is where we will stop today. See you again next time. Thanks.