Hello everyone. Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikant Sukumar from Systems and Control, IIT Bombay. We are into the eighth week of this course, and we have already looked at several algorithms which help us to autonomously control systems such as the satellite orbiting the Earth that you see in the background. We are also now able to design algorithms against uncertainties in such systems. What we were doing until last time was completing the proof for model reference adaptive control. Model reference adaptive control is a paradigm for adaptive control of linear systems, a very popular, very well known, and widely cited and used set of methods in adaptive control. The basic difference from what we had been doing before this is that here we actually track the state of a reference model rather than just a reference signal. So the picture looked somewhat like this. That is the big difference here: you have a reference model instead of just a reference signal. Everything else is broadly similar. In terms of the analysis, the big difference we saw was that the unknowns were actually matrices. We started with A and B being the unknowns, and then we redesigned the problem so that the unknowns became K star and L star, which were still matrices. Because we were dealing with matrix unknowns, we also learned a rather novel way of designing Lyapunov candidate functions for matrix states or matrix unknowns. Here the parameter error term, instead of just being a square of a vector norm, is in fact a weighted matrix Frobenius norm, and it is defined in terms of the trace function. So that was the big difference.
And in order to complete the proof, we also needed to use some very interesting and useful trace properties. This is something I hope all of you will always remember, because these identities are very useful for manipulating any kind of Lyapunov candidate in any other context. Whenever trace functions appear, these equalities let you manipulate them. And of course, we then designed our update law. As usual, it was a certainty-equivalence adaptation. So everything was as usual, other than these particular features. Okay. So now we are ready to move into the next week's lectures. Again, please do not get confused: the classification by week is, in a sense, just to assign homework and organize the material. As and when we complete the material for a particular week, we move on to what we call the next week's lectures. So we are in week number eight, and we are already going to start looking at the week nine lectures. I really hope this doesn't cause any confusion; it should be pretty straightforward. This is just for us to organize the homework in a proper way. Week nine is titled adaptive integrator backstepping and extended matching design. We will do a couple of things in this set of lectures. The first is that we generalize the adaptive integrator backstepping that we did a couple of weeks ago, not in the week eight but in the week seven lectures. In week seven we saw integrator backstepping for the first time, and we also saw how to do a backstepping design for the unmatched case. What we will do here is to generalize that idea. And then we look at a different method of doing adaptive integrator backstepping so that we don't have to resort to overparameterization. That's the agenda for the week nine lecture notes. So let's continue.
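Since the lecture leans so heavily on these trace identities, here is a small numerical sanity check of the ones we use in the MRAC Lyapunov analysis. This is only an illustrative sketch: the matrices and their dimensions are arbitrary, chosen just to exercise the identities, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary matrices, chosen only to illustrate the identities
# (dimensions are hypothetical, not from the lecture).
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))
K = rng.standard_normal((3, 3))

# 1) Cyclic property: tr(AB) = tr(BA), even for non-square A, B.
cyclic_lhs, cyclic_rhs = np.trace(A @ B), np.trace(B @ A)

# 2) tr(K^T K) is the squared (unweighted) Frobenius norm of K.
frob_lhs, frob_rhs = np.trace(K.T @ K), np.linalg.norm(K, "fro") ** 2

# 3) A weighted Frobenius-type term, as in the MRAC Lyapunov candidate:
#    tr(K^T P K) with P positive definite is nonnegative.
P = 2.0 * np.eye(3)
weighted = np.trace(K.T @ P @ K)

# 4) A scalar is its own trace: x^T y = tr(y x^T) for vectors x, y.
x, y = rng.standard_normal(5), rng.standard_normal(5)
scalar_lhs, scalar_rhs = x @ y, np.trace(np.outer(y, x))
```

These are exactly the manipulations that let us differentiate a trace-based parameter error term and regroup factors inside the trace when completing the MRAC proof.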
So let's suppose that for the system... right, since we are generalizing, we are essentially taking vector states now. Well, to be honest, let me mark this first before we start: this is actually lecture 8.3. Okay. So recall, and I'm going to go back to week seven here, that in week seven we had a system like this, where we learned how to do backstepping for unmatched parameters. And we know how we did it. We started with the first subsystem and assumed that there is an unknown parameter, so the desired quantity was designed with a hat. Then of course we had a Lyapunov-based design. But when we went to the second Lyapunov function and tried to choose a control, there was a problem: the control is not implementable due to the presence of this theta here. The control contains an extra x2 desired dot, which brings in a theta, which is not implementable. Obviously, we don't want to use theta hat again, because we have already specified an update law for it, and reusing it would create an error in the analysis. So instead of theta hat, we put in a new estimate, mu hat, for the same quantity theta. We have two estimates, theta hat and mu hat. With that, we declare this new variable as x2 desired hat dot instead of x2 desired dot, and then we continue the analysis with this new mu hat. Then of course you have a candidate Lyapunov function which also contains a mu tilde on top of the theta tilde. Essentially, because we have two estimates for the same parameter, we have two terms corresponding to the two parameter errors. So that essentially was what we were doing.
We then specify mu hat dot and so on in the standard way. If you don't remember, please go back and revise what we did in that lecture. What we want to do today, at least in the beginning, is to look at a generalization of this where the states are not scalar. More often than not, your states will be vectors, so it doesn't make sense to restrict ourselves to scalar states. Doing this will also give you a fair idea that the analysis methods don't change significantly when you have a vector state instead of a scalar state. So it's not like things become too complicated, which is rather nice. This is another thing I want all of you to get used to. Think of any robotic application, say a two-joint manipulator, or think of a quadrotor. A quadrotor has six degrees of freedom, right? Three rotational and three translational. Corresponding to that, there are 12 states: three position states, three translational velocity states, three angular position states, and three angular velocity states. So the state is a vector, and we describe the dynamics in terms of vector equations. Similarly for a two-joint manipulator: think of the shoulder and the elbow. The shoulder has two states, corresponding to the shoulder angle and the shoulder angular velocity, and the elbow has two states, corresponding to the elbow angle and the elbow angular velocity. So this is again a vector. It doesn't always make sense to look only at scalar states, because more often than not we end up with vector states. So that is the purpose. Two things: one, we want to generalize this particular method to vector states.
And second, we also want to get a feel for how to deal with vectors in Lyapunov analysis, because in many cases the states are going to be vectors. And you will see the methods are not significantly different. Great. Now that we have this preamble, suppose we have this system: x dot is f(x), which is like a drift term, plus F(x) theta, which is the term containing the unknown parameter, plus g(x)u. The state is in some R^n, and the unknown is a vector in R^p, so a p-dimensional vector, while the state is n-dimensional. You can work out the corresponding dimensions of small f, capital F, and g. The control input u is assumed to be an m-dimensional vector. So there are three different dimensions: the state is n-dimensional, theta is p-dimensional, and u is m-dimensional. And of course small f, capital F, and also g are assumed to be sufficiently smooth and have all the nice properties, so that we can take derivatives when we want, and so on. Now, suppose that for this system there exists an adaptive controller. What is an adaptive controller? There are two pieces. First is a control law u, a specification of what the actuator is to generate, and this depends on x and an estimate of the parameter, theta hat. Second, there is an update law for theta hat, that is, a theta hat dot, which again depends on the state and possibly on theta hat. So this is what constitutes an adaptive controller: two pieces, a control law u and a parameter update law for theta hat. So suppose that there exists such an adaptive controller for this system, together with a smooth V function, that is, a Lyapunov function candidate, which takes the state and the parameter error as arguments and is radially unbounded in x and theta tilde. Such that if I take V dot... so what is this quantity? This quantity is just V dot.
Okay. This quantity is just V dot. Because V is a function of x and theta tilde, first I take the partial with respect to x and multiply it by x dot. And x dot now contains the control; it gets substituted here, so this is the closed-loop system. So we get del V del x times f(x) plus capital F(x) theta plus g(x) alpha, that is, del V del x times x dot, plus del V del theta tilde times theta tilde dot. We have to be careful here; let me check that we have done this correctly. Actually, this should be theta tilde dot, not theta hat dot. It doesn't change much, because we usually define theta tilde using theta hat: theta tilde is defined as theta minus theta hat, typically. So theta tilde dot is equal to minus theta hat dot, because theta is assumed to be a constant. So basically this changes nothing except the sign, but the sign is important because that is the sign used here. So this is theta tilde dot, not theta hat dot: the derivative of the parameter error. But remember, the right-hand side cannot depend on theta tilde, because theta tilde is unknown; we keep that in mind. And since theta tilde dot is just minus theta hat dot, if you know theta tilde dot, you also know theta hat dot. Great. So this is just the derivative of V along the closed-loop trajectories: the partial of V with respect to x times x dot, plus the partial of V with respect to theta tilde times theta tilde dot. And the assumption is that for this particular V, V dot along the system trajectories is less than or equal to minus W(x, theta tilde), where W is a positive semidefinite function, so minus W is negative semidefinite. So this is the assumption.
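If it helps to see the starting assumption gathered in one place, here is my reading of it in symbols, following the notation of this lecture. Note the sign convention, used again later in this session, that the designed update law is written for theta tilde dot, so that theta hat dot is its negative:

```latex
\begin{align*}
&\dot{x} = f(x) + F(x)\,\theta + g(x)\,u,
  \qquad x \in \mathbb{R}^{n},\ \theta \in \mathbb{R}^{p},\ u \in \mathbb{R}^{m},\\
&u = \alpha(x, \hat{\theta}), \qquad
 \dot{\tilde{\theta}} = \Gamma(x, \hat{\theta}) = -\dot{\hat{\theta}},
  \qquad \tilde{\theta} = \theta - \hat{\theta},\\
&\dot{V} = \frac{\partial V}{\partial x}
    \Bigl( f(x) + F(x)\,\theta + g(x)\,\alpha(x,\hat{\theta}) \Bigr)
    + \frac{\partial V}{\partial \tilde{\theta}}\,\dot{\tilde{\theta}}
    \;\le\; -W(x, \tilde{\theta}) \;\le\; 0.
\end{align*}
```

The key points are that the right-hand side of the update law depends only on x and theta hat (both measurable or computable), and that V dot is bounded by a negative semidefinite function.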
So this is the assumption. We start with this kind of a system, and we have an adaptive controller and a corresponding V such that V dot turns out to be less than or equal to a negative semidefinite function. That's the idea. Then we go on to add an integrator. Before we do that, I wanted to compare this with our earlier system here. Look at this first piece, just this system, and think of x2 as the control, just like we did in backstepping. If you think of x2 as the control, then you do have a control law which contains the theta hat, and an update law, with a corresponding V such that V dot is negative semidefinite. So we have exactly this situation. What's the setup then? Our first system is of this form; in fact, I have written it here, where this is actually the control, and this is the control law, which is a function of x1 and theta hat. Of course there is an update law, theta hat dot, and there is a V function, which is this, and V dot is less than or equal to minus W, where W could also depend on theta tilde. But the key point to remember is that it is negative semidefinite. So we already have exactly the same setup from the scalar-state, scalar-control case, and we are exactly generalizing that setup. We assume that we now have a vector system and an adaptive controller for this vector system such that there also exists a smooth V for which V dot is negative semidefinite. Now we add an integrator, which is essentially what has been done here: if you look at this, we think of this as the control, but actually there is an additional state, with x2 dot equal to u. Essentially we have an integrator. So that's what we say.
Now we add an integrator. The structure here is slightly more general, of course, but we still do the same thing: the control is replaced by this psi, and then psi dot is u. So the control is at the next level. Then what do we do? This is where integrator backstepping was used earlier, and we want, of course, to generalize it to this vector case. One thing should be evident: this psi is also in R^m. It has to be of the same dimension as the control, because otherwise this is not a feasible situation; if psi were of a different dimension, there would be problems. So we assume that psi has the same dimension as the control. Then the claim is that we can still construct a Lyapunov function which allows us to compute an adaptive controller guaranteeing that the closed-loop signals remain bounded, and also that the backstepping error and W are driven to 0 as t goes to infinity. This is the best we can do anyway: even in the scalar case, we proved that W goes to 0. What was W in that case? W was this, so we proved that x1 goes to 0; that's the best we can do. And of course, we also proved that the backstepping error goes to 0. So we prove these two things; that's what we want, just as in the scalar case. That's what we claim here as well: this W goes to 0, and so does the backstepping error, which is psi minus alpha, because psi is now a state and cannot always be made exactly equal to alpha, though in steady state it can be. So psi has to follow alpha, and that difference is the backstepping error, which is also going to be driven to 0. What is this magical Lyapunov function? Nothing very magical, to be honest. We call it V bar now, and it has several arguments. It has x and psi, which are the states now; it has the earlier parameter error theta tilde; and it has the new parameter error.
The new parameter error is theta bar. Remember, there we used a mu hat; here we are using theta bar. It's just a difference in notation. So V bar is a function of four quantities now: x, which was always there; theta tilde, which was already there for the one-state system; and now psi and theta bar, corresponding to the integrator and the new overparameterization. How do we construct it? Take the same V, which was a function of x and theta tilde. Then we add a standard backstepping error term. Then we add a term corresponding to the new parameter error, since theta bar is the error of the new parameter estimate. Notice that because this is a vector, I have taken care to write this as a norm. It is no longer just the square of psi minus alpha; it is in fact the norm squared of psi minus alpha, and it is very standard to use the Euclidean norm, that is, the two-norm. This difference is denoted by z, as mentioned here, so this term is actually one half of z transpose z, which is one half the Euclidean norm squared. Similarly, I construct a very similar looking term for theta minus theta bar, that is, the new parameter error, except that I add an adaptation gain, as I always do. Let me see. Unfortunately, we have used the same symbol gamma here, but this is not the same gamma; let's be clear on that. Let me instead call it S inverse, where S is some constant positive definite matrix. The earlier gamma is the update law for the parameter theta hat; this S is a different matrix. And S is simply, as you all know, the adaptation gain: it controls how fast or slow your adaptation happens.
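Written out, the composite Lyapunov candidate described above reads as follows. The one-half factors are the standard choice, and I write S for the second adaptation gain, as renamed in the lecture:

```latex
\begin{align*}
\bar{V}(x, \psi, \tilde{\theta}, \bar{\theta})
  &= V(x, \tilde{\theta})
   + \tfrac{1}{2}\, z^{\top} z
   + \tfrac{1}{2}\, \bar{\theta}^{\top} S^{-1} \bar{\theta},\\
z &= \psi - \alpha(x, \hat{\theta}) \in \mathbb{R}^{m},
  \qquad S = S^{\top} \succ 0.
\end{align*}
```

So the backstepping error term is half the squared Euclidean norm of z, and the new parameter error term is the S-inverse-weighted quadratic form in theta bar.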
Okay. All right. Great. So we now have this new kind of Lyapunov function for the integrator system with vector states; that is why norms and transposes appear here. Now we are going to take the derivative, of course. Our claim is that this is a good Lyapunov function, and we want to verify this claim. So let's see. The first thing we want to do is write the dynamics in the new variables, which are x and z now, because we introduced a backstepping variable; it's not x and psi, but x and z. So we just do that. First, x dot is f(x) plus F(x) theta plus g(x) times... I'm sorry, my bad, it's not u; this is psi. So that is the dynamics, and psi can be written as z plus alpha. That's what I do. And what is z dot? z dot is just the derivative of this, which is psi dot, which is u, and then minus alpha dot. Alpha is a function of two variables, x and theta hat. So first I take the derivative with respect to x: del alpha del x times x dot, and x dot is just plugged in from here. So this guy is right here. And then minus del alpha del theta hat times theta hat dot. So what is theta hat dot? This is where we have to be careful. Let me be careful: this term should be minus del alpha del theta hat times theta hat dot. Remember, alpha cannot depend on theta tilde, because it was the first-stage control; it can depend only on theta hat, because theta tilde is unknown. And remember from the previous derivation, theta hat dot is actually equal to minus theta tilde dot.
So it is equal to minus Gamma. Therefore I have to be careful: this term is actually going to be plus del alpha del theta hat times Gamma(x, theta hat). There is a plus sign and not a minus sign, so I need to be careful here. I hope that is clear. So what have I done? I have simply written the dynamics in terms of the new variables x and z; it's just a lot of bookkeeping again. This is exactly what I was doing in the scalar case as well; all I'm doing here is being a little more careful because vectors are involved, and that's it. So here I have x dot, which is f(x) plus capital F(x) theta plus g(x) times psi, and psi is just replaced in terms of the backstepping error variable as z plus alpha. Then I have z dot, which is psi dot minus alpha dot. When I say dot, it is a derivative with respect to time. So I take psi dot, which is u, minus alpha dot, which has two terms: minus del alpha del x times x dot, and minus del alpha del theta hat times theta hat dot. The first term is therefore del alpha del x times x dot, substituted from here, and the del alpha del theta hat times theta hat dot term brings in a minus Gamma, so with the leading minus sign it becomes positive. So I have fixed this sign. Now, when I write it in this form, remember: what is the dimension of z? It's important that we keep track of all this. x is in R^n, of course; I don't need to write that. But z has the same dimension as the control, so it is in R^m. Now, what is the dimension of this quantity in the bracket? It is simply x dot, and that, of course, belongs to R^n. I need to be careful here.
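Collecting the bookkeeping above, the dynamics in the new variables (x, z) come out as follows, using the lecture's sign convention that theta hat dot equals minus Gamma(x, theta hat), which is why the last term carries a plus sign:

```latex
\begin{align*}
\dot{x} &= f(x) + F(x)\,\theta
          + g(x)\bigl(z + \alpha(x,\hat{\theta})\bigr),\\
\dot{z} &= u
  - \frac{\partial \alpha}{\partial x}
    \Bigl( f(x) + F(x)\,\theta + g(x)\bigl(z + \alpha(x,\hat{\theta})\bigr) \Bigr)
  + \frac{\partial \alpha}{\partial \hat{\theta}}\,\Gamma(x,\hat{\theta}).
\end{align*}
```

Here z is in R^m, the bracketed quantity is just x dot in R^n, and the two partials of alpha are Jacobian matrices whose dimensions we check next.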
Now, what has to be the dimension of del alpha del x? It has to be R^{m cross n}, to be consistent with this expression: the right-hand side has to be in R^m, and x dot is in R^n, therefore this has to be an m cross n matrix. Similarly, before we go on: Gamma here comes from theta hat dot, or theta tilde dot up to sign, and theta is in R^p; there are p unknown parameters. Therefore this Jacobian, del alpha del theta hat, has to belong to R^{m cross p}. Why does this happen? Look at what alpha is. Alpha is a function of x and theta hat. What do I know about the dimension of alpha? Writing it carefully, alpha is in fact a map from the x states, which live in R^n, and the theta hat states, which live in R^p, to what? It has the same dimension as psi, so it maps into R^m. Now, if I take the partial of alpha with respect to only the x states, it is obvious that I will get an m by n matrix: the partial of an m-vector with respect to n of its arguments is an m by n matrix. Similarly, the partial of an m-vector with respect to the p parameter states is an m by p matrix. So this is all consistent. Just keep in mind that all these partials are now what we call Jacobians, and they are all matrices. Excellent. So what is it that we did today? We started to generalize the adaptive integrator backstepping that we looked at in week seven to the vector case, and we are trying to understand the differences in the vector case: norms appear, transposes appear, and then there are the Jacobians and all these notions. So we have to do very careful bookkeeping. But as of now, I hope you have seen that the methods are not distinctly different just because the states become vectors.
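The dimension bookkeeping above can be checked numerically. Here is a small sketch with a made-up first-stage control law alpha; only its input and output dimensions matter, and the particular formula and the dimensions n, p, m are hypothetical, chosen just for illustration.

```python
import numpy as np

# Illustrative dimensions (hypothetical, just for the shape check):
# n states, p unknown parameters, m control inputs.
n, p, m = 4, 3, 2

def alpha(x, theta_hat):
    """A made-up first-stage control law alpha: R^n x R^p -> R^m.
    Only its input/output dimensions matter for this check."""
    M = np.sin(np.arange(m * n)).reshape(m, n)
    N = np.cos(np.arange(m * p)).reshape(m, p)
    return np.tanh(M @ x) + N @ theta_hat

def num_jacobian(f, v, eps=1e-6):
    """Finite-difference Jacobian of f with respect to the vector v."""
    f0 = f(v)
    J = np.zeros((f0.size, v.size))
    for i in range(v.size):
        dv = np.zeros_like(v)
        dv[i] = eps
        J[:, i] = (f(v + dv) - f0) / eps
    return J

x = np.ones(n)
theta_hat = np.ones(p)

# Partial of alpha w.r.t. x only: an m x n Jacobian.
d_alpha_dx = num_jacobian(lambda xv: alpha(xv, theta_hat), x)
# Partial of alpha w.r.t. theta hat only: an m x p Jacobian.
d_alpha_dth = num_jacobian(lambda th: alpha(x, th), theta_hat)

print(d_alpha_dx.shape)   # (2, 4), i.e. m x n
print(d_alpha_dth.shape)  # (2, 3), i.e. m x p
```

Since this alpha is linear in theta hat, del alpha del theta hat is exactly the matrix N, which also matches the numerical Jacobian.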
So of course we will continue working on this generalization of adaptive integrator backstepping to the vector case in the subsequent session as well. All right. Great. I'll see you again next time. Thanks.