Hello everyone, welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikant Sukumar from Systems and Control Engineering, IIT Bombay. We are into the 10th week of this course on nonlinear and adaptive control, and I hope that all of you now have a fair idea of how to design algorithms that will drive autonomous systems such as the SpaceX satellite orbiting the earth that you see in the background. What we were doing in the previous session was looking at adaptive control in the presence of disturbance and how it impacts the robustness of the overall closed-loop system. We realized that one of the big issues with adaptive control is that although your errors might remain bounded under the action of an adaptive law, your ã, that is, your parameter estimation error, can very well go unbounded. This is a rather serious issue, because the parameter estimation error ã going unbounded means only one thing: your parameter estimate â is going unbounded. And since â enters the control law in a very prominent way, this also means that in order to maintain the bound on e, we will be forced to have an unbounded control, which is not acceptable, not just in general but in every situation, because obviously there is no way that you have unbounded controls available at your disposal. One of the ways we wanted to solve this was projection of the parameter estimate. The idea is that if I do know the bounds on my parameter, then I implement a projected adaptive law, which ensures that the parameter update stays within these bounds: since the true parameter value lies within these bounds, it makes very little sense to search for it outside them.
So we were starting to look at projection. This is the most direct way to attack the problem: if you do know these bounds, you attack the issue head-on. In the earlier situation, the lack of robustness came from the parameter estimate going unbounded, so now we simply prevent the parameter estimate from going unbounded. That is the direct way to attack the issue. How did we do that? We have a tracking error dynamics as always, nothing very new here. Then we redefine the control as shown; this is not a big deal, because ke and ṙ are both known, so obviously I can implement u if I know v, and vice versa. So what is the idea of projection? We change the parameter that we estimate: we start to estimate the parameter φ* instead of a. How do we do that? We write a in terms of φ* using the hyperbolic tangent function, which lies between -1 and 1, and which is 0 exactly when its argument φ* is 0. Because tanh lies between -1 and 1, the factor (1 - tanh φ*) lies between 0 and 2. And if you substitute 0 for this factor, you get A_min on the right-hand side; if you substitute 2, you get A_max on the right-hand side. So this is the rather nice property we are, in a sense, looking for, and we will of course look at how to make the error 0 and so on. And what is the right value of φ*? Obviously you want to see what the correct value of φ* is, and the correct value of φ* need not be 0, as you can see, because that is not what we are looking for.
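As a quick numerical sanity check of this reparametrization, here is a minimal Python sketch; the specific values of A_min and A_max below are hypothetical, chosen only for illustration:

```python
import math

A_MIN, A_MAX = 1.0, 5.0   # hypothetical known parameter bounds

def a_of_phi(phi):
    """Reparametrize a in terms of phi via tanh: since (1 - tanh(phi))
    lies between 0 and 2, the result always lies in [A_MIN, A_MAX]."""
    return A_MIN + 0.5 * (A_MAX - A_MIN) * (1.0 - math.tanh(phi))

# the map stays inside the bounds for any argument, however large
for phi in [-1e6, -3.0, 0.0, 3.0, 1e6]:
    assert A_MIN <= a_of_phi(phi) <= A_MAX

# phi = 0 gives the midpoint (A_MIN + A_MAX)/2, not the true parameter
assert abs(a_of_phi(0.0) - 0.5 * (A_MIN + A_MAX)) < 1e-12
```

So no matter what value the estimate of φ takes, the corresponding a can never leave the known interval, which is exactly the projection property we want.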
In fact, if I put φ* equal to 0, then what I will get is something like (A_max - A_min)/2 plus A_min. Let me write this out; remember, φ* = 0 is not the ideal value, I am just doing this as an illustration. If φ* is 0, then you will get (A_max - A_min)/2 + A_min, which is (A_max + A_min)/2, and this need not be the right value of a. So φ* = 0 is obviously not the true value; φ* has some true value which we don't know. What we are essentially doing is moving the unknown from being a to being φ*, and this is what helps us. And we can do this, of course, because the hyperbolic tangent function is available to us. Just give me a moment. So how did we go about this? We started to define what are called filtered variables, and they are true to their name: a filtered variable means that I define a filter for everything that appears on the right-hand side. The right-hand side, written here, is ė = -ke + v + ax. So for whatever quantities I know on the right-hand side, I define filters: I know e, I know v, I know x. I don't define any filter for a, because that doesn't make sense; I don't know a, and any filter we define in a typical adaptive control setting needs to be implementable. Therefore we define these filters: ė_f = -β e_f + e, v̇_f = -β v_f + v, and ẋ_f = -β x_f + x. The important thing to remember is that we define all the filters with the same gain β. Of course, we have arbitrary initial conditions; the initial conditions are not very important. And now what did we want to do? We wanted to write the dynamics in terms of our filtered variables. All right, how did we do that?
We simply took a derivative of this equation, because that brings in ė, and ė can be substituted readily; that is simply the idea. So that's what I do: I take the derivative of both sides of this equation, right here, and I get ë_f and v̇_f and ė, and that ė gets substituted from here, which is this guy. Now, because I want the dynamics purely in terms of filtered variables, I substitute, wherever available, e as ė_f + β e_f, v as v̇_f + β v_f, and x as ẋ_f + β x_f. Then I separate out the terms with β on one side and the terms without β on the other. And if I define the quantity multiplying β on the right-hand side as Σ, right here, it is very easy to see that the equation I have is Σ̇ = -βΣ. So now, what do we do? We know this is an exponential decay: Σ(t) = Σ(0) e^{-βt}. Such exponentially decaying terms are well known in stability analysis to have no impact, so instead of carrying this term around, we simply and directly put Σ = 0. And if you put Σ = 0, this is the dynamics we get for ė_f. Of course, I could be very, very precise and say that there is actually a plus Σ(0) e^{-βt} term here; Σ is not exactly 0, but exponentially decaying. But this term, like I said, does not affect the stability analysis; it is ineffectual in the stability analysis, and therefore we can ignore it, and we choose to do that. So then what do we do? We implement what is called a non-certainty-equivalence type of adaptation law. So, just like a, we define an â.
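To see this filtering identity concretely, here is a small Python sketch with forward-Euler integration. All numerical values are hypothetical, v(t) and x(t) are arbitrary stand-in signals, and the grouping Σ = ė_f + k e_f - v_f - a x_f is my reconstruction of the quantity multiplying β in the steps above; the check is that Σ indeed decays like Σ(0)e^{-βt}:

```python
import math

# hypothetical constants for this sketch
k, beta, a = 2.0, 5.0, 1.5
dt, T = 1e-4, 3.0

# arbitrary known signals appearing on the right-hand side
v = lambda t: math.cos(t)
x = lambda t: math.sin(2.0 * t)

# tracking error and the three filters, all with the same gain beta
e, ef, vf, xf = 1.0, 0.3, -0.2, 0.4   # arbitrary initial conditions

def sigma():
    # Sigma = e_f dot + k*e_f - v_f - a*x_f, using e_f dot = -beta*e_f + e
    return (-beta * ef + e) + k * ef - vf - a * xf

s0 = sigma()
t = 0.0
while t < T:
    d_e  = -k * e + v(t) + a * x(t)   # error dynamics: e dot = -k e + v + a x
    d_ef = -beta * ef + e             # the three filters share the gain beta
    d_vf = -beta * vf + v(t)
    d_xf = -beta * xf + x(t)
    e, ef, vf, xf = e + dt * d_e, ef + dt * d_ef, vf + dt * d_vf, xf + dt * d_xf
    t += dt

# Sigma obeys Sigma dot = -beta*Sigma, so by t = 3 it has essentially vanished
assert abs(sigma()) < 5e-3 and abs(s0) > 0.1
```

The initial conditions of the filters were deliberately chosen inconsistently (Σ(0) ≠ 0), and the mismatch still dies out exponentially, which is why ignoring the Σ(0)e^{-βt} term in the stability analysis is harmless.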
Earlier, a was defined with a φ* here, if you remember. But we define â with a φ̂ plus a δ̂. So there are two terms, not just a φ̂ corresponding to φ*, which would have been certainty equivalence; this is non-certainty equivalence. We have a φ̂, and added to it we have a δ̂, and we will see how we use each of these terms. Because we adapt for φ̂ and δ̂ instead of for a directly, we don't have to worry about â remaining bounded. If you notice, our control is now v_f; it is in terms of the filtered variables, and we will of course take it back to the original variable, not a big problem, easy to implement. So the control now contains â, and it is not difficult to see that this â is guaranteed to remain within this nice bound. So there is no scope for v_f becoming unbounded, because we will of course also prove that x_f is bounded. So v_f remains bounded. The other thing to remember is that because we have defined a filter, and v is what we really implement on the system, via v = v̇_f + β v_f, we also have to claim that v is bounded. That is not difficult either, because the filter is a stable system: v_f is bounded and its derivative is bounded, so v, which is the sum of v̇_f and β v_f, is also going to be bounded. So boundedness of the control is guaranteed, and that was exactly the key issue in standard adaptive control, which we can now avoid, great. I'm sorry for what seems like a long introduction, but I wanted to re-explain a few things. So this is where our lecture 10.4 technically begins.
But again, the explanations I gave for the earlier material were rather important, and I hope you can keep those in mind. Great. So what does the filtered closed-loop system look like? It looks essentially like this. The expression looks simple, but in reality this a and â are complicated, so we will of course write it out. So a - â is actually this expression; you notice that the A_min cancels out, and you are left with this gap. Not a big deal, pretty straightforward. Now, we define a variable, a sort of parameter error if you may, as z. We use a different notation here, and not the tilde notation, because this is not certainty equivalence; that's it. And z is defined as φ̂ plus δ̂ (this should be a hat as well, because we are using hats everywhere) minus φ*. So z = φ̂ + δ̂ - φ*, and this is your parameter error, if you may. Now we want to write this term using z, so of course we write φ̂ + δ̂ as z + φ*, and we substitute this quantity right here, so we get ė_f = -k e_f plus this whole thing. And what have we done? We have simply used a shorthand notation for half of (A_max - A_min) and called it μ. So what we have is -k e_f minus μ x_f times this term; we have just flipped the sign, not a big deal. Now what do we do? We have to choose two things: one is how δ̂ is obtained, and the other is how φ̂ is obtained. δ̂ is not given any dynamics; it is simply defined, statically, as -e_f x_f.
Interestingly, if you did want to see where it comes from, this δ̂ expression looks like the certainty-equivalence adaptive update law. Why? Because if you took the filtered system, which is this guy, and you thought of a - â as, say, ã, and you took V = ½ e_f² + ½ ã², you would get your ã term. In fact, it's a very quick calculation; I can simply do it here as an aside. If a - â is ã, take V = ½ e_f² + ½ ã² (actually, I don't even need a gamma here, so just half ã squared). Then V̇ = e_f ė_f + ã ã̇, which is e_f(-k e_f + ã x_f) minus ã times â-dot. So if you wanted to choose â-dot here to cancel the cross term, what would you do? â-dot would be e_f x_f. There is only a sign issue here that we might need to resolve, and I think the sign issue is just because we are defining things the other way around. The expression is exactly the same, as you can see: you get an e_f x_f here and an e_f x_f there; the sign difference is just because of how a - â and so on are chosen, so let's not worry too much about that. The point is that the expression for δ̂ is pretty much motivated by this. Interesting for you to observe, I hope, and I hope you can connect to this. All right, great. Now, what do we do? In this case we actually don't use a Lyapunov function to obtain φ̂-dot, at least not yet; we don't go to the Lyapunov function at all. We directly compute ż, and ż is just φ̂-dot plus δ̂-dot. This φ* is a constant, unknown of course, but still a constant, so it contributes nothing. φ̂-dot remains as it is, and then you compute δ̂-dot.
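This certainty-equivalence motivation is easy to check numerically. Below is a small Python sketch of the quick calculation above: with the filtered dynamics ė_f = -k e_f + ã x_f, a constant unknown a, and the CE update â-dot = e_f x_f, the function V = ½e_f² + ½ã² should never increase. The numerical values and the stand-in regressor signal x_f(t) are hypothetical:

```python
import math

# hypothetical setup for the CE aside: e_f dot = -k e_f + a_tilde x_f
k, a = 2.0, 1.5          # control gain and the true (unknown) parameter
dt, T = 1e-4, 20.0

ef, a_hat = 1.0, 0.0     # initial filtered error and parameter estimate
x_f = lambda t: 1.0 + 0.5 * math.sin(t)   # bounded stand-in regressor

def V():
    a_tilde = a - a_hat
    return 0.5 * ef**2 + 0.5 * a_tilde**2

history = [V()]
t = 0.0
while t < T:
    a_tilde = a - a_hat
    d_ef   = -k * ef + a_tilde * x_f(t)   # filtered error dynamics
    d_ahat = ef * x_f(t)                  # CE update: cancels the cross term
    ef    += dt * d_ef
    a_hat += dt * d_ahat
    t += dt
    history.append(V())

# V dot = -k e_f^2 <= 0: V is (numerically) nonincreasing and e_f converges
assert all(y <= x + 1e-6 for x, y in zip(history, history[1:]))
assert abs(ef) < 1e-3
```

Notice that the update is exactly e_f times x_f, which is the same expression, up to the sign convention, as the static δ̂ = -e_f x_f chosen above.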
Sorry, yes, compute δ̂-dot. So φ̂-dot stays as it is, and δ̂-dot is -ė_f x_f - e_f ẋ_f, which is this whole thing. Now, what do we do? We choose φ̂-dot so as to cancel everything we can cancel. Notice I can cancel this guy, and that's it; I cannot cancel either of these terms. So I choose φ̂-dot to cancel this term with this, and this term with this guy. And so what am I left with? I am left with ż being this remaining expression. Okay, so this is the important thing to remember: the update law is already chosen before the Lyapunov analysis. That is important to remember. So there are two pieces in trying to find the parameter: there is the δ̂ and there is the φ̂. The δ̂ is given by a static term, and the φ̂ is an update law with this sort of expression. Important to remember: unlike in the certainty-equivalence method, which we were using until now, neither the φ̂ nor the δ̂, to be honest, is chosen using a Lyapunov function. They are chosen separately, intuitively: δ̂ is motivated by the CE adaptive update law, as we just saw, and φ̂ is obtained by trying to cancel whatever we can cancel in the ż expression. Great. Now we can move on to the stability analysis. We have two pieces of the dynamics: we have the ė_f, which is written in this form, and we have the ż. These two are the critical ones; in fact, I will mark them with a different color. So this is the ė_f equation and this is the ż equation. These are the two equations that are required, because we essentially have these two variables, e_f and z. x_f and the rest are, of course, functions of these.
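Putting the two marked equations together, here is a small Python sketch of the resulting filtered closed loop. Be aware that the exact forms I integrate, ė_f = -k e_f + μ x_f (tanh(z + φ*) - tanh φ*) and ż = -μ x_f² (tanh(z + φ*) - tanh φ*), are my reconstruction of "this whole thing" after the cancellation, x_f is replaced by a bounded stand-in signal rather than the actual filter state, and all numbers are hypothetical; treat this purely as an illustration of the structure:

```python
import math

# hypothetical constants; phi_star is unknown to the controller in reality
k, mu, phi_star = 2.0, 1.0, 0.7
dt, T = 1e-3, 30.0

ef, z = 1.0, 2.0                          # initial filtered error and parameter error
x_f = lambda t: 1.0 + 0.5 * math.sin(t)  # bounded stand-in for the filtered state

def gap(z):
    # the "flipped-sign" term: tanh(z + phi*) - tanh(phi*), zero iff z = 0
    return math.tanh(z + phi_star) - math.tanh(phi_star)

t = 0.0
while t < T:
    xf = x_f(t)
    d_ef = -k * ef + mu * xf * gap(z)     # reconstructed e_f dynamics
    d_z  = -mu * xf**2 * gap(z)           # reconstructed z dynamics after cancellation
    ef  += dt * d_ef
    z   += dt * d_z
    t   += dt

# both the filtered tracking error and the parameter error settle to zero
assert abs(ef) < 1e-3 and abs(z) < 1e-3
```

The key structural point is visible in `gap`: since tanh is strictly increasing, the gap term has the same sign as z, so the reconstructed ż always pushes z toward 0 whenever x_f is nonzero.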
So we do not have to worry about those right now. Right, so we want to do a stability analysis of this closed-loop system. Let us see how we do it; I will do a few steps first. The first thing to look at is the sort of Lyapunov function we chose. The Lyapunov candidate is a rather interesting one. It is of this form: ½ e_f², which is just the normal quadratic in the first element; but the second term is rather interesting: a positive constant λ times [log cosh(z + φ*) - z tanh φ*]. Now, what do we want to claim? We anyway want to do a typical Barbalat's lemma type of analysis, so we are more than happy for V to be positive semidefinite. We know that the first term is of course nice, no problem; the first term is sign definite, no doubt. But what about the second term? This requires a little bit of careful analysis. So what we will do is take the partial derivative of log cosh(z + φ*) - z tanh φ*. How do I take the partial? There is only one variable here, z, so we take the partial with respect to that one variable. The derivative of log cosh is (1/cosh) times sinh, that is, tanh; it works exactly like the standard logarithm. So this is tanh(z + φ*) minus, taking the partial of the second term with respect to z, tanh φ*. And so this is what we get as the partial with respect to z. And of course, you want to note a few things. The first thing to note is that this term looks very similar to the term here; in fact, it is just the flipped-sign version of it: you have tanh φ* minus tanh(z + φ*).
And I've taken the Lyapunov candidate in a smart way, so that its partial derivative actually produces the same term; obviously this will help us in the analysis. But the other important thing to see is that this second term of V is bounded below, non-negative in the sense of never dropping below its minimum, and this is something you can verify. You have a φ* here, and when z is equal to 0, the two tanh terms in the partial are the same, so the partial is 0. That is the whole premise; in fact, let me think about this a little bit more carefully. What I am trying to do is take this V function and compute a minimum for it; that's the idea. And how do you compute a minimum? You take partials with respect to the variables and equate them to 0, and then examine what you get. So in fact, since I took the partial, I now want to equate it to 0; I apologize, I didn't complete the steps. I am taking the two terms separately, because you can see they are decoupled: there is only e_f here and only z there, so I can treat them separately, not a big deal. It is obvious that the first term is minimized at e_f = 0, because I can take the partial and equate it to 0. Now to find the minimum of the second term, I take the partial, which I did here, and equate it to 0; of course, a critical point can be a maximum or a minimum. Equating it to 0, it is evident that the critical point is at z = 0, and I claim it is in fact a minimum. I am not going to do this computation; you can check it on your own, but it turns out that this has a minimum at z = 0. And what is the minimum value?
What is the minimum value? It's exactly going to be 0... so that's what you have to verify. If you put z = 0 here, at z = 0 this is just log cosh of φ*, I believe. Ah, okay, sorry, the minimum value is not 0. That's my mistake; it is a positive quantity. It is exactly λ times log cosh φ*: if I plug in z = 0, the z tanh φ* term drops out, and you are left with λ log cosh(φ*). Yes, I think that is fine, absolutely. So the important thing to remember is that the minimum of this function is strictly positive (for φ* ≠ 0). Again, I have not verified that it is a minimum; you have to take a second derivative and check, but it is easily verified that this is in fact a minimum. So the minimum value I was thinking would be 0, but it is not; it is actually λ log cosh φ*. And the minimum is at z = 0, because if I check here carefully, equating the partial to 0, tanh(z + φ*) equals tanh φ* only when z = 0. Excellent. So what did we look at today? We have proceeded further towards the proof for this projection-based adaptive control. We are yet to complete the proof, of course, but we have done the filter design and we have also given the update laws, which are the φ̂-dot and the static δ̂, in this non-certainty-equivalence type of design.
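This minimum is easy to verify numerically. Here is a short Python check, with a hypothetical value of φ*, that g(z) = log cosh(z + φ*) - z tanh φ* attains its minimum value log cosh φ* at z = 0, with vanishing first derivative and positive curvature there:

```python
import math

phi_star = 0.7   # hypothetical value of the unknown constant

def g(z):
    # z-dependent part of the Lyapunov candidate (the factor lambda dropped)
    return math.log(math.cosh(z + phi_star)) - z * math.tanh(phi_star)

g_min = g(0.0)                          # claimed minimum: log cosh(phi*)
assert abs(g_min - math.log(math.cosh(phi_star))) < 1e-12
assert g_min > 0.0                      # strictly positive since phi* != 0

# g(z) >= g(0) on a wide grid around the critical point
for i in range(-1000, 1001):
    assert g(i / 100.0) >= g_min - 1e-12   # z in [-10, 10]

# the partial vanishes at z = 0 and the curvature there is positive
h = 1e-5
first = (g(h) - g(-h)) / (2 * h)           # central difference for g'(0)
second = (g(h) - 2 * g_min + g(-h)) / h**2
assert abs(first) < 1e-6
assert 0.5 < second < 0.8                  # sech(0.7)^2 is about 0.63
```

The second derivative is sech²(z + φ*), which is positive everywhere, so the critical point at z = 0 is indeed the global minimum, exactly as claimed in the lecture.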
In the subsequent session, we will complete the stability proof for this system, and we will also try to understand the implications of this kind of adaptive controller. All right, great. Thank you, and I'll see you again soon.