Hello, welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikant Sukumar from Systems and Control, IIT Bombay. We are well into the 10th week of our lectures on nonlinear adaptive control. We are now in the process of learning about not just the design of adaptive laws for uncertain parameters, but also robust adaptive laws, which are not adversely impacted by disturbances that appear in the system due to unmodeled dynamics or external effects, such as what you see in the background. So, in the previous session we had started to look at the stability analysis and the update-law design for this projection-based adaptive controller. The idea was that the framework was based on filtering the closed-loop signals. So we actually filtered the known closed-loop signals first, and the important thing to remember is that these were all identical filters, that is, with identical gains. We then computed the dynamics in these filtered variables. This was another very critical step. And then we noticed that this dynamics has an additional exponentially decaying term, which we ignore, as is standard in such stability analyses. You can also keep it and continue the analysis, but those terms are going to go to zero anyway; we would simply end up carrying a lot of exponential decay terms that do not affect the conclusions, so we do not carry them any further. So we have dynamics in the filtered variables which look very much like the original system dynamics, and this is what is rather key. And we choose a v_F in terms of what we call a hat, which is in fact the projected version of the estimate. So we actually have a phi hat plus delta hat here, and this construction results in a hat always lying between a_min and a_max.
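As a quick aside, the filtering construction described above can be sketched in code. This is a hedged sketch: the filter form s_F_dot = -beta*s_F + s is assumed from the relation E_F dot = -beta E_F + E used later in the lecture, and the function name, gain, and signal are illustrative.

```python
import numpy as np

# Identical first-order filters, one per closed-loop signal, all with the
# same gain beta:  s_F_dot = -beta * s_F + s(t)   (forward-Euler sketch)
def filt(signal, beta, dt):
    out = np.zeros_like(signal)
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + dt * (-beta * out[i - 1] + signal[i - 1])
    return out

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
step_F = filt(np.ones_like(t), beta=2.0, dt=dt)
print(step_F[-1])   # settles near the filter's DC gain 1/beta = 0.5
```

The same `filt` (same beta) is applied to every closed-loop signal, which is what lets the filtered dynamics mirror the original ones.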
And these are the bounds on the parameter that are already given to us. This a hat is what appears in the control v_F, and therefore the controller remains bounded. We also had a short discussion where I mentioned that if the filtered control v_F is bounded, then v is also bounded, because v equals v_F dot plus beta v_F and both terms are bounded. This is again something rather important; this is the robustness aspect, and even in the presence of disturbance, nothing changes in this argument. So then we started looking at the filtered closed loop. We had the E_F dynamics written in terms of the new parameter error z; again, a new expression instead of a tilde, because we are using a non-certainty-equivalence paradigm. So we had an E_F dot dynamics and a z dot dynamics corresponding to the parameter error, if you will. The interesting thing to remember is that there were two terms, phi hat and delta hat, that had to be defined. Delta hat was directly an algebraic expression, not a dynamical system, and it was motivated by the certainty-equivalence adaptive law. After that, phi hat is chosen just by computing z dot and cancelling, using phi hat dot, whatever terms in z dot can be cancelled. So there was no Lyapunov analysis involved in arriving at the update law. This is another interesting perspective in non-certainty equivalence: the update law for phi hat is not computed using a Lyapunov function. Once we have cancelled whatever we could, we get a nice z dot law. And then we wanted to start the stability analysis, and in order to do so, we of course needed a candidate Lyapunov function, which has a rather interesting-looking expression, not your usual Lyapunov function.
The important thing to see is that this candidate is bounded below, and the way we claimed that was by trying to find the minimum of this function. Since the E_F and z terms are decoupled, we could deal with them separately. The E_F term is evidently non-negative. To handle the z-dependent term, we took the partial derivative with respect to z; that is, we tried to find the minimum by equating the partial to zero. When we took the partial with respect to z, the first thing we realized was that it looks very much like the z dot expression; it contains a large piece of the z dot expression. This is deliberately done, of course, because it helps us in the analysis subsequently. So, essentially, we took the partial with respect to z and equated it to zero to find the optimum, and it is evident that the optimum is at z equal to zero. We claim that it is a minimum; I leave it to the audience to figure out why it is a minimum and not a maximum. And if we substitute z equal to zero in this expression, the second term goes to zero. So what we are left with is the minimum value. That is rather nice: this V has a nice constant lower bound, which is critical for us. So, let me verify and stress this again: at z equal to zero we have a minimum, and that minimum turns out to be lambda over two times log cosh of phi star, where cosh is the hyperbolic cosine. Let us write the expression for cosh out explicitly.
So, this will be lambda by two times log of (e to the phi star plus e to the minus phi star), divided by two. This is the expression. Now, one might wonder whether this can turn out to be negative. In fact, since cosh of phi star is at least 1 for any real phi star, log cosh of phi star is non-negative, so this minimum value cannot be negative; but even that is not crucial, because all we really need is a constant lower bound. So, what I wanted to verify is the condition tanh of (z plus phi star) equals tanh of phi star. The tanh function is strictly increasing, so the only time the two can match is when z is exactly equal to zero. So when z is zero, the second term is gone, and I am left with the term that gives me lambda over two times log cosh of phi star. That is the minimum value, and that is what gives me the bound. So the lower bound is not necessarily zero, but it is a constant lower bound, and I do not think we have to worry too much about its exact value, because for a Barbalat's-lemma-type analysis, all I need is a lower bound. Okay, great.
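The transcript does not write the Lyapunov candidate out in full, but a z-part whose partial derivative matches the condition just described is f(z) = log cosh(z + phi*) - z tanh(phi*); this exact form is a hypothetical reconstruction. A quick numerical check that its minimum sits at z = 0 with value log cosh(phi*), which is non-negative:

```python
import numpy as np

# Hypothetical reconstruction of the z-dependent part of the candidate:
#   f(z) = log cosh(z + phi_star) - z * tanh(phi_star)
# Its derivative is tanh(z + phi_star) - tanh(phi_star), which vanishes
# only at z = 0, exactly the condition discussed in the lecture.
def f(z, phi_star):
    return np.log(np.cosh(z + phi_star)) - z * np.tanh(phi_star)

phi_star = 1.3                      # arbitrary illustrative value
z = np.linspace(-5.0, 5.0, 2001)
vals = f(z, phi_star)

# Minimum is at z = 0 with value log cosh(phi_star) >= 0, so the
# Lyapunov candidate is bounded below by a constant.
print(z[np.argmin(vals)])           # ~0.0
print(np.min(vals), np.log(np.cosh(phi_star)))
```

Since f is strictly convex (its second derivative is sech squared, which is positive), the critical point at z = 0 is indeed the global minimum, answering the lecturer's side question.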
So, this is where we were, and I will start here on lecture 10.5. Now all I do is take the derivative carefully. I get an E_F E_F dot term, which is this one, plus the derivative of the second term, which is lambda by two times the partial with respect to z, times z dot. Now, if you substitute z dot, z dot has exactly this term with a negative sign, with the mu X_F squared factor. So there is already a mu here, and z dot also contains a mu, giving the mu X_F squared. And so what you will have is minus mu lambda by two times (tanh of phi star minus tanh of (z plus phi star)) squared, times X_F squared; note that the square is on this whole term. Okay. So now you see I have nice negative terms: the negative definite term minus K E_F squared, and a squared term in X_F times (tanh of phi star minus tanh of (z plus phi star)). I am going to call this whole term omega; that is what we do here. So you have minus K E_F squared minus lambda by two mu omega squared, and then the cross term can be bounded using our sum-of-squares trick with a gain R. What I am using here is that AB can be written as (square root of R times A) times (B over square root of R), so AB is less than or equal to half of (R A squared plus B squared over R). The mu remains as it is, and from the two factors of the cross term, I get R E_F squared plus one over R omega squared. So, once we do this, you get minus (K minus mu R) E_F squared and minus mu times (lambda by two minus one over R) omega squared.
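The sum-of-squares bound used here is Young's inequality with a free gain R; a quick numerical sanity check (the function name is ours):

```python
import numpy as np

def young_gap(a, b, R):
    """RHS minus LHS of Young's inequality a*b <= (R/2)a^2 + b^2/(2R)."""
    return 0.5 * R * a**2 + b**2 / (2.0 * R) - a * b

# The gap equals ((sqrt(R)*a - b/sqrt(R))**2) / 2, so it is never negative.
rng = np.random.default_rng(0)
a = rng.normal(size=1000)
b = rng.normal(size=1000)
R = rng.uniform(0.1, 10.0, size=1000)
print(young_gap(a, b, R).min())   # smallest gap over 1000 samples: non-negative
```

The free gain R is what lets us trade off how much of the cross term is charged to E_F squared versus omega squared, which is exactly the knob used in the next step.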
So you get this expression, and therefore you get V dot negative semi-definite if lambda is greater than two over R and, of course, K is greater than mu times R. Now, one of the important questions to ask is: why did we need the lambda? Why did we put the lambda there? Remember one thing: even though we get these nice negative terms, this is not a negative definite V dot, because V was a function of E_F and z, and it is not at all clear that the omega term is negative definite in z, because there is an X_F inside omega as well. So negative definiteness with respect to z is not clear here, and this is at best negative semi-definite. Now, because it is negative semi-definite, I would like the negative terms to dominate the cross term, so I have to make sure that the cross-term contribution is smaller. And so I take an arbitrary lambda; I do not know its value, I just take some lambda positive. Anyway, this lambda does not enter anywhere in the control law, so it is not as if we need to know its value; it is just for the purpose of analysis. Now, the lambda condition has a one over R in it, so I can choose R large; then lambda can be small and that term is dominated. Correspondingly, there is this R in the other coefficient, so K must be large: if I choose a large enough K, I am done. So if R is large, then K is large and lambda can be small. And vice versa: if you do not want to worry about choosing a large K (you may not know how large K should be), you can choose any arbitrary K, and then R has to be really small. If R is really small, then any K will work: for any K, there is a small enough R for which K minus mu R is positive. But once R is small, one over R becomes very large, so lambda has to be very large.
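The selection order just described can be written down concretely. This is a sketch with illustrative numbers; the particular rule R = K/(2 mu) is just one convenient choice that guarantees K - mu*R > 0.

```python
# Gain selection for V_dot <= -(K - mu*R)*E_F**2 - mu*(lam/2 - 1/R)*omega**2:
# pick any control gain K > 0, choose R small enough that K - mu*R > 0,
# then any lambda > 2/R makes the omega coefficient positive. lambda is
# analysis-only and never enters the control law.
def pick_gains(K, mu):
    R = 0.5 * K / mu            # guarantees K - mu*R = K/2 > 0
    lam = 1.1 * (2.0 / R)       # any lambda > 2/R works
    ef_coeff = K - mu * R
    omega_coeff = mu * (lam / 2.0 - 1.0 / R)
    return R, lam, ef_coeff, omega_coeff

R, lam, c1, c2 = pick_gains(K=1.0, mu=4.0)
print(c1 > 0 and c2 > 0)   # True
```

Trying a small K with a large mu (say K = 0.1, mu = 10) shows the trade-off: R comes out tiny and lambda correspondingly huge, but since lambda never appears in the controller, that costs nothing.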
But we do not care: this lambda does not appear in the control law, so lambda being large does not change anything for us; it is just for the purpose of analysis. So this is a rather neat trick in analysis, for when you do not want to push the control gain K large, because K actually appears in the control law. I mean, K is not here in v_F, but if you remember, K is in the control law, because we have to implement u, and v contains K. So K being arbitrarily large is not a good idea. If you choose any small positive K, one can always say that there exists an R such that K minus mu R is positive, and if that R is small, then all you need is a large lambda in the analysis. But again, it is just for the analysis, so it does not change anything for us, and we are more than happy. All right, excellent. So that is the purpose. What we have now is a V which is lower bounded, and a V dot which contains these two negative quadratic terms. So it should be fairly obvious, for all of those who have done Barbalat's-lemma signal-chasing analysis several times, that you can prove E_F and omega go to 0 as t goes to infinity. This is something you can always prove from this kind of Lyapunov-like analysis. We are not claiming that V is positive definite with a minimum of zero and all that, but we know it has a lower bound and it is non-increasing. So we can prove that all the quadratic terms that appear here go to 0 using Barbalat's-lemma signal chasing, and that is what we do right here. We can also show that E_F dot goes to 0; not difficult, we have done this before too. Once you prove E_F goes to 0, you can prove E_F dot goes to 0. And if both E_F and E_F dot go to 0, what do we know?
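For reference, the lemma the signal-chasing argument rests on can be stated as follows (standard form, not spelled out in the lecture):

```latex
\textbf{Barbalat's lemma.} Let $f:[0,\infty)\to\mathbb{R}$ be uniformly
continuous, and suppose $\lim_{t\to\infty}\int_0^t f(\tau)\,\mathrm{d}\tau$
exists and is finite. Then $f(t)\to 0$ as $t\to\infty$.
```

Applied here: V lower bounded together with V dot less than or equal to the negative quadratic terms makes the integral of E_F squared plus omega squared finite, and uniform continuity (from boundedness of the closed-loop signals and their derivatives) then gives E_F and omega tending to 0.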
We know from our filter construction that E_F dot equals minus beta E_F plus E, which implies E equals E_F dot plus beta E_F. So if both E_F dot and beta E_F go to 0, then E goes to 0. That is what we have. Now, the other additional thing we get, which we do not get in certainty equivalence: remember, we never get a term in the parameter error in certainty equivalence, but here we do. It is a very nonlinear term, so it is not very obvious what this term means or what it looks like physically, but we at least have some quantity involving the parameter error which goes to 0; we know this guy goes to 0, and this defines a sort of attractive set for the parameter error. The important thing to notice is that if you start with zero parameter error, that is, z equal to 0, then this quantity is zero, so omega is zero; and if you start with zero parameter error, you will stay with zero parameter error. Look at the z dot dynamics, in the absence of disturbance and errors, et cetera: at z equal to 0 this term is zero, so the right-hand side becomes zero, and z equal to 0 is an equilibrium. Now, this is not true in certainty equivalence. In certainty equivalence, this would have been your update law, and even if you started with a tilde equal to 0, the right-hand side has no a tilde in it, so you would still change the parameter estimate, which of course seems unhealthy; but well, that is the sort of solution we have there. In this setup, if you start with zero parameter error, z being 0, then the right-hand side is zero and you do not move from it. So it creates an attractive and, in fact, invariant set for the parameter error.
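The invariance and attractivity just described can be sketched numerically. This is a hedged simplification: the z dot form below is reconstructed to be consistent with the V dot computation (the coupling term with E_F is dropped for illustration), and mu, phi star, and the filtered regressor X_F are illustrative choices, not values from the lecture.

```python
import numpy as np

# Simplified parameter-error dynamics consistent with the V-dot step:
#   z_dot = -mu * X_F(t)**2 * (tanh(z + phi_star) - tanh(phi_star))
# z = 0 makes the right-hand side zero, so zero parameter error is
# invariant; with a persistently nonzero X_F, z is also attracted to 0.
mu, phi_star, dt = 1.0, 0.8, 1e-3

def simulate(z0, T=20.0):
    z, t = z0, 0.0
    while t < T:
        XF = 1.0 + 0.5 * np.sin(t)      # hypothetical filtered regressor
        z += dt * (-mu * XF**2 * (np.tanh(z + phi_star) - np.tanh(phi_star)))
        t += dt
    return z

print(simulate(0.0))    # stays exactly at 0: invariance
print(simulate(2.0))    # decays toward 0: attractivity
```

Contrast this with the certainty-equivalence update, whose right-hand side contains no parameter-error term at all, so zero initial error is not preserved.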
So, if you start in this set, you will remain in this set, and you will also be attracted towards it. This is a rather interesting property, which actually improves the adaptation behaviour of the system. All right, excellent. So I hope that was clear. Now, of course, we did not go into any further details, but remember that in order to compute the actual control law, you have to compute v, which is equal to v_F dot plus beta v_F. And we have already shown that v_F is going to be bounded, because all closed-loop signals are bounded. And if v_F is bounded, then v_F dot is also going to be bounded; that can also be proven, and we are not proving it here, so I will leave it to you as an exercise to prove that boundedness. Once you have boundedness of both v_F and v_F dot, you know that v is going to remain bounded, and so is u. And this is not going to be affected in the presence of disturbance. Again, notice that we have not done any analysis with the disturbance; there is no disturbance analysis here, but the point is that the design does not change in the presence of disturbance, only the Lyapunov analysis changes. Basically, you will start getting some terms corresponding to the disturbance here; if I were to do this, there would be some function of the disturbance appearing in all of these steps. All the steps remain the same, but then I would have something extra. The good thing is, because I have negative definite terms in both E_F and omega, nothing really changes: I will still get a residual set in E_F, which will of course translate into a residual set in the original states, and all that.
And of course, my parameter estimates remain bounded, so my control remains bounded. My control will never become unbounded, irrespective of what happens, and neither will the parameter estimate. The disturbance may affect the convergence: instead of the errors going to zero, you will end up in a residual set, as you would expect, but nothing changes in your parameter boundedness or your control boundedness. So that is the important thing to remember. So, this is the projection method. By the way, this is only one kind of projection method that I have presented; there are other ways to do parameter projection. In fact, this is a rather unusual way; the more common way uses the notion of convex sets, which I am not presenting here, but it is available in the literature, for example in Ioannou's work and related references. So you have references which discuss convex sets and how to do projection onto convex sets. I am not going into that right now, because for the sake of time we have to restrict our material, but this is a pretty solid, pretty good way of doing parameter projection. And parameter projection, as you can see, is critical in adaptive control. Now, one of the concerns that some of you might have is that projection requires knowledge of bounds on the parameter; without parameter bounds, there is no way you can do projection. And if you remember, in the beginning of adaptive control, we did not assume any kind of bounds on the parameter. So what happens if you do not have parameter bounds available to you, and you still want to impart robustness to the adaptive controller?
So, this discussion began, of course, with that fighter-jet crash we talked about, and results came about in the few years after that. One of the first seminal results was by Ioannou and Kokotovic in 1983, and it is called the sigma modification in adaptive control. I will classify this as the case of the absence of parameter bounds. So the question is: what happens if you do not have parameter bounds and you still want to impart some kind of robustness to your adaptive controller? We start, as usual, with the same system: x dot is a x plus u plus g. And we know that there is a bound on the disturbance; we do not know the value of the disturbance at each instant in time, but we do have knowledge of a bound, which is a very reasonable assumption. If anything, you can keep a very large bound, just to be conservative; we do know a bound on most disturbances. For example, if I am flying a drone, I have a pretty fair idea of the range of wind velocities. Or if I am flying a satellite and I have actuator issues, I do not know whether the actuators precisely produce the amount of torque I expect them to; I may not know the exact values, but I will still know how far off they can be. That is essentially the job of a good engineer: you have a pretty good range in which your system will operate. So, what is the objective? The objective, as usual, is to drive the error between x and a bounded reference x_m to 0. So, as usual, we create the error dynamics, and we have this certainty-equivalence adaptation law.
So, we are back to the certainty-equivalence adaptation law. The idea is: if you do a certainty-equivalence design, you take a V which is something like one half e squared plus one over two gamma a tilde squared, pretty standard, where a tilde is a minus a hat. And if you compute V dot, work everything out, and solve for a hat dot, you will see that a hat dot is gamma times e times x, where gamma positive is the adaptation gain. What Ioannou and Kokotovic proposed was the sigma modification, which means you add a damping term, minus sigma a hat, to this adaptive law, so that a hat dot equals gamma e x minus sigma a hat. The whole problem was that the a hat dot update law never contained a term in a hat or a tilde; of course, it cannot contain a term in a tilde, because that is not implementable. So Ioannou and Kokotovic did the next best thing: they added a nice negative term in a hat. And then, of course, one does some analysis to show what happens, and that analysis is what we will see in the subsequent session. Excellent. So, what we looked at today is the completion of our parameter-projection-based adaptive control law. We had a rather interesting Lyapunov candidate, and we took its derivative. We also had a lambda coefficient on this Lyapunov candidate, which we saw was very useful for the analysis and played no role in the control, so it did not matter how large we took it. And we also realized that this non-certainty-equivalence paradigm with projection gave us an attractive and invariant set, with nice properties such as: if you start with zero parameter error, you will remain at zero parameter error in the absence of disturbance. These are the kind of properties that are absent in the certainty-equivalence adaptive design.
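The drift problem and the effect of the added damping term can be seen in a minimal simulation. This is a hedged sketch: the plant values, gains, disturbance, and the regulation setup x_m = 0 (so e = x) are illustrative choices, not taken from the lecture.

```python
import numpy as np

# Plant: x_dot = a*x + u + d(t), regulation case x_m = 0 so e = x,
# control u = -a_hat*x - k*e, and update
#   a_hat_dot = gamma*e*x - sigma*a_hat.
# With sigma = 0 this is the certainty-equivalence law; here
# gamma*e*x = gamma*x**2 >= 0, so a bounded disturbance makes the
# estimate drift upward (classic parameter drift).
def run(sigma, T=100.0, dt=1e-3):
    a, k, gamma = 1.0, 2.0, 5.0
    x, a_hat, t = 0.0, 0.0, 0.0
    while t < T:
        d = 0.5 * np.sin(5.0 * t)           # bounded disturbance
        u = -a_hat * x - k * x
        x += dt * (a * x + u + d)
        a_hat += dt * (gamma * x * x - sigma * a_hat)
        t += dt
    return x, a_hat

_, a_hat_plain = run(sigma=0.0)   # estimate keeps drifting upward
_, a_hat_mod = run(sigma=0.2)     # estimate settles to a bounded value
print(a_hat_plain, a_hat_mod)
```

The state x stays small in both runs; the difference is entirely in the parameter estimate, which is exactly the robustness issue the sigma modification addresses.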
And finally, we started to look at the sigma modification in adaptive control, which is a way of imparting robustness when you do not have knowledge of upper and lower bounds on your parameter. Excellent. So, we will continue our discussion of the sigma modification in the subsequent session. I invite you to attend. Thank you.