Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay, and I welcome you all very warmly to lecture number 10 of this course on adaptive control. We are, as always, motivated by very cool and very interesting autonomous systems that support us and also enthrall us, such as the SpaceX satellite that you see in the background orbiting the earth. Until last time, we were already well on the way to learning how to design algorithms that drive systems such as the one in our background. Specifically, we were learning the tuning function method for adaptive control design. This is one of the more advanced and more recent methods in nonlinear adaptive control. It is still based on backstepping, but it also required knowledge of control Lyapunov functions, and it extended the notion of control Lyapunov functions to adaptive systems. We were then able to use this in a backstepping setting, that is, when you have a backstepping level or layer in the system. In such cases we showed that if you have adaptive asymptotic stability of such a system, then adding an integrator still lets you retain this adaptive asymptotic stability property. We even showed the construction of an augmented adaptive CLF for the system, and once we had this augmented adaptive CLF, we were able to construct a feedback and also an update law. The update law was of course constructed earlier using the adaptive tuning function method. We also worked out an example which we had already worked out with the integrator backstepping method and the extended matching design, and this was the unmatched parameter case.
Now, the solution turned out to be very, very interesting. In fact, at one point it seemed to baffle me too, because I could see some quadratics in the parameters, but that is the really cool thing about this kind of construction. We had an alpha, and using this alpha we designed our backstepping error. Using the backstepping error, we were able to define an ACLF for the integrator system. The interesting thing for us to see was that the control contained terms quadratic in the unknowns: there is an unknown term here and an unknown term here, so obviously this is quadratic in the unknowns. This is where I had to stop a little myself to check whether I was getting it right or not. But nothing to worry about. You of course had to be very careful because of the vector mathematics involved. I would really strongly urge you to look at this example carefully and also work it out yourself, because if you cannot do this, you will not be able to handle any real applied problem. It is therefore critical that you actually follow this example completely. Do not worry about the fact that I got quadratic terms here. Eventually, the control still requires you to replace P by P hat, which is the certainty equivalence idea. So anyway, this is of course expanded out here; it is the same thing, just expanded out. And then, of course, we had the adaptive law. In the control, like I said, the P gets replaced by a P hat. I really hope that you are no longer confused, and I also hope that you have been able to do this kind of problem, or in fact this particular problem itself. I would strongly urge all of you to do this exact same problem by hand yourself.
For the tuning function, of course, we had already given a precise formula, and we simply apply that formula. All we do is take the partial derivative of the original ACLF for the original system with respect to x, subtract z times del alpha del x, where alpha is the original feedback, and then multiply by f; in our case, f is the small f. So all this nice expression appears. And again, in this case you have the P hats appearing in the update law, which is again something very interesting, because tau is essentially the parameter update law. The expression for the parameter update is this guy, so your parameter update law in fact does contain the hats also. This is not something that we have seen until now; it appears only in the tuning function method. But remember, the tuning function method alleviates both the issues that we had in our earlier designs. In the integrator backstepping method, the issue was that there was an additional estimate for the same parameter. And in the extended matching design, the issue was the appearance of theta hat dot; that is, in the one-level extended matching case, when the control was one level below the unknown parameter. But here nothing of that sort happens: you do not see any P hat dots appearing here. In fact, you can go to any level. Your control can be 10 levels below, but it does not matter; you will not see a derivative of the parameter estimate here. Of course, the construction of the tuning function and alpha one and so on is much more complicated, and therefore different. So I hope you appreciate this. All right, excellent. So, moving on to this week: we are in week 10, but we are looking at the week 11 notes. Please do not worry.
As I have already mentioned, this numbering is just for tallying the homeworks and nothing else, so I would not worry much about it. Okay. Excellent. So what we want to do this week is a slightly different topic. It is still adaptive control, of course, but we want to deal with the robustness question. Robustness is one of the big challenges in adaptive control; I would say a big challenge and concern. Let me mark this as lecture 10.1. So, robustness has been a big challenge and a concern in adaptive control. Initially, when folks started doing adaptive control, it was in fact very well received and became one of the cornerstones of nonlinear control. In the excitement to try out adaptive control, it was one of the few nonlinear control methods that actually got implemented at such a fast pace in an actual scenario: a fighter plane in the U.S., which was of course a test bed. However, this adaptive controller worked rather poorly, displayed very poor robustness, and in fact led to the crash of this fighter aircraft. This is where things became rather complicated, and a lot of researchers in fact started to say that adaptive control is not feasible at all, that it is absolutely not robust, and that it is therefore simply dangerous to use. There was a bit of a hiatus in research on adaptive control also, because the thought was that it had no future in some sense. So this is the sort of issue that we want to highlight this week. And of course, we are researchers, so somebody did find a solution to this robustness issue in adaptive control. The first thing that we want to do is to highlight the robustness issue.
Then we want to move on to methods that solve this robustness issue. Okay, so that is what we want to do: look at this big robustness issue. What is the big problem here? Although you have done adaptive control for quite a while now, I am quite sure that unless you have seen adaptive control before this course, this issue would not have occurred to you. What we want to look at are systems with disturbance. This is very realistic: any real dynamical system has external disturbance. For example, if you have an airplane that is flying, you have wind disturbance; that is one of the simplest possible disturbances. You can have actuators which are not functioning exactly the way you want them to, producing more or less power than you command; that is a disturbance. Anything that is not well modeled can be captured as a disturbance, because a disturbance is typically assumed to be an unknown but bounded quantity. Disturbance is therefore ubiquitous in any plant, ubiquitous in any dynamical system. So you almost always want controllers which are robust against disturbance. If they are not, then you have a problem, because you are no longer in the domain of something that can be used in a real situation. It is very nice to write good theorems and proofs, but they cannot be used if they are based on an idealized system with no disturbance. Notice that none of the dynamics we have considered until now had any kind of disturbance, so our designs would be unusable if we were considering a case where there is disturbance. So let us see where the problem arises. We are again looking at a very, very simple scalar system: x dot equals a x plus u.
As usual, a is unknown and x is a scalar, so I am going to write that a is unknown. We want to solve the tracking problem, which is e equals x minus xm goes to zero, where xm is a smooth bounded signal that we want to track. The same arguments can be made for any kind of adaptive control problem, be it the MRAC problem or the backstepping-type situations, so we just focus on the simplest case. So what do we do? As usual, we construct the error dynamics, e dot equals x dot minus xm dot, which is a x plus u minus xm dot. And then we do our best: when you know the parameter, you cancel the drift term, cancel the tracking term, and introduce a nice stabilizing term, which gives you e dot equals minus k e in the ideal case. So we are considering the ideal-case problem, and we use V equal to half e squared as the candidate Lyapunov function. All good. Now, what happens when there is disturbance? What is the nature of the disturbance? This is the standard assumption that you would see in any problem where disturbance is introduced: the disturbance is bounded. The infinity norm of the disturbance is generally denoted d max, or is less than or equal to d max; either one is fine, you just need an upper bound. But of course, you do not know the exact nature of the disturbance. It could also be time varying; it is most probably time varying, so we assume it is. In this case, what happens? Since I do not know d, it cannot appear in my controller, so my controller remains exactly the same. This is the known parameter case. In fact, I should probably not have written that earlier; here a is not assumed to be unknown, a is actually known. So this is the known parameter case. Great.
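If you want to see this ideal, known-parameter design in action, here is a minimal simulation sketch. The values of a, k, the initial condition, and the reference xm(t) = sin t are my own illustrative choices, not from the lecture:

```python
import math

# Sketch of the known-parameter design for x' = a*x + u:
# u = -a*x + xm' - k*e cancels the drift and the reference derivative,
# leaving e' = -k*e, so the tracking error decays exponentially.
# a, k, x(0), and the reference xm(t) = sin(t) are illustrative choices.
a, k = 1.5, 4.0
dt, T = 1e-3, 10.0
x, t = 3.0, 0.0
while t < T:
    xm, xm_dot = math.sin(t), math.cos(t)  # smooth bounded reference
    e = x - xm
    u = -a * x + xm_dot - k * e            # exact cancellation + stabilizing term
    x += dt * (a * x + u)                  # forward-Euler step
    t += dt
print(abs(x - math.sin(t)))  # tracking error after 10 s: tiny (Euler-level)
```

With e(0) = 3 and k = 4, the error decays like 3·e^(−4t), so after 10 seconds only discretization-level error remains.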
So what happens? I use the same controller, because that is the best I can do; d is not known. So my error dynamics is no longer e dot equals minus k e, because I could not cancel the d term. If I write it out more carefully with this x dot, my e dot would be a x plus u plus d of t minus xm dot, and if I substitute the same control here, you will just be left with the d: e dot becomes minus k e plus d. And I continue to use the same candidate Lyapunov function, because d is not known, and I cannot really construct a Lyapunov function corresponding to something that is unknown; it does not make sense. So when I now do the Lyapunov analysis with this candidate function, let us carefully see what happens. This is called the standard disturbance analysis using Lyapunov functions, or bounding analysis, or uniform ultimate boundedness analysis. The derivative is e times e dot, so it is just e times this guy, and I get something like this. And now I do a sum of squares. As usual, I could do something better, but let me just use this: e d is less than or equal to half e squared plus half d squared. This is the standard sum-of-squares bound; basically I use a b less than or equal to half a squared plus half b squared. You can use absolute values if you want, but there is a square, so it does not matter. So I use this expression here and I get minus (k minus half) e squared plus d squared over 2. Again, the absolute value could be used to make it more precise, but honestly, it is a scalar and it is squared, so the absolute value is irrelevant. If it were a vector, this would be a norm; that you should remember.
Now, the infinity norm of d is the supremum norm, so the largest value d can take is upper bounded by d max. So I will say less than or equal to: since the largest value this guy can take is d max, I can just substitute d max here. Now look at what happens. This is not exactly negative definite anymore. If it were only this first term, which is what you would get in the ideal case (in the ideal case you would not even have the disturbance term), this would still be negative definite. But it is not; there is an additional term coming from the disturbance. So what do I do? I know that this e squared is actually just twice V, so I write e squared as twice V and I get something like this. I take 2k minus 1 common out: because e squared is twice V, the 2 cancels and the coefficient becomes 2k minus 1, and inside the bracket the remaining term becomes d max squared divided by twice (2k minus 1). So what do I know about this expression? This is interesting. The property it has is that whenever V is larger than d max squared over twice (2k minus 1), the whole right-hand side is negative, and when V is smaller than this, it can be positive. So let me try to make a picture; that is what I want to do. Nice axes. On my x-axis I have time, and on the y-axis, suppose I plot V. V is essentially e squared over 2, but still I want to plot V. In fact, I also want to plot e, so that is fine.
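The chain of bounds described on the board can be collected in one place. This is my transcription of the steps, using the sum-of-squares (Young's) bound $ed \le \tfrac{1}{2}e^2 + \tfrac{1}{2}d^2$:

```latex
\begin{align*}
\dot V &= e\,\dot e = e(-k e + d) = -k e^{2} + e\,d \\
       &\le -k e^{2} + \tfrac{1}{2}e^{2} + \tfrac{1}{2}d^{2}
        \;\le\; -\bigl(k - \tfrac{1}{2}\bigr)e^{2} + \tfrac{1}{2}d_{\max}^{2} \\
       &= -(2k-1)\,V + \tfrac{1}{2}d_{\max}^{2}
        = -(2k-1)\Bigl(V - \tfrac{d_{\max}^{2}}{2(2k-1)}\Bigr),
\end{align*}
% So, assuming k > 1/2, we have \dot V < 0 whenever
% V > d_max^2 / (2(2k-1)), i.e., whenever |e| > d_max / sqrt(2k-1).
```

The last factored form is exactly what makes the sign structure visible: negative above the threshold, possibly positive below it.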
Suppose I draw my straight line corresponding to d max squared divided by twice (2k minus 1). So this is it. Now, what is the nature of V based on this analysis? Based on the fact that V dot has this kind of structure, notice that we need 2k minus 1 greater than 0, so we of course assume that k is greater than one half. Then what is the nature of this plot? Whenever your V is above this line, V is actually a decreasing function. So suppose it starts somewhere here; in fact, let me extend this a little. Suppose V starts above this line; then it has to be a decreasing function. Once it goes inside, it can do anything: it can be increasing, decreasing, and so on. Remember also that V is greater than or equal to 0, so it cannot, of course, lie below the axis. So this is the plot for V; I am going to just plot V here. Now, similarly, suppose I plot e. The good thing is that V is just one half e squared, so whenever I plot V, it is almost equivalent to plotting e, and the two pictures have a nice equivalence, a nice going back and forth. The only difference is that the V plot stays on the positive side, while the e plot can be on both sides. So suppose I make this sort of picture. Now I have to draw two boundaries, and they have to correspond to the V boundary: this one is d max over the square root of 2k minus 1, and this one is minus d max over the square root of 2k minus 1.
Now what happens? The interesting thing is, suppose you have this kind of a picture and I start outside. It is the same thing as with V: if V decreases, the absolute value of e also decreases. So if V starts outside and has to decrease, then if e starts outside this boundary, e has to move back toward it, because V is exactly the square of e over 2, no difference. Now, the fact that V started above d max squared over twice (2k minus 1) does not tell us whether e started positive or negative, because V is e squared over 2. So this could mean that e started here, or it could also mean that e started here. But let us assume that e started here; it does not matter. Then, if V is decreasing, e squared is decreasing, which means the absolute value of e has to decrease. Again, not necessarily monotonically in general; in this scalar case it happens to be monotonic. But once it is inside this line, it can do anything; it can be oscillatory or whatever. The important thing to remember is that it will never get out of this set. Solutions never escape this bound. Why is this? Let us give this a little bit of thought. So I am going to extend this line also. Suppose it so happens that at some instant the trajectory tries to escape, tries to cross like this. The question is: is this possible? Notice what happens at this corner: when you get to this boundary, your V is decreasing, which means the absolute value of e has to decrease right across this boundary. As soon as you cross it, almost instantaneously, e has to come back down, because V has to decrease, and this gets enforced exactly at this point.
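The "push downwards" at the boundary can also be checked numerically. The sketch below evaluates the worst case of V dot = −k e² + e d on the boundary |e| = d max / √(2k − 1), taking the most adversarial disturbance d = d max · sign(e). The helper name and the sample gains are my own choices:

```python
import math

# Numerical check of the inward push at the residual-set boundary.
# On |e| = d_max / sqrt(2k - 1), the worst the bounded disturbance can do is
# d = d_max * sign(e), giving Vdot = -k e^2 + |e| d_max. If that is <= 0 for
# every admissible gain, trajectories cannot cross the boundary outward.
def worst_case_vdot_on_boundary(k, d_max=1.0):
    e = d_max / math.sqrt(2 * k - 1)   # boundary value of |e| (needs k > 1/2)
    return -k * e * e + e * d_max      # most adversarial choice of d

for k in (0.6, 1.0, 2.0, 10.0):
    print(k, worst_case_vdot_on_boundary(k))  # all values are <= 0
```

Algebraically this is the inequality √(2k − 1) ≤ k, which holds for every k > 1/2 since (k − 1)² ≥ 0, with equality exactly at k = 1.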
So if this trajectory tries to cross this boundary, there is a push downwards, and therefore the trajectory cannot cross; it will just curve down and go this way. This can of course be proved analytically; we are not doing that here. But this is the important fact: the solutions of such systems will never cross the boundary. Therefore this kind of set, the set that I am drawing in the picture here, is called the residual set. The size of this residual set is also very well documented and understood. What is the size of the residual set? It is precisely this: you can guarantee that your error will lie within this boundary. Now, this is a standard feature of almost all strict Lyapunov analyses. In the ideal case you get a negative definite V dot, so you have a strict Lyapunov function. Typically disturbance analysis is done with strict Lyapunov functions, and in all those cases robustness comes for free: you will always get a residual set. Not just that. Not only do you guarantee that you get within this set, it is important to notice that the size of the residual set shrinks as your control gain k grows; k is in your control, and your control is something like this. So increasing k reduces the residual set. Increasing the control gain will reduce your residual set size, and this is a rather nice property. All right, excellent. So what did we look at in this session? We started this week 10 lecture with a discussion of robustness in adaptive control.
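To see the residual set and the effect of the gain numerically, here is a small simulation sketch of the perturbed error dynamics e′ = −k e + d(t). The disturbance d(t) = sin 3t, the gains, and the function name are my own illustrative choices:

```python
import math

# Simulate e' = -k*e + d(t) with a bounded disturbance, record the largest
# |e| after the transient, and compare it with the residual-set radius
# d_max / sqrt(2k - 1) from the Lyapunov bound.
def simulate_error(k, d_max=1.0, e0=5.0, dt=1e-3, T=20.0):
    e, t, peak = e0, 0.0, 0.0
    while t < T:
        d = d_max * math.sin(3.0 * t)      # bounded disturbance, |d(t)| <= d_max
        e += dt * (-k * e + d)             # forward-Euler step
        t += dt
        if t > T / 2:                      # steady-state portion only
            peak = max(peak, abs(e))
    return peak

for k in (2.0, 8.0):
    bound = 1.0 / math.sqrt(2 * k - 1)     # residual-set radius for d_max = 1
    print(k, simulate_error(k), bound)     # simulated peak stays below the bound
```

Increasing k from 2 to 8 shrinks both the guaranteed radius and the observed steady-state error, matching the conclusion that increasing the control gain reduces the residual set.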
We have not really looked at the adaptive control problem itself yet, but we took a very simple scalar system and tried to understand what disturbance analysis looks like, what the notion of a residual set is, and the fact that trajectories starting outside the residual set will converge into the residual set if you have a strict Lyapunov function. And this is a really cool feature of any Lyapunov analysis: if you have a strict Lyapunov function in the absence of disturbance, then in the presence of disturbance you will still converge to a nice residual set, so you will have bounded, nice performance. Not just that: the bound can be controlled via your control gain. All right. Excellent. So this is where we stop; we will continue next time. Thank you.