Hello everyone. Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. Welcome to week seven of this course on nonlinear adaptive control. By week six, we had already started to learn about designing algorithms that can potentially drive systems such as what we see in our background, which is essentially a SpaceX satellite orbiting the Earth. To summarize a little bit of what we did in week six: we started with a first order scalar system, a very, very simple sort of system, and we designed an adaptive controller for it. The way we worked through it is that we started by designing a controller for the known case, and then we used the certainty equivalence principle to design the adaptive controller for the unknown case. The adaptive controller essentially consists of a parameter estimator which feeds into the nominal control signal. So, this is what we did for the first order case: we had this kind of controller and then, of course, a parameter estimator. Following that, we used signal chasing and so on to prove that everything works out fine. After that, we started to look at the second order system. And when we used a very standard or basic choice of Lyapunov candidate function, as you would expect, we realized that we end up with a detectability obstacle. This is because of the fact that we chose a non-strict Lyapunov function. So, what we tried to do after that is essentially use something like an Ortega construction to overcome this detectability obstacle for second order scalar systems. And that is what we did in section 4. We first showed how this construction works for the general stability proof for a system like 4.1, which is like a spring-mass-damper system.
And then we used it for an adaptive control problem in this section. And we showed that with this sort of a construction, which is not even a Lyapunov function (in fact, it is what we like to call a Lyapunov-like function), we could show that both the states, that is, e1 and e2, converge to the origin as we desire. All right. So, this is where we start. What we start off with today is the notion of backstepping in adaptive control. Backstepping is a very well known and by now classical method in nonlinear control. And in recent years, it has also gained a lot of popularity in the adaptive control community. So, basically, this is where we start. Let me first mark the starting point of our lectures in week 7. We have already seen that there are these detectability obstacles that bother us. One of the ways to get around them was something like an Ortega construction, which may or may not work for more general adaptive control problems. It was very specific, if you notice, to a spring-mass-damper type system. So, we are not quite confident that it will work out very well if the system does not have this particular kind of structure. On the other hand, backstepping is well known to be a rather general method. And so, we want to explore this idea of backstepping for the same double integrator system, or rather nonlinear double integrator system, that we were looking at. Backstepping is, of course, well known as a method to generate strict Lyapunov functions for nonlinear systems. Great. So, let us go back to the tablet and look at our usual double integrator dynamical system. I mean, we are calling it a double integrator, but it is a nonlinear double integrator because of this term.
And as always, we have an unknown theta star and some function f of x and t, where x is, of course, composed of both the x1 and x2 states. As always, our typical objective is that the x1 state tracks some trajectory r, and accordingly x1 dot, which is x2, tracks the trajectory r dot. This is because of the matching condition: our dynamics dictates that x1 dot is x2, therefore the reference trajectories also have to be related in the same way. We cannot have arbitrary trajectories for x1 and x2; they have to be related by the derivative. This is essentially the matching condition that we have already spoken about in detail in the previous week. Great. The steps are pretty standard, so as you go on, you will start getting used to doing these steps again and again. So, what do we do next? We derive the transformed dynamics; that is, we have created an error variable, and our aim, as always, is to drag these errors to zero. So now, all this Lyapunov theory of considering the origin to be the equilibrium point should start to make sense to all of you. What is the error dynamics? e1 dot comes out to be e2, and e2 dot is theta star f plus u minus r double dot. I am not even going to try to explain how we got to (1.3) and (1.4), because we have already done this in the previous week. If you still have any confusion, please go back and refer to the notes from the previous week. Great. So, now we have this dynamics with a potentially unknown quantity theta star, and we want to do the standard adaptive control design.
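As a quick numerical companion to the error dynamics (1.3) and (1.4), here is a minimal sketch. The nonlinearity f(x, t) = x1*x2, the value theta_star = 2.0, and the reference r(t) = sin(t) are all hypothetical choices for illustration, not values from the lecture.

```python
import numpy as np

# Hypothetical choices for illustration only: f(x, t) = x1*x2, theta* = 2.0,
# reference r(t) = sin(t) so that r_dot = cos(t) and r_ddot = -sin(t).
theta_star = 2.0
f = lambda x, t: x[0] * x[1]
r      = lambda t: np.sin(t)
r_dot  = lambda t: np.cos(t)
r_ddot = lambda t: -np.sin(t)

def error_dynamics(e, t, u):
    """Equations (1.3)-(1.4): e1_dot = e2, e2_dot = theta* f(x, t) + u - r_ddot,
    with the error variables e1 = x1 - r and e2 = x2 - r_dot."""
    e1, e2 = e
    x = np.array([e1 + r(t), e2 + r_dot(t)])  # recover the physical state
    return np.array([e2, theta_star * f(x, t) + u - r_ddot(t)])

print(error_dynamics(np.array([0.0, 0.0]), 0.0, 0.0))  # -> [0. 0.]
```

At zero tracking error and t = 0 the drift vanishes here because f happens to be zero at that state, so the error derivative is zero even with u = 0.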
Now, we already know from last week that if we use a standard Lyapunov candidate, something basic like v equal to e1 squared by 2 plus e2 squared by 2, then I will land in some trouble: this will lead to the detectability obstacle. We already know that. And what was the way around it? We used an Ortega construction. So, now we will try to do the same thing with backstepping. Think of this as a beginning step, a sort of apples-to-apples comparison between the two methods. Just look at it as that and nothing more, for now. But of course, you will see, because this backstepping-based adaptive control is something we will do for several sessions now, that this has much further implications than the Ortega construction. It can be used in many, many more contexts than the Ortega construction. And of course, you will also understand that the Ortega construction itself is sort of inspired by backstepping-based methods. Great. So, what do we do in backstepping, for those who have not done this kind of a nonlinear control design course before? We first look at the first piece of dynamics, which is e1 dot equal to e2. Forget about the second piece; just look at the first piece, and we will assume that e2 is the control. So, this variable right here, the second state, is assumed to be the control for the first state. Remember, this is merely an assumption, an idealization, for the purpose of design. You cannot actually think of e2 as the control because it is in fact a state of the system. If I was thinking of a robot, e1 would be the position error and e2 would be the velocity error.
Again, think of an airplane: e1 would be the position error, e2 would be the velocity error. It is not actually a control but a state of the system. But we assume it for now, again for the sake of the design. So, what do we do? If you assume e2 to be the control, we want to design a stabilizing control. As obvious as it sounds, whenever we design a controller, we want it to be stabilizing. And what would be the stabilizing control? Remember, we always try to choose a model to follow. So, what is a good model in this case? We know that if I have e1 dot equal to minus k1 e1 for any k1 positive, then it is an exponentially decaying system. This is what we have been doing: choosing a sort of ideal system to follow, and this is ideal enough for us. So, that is what we do. We choose e2 and call it e2 desired, e2d, because again, we know that in reality e2 cannot be exactly this. We define it as exactly this quantity, minus k1 e1, with a positive scalar k1. And we know that under ideal circumstances, if e2 was exactly e2 desired, then it would exponentially stabilize the system. Great. Now, we of course also introduce a corresponding Lyapunov candidate function, because, honestly speaking, backstepping is a method of generating Lyapunov candidates by augmenting one piece to another piece. It is actually not a method of control design but a method of generating candidate Lyapunov functions. And as you might have already seen, we are by now used to designing parameter update laws (not the control yet, but the parameter update laws) using a Lyapunov candidate: we take a derivative and choose a parameter update law so that we ensure v dot is negative semi-definite. This is called Lyapunov redesign, and that is what we are used to doing.
So, although we are not covering the notion of control Lyapunov functions in this course, essentially what we are doing is choosing a control Lyapunov function and getting a corresponding feedback u, or in this case theta hat, which makes v dot at least negative semi-definite. All right. So, we of course choose a corresponding Lyapunov function for this ideal system, not the e1 dot equal to e2 system, but the ideal system e1 dot equal to minus k1 e1. And what is a good Lyapunov candidate? It is very straightforward: just take the first obvious quadratic that you can think of; we have been using this for a while now. Of course, if this was something more complicated, you would not be able to choose a quadratic, but in this case, because it is not, I get to choose the quadratic. And again, in the ideal case, v1 dot is minus k1 e1 squared. In fact, it might not even be completely wrong to call this v1d, because it is like v1 desired, and we say this holds exactly when e2 is exactly equal to e2 desired. Now we go to the second step, where we really acknowledge that e2 is not the control. So, what is the next best thing we can do? I am actually trying to help you understand what backstepping is. I know that e2 is not identically equal to e2 desired, but what I can do, maybe, hopefully, is drive e2 to e2 desired. I will make e2 desired the signal that e2 has to track. Remember, my earlier objective was for e1 and e2 to go to zero. But now I am sort of shifting the goalpost, it looks like. What I am going to say is that now I do not want e2 to go to zero as t goes to infinity, but I actually want e2 to track e2 desired. Now, one might ask, does this mess with the original control objective, because we were, of course, trying to go to zero?
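The first backstepping step can be checked numerically in isolation: if e2 could really be held at e2d = -k1*e1, the ideal dynamics e1 dot = -k1*e1 decays exponentially, which is exactly what v1 dot = -k1*e1^2 certifies. A small sketch, where the gain k1 = 1 and the step size are arbitrary choices:

```python
import numpy as np

# Ideal first-step dynamics: pretend e2 is a control and set it to -k1*e1,
# giving e1_dot = -k1*e1. Integrate with forward Euler and compare against
# the exact exponential solution e1(t) = e1(0) * exp(-k1*t).
k1, dt, T = 1.0, 1e-3, 5.0
e1 = 1.0
for step in range(int(T / dt)):
    e1 += dt * (-k1 * e1)

print(abs(e1 - np.exp(-k1 * T)))  # only a small discretization error remains
```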
And now we are trying to go to e2 desired. So, one might ask, am I actually messing with the original control objective? We will resolve this suspense soon. So, because we want to drive e2 to e2 desired, we define a new variable psi2, defined as the error between e2 and e2 desired. This is what we have always been doing: whatever we want to drive to zero, we define a new variable for it. So, if you want e2 to go to e2 desired, we define the error between the two, e2 minus e2 desired, and that is psi2 in this case. And if I plug in for e2 desired, which was minus k1 e1, I simply get psi2 equal to e2 plus k1 e1. Great. And now, what do I do? Construct the dynamics for this error. The steps are similar: compute an error that you want to drive to zero, compute the dynamics of the error, then start to do Lyapunov analysis on that. The steps are standard, so there should be no confusion as to which direction to take when you start with a problem. Always construct an error, then design a Lyapunov candidate, then define a control, then find an update law. Same steps. Anyway, up to this point, we are not even assuming theta star is unknown. So, we will find the dynamics of psi2: psi2 dot is e2 dot plus k1 e1 dot. Now, k1 e1 dot is just k1 e2, and e2 dot is just the dynamics plugged in from above. Great. Now, what do we do? We add a new piece to the Lyapunov candidate, which is again just a quadratic, half psi2 squared, because we want to drive psi2 to zero. This is the obvious choice. And therefore, we get v2 dot as psi2 multiplied by psi2 dot, which is just this quantity. Now, if I choose my control to be of this form, what is this form?
This form is essentially trying to cancel everything and introduce a good term, because the other terms are not definite; we do not know how they will behave. So, we try to get rid of them and introduce a good term. Of course, we are calling this theta hat, which is an estimate of theta star. But for the known case, we use theta hat equal to theta star: when we actually know the value of the parameter, theta hat is just equal to theta star. And then what are we left with? v2 dot is minus k2 psi2 squared. And what happens? v equal to v1 plus v2 serves as a strict Lyapunov function. But this is not complete yet, because remember, what we computed here was v1 desired dot, not actually v1 dot. So, a little bit of the analysis remains, and I will complete it here. If I take v equal to v1 plus v2 and now compute the actual derivative instead of the desired derivative: v1 was, if you notice, half e1 squared, so its derivative is e1 e1 dot, and the derivative of v2 is psi2 psi2 dot. Now, e1 dot is e2, and psi2 psi2 dot, as we just computed, is exactly minus k2 psi2 squared. As of now, no definiteness seems very evident. However, notice that we had transformed e2 to psi2, so I have to write e2 in terms of psi2. What is e2 in terms of psi2? I can get that from here: e2 is simply psi2 minus k1 e1, and the other term is minus k2 psi2 squared. So, now I start to see nice things: this is minus k1 e1 squared minus k2 psi2 squared plus e1 times psi2. Now, we use a very standard trick which says that the absolute value of ab is less than or equal to a squared by 2 plus b squared by 2.
So, we use that to write this quantity as less than or equal to minus k1 e1 squared minus k2 psi2 squared plus half e1 squared plus half psi2 squared. And now the half e1 squared can be clubbed with the k1 e1 squared term, and the half psi2 squared with the k2 psi2 squared term. So, if k1 is larger than half and k2 is larger than half, then this is negative definite, which implies v dot is strictly negative definite. So, I got a strict Lyapunov function. Notice, this was a strict Lyapunov function. So, as long as k1 and k2, which are essentially gains of our choice, control gains that the designer can choose, are greater than half, I am guaranteed that v dot is negative definite. Now, this method where you go from here to here using this kind of inequality is called the sum of squares method, and you will constantly encounter this terminology. Whenever I say sum of squares method, it means that I took the mixed term, which looked like 2ab. Notice that although I have written it in terms of absolute values, this can actually be written in terms of norms; it is like a norm inequality also, no problem. This is essentially just the standard a squared plus b squared plus 2ab equal to a plus b whole squared type of inequality. It is just using the fact that a minus b whole squared is greater than or equal to zero. But we also use things like the triangle inequality and the Cauchy-Schwarz inequality that we saw some time ago. All these inequalities come into play when we are using the sum of squares method.
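The bound behind the sum-of-squares step, the Young's-type inequality |ab| <= a^2/2 + b^2/2, follows directly from (|a| - |b|)^2 >= 0. A quick randomized sanity check (the sample size and seed are arbitrary):

```python
import numpy as np

# Check |a*b| <= a**2/2 + b**2/2 on random samples.
rng = np.random.default_rng(0)
a = rng.normal(size=10_000)
b = rng.normal(size=10_000)
assert np.all(np.abs(a * b) <= a**2 / 2 + b**2 / 2)
print("inequality holds on all samples")
```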
The idea behind the sum of squares method is to use any standard inequality that helps you convert the mixed terms into square terms, which can then be combined with the other square terms. Because that is what we did: I have no way of saying how big this mixed term is in comparison to the square terms, so I write it as a sum of squares, and then each half can be combined with the corresponding square term. And now I have a way of saying that if k1 and k2 are greater than half, then I am good to go. Now, remember that this is pretty conservative. We can also do another thing; the only issue is it does not let us write the condition out analytically as nicely. I can always write this whole expression as a quadratic form: v dot equals minus the row vector with entries e1 and psi2, times a matrix, times the column vector with entries e1 and psi2, where the matrix has k1 and k2 on the diagonal and minus one-half on the off-diagonals. What you want in reality is for this matrix in between to be positive definite: you have to choose the gains k1 and k2 so that the matrix is positive definite. And in fact, the conditions for that are pretty straightforward; I hope all of you know this just requires that the leading principal minors are positive. So, k1 has to be positive, and k1 k2 minus one-fourth has to be positive. These are the more realistic and, of course, least conservative conditions. The only thing is that the sum of squares version is just easier to write out and more tractable. That is it. So anyway, the steps are not over once you compute v2 dot. The point is, when you take v equal to v1 plus v2 and compute v1 dot, earlier we had actually computed only v1 desired dot, because we assumed e2 is exactly equal to e2 desired.
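The least conservative gain condition can be checked directly: write v dot = -[e1 psi2] Q [e1 psi2]^T with Q = [[k1, -1/2], [-1/2, k2]] and test Q for positive definiteness. A small sketch (the sample gain values are arbitrary):

```python
import numpy as np

def q_matrix(k1, k2):
    # v_dot = -[e1 psi2] Q [e1, psi2]^T with this Q.
    return np.array([[k1, -0.5], [-0.5, k2]])

def vdot_negative_definite(k1, k2):
    # Q > 0 iff its leading principal minors are positive:
    # k1 > 0 and k1*k2 - 1/4 > 0 (equivalently, all eigenvalues positive).
    return bool(np.all(np.linalg.eigvalsh(q_matrix(k1, k2)) > 0))

print(vdot_negative_definite(0.6, 0.6))  # sum-of-squares gains k1, k2 > 1/2: True
print(vdot_negative_definite(2.0, 0.3))  # k2 < 1/2, but k1*k2 = 0.6 > 1/4: True
print(vdot_negative_definite(0.3, 0.3))  # k1*k2 = 0.09 < 1/4: False
```

The second case shows why the matrix condition is less conservative: it admits gain pairs that the simple k1, k2 > 1/2 rule would reject.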
But if you do not make that assumption, then you start to get an additional mixed term in e1 and e2; you write this mixed term in terms of e1 and psi2, you get two nice negative definite terms and a mixed term, which you dominate using sum of squares. So, this is really the idea behind backstepping. This is the backstepping method for the known case. Now, we are sort of done in this case. What have we proved? We have essentially been able to prove that e1 goes to zero and that psi2, which is equal to e2 plus k1 e1, goes to zero. Why? Because we took v as e1 squared by 2 plus psi2 squared by 2 and proved that v dot is negative definite. So, by standard Lyapunov theorems, both e1 and psi2 have to go to zero, and of course everything is asymptotically stable and all that nice jazz. Excellent. Now, we had asked ourselves a question: does this mess with the original objective of driving the errors to zero? The answer is no. Why? Because we just proved that e1 goes to zero and that e2 plus k1 e1 goes to zero. But because e1 is already going to zero, what we have essentially proved is that e2 also has to go to zero. If this summation is going to zero and the e1 piece is already going to zero, then the only way the summation can go to zero is if e2 also goes to zero. If e2 was not going to zero, then psi2 would have remained non-zero as t goes to infinity. But that is not the case: e1 is already going to zero as t goes to infinity, and therefore the only way for the summation psi2 to go to zero is if e2 also goes to zero. So, we have essentially been able to recover that e1 and e2 both go to zero, as we require. Excellent.
So, this is sort of how we do backstepping for a general nonlinear system; I say a general nonlinear system because we did not consider any unknowns, we assumed theta star is known. The next step, of course, will be to go for the unknown theta star. Now, notice that the Lyapunov candidate was e1 squared plus psi2 squared (each divided by two), where psi2 is e2 plus k1 e1. This psi2 term is very similar to what you had in the Ortega construction, again not for the adaptive problem but for the non-adaptive problem: there it was like x2 plus alpha x1, and in this case you have e2 plus k1 e1, squared. So, this term essentially looks very much like the Ortega construction, but in backstepping there is also this additional e1 squared term, so that it becomes a Lyapunov candidate and a Lyapunov function. So, for this particular case, you see that the Ortega construction piece is a part of the Lyapunov candidate function for backstepping. All right, excellent. So, what did we look at today? We started to look at backstepping in adaptive control, and we began with the non-adaptive problem. Until now, we have done the non-adaptive problem and we have seen how a backstepping controller is actually designed. Essentially, backstepping is a way of constructing strict Lyapunov functions by augmenting a Lyapunov function piece corresponding to each state. And that is what we did: we started with the e1 state and created a Lyapunov candidate, and then we moved on to the e2 state. So, that is the idea of constructing these Lyapunov functions. We will see the unknown case in the upcoming session, and I hope to see you there. Thank you.