Hello, everyone. Welcome to yet another session of our NPTEL course on Nonlinear and Adaptive Control. I am Srikant Sukumar from Systems and Control Engineering, IIT Bombay. We are into the ninth week of this course, and I think by now all of you have gained sufficient expertise and knowledge to be able to tackle adaptive designs for uncertain autonomous systems such as the satellite that you see in the background. In the last session we were still working on our unmatched design. What we had seen was essentially a cooked-up example in which the unknown parameter P appeared in an unmatched way. For this unknown P we developed an adaptive integrator backstepping based design: we came up with two different parameter estimates, P hat and P bar, with update laws, and a feedback controller to stabilize the system, that is, drive x1 and x2 to 0. So we continue to look at these stabilization problems. The tracking problems are, as I mentioned, not significantly different. I would strongly recommend all of you to look at the KKK book, that is, the Krstić, Kanellakopoulos, and Kokotović book, for more details on how to do this. It is rather important that all of you use these book references. I am, after all, constrained by the time I have, and I cannot cover every single thing that the book contains. The book has more advanced material and extensions of whatever we are discussing in class, so I would strongly urge you to look at it, because that is what will give you a better handle on how to solve the more practical problems. That said, what we are discussing in the course itself is sufficient for you to deal with a lot of practical systems. Great. So what I want to do now is make sure we understand our issue with the current design.
That is, we had to create two different parameter estimates corresponding to one parameter, and we want to alleviate this with the extended matching design. So how do we do this? Let me start here. First I will label the lecture number as lecture 9.2, and we start working our problem. I am going to rewrite the system quickly, but I will cheat a little bit and copy: I just paste it here. For this system, P is of course assumed to be unknown. Now, the first thing that changes is that we do not declare a first-stage Lyapunov candidate. But we do define an x2 desired, which is exactly the same as before; it is the negative of the terms in the dynamics. The x2 desired is just motivated by the dynamics: you try to cancel whatever you can and introduce an estimate. So x2 desired is the same as before: minus F(x1) P hat minus K1 x1. Introduce a good term and try to cancel as best as possible; that is backstepping. This is where the estimate gets introduced. But notice that we are no longer going to declare a candidate Lyapunov function V1, and we are not going to do a first-stage Lyapunov analysis to come up with a P hat dot yet. Instead, we define z exactly as before: z equals x2 minus x2 desired, which is x2 plus F(x1) P hat plus K1 x1. Once I do this, I declare my V, and not in two stages, because I have no V1 at all; rather, in a simpler way: V equals one half norm x1 squared plus one half norm z squared plus 1 over 2 gamma norm P tilde squared, where P tilde is P minus P hat.
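For reference, the quantities just introduced can be collected as follows (a sketch in the lecture's notation, assuming the unmatched form x1 dot = F(x1)P + x2 carried over from the last session):

```latex
% Extended matching design, starting point (no first-stage V1):
\begin{align*}
  x_{2d} &= -F(x_1)\hat{P} - K_1 x_1, \\
  z &= x_2 - x_{2d} = x_2 + F(x_1)\hat{P} + K_1 x_1, \\
  V &= \tfrac{1}{2}\|x_1\|^2 + \tfrac{1}{2}\|z\|^2
       + \tfrac{1}{2\gamma}\|\tilde{P}\|^2,
  \qquad \tilde{P} = P - \hat{P}.
\end{align*}
```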
So I never introduced a second estimate, I did not design a first-stage Lyapunov candidate V1, and I did not already define a P hat dot. All of that will be done using this one candidate Lyapunov function. Great. So let us move on and actually compute V dot. V dot is x1 transpose x1 dot plus z transpose z dot minus 1 over gamma P tilde transpose P hat dot; this of course uses the definition of P tilde. Now I substitute for the dynamics. The first term is x1 transpose times F(x1) P plus x2, and z transpose times z dot is this entire expression, which I will simply copy and paste here. And of course I still have my minus 1 over gamma P tilde transpose P hat dot term. Notice again that, unlike the previous scenario, P hat dot is not defined yet, so that term remains as it is. But if you remember the extended matching design discussion, the good thing is that P hat dot is something we specify, so we know what it is, and we are not worried about whether we can implement it: we can always use the control to cancel this P hat dot if we so desire. Excellent. Now we are going to carefully club all the terms in the unknown. Let us see how we go about that. First we write x2, as always, in terms of z plus x2 desired, that is, z minus K1 x1 minus F(x1) P hat. With this, the first term turns out to be x1 transpose z minus K1 norm x1 squared plus x1 transpose F(x1) P tilde.
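Written out, the computation just described looks like this (a sketch; F is shorthand for F(x1)):

```latex
% \dot{V} and its first term, using x_2 = z + x_{2d} = z - K_1 x_1 - F\hat{P}:
\begin{align*}
  \dot{V} &= x_1^{\top}\dot{x}_1 + z^{\top}\dot{z}
             - \tfrac{1}{\gamma}\,\tilde{P}^{\top}\dot{\hat{P}}, \\
  x_1^{\top}\dot{x}_1 &= x_1^{\top}\big(F P + x_2\big)
    = x_1^{\top} z - K_1\|x_1\|^2 + x_1^{\top}F\,\tilde{P}.
\end{align*}
```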
So once I substitute here, this is what I get from the first term. Before I go further, why don't I substitute for my control also? I will choose my control to cancel everything I can: minus omega cross x2, for the first piece; then minus del F del x1 plus K1 I, the identity I there just so that the dimensions match, times x2, to deal with the next term; minus F(x1) P hat dot, to deal with that term; then I introduce a good term, minus K2 z; and finally, to deal with the last term, minus del F del x1 plus K1 I times F(x1) P hat, with a P hat because I cannot implement a P. So this becomes my control. After substituting it into the second term, there are a lot of cancellations, and I am left with a lot of nice terms. I end up with minus K2 norm z squared, coming from the good term. Then I end up with one more term, from the last two pieces together, which is plus z transpose times del F del x1 plus K1 I, again the purpose of the I is just to ensure that the dimensions of del F del x1 and K1 match, times F(x1) times P tilde. That is what you get from the second piece. And finally I am left with minus 1 over gamma P tilde transpose P hat dot. So this was our control. Notice that, as I had already mentioned, the additional x1 term is not required, so we will not use that x1 term here. Don't worry about that, because I still have this additional term left.
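Collecting the five pieces just named, the control takes the following form (a sketch in the lecture's notation; here the del F del x1 terms should be read as the Jacobian of F(x1) P hat with respect to x1, and I is the identity inserted so the dimensions match):

```latex
\begin{align*}
  u = -\,\omega \times x_2
      - \Big(\frac{\partial F}{\partial x_1} + K_1 I\Big) x_2
      - F(x_1)\,\dot{\hat{P}}
      - K_2\, z
      - \Big(\frac{\partial F}{\partial x_1} + K_1 I\Big) F(x_1)\,\hat{P}.
\end{align*}
```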
But we do not have to worry about that term at all. So now what do I have? Minus K1 norm x1 squared minus K2 norm z squared plus x1 transpose z, these three terms. Then I take a P tilde transpose common: I get F(x1) transpose x1 from transposing the first unknown term, plus, again taking transposes of the second, F(x1) transpose times del F del x1 plus K1 I, transpose, times z; and then minus 1 over gamma P hat dot. So what do I do? I just try to drive this whole bracket to 0, because I cannot do anything better than that. For that, I choose my P hat dot as gamma times F(x1) transpose x1 plus F(x1) transpose times del F del x1 plus K1 I, transpose, times z. So this is my update law, just this one. Once I have made this choice, my V dot is basically minus K1 norm x1 squared minus K2 norm z squared plus x1 transpose z. And I know that x1 transpose z is less than or equal to half norm x1 squared plus half norm z squared, by the standard completion of squares, using ab less than or equal to a squared plus b squared divided by 2. So this entire thing is less than or equal to minus K1 minus half, norm x1 squared, minus K2 minus half, norm z squared. Pretty simple, pretty straightforward. And I know that this is negative semi-definite if K1 and K2 are strictly greater than half; let me write this properly. And so I am done. As usual, I prove that x1 goes to 0 and z goes to 0, and because z goes to 0 and x1 goes to 0, I can easily show that x2 also goes to 0 as t goes to infinity, just like before. Nothing changes in those steps. Now, notice that we have only one control law and one Lyapunov function, there on the previous page.
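As a sanity check on the whole design, here is a minimal numerical sketch on a hypothetical scalar instance, x1 dot = p x1 squared + x2, x2 dot = u, so that F(x1) = x1 squared; the true p is hidden from the controller, and all gains and initial conditions are illustrative choices of mine, not from the lecture.

```python
# Hypothetical scalar plant: x1' = p*x1^2 + x2, x2' = u, with F(x1) = x1^2.
p_true = 2.0                      # unknown to the controller
k1, k2, gamma = 2.0, 2.0, 1.0     # k1, k2 > 1/2, as the analysis requires

x1, x2, p_hat = 0.5, -0.5, 0.0    # illustrative initial conditions
dt, steps = 1e-3, 20000           # forward-Euler integration, 20 s

for _ in range(steps):
    F = x1 ** 2                    # regressor F(x1)
    M = 2.0 * x1 * p_hat + k1      # scalar version of d(F*p_hat)/dx1 + K1*I
    z = x2 + F * p_hat + k1 * x1   # backstepping error z = x2 - x2_desired
    # Single update law (one estimate per parameter):
    p_hat_dot = gamma * (F * x1 + F * M * z)
    # Control: cancel what we can, add the good term -k2*z, and cancel
    # p_hat_dot explicitly (the extended matching drawback in action).
    u = -M * x2 - F * p_hat_dot - k2 * z - M * F * p_hat
    x1 += dt * (p_true * F + x2)   # plant
    x2 += dt * u
    p_hat += dt * p_hat_dot        # estimator

print(abs(x1), abs(x2))            # both driven close to 0
```

Note that p_hat need not converge to p_true; only x1 and x2 are driven to zero, exactly as the Lyapunov argument promises.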
And then one parameter update law, which is what you would want. Now, if you notice, and these are the standard similarities that I always point out, even when we did the general extended matching: in the P hat dot you see two terms. The first term, gamma F transpose x1, is the same as the gamma F transpose x1 in the P hat dot of the adaptive integrator backstepping design. And the second term, gamma F transpose times del F del x1 plus K1 I, transpose, times z, is exactly identical to the corresponding term there. So what happened? Because we declared only one Lyapunov candidate, the P hat dot that we get is actually the sum of the two derivatives, that is, the sum of the P hat dot and the P bar dot in the adaptive integrator backstepping case. That is one important thing to note. The second important thing to note, which we already talked about as a sort of drawback of this method, is that the control now contains the derivative P hat dot. And as the distance between the control and the parameter becomes larger and larger, the derivatives also become larger and larger, which can lead to implementation troubles and noise in the control implementation. So this is essentially the idea of how to implement your standard adaptive integrator backstepping and also the extended design. We have seen both of them, and I hope that all of you will now be able to actually use these design methodologies for your real applied problems. Great. So now we are sort of ready to move into the week 10 lectures. Again, we are in week number nine; we always seem to be ahead. But we are not really ahead, because we have some more additional material to cover; the week numbers are more for homework reference than anything else.
So we are ready to move into the week 10 lectures, and the key idea we talk about is the tuning function adaptive design. We already improved on one design flaw of the adaptive integrator backstepping, which was that you have to create two parameter estimates per parameter. And we were, of course, left with one issue in the extended matching, which is that you have the derivative of the parameter update law appearing in the control law: theta hat dot and P hat dot appear, which is not nice. So we want to alleviate that issue also, and that is why the tuning function adaptive design method is a popular improvement over what we have looked at until now. So this is what we are saying: the tuning function adaptive design helps us avoid the drawback of the extended matching design method. That is the idea. We have a few definitions first, and I will go into some detail of nonlinear control in this set of lectures for us to be able to understand these definitions better. For now, I will define one or a couple of these notions, and then we will start with the more nonlinear control material in the subsequent session. So we consider the system x dot equals f(x) plus F(x) theta plus g(x) u. You are already used to looking at this sort of structure, because we have been seeing it everywhere in adaptive integrator backstepping. This is a standard construct in the KKK book: there is a drift term, then a term depending on the unknown parameter, then a control-dependent term. The only new assumption here is that u is just a real number; it is a single-input system.
Again, these definitions can be generalized, but to make the treatment reasonable and easy to follow, we use the single-input assumption here. This system (1.1) is said to be globally adaptively asymptotically stabilizable. We have already seen global asymptotic stability, but here we are talking about global adaptive asymptotic stabilizability. The system is called globally adaptively asymptotically stabilizable if there exist two things: a feedback law alpha, which depends on x and theta hat, because theta is unknown; and an adaptation law, given by a tuning function tau and a gain gamma, such that theta hat dot equals gamma times tau, again depending on x and possibly theta hat, such that the (x, theta hat) states are globally bounded and the x states go to zero as t goes to infinity. Tau is of course called a tuning function; that is the important piece of terminology here. This is what you need for a system to be globally adaptively asymptotically stabilizable. Now, remember that we did see something similar before, where we said that there exists a radially unbounded V and all that, such that V dot is negative semi-definite at least, and we essentially claimed that some W goes to zero as t goes to infinity. Here it is a little more specific. We are not just happy with a W going to zero, because we do not know what kind of function of the state and the parameter estimate W is. Here we are directly interested in claiming that x actually goes to zero. So we want the existence of a feedback law, a parameter update law, and an adaptation gain such that the states x and theta hat remain bounded and x goes to zero as t goes to infinity.
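In symbols, the definition just stated reads as follows (a sketch following the lecture's notation):

```latex
% Single-input, parameter-affine system (1.1):
\[
  \dot{x} = f(x) + F(x)\theta + g(x)\,u, \qquad u \in \mathbb{R}.
\]
% (1.1) is globally adaptively asymptotically stabilizable if there
% exist a feedback law $\alpha$, a tuning function $\tau$, and an
% adaptation gain $\Gamma > 0$ with
\[
  u = \alpha(x,\hat{\theta}), \qquad
  \dot{\hat{\theta}} = \Gamma\,\tau(x,\hat{\theta}),
\]
% such that every closed-loop solution $(x(t),\hat{\theta}(t))$ is
% globally bounded and $x(t) \to 0$ as $t \to \infty$.
```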
So this is slightly different from the previous construct. Also, this definition is not in terms of any Lyapunov candidate; there is no V in this definition. But of course it should be evident to you that if such conditions hold, then there must exist such a V. So this is one of the key definitions we will use: global adaptive asymptotic stabilizability. Adaptive, because theta is unknown, so there is a theta hat everywhere in the feedback law and the update law; and stabilizable, because we want x to go to zero. We do not care about theta hat converging to theta, as usual. So this is a definition; remember, we are defining this terminology. But of course this is what we want in all our problems: we want the existence of such feedback and adaptation laws, so the definition makes sense too. What we want to do in the subsequent session is talk about the ACLF, the adaptive control Lyapunov function. But in order to do that, I realize that all of you need at least a little bit of a refresher on control Lyapunov functions themselves, and that is what we will do. So, what did we look at in this session? We had already done the unmatched adaptive control design via the standard adaptive integrator backstepping, which leads to two parameter update laws, or two parameter estimates, per parameter. We wanted to get rid of these two estimates per parameter using the extended matching design, and that is what we have completed today, with the design and analysis of the same. We are now starting to look at the tuning function design, which improves upon the drawback of the extended matching design as well, because the extended matching design brings the derivatives of the theta hats and the P hats into the control, which we do not like.
So we want to move into this tuning function design method, which we started off with by defining globally adaptively asymptotically stabilizable systems, and we want to move on to defining adaptive control Lyapunov functions. Before we do that, in the subsequent session, we are going to look at a little bit of the theory of control Lyapunov functions themselves. We have already seen candidate Lyapunov functions, but we have not really talked about control Lyapunov functions and the theory behind them, so I am going to spend a little bit of time on that. That is it for this session, and I hope to see you folks again in the next session. Thank you.