Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are well into the 8-week lectures of this course, and we have been looking at several algorithm design methods through which we can control autonomous systems such as the SpaceX satellite that we see in the background orbiting the Earth. What we were doing last time was the beginning of the generalization of the adaptive integrator backstepping method. This was the method we looked at in the week 7 lectures, if you remember: the unmatched case, where the unknown parameter does not appear in the same equation as the control. There we devised how to use the backstepping method to design an adaptive controller for such cases. What we started to look at in lecture 8.3 is how to extend this to the general case, because in week 7 everything was scalar: x1, x2, u, and even the parameter. Now we look at the general case of a system where x is in Rn, the unknown parameter is in Rp, the control is in Rm, and there is an extended state psi of the same dimension as the control, so psi is also in Rm. How did we set this system up? We said that the system has the form we are now used to for adaptive integrator backstepping in the general case; I hope all of you remember it. There is one term corresponding to the drift, a second term containing the unknown parameter, which appears linearly, and a third containing the control, which also appears linearly. For such a system, with the dimensions as stated and the functions sufficiently smooth, we assume there exists an adaptive controller: a u and a theta tilde dot such that the closed-loop system has bounded states and something nice happens to the error states, the x states. This is codified through the existence of a smooth function V, in this case of x and theta tilde, whose derivative is at least negative semi-definite. From this negative semi-definiteness we can, more often than not, conclude that the function W(x, theta tilde) appearing on the right-hand side goes to 0 as t goes to infinity. This is exactly what we had in the scalar case: the function V1 played the role of V, its derivative after substituting theta hat dot (which is equivalent to substituting theta tilde dot) came out to be minus W, and we could prove that W goes to 0, which in turn implied that x1 goes to 0. That is exactly what we claim here as well. Remember again that we are looking at the stabilization problem, but the tracking problem would have been essentially identical; there would be no significant difference in the tracking case either.
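To fix notation, here is a sketch of the setup in equations, written the way I read it off the slides; the symbol names alpha, gamma, and W are my transcription of the slide notation and should be treated as assumptions:

```latex
% System: x in R^n, unknown constant theta in R^p (appearing linearly), u in R^m.
\[ \dot{x} = f(x) + F(x)\,\theta + g(x)\,u \]
% Assumption: there exist a control u = \alpha(x,\hat\theta), an update law
% \dot{\tilde\theta} = \gamma(x,\hat\theta) (equivalently \dot{\hat\theta} = -\gamma,
% since \tilde\theta = \theta - \hat\theta and \theta is constant), and a smooth
% V(x,\tilde\theta) such that
\[ \dot{V} = \frac{\partial V}{\partial x}\bigl(f + F\theta + g\alpha\bigr)
           + \frac{\partial V}{\partial \tilde\theta}\,\gamma
   \;\le\; -W(x,\tilde\theta) \;\le\; 0 , \]
% from which, typically via Barbalat's lemma, W(x(t),\tilde\theta(t)) -> 0
% as t -> infinity.
```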
So, now if we add an integrator state, that is, when u gets replaced by the state psi, which is in Rm, we of course add the equation psi dot equal to u. Then what we claim is that a Lyapunov function V bar works for the new system, the system with the integrator: it is the previous V with an added backstepping error term, norm z squared, and a new parameter error term. If you remember, there is an over-parameterization because of the unmatched structure: the new parameter estimate is called theta bar and the old parameter estimate was called theta hat. With a quadratic term in the new parameter error, you can show that V bar dot is negative semi-definite. As you would imagine, because there was a nice adaptive law for the original system, we define a backstepping error state z equal to psi minus alpha; although psi is not the actual control of the system, we want it to follow alpha, because we know alpha is the good control, so we push z towards zero. Since everything is a vector now instead of a scalar, in place of z squared and theta tilde squared we have norms; that is the only difference. So I have z transpose z, which is norm z squared, and then a norm-squared term in the new parameter error with a scaling, and this scaling is simply the adaptation gain that we already know about; we had an adaptation gain in the scalar case, and now also in the vector case. So this is the modified V bar that will work, and it is again not very different: we take the original function V and add the backstepping error term squared and the squared new parameter error. That's it. It is essentially the scalar construction, just accounting for the fact that we are now dealing with vectors, so a lot of norms appear in place of simple squares. So we use this V bar and we had just started taking derivatives; that is where we had stopped. So we start our lecture 8.4 right here. Also, remember that we are looking at the V9 notes; we are not going to worry about that, since the purpose of the V markings is simply to help align the homeworks. Let us take the derivative of this V bar in order to prove that it in fact lets us claim that W goes to 0, psi goes to alpha, and all that nice stuff. So we now write the dynamics in terms of the new variable z, the backstepping error: x dot was f(x) plus F(x) theta plus g(x) psi, and psi is simply z plus alpha. We also write the dynamics of the second state, z equal to psi minus alpha: psi dot is just the control u, and the derivative of alpha has two pieces because alpha depends on two different quantities, x and theta hat.
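In equations, the augmented system and candidate Lyapunov function look as follows. This is a sketch: the 1/2 normalization factors and the symbol S for the symmetric positive-definite adaptation gain are my choices, following the slides as closely as I can:

```latex
% Integrator-augmented system and backstepping error:
\[ \dot{x} = f(x) + F(x)\theta + g(x)\psi, \qquad
   \dot{\psi} = u, \qquad
   z = \psi - \alpha(x,\hat\theta). \]
% Candidate Lyapunov function, with \bar\theta the second estimate of \theta:
\[ \bar{V} = V(x,\tilde\theta)
           + \tfrac{1}{2}\, z^{\top} z
           + \tfrac{1}{2}\,(\theta - \bar\theta)^{\top} S^{-1} (\theta - \bar\theta),
   \qquad S = S^{\top} \succ 0 . \]
```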
So z dot is u minus del alpha del x times x dot, with the x dynamics plugged in, plus del alpha del theta hat times gamma; we verify the plus sign because theta hat dot is minus theta tilde dot, which is minus gamma. Now, when I write V bar dot, I get several pieces, and I need to check the signs carefully here. In fact, let me correct a mistake on the slide, for which I apologize: V was a function of x and theta tilde, not of x and theta hat, so the second partial is in fact with respect to theta tilde, multiplied by theta tilde dot, which is just gamma. So the first two pieces are del V del x times x dot and del V del theta tilde times gamma; I am simply taking the derivative of V as a function of both its arguments, remembering that theta tilde dot was declared as gamma and that theta hat dot and theta tilde dot are related by just a negative sign. The third piece is the derivative of the backstepping term: z transpose z dot, with the z dot we have already written plugged in. The final piece comes from the new parameter error: theta minus theta bar transpose times S inverse times theta bar dot, with a negative sign because theta bar enters the error with a negative sign. Notice that I changed the adaptation gain symbol to S because we have already used gamma. So that's it; it looks messy, but honestly we are just doing careful bookkeeping. Now look at this expression for V bar dot carefully. I already know a few things: from the previous case, when there was no integrator, V dot was del V del x times (f(x) plus F(x) theta plus g(x) alpha) plus del V del theta tilde times gamma, and exactly that combination is available inside our expression, so that entire thing can be replaced by a bound of minus W. That is what we do. Then we pull out the only remaining piece, which comes from writing g psi as g times (z plus alpha): it contributes z transpose times g transpose times del V del x transpose. If V were quadratic with a symmetric weighting matrix, the transpose would come for free, but in general let us just carry the transpose on del V del x; that is not a problem.
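Written out, the bookkeeping looks like this. Again a sketch in my transcription of the slide notation, using theta hat dot equal to minus gamma:

```latex
% Backstepping error dynamics:
\[ \dot{z} = u
   - \frac{\partial \alpha}{\partial x}\bigl(f + F\theta + g(z+\alpha)\bigr)
   + \frac{\partial \alpha}{\partial \hat\theta}\,\gamma . \]
% Derivative of the augmented Lyapunov function:
\[ \dot{\bar V}
 = \underbrace{\frac{\partial V}{\partial x}\bigl(f + F\theta + g\alpha\bigr)
   + \frac{\partial V}{\partial \tilde\theta}\,\gamma}_{\le\, -W(x,\tilde\theta)}
 \;+\; z^{\top}\!\left( g^{\top}\Bigl(\frac{\partial V}{\partial x}\Bigr)^{\!\top}
   + \dot{z} \right)
 \;-\; (\theta - \bar\theta)^{\top} S^{-1}\, \dot{\bar\theta} . \]
```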
So let us look at the remaining terms: u, minus del alpha del x times x dot, plus del alpha del theta hat times gamma, with the signs corrected as above, plus the additional z transpose g transpose del V del x transpose term that we pulled out. Now you start to see, very soon, the advantage of this sort of grouping. The unknown theta has appeared once again, through del alpha del x times F(x) theta, right alongside u, and if we did not have another handle here, we would be in some trouble. So what do we do? As usual, we cancel as much as we can through u: we get rid of all the known terms, we introduce a good damping term, minus k z, and for the term involving the unknown theta we introduce a cancelling term which again cannot use the true value of the parameter, since it is not known, but uses an estimate, and a new one: theta bar. Once we do that, everything else cancels out except the mismatch, and corresponding to it we have a theta minus theta bar type term. Again I need to be careful about the sign here: this was a plus, so that has to be a minus. And of course S inverse appears instead of gamma inverse throughout, since S is the new adaptation gain. So good things happen, as we expect: we have a nice negative term in z, we have the term minus W, which we know is negative semi-definite, and we are left with the theta minus theta bar terms, which we know how to cancel. How do we do that? We take theta minus theta bar common and choose theta bar dot to kill the remaining expression: it is S times F transpose times del alpha del x transpose times z, basically the transpose of that whole coefficient pre-multiplied by S, the adaptation gain. I hope the steps are clear. All we did was very careful bookkeeping, carefully clubbing terms and cancelling terms; it is not very different from the scalar case at all. The purpose was to show that this method in fact has a very nice vector extension, and that dealing with vector states is not significantly different from the scalar counterparts. So, after we have used theta bar dot to cancel all the unwanted parameter terms that appear in V bar dot, we get a nice expression for V bar dot, and therefore we can show quite easily that W and z go to zero as t goes to infinity; this is of course the standard signal chasing plus Barbalat's lemma.
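Putting the pieces together, the control and the second update law come out as follows. This is a sketch: the sign in front of theta bar dot depends on the sign conventions carried through the derivation, and I am writing the version consistent with the bookkeeping above, with k > 0 a design gain:

```latex
% Control: cancel the known terms, add damping, and use the new estimate
% \bar\theta wherever the unknown \theta would otherwise be needed:
\[ u = -k z
     - g^{\top}\Bigl(\frac{\partial V}{\partial x}\Bigr)^{\!\top}
     + \frac{\partial \alpha}{\partial x}\bigl(f + F\bar\theta + g\psi\bigr)
     - \frac{\partial \alpha}{\partial \hat\theta}\,\gamma . \]
% Second update law, chosen to cancel the (\theta - \bar\theta) cross term:
\[ \dot{\bar\theta} = -\, S\, F^{\top}
   \Bigl(\frac{\partial \alpha}{\partial x}\Bigr)^{\!\top} z , \]
% which leaves
\[ \dot{\bar V} \;\le\; -W(x,\tilde\theta) \;-\; k\,\|z\|^{2} \;\le\; 0 . \]
```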
Notice that for a while now we have stopped writing out these Barbalat-type arguments, because we have done them quite a few times, the steps are very standard, and by now we can pretty much assume all of you are experts in this; there is no further need to keep repeating them. What is a little bit of a concern, not really a philosophical one but actually an implementation concern, is the fact that we have two different estimates, theta bar and also theta hat, in order to estimate the same parameter theta. As the number of unknowns increases, theta being in Rp here, the number of excess parameter estimates also keeps increasing, and this is not a convenient situation to be in. Eventually these are all going to be implemented in a microcontroller, and you definitely do not want to load your real-time control system by adding more and more states. So that is really one of the critical concerns here, and we will seek to address it. But anyway, the important thing is that even for the unmatched case we are able to design very nice adaptive control laws, which actually help us deal with unknowns that are not matched with the control. Remember, the way it is set up right now, the method works when the unknown parameter is one step, one integrator if you may, above the control. The method can also be generalized to the more complicated situation where unknowns appear several levels above the control; this is essentially what is called the parametric strict feedback form, and the idea is to proceed sequentially: you think of x2 as the control for the x1 subsystem, then x3 as the control for the (x1, x2) subsystem, and so on, moving further down until xn is the virtual control and you finally have u as the control itself, as shown in the sketch below. These details are available in the KKK book, the adaptive control book by Krstic, Kanellakopoulos, and Kokotovic (Nonlinear and Adaptive Control Design), which is one of the important key references for this backstepping-based adaptive control design. In fact, the book has significantly more details, significantly more methods, and more examples than what we do in these lectures, and I strongly recommend that all of you look at it later.
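For reference, the parametric strict-feedback form being described is the standard one from that book; I am writing the scalar-chain version here, and the regressor functions phi_i are the standard notation rather than anything from the slides:

```latex
% Parametric strict-feedback form: the unknown theta can enter at every
% level, and the true input u appears only in the last equation.
\[ \dot{x}_1 = x_2 + \varphi_1(x_1)^{\top}\theta, \quad
   \dot{x}_2 = x_3 + \varphi_2(x_1,x_2)^{\top}\theta, \quad \dots \quad
   \dot{x}_n = u + \varphi_n(x_1,\dots,x_n)^{\top}\theta . \]
% Backstepping proceeds recursively: x_2 is the virtual control for the
% x_1-subsystem, x_3 for (x_1, x_2), and so on, until u appears.
```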
Now, as I already mentioned, one of the big issues is the requirement for extra parameter estimates, and this is a significant constraint when we talk about real implementation. A typical engineering system may have even 500 unknown parameters, and if you are talking about such large numbers of unknowns and you need double the number of estimator states, so 500 parameters estimated using 1000 states, you can imagine that this is a real concern. The idea of the extended matching design is precisely how to overcome this over-parameterization. I will not start it in this lecture, but the system we consider is almost exactly identical; we go back to the scalar case, because everything is easier in the scalar case, even though the vector case is the same apart from a lot more bookkeeping, which makes things more complicated to explain. We will try to do one vector example at a later point, so let us not worry about that too much. The system is again the same scalar system that we saw in week 7; if I go back and match it, the two look exactly identical. Now we want to do the extended matching design, which means we do not want multiple parameter estimates anymore, just one, and that is what you will see in the subsequent lecture. So what did we look at today? We continued the application of the backstepping design that we had started last time. Up to the week 7 lectures we had looked at the backstepping method for the matched case, that is, when the uncertainty appeared in the same dynamics as the control. That case can be easily generalized to vectors; it is not a big deal, although we did not do it, and we can try to do all of this at a slightly later stage. The point here is the unmatched case, when the uncertainty appears one integrator level above the control, and we wanted vector states, vector controls, and vector unknown parameters; that is what we did in today's lecture and the previous one. We completed the proof: we started with the assumption that the first layer has a nice adaptive controller along with a stabilizing Lyapunov candidate construction, we showed how to construct the complete candidate Lyapunov function for the system when an integrator layer is added, and we showed that the backstepping error goes to zero and that W, the negative semi-definite function from the first layer, also goes to zero. In the subsequent lecture we will look at what is called the extended matching design. I really hope that all of you are with me. I know that the bookkeeping part does make things look rather complicated, but there is no big innovation in terms of theory here, and I hope you are able to follow what is going on and that you enjoyed these lectures.
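If you would like to experiment numerically with the over-parameterized design before we fix it next time, here is a minimal simulation sketch in Python. The plant x1dot = theta*x1^2 + x2, x2dot = u is an assumed representative unmatched scalar example, not necessarily the exact week 7 system, and the gains c1, c2, gamma, s are arbitrary choices; the point is only to see the two estimates, theta hat and theta bar, at work:

```python
# Scalar adaptive integrator backstepping with over-parameterization
# (two estimates of the same unknown theta). Assumed example plant:
#   x1dot = theta * x1^2 + x2,   x2dot = u.
theta = 2.0                      # true (unknown) parameter
c1, c2 = 2.0, 2.0                # feedback gains
gamma, s = 5.0, 5.0              # adaptation gains for th_hat, th_bar

dt, T = 1e-3, 20.0
x1, x2 = 1.0, 0.0                # plant states
th_hat, th_bar = 0.0, 0.0        # the two parameter estimates

for _ in range(int(T / dt)):
    # Step 1: virtual control for the x1-subsystem and first update law
    alpha = -th_hat * x1**2 - c1 * x1
    dalpha_dx1 = -2.0 * th_hat * x1 - c1
    dalpha_dth = -x1**2
    th_hat_dot = gamma * x1**3

    # Step 2: backstepping error and actual control. The unknown theta
    # re-enters through d(alpha)/dt, so the derivative term uses the
    # second estimate th_bar in place of theta.
    z = x2 - alpha
    u = (-c2 * z - x1
         + dalpha_dx1 * (th_bar * x1**2 + x2)
         + dalpha_dth * th_hat_dot)
    th_bar_dot = -s * z * dalpha_dx1 * x1**2

    # Euler integration of plant and estimators
    x1 += dt * (theta * x1**2 + x2)
    x2 += dt * u
    th_hat += dt * th_hat_dot
    th_bar += dt * th_bar_dot

print(f"x1 = {x1:.4f}, z = {x2 - (-th_hat*x1**2 - c1*x1):.4f}")
# Expected: x1 -> 0 and z -> 0; th_hat and th_bar stay bounded but
# need not converge to the true theta.
```

Notice that only x1 and the backstepping error are driven to zero, exactly as in the W-based argument above; the estimates themselves need not converge to the true parameter.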
Great, so this is where we will stop today, and I will see you again soon. Thank you.