Hello everyone. Welcome to another week of our NPTEL course on non-linear and adaptive control. For those of you who have been following this lecture series, we are entering week number nine, and all this time we have been motivated by our desire to develop algorithms that can drive uncertain autonomous systems, very much like the satellite orbiting the Earth that you see in the background. I am Srikant Sakumar from Systems and Control, IIT Bombay. A quick recap of what we were doing in the preceding set of lectures: starting with lecture 8.3 (before which we covered several other interesting topics, such as model reference adaptive control), we discussed the extension of adaptive integrator backstepping to the vector case. We saw that things do not change significantly in the vector case beyond a little extra bookkeeping, and we were able to design adaptation and feedback laws to stabilize the system. We were of course looking at the unmatched parameter case, and we realized that having two estimates per parameter is a rather significant constraint for real implementation, so we started to look at the extended matching design.
What was different in the extended matching design, if you all remember, is that we do not declare the update law in the first stage itself; rather, we wait until we define the complete candidate Lyapunov function V, and only then do we define the update law theta hat dot. This way we have only one parameter estimate per unknown parameter, which is what one would hope for. We also realized that this single estimate turns out to be driven by the sum of the two update laws from the earlier design, which is interesting to see. One issue we did point out: although having a single estimate per parameter is nice, what is hidden and not very evident is that the feedback law contains theta hat dot. There is one derivative of the estimate per level of unmatchedness: when the control appears exactly one level below the unknown parameter you get a theta hat dot; if it appeared two levels below, there would be a theta hat double dot, and so on. Having these successive derivatives of theta hat is not very healthy for any control implementation. That is one drawback of the extended matching design, but in any case it did alleviate the earlier issue of two estimates per parameter. So in the subsequent lecture we started to look at an example, because all of us want to know how these designs get applied. It was a somewhat concocted example, and we first did the matched case, meaning that the uncertainty enters the same dynamics as the control.
And this was the vector case, because many of us may not have had practice dealing with vectors, and we saw that things are not significantly different: instead of squares of scalars you use norm squares, and you have to be careful about taking transposes and about the order of terms, since you cannot move things around freely the way you do in the scalar case. Other than that it was rather straightforward. So the matched case is what we had completed, and then we had started looking at the unmatched case, which is of course where we begin again. I am going to mark the lecture here; this is the first lecture of week nine, so we are well into our course in adaptive control. The unmatched-case dynamics are again concocted, but you will later see in your assignments, homework, and exams that similar dynamics appear in several real systems, so they are not so far-fetched after all. We have the two subsystems x1 dot = f(x1) p + x2 and x2 dot = omega cross x2 + u, where x1, x2, u, and omega are all vectors in R^3, f(x1) is a 3 by 3 matrix, and p is also in R^3. Because we are considering the unmatched case, p is assumed to be unknown while omega is assumed to be known; this is the flip of the matched case, where the unknown entered the second equation and the first was fully known. How do we go about this? We first carry out the standard adaptive integrator backstepping design, the one with two estimates per parameter. We start off assuming that p is unknown; we don't look at the
known case and unknown case separately, because I think all of you are now well exposed to the steps, so it is not difficult to skip some of them. We start with the unknown-p case and look at this piece of the dynamics, just as in adaptive integrator backstepping. The first thing we do is define a candidate Lyapunov function; the best choice is a norm-squared term plus a parameter-error term, V1 = (1/2)||x1||^2 + (1/(2 gamma))||p tilde||^2, where p tilde = p - p hat is the parameter error. Next we declare our desired value of x2, because here we think of x2 as the control. Since p is unknown we cannot use it, so we use the estimate instead, and we also introduce a good (stabilizing) term: x2 desired = -f(x1) p hat - k1 x1. We then continue the Lyapunov analysis assuming that x2 is in fact equal to x2 desired; that is how we implement the adaptive integrator backstepping. So let us diligently compute V1 dot. It is x1 transpose times x1 dot, which is f(x1) p plus x2 desired, that is, minus f(x1) p hat minus k1 x1; and from the second piece you get minus (1/gamma) p tilde transpose p hat dot, using the fact that p tilde dot = -p hat dot. You can see that I get a nice term, minus k1 ||x1||^2, and in addition x1 transpose f(x1) p tilde and minus (1/gamma) p tilde transpose p hat dot. Now there is a p tilde in one term and a p tilde transpose in the other, and I know each of these is in fact a scalar quantity, because V1 is a scalar and therefore each term is a
scalar, so I can take a transpose and nothing changes: I get p tilde transpose f(x1) transpose x1, factor p tilde transpose out as a common term, and implement the parameter update law p hat dot = gamma f(x1) transpose x1. Once I implement this update law, V1 dot becomes minus k1 ||x1||^2, which is negative semi-definite. Great, so we have obtained the first parameter update law and a candidate Lyapunov function. What is the next step? We know that x2 is not actually equal to x2 desired, so the next step is to create the backstepping error variable, z = x2 minus x2 desired. Since x2 desired brings in the parameter estimate again, we first compute z dot, the dynamics of the backstepping error. z dot is x2 dot, which is omega cross x2 plus u, minus x2 desired dot. And x2 desired dot consists of (d(f(x1) p hat)/dx1) x1 dot plus k1 x1 dot, which I combine into one expression because x1 dot is just f(x1) p + x2, plus the term f(x1) p hat dot. I already know p hat dot because I have already designed it, so I substitute for it, and that term comes out to be gamma f(x1) f(x1) transpose x1. So what has happened? I have already defined p hat dot, but the unknown quantity p appears again, and this is what creates a problem when I design the controller: I cannot use p hat again, I need to create a new estimate, and we call it p bar. The idea of adaptive integrator backstepping is that you take the earlier candidate function and add to it a norm-squared term in the backstepping error and a norm-squared term in the error of the new parameter estimate p bar,
that is, p minus p bar, squared: V = V1 + (1/2)||z||^2 + (1/(2 delta))||p - p bar||^2. This is exactly the construction I showed you last time; there, a gain matrix appears in the new parameter-error term, whereas here I have just used a scalar delta. That is fine: you can choose a matrix or keep a scalar, your call, although a matrix of course gives you more handle on the adaptation gains, so it is more general. Once I have this V, I am going to diligently take derivatives again, and using that I will try to assign an update law p bar dot; everything else is more or less chosen, though we will of course still choose a few things, so let us go forward and see what we need. V dot is first V1 dot, which is x1 transpose times (f(x1) p + x2) minus (1/gamma) p tilde transpose p hat dot; note the negative sign on that last term. This is just copied from before, except that I am not plugging in x2 desired, because x2 is not actually equal to x2 desired; I keep x2 as it is. Then I have z transpose z dot, which I can plug in from the error dynamics: omega cross x2, plus u, plus (df/dx1 p hat + k1 I) f(x1) p, plus (df/dx1 p hat + k1 I) x2, plus gamma f(x1) f(x1) transpose x1. And finally I have minus (1/delta) (p minus p bar) transpose p bar dot. Here again, for p hat dot I can substitute the quantity gamma f(x1) transpose x1; so I erase that, and the corresponding term becomes minus p tilde transpose f(x1) transpose x1.
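Since the first-stage update law p hat dot = gamma f(x1) transpose x1 just got substituted again, this is a good point to sanity-check the first stage numerically. The sketch below is in Python, with a hypothetical regressor f(x1) = diag(x1) chosen purely for illustration (the design itself does not depend on this choice); it verifies that, with x2 held at its desired value, the cross terms in V1 dot cancel and only the minus k1 ||x1||^2 term survives.

```python
import numpy as np

# Hypothetical regressor for illustration only; any smooth 3x3 f(x1) works.
def f(x1):
    return np.diag(x1)

rng = np.random.default_rng(0)
x1 = rng.standard_normal(3)
p, p_hat = rng.standard_normal(3), rng.standard_normal(3)
k1, gamma = 2.0, 5.0
p_tilde = p - p_hat

# Pretend x2 equals its desired value x2d = -f(x1) p_hat - k1 x1,
# so that x1_dot = f(x1) p + x2d = f(x1) p_tilde - k1 x1.
x1_dot = f(x1) @ p + (-f(x1) @ p_hat - k1 * x1)
p_hat_dot = gamma * f(x1).T @ x1          # the chosen parameter update law

# V1 = 0.5||x1||^2 + (1/(2 gamma))||p_tilde||^2, differentiated along trajectories
V1_dot = x1 @ x1_dot - (1.0 / gamma) * (p_tilde @ p_hat_dot)

# The x1^T f(x1) p_tilde term cancels against the update-law term:
assert np.isclose(V1_dot, -k1 * (x1 @ x1))
```

The same cancellation holds for any state and any estimate values, which is exactly why the update law is chosen this way.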
Correct. So here I have substituted for p hat dot; after substituting, this is what you get. Now I use the usual trick: I write x2 as z plus x2 desired, that is, z minus f(x1) p hat minus k1 x1, in terms of the backstepping error variable. Substituting that in, I get V dot equal to minus k1 ||x1||^2 from one pair of terms, plus x1 transpose z, plus x1 transpose f(x1) p tilde, and then the minus p tilde transpose f(x1) transpose x1 term, which is just the substituted update-law term rewritten; these last two cancel. Before going further, I am also going to choose my control. How do I choose it? I choose it to cancel whatever I can, do my best, and introduce a good term. So I choose u = minus omega cross x2, minus (df/dx1 p hat + k1 I) x2, minus gamma f(x1) f(x1) transpose x1, which takes care of the corresponding known terms in z dot. I am then left with the term involving the unknown p, which I cannot cancel completely, but I can always introduce the estimate: first the good term minus k2 z, and then minus (df/dx1 p hat + k1 I) f(x1) p bar, because I cannot use p hat here, and p is not available, so I introduce the new estimate p bar. So basically I cancel what I can, introduce a good term, and am left with the mismatch. Once I do this, my V dot takes a more simplified form: I get minus k2 ||z||^2 from the good term, plus z transpose (df/dx1 p hat + k1 I) f(x1) times a parameter error; that is from
this term combining with that one. And note that the error appearing here is not p tilde, which is p minus p hat; it is actually p minus p bar. So what has happened? I now have a couple of nice negative terms, and I also have some cancellations, because two of the mixed terms are identical. By the way, I missed writing one term, the last one: minus (1/delta) p bar dot transpose (p minus p bar). I have simply taken the earlier expression and transposed it, which I can do because all of these are scalars; I do it so that (p minus p bar) appears on the right-hand side in both terms, since these are vectors and I cannot reorder factors the way I want. Anyway, so I have those two cancellations and the two good terms. Ah, but I missed something: I also need to use my control to cancel the cross term x1 transpose z. I can do that because it is equal to z transpose x1, so if I introduce a minus x1 in the control, I get a minus z transpose x1, and that cancels it. So the control also carries a minus x1 term. Now I make my choice of p bar dot = delta f(x1) transpose (df/dx1 p hat + k1 I) transpose z; the identity matrix I appears so that the dimensions work out. Suppose I make this choice; then the remaining two terms cancel out as well, and I am left with V dot = minus k1 ||x1||^2 minus k2 ||z||^2, which I know is less than or equal to zero. By the way, I did not even need to cancel that cross term; I did not need to cancel it with this
additional term in the control, because you can see it is a mixed term in x1 and z, so just by choosing large enough gains k1 and k2 I could have dominated it. That was the other choice; I did not necessarily have to cancel it with the control. I could have removed the minus x1 term and, if you remember, done a sum of squares: the cross term is bounded as z transpose x1 less than or equal to (1/2)||x1||^2 plus (1/2)||z||^2, and these pieces get combined with the minus k1 ||x1||^2 and minus k2 ||z||^2 terms respectively, so by choosing k1 and k2 large enough I can dominate the mixed term. So that term is not essential in the control law; if you want, you can skip adding it, and just by choosing large enough gains you will be fine. Once you have V dot = minus k1 ||x1||^2 minus k2 ||z||^2, you are done: you have a negative semi-definite V dot, and just like in standard backstepping you can prove that x1 and z go to zero as t goes to infinity. Now z is nothing but x2 minus x2 desired, so z = x2 + f(x1) p hat + k1 x1. We know z goes to zero and x1 goes to zero, and if f(0) = 0 then the f(x1) p hat piece also goes to zero, so the only remaining possibility is that x2 also goes to zero as t goes to infinity. So that is it; you have essentially completed the design, just as you wanted. You designed one control, exactly one, and you designed two parameter update laws: one is p bar dot and the other is p hat dot.
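All the cancellations claimed in this derivation can be verified numerically at a single random state. The Python sketch below uses the hypothetical regressor f(x1) = diag(x1) (so d(f(x1) p hat)/dx1 = diag(p hat)) and arbitrary gains, all assumptions for illustration; it confirms that with the chosen u, p hat dot, and p bar dot, V dot equals minus k1 ||x1||^2 minus k2 ||z||^2 exactly, and also checks the alternative where the minus x1 term is dropped and the cross term is dominated via the sum-of-squares bound.

```python
import numpy as np

# Hypothetical regressor for illustration; then d(f(x1) p_hat)/dx1 = diag(p_hat).
def f(x1):
    return np.diag(x1)

rng = np.random.default_rng(1)
k1, k2, gamma, delta = 2.0, 3.0, 1.5, 0.7
x1, x2, p, p_hat, p_bar, omega = (rng.standard_normal(3) for _ in range(6))

F = f(x1)
M = np.diag(p_hat) + k1 * np.eye(3)        # d(f(x1) p_hat)/dx1 + k1 I
z = x2 - (-F @ p_hat - k1 * x1)            # backstepping error

# Control: cancel known terms, good term -k2 z, estimate p_bar, and -x1 for z^T x1.
u = (-np.cross(omega, x2) - M @ x2 - gamma * F @ F.T @ x1
     - k2 * z - M @ F @ p_bar - x1)

x1_dot = F @ p + x2
z_dot = np.cross(omega, x2) + u + M @ x1_dot + gamma * F @ F.T @ x1
p_hat_dot = gamma * F.T @ x1
p_bar_dot = delta * F.T @ M.T @ z

# V dot along trajectories of the composite candidate Lyapunov function
V_dot = (x1 @ x1_dot + z @ z_dot
         - (p - p_hat) @ p_hat_dot / gamma
         - (p - p_bar) @ p_bar_dot / delta)
assert np.isclose(V_dot, -k1 * (x1 @ x1) - k2 * (z @ z))

# Alternative: drop the -x1 term from u; the reappearing z^T x1 cross term
# is dominated using z^T x1 <= 0.5||x1||^2 + 0.5||z||^2 (sum of squares).
V_dot_alt = V_dot + z @ x1
assert V_dot_alt <= -(k1 - 0.5) * (x1 @ x1) - (k2 - 0.5) * (z @ z) + 1e-9
```

The first assertion shows the exact cancellation; the second shows why any k1, k2 greater than one half suffice when the cross term is kept.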
And you also have the combined Lyapunov function choice. So this is what happens in classical adaptive integrator backstepping: one combined Lyapunov function, one control law, and two parameter estimates corresponding to each unknown parameter. This is what you would expect, and this is what we had promised. What we want to do next, in the subsequent lecture, is the extended matching design, which helps us remove this additional parameter estimate; that is really the idea. So what did we look at today? We continued our problem of adaptive integrator backstepping for this concocted vector system. I hope all of you got a fair idea of how to work with vector systems, and I really hope you got the message that it is not significantly different from the scalar case. We designed the adaptive integrator backstepping based controller, with two parameter estimates and one feedback law, and now of course we want to move towards removing the additional parameter estimate for the same system. We will go ahead in the subsequent session and start with an extended matching design for the same system. I really hope this example gives you good exposure to how to do this design, and that all of you will at this stage be able to pick up problems from your own fields and your own systems, autonomous cars, suspension systems, satellite systems, electrical systems, biological systems, and start working on some adaptive control designs for these. All right, great, so this is where we stop now, and we will continue from here. Thanks.
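As a supplement, the complete design from this lecture can be exercised in a short closed-loop simulation. Everything concrete below is an assumption for illustration: the regressor f(x1) = diag(x1) (which satisfies f(0) = 0, as needed for the x2-to-zero argument), the true parameter p, the gains, and simple forward-Euler integration. The sketch implements the two update laws and the control law derived above and checks that the composite V decreases and x1 converges toward zero.

```python
import numpy as np

# Hypothetical regressor, f(0) = 0; then d(f(x1) p_hat)/dx1 = diag(p_hat).
def f(x1):
    return np.diag(x1)

k1, k2, gamma, delta = 2.0, 2.0, 1.0, 1.0
p_true = np.array([0.5, -0.3, 0.2])       # unknown to the controller
omega = np.array([0.0, 0.0, 1.0])         # known

x1 = np.array([1.0, -1.0, 0.5])
x2 = np.zeros(3)
p_hat = np.zeros(3)                       # first estimate of p
p_bar = np.zeros(3)                       # second estimate of p

def V():
    """Composite candidate Lyapunov function evaluated at the current state."""
    z = x2 - (-f(x1) @ p_hat - k1 * x1)
    return (0.5 * (x1 @ x1) + 0.5 * (z @ z)
            + (p_true - p_hat) @ (p_true - p_hat) / (2 * gamma)
            + (p_true - p_bar) @ (p_true - p_bar) / (2 * delta))

V0 = V()
dt, steps = 1e-3, 10_000
for _ in range(steps):
    F = f(x1)
    M = np.diag(p_hat) + k1 * np.eye(3)   # d(f(x1) p_hat)/dx1 + k1 I
    z = x2 - (-F @ p_hat - k1 * x1)       # backstepping error
    u = (-np.cross(omega, x2) - M @ x2 - gamma * F @ F.T @ x1
         - k2 * z - M @ F @ p_bar - x1)   # control law from the lecture
    # plant and the two parameter update laws, forward-Euler step
    x1_dot = F @ p_true + x2
    x2_dot = np.cross(omega, x2) + u
    p_hat_dot = gamma * F.T @ x1
    p_bar_dot = delta * F.T @ M.T @ z
    x1 += dt * x1_dot
    x2 += dt * x2_dot
    p_hat += dt * p_hat_dot
    p_bar += dt * p_bar_dot

assert V() < V0                            # V decreases along the closed loop
assert np.linalg.norm(x1) < 0.2           # state regulation, parameters need not converge
```

Note that the parameter estimates need not converge to p itself; the Lyapunov argument only guarantees that x1, z, and hence x2 go to zero.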