Hello everyone. Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are well into the seventh week of this course, and we have been looking into algorithms for designing adaptive controllers for single and double integrators with matched and unmatched uncertainties. The algorithms we are designing are not far from the kind of design required to drive autonomous systems such as the satellite in orbit that you see in the background. What we were looking at until last time was the design of the adaptive controller for the unmatched uncertainty case. We saw how to use backstepping-based methods to design an adaptive controller for systems where the uncertainty appears in a state equation different from the one in which the control appears; this is what is called an unmatched parameter system. The backstepping design was a little more complicated than what you would expect for the matched case, and one of the key things to remember is that we had to design two different parameter update laws, for theta hat and also for mu hat. That is, we had to over-parameterize the system in order to complete the adaptive control proof: although there was only one uncertain scalar parameter, we had to design two different estimates for the same parameter. With that we were able to prove stabilization, and as we mentioned, the tracking problem is a straightforward extension of the stabilization problem. In fact I would encourage all of you to try the tracking problem, so that is something I will write here.
Try the tracking problem on your own. Now we move on to the next set of notes. Again, please don't mind that it says week eight; the numbering is merely there to organize the material. Once we have completed one set of notes for a particular week we move on to the next set, and we will continue to do this because there is enough material to cover till the end. We now move on to a slightly different set of topics. We want to start with the unknown control gain problem: what it is and how to handle it. This is what we want to do over, hopefully, the next couple of sessions. As always, we look at the scalar version of things first, and once we have done the unknown control gain problem we will be in a very good position to introduce model reference adaptive control, which is probably the most popular adaptive control framework for linear systems. That is the outline of the subsequent lectures. I am going to mark this lecture as lecture 7.5, the fifth lecture of the seventh week, so don't worry that it says week eight at the beginning; that is mainly for organization's sake. All right, so what is the unknown control gain problem? Until now the unknowns we have been looking at have not been in the control term; they have been in the drift term. In the previous week, and in the unmatched case as well, the unknown was always connected to the drift or state terms; there was never any unknown connected to the control term. That is what changes here. We again look at a scalar, first-order system just to keep our treatment simple.
Immediately afterwards we will look at the model reference adaptive control case, which is the full vector case, so we don't have to worry about missing any details or about this being inapplicable to a real system you are working on; with minor extensions it is in fact applicable to a lot of vector systems as well. All right, so let us look at this system: x dot = a f(x,t) + b u, with initial condition x(0) = x_0. Both the state and the control are real numbers, and the function f takes the state and the non-negative time and gives a scalar, f : R x R_{>=0} -> R. The important thing to remember here is that both a and b are unknown constants. There is an unknown connected to the state, which is what we have been seeing until now, but there is also an unknown connected to the control, and this is new. So the question is: can we solve the same problem? The objective, of course, is that the tracking error goes to zero, that is, e(t) -> 0 as t -> infinity, where the tracking error e is defined as the state minus the reference, e = x - r. There may be some non-uniformity in whether we attach the time arguments to the states and to r; please don't worry too much about this notational inconsistency. We have put in the time arguments wherever we think the meaning is not clear from the context. If you want to write journal and conference papers in the field you have to be more careful and very consistent with your notation, but here the purpose is instructional, and so we are keeping the notation light so that everybody can follow.
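For reference, the setup just described can be written compactly (this is only a restatement of the system and objective given above):

```latex
% Scalar plant with unknown constants: a in the drift, b in the control channel.
\dot{x} = a\, f(x,t) + b\, u, \qquad x(0) = x_0, \qquad
x(t),\, u(t) \in \mathbb{R}, \qquad
f : \mathbb{R} \times \mathbb{R}_{\ge 0} \to \mathbb{R}.
% Tracking objective for a given reference r:
e(t) := x(t) - r(t), \qquad \lim_{t \to \infty} e(t) = 0.
```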
The aim is more that you understand the material than that we be perfectly precise, exact, and consistent with the notation; the purpose of the notation is so you can follow carefully. So this is the tracking error; we always define it because it is the quantity we want to drive to zero. As usual we assume that the signal we want to track is smooth and bounded with bounded derivatives. This is the standard assumption: r is infinitely differentiable, it is bounded, and its derivatives are also bounded, so it is a very nice signal. We don't want to deal with badly behaved signals, simply because in most physical circumstances you wouldn't want your robot, or any other mechanical or electromechanical system, or even an electrical system, to follow very bad trajectories. We also make assumptions involving f. There are two of them. The first is that the function f is bounded whenever x is bounded, and this holds for all t. The second is that b is non-zero; that is, the gain connected to the control is assumed to be non-zero. This is a very reasonable assumption, because if b were zero there would be no control term at all and there would honestly be no problem to solve; we could not do anything, since the system becomes uncontrollable. So this is a very fair assumption, and so is the assumption on f.
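Stated formally, the standing assumptions are as follows (the quantifier form in the second line is my reading of the boundedness statement above):

```latex
% (A1) r is smooth and bounded with bounded derivatives of all orders.
% (A2) f is bounded whenever x is bounded, uniformly in time:
\text{for every } c > 0: \quad \sup_{|x| \le c,\; t \ge 0} |f(x,t)| < \infty .
% (A3) b \neq 0, so the control actually enters the dynamics.
```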
Notice that we are assuming f is bounded when x is bounded; we are not saying that f is bounded for all states. If x goes to infinity, the function f is also allowed to go to infinity. So it is not restricted to something as simple as a sinusoid: if I had something like sin(x t), that f would always be bounded irrespective of what x and t are, because everything sits inside a sinusoid, but we are not restricting ourselves to functions like that. We even allow something linear in x. The only thing to remember is that f should not grow unbounded with time. So something like f(x,t) = x sin(t) is also allowed. This is a fairly general set of functions, so I wouldn't worry about whether we are restricting ourselves to a very simple class. Whenever we talk about the problem setup and its assumptions, it is in most cases critical to understand whether the assumptions are very restrictive; here they are not. All right, great. So now what do we do? We again do the standard steps, whatever we are used to doing until now: we compute the error dynamics, e dot = x dot - r dot = a f(x,t) + b u - r dot, obtained just by plugging in the dynamics of x. Then, as usual, we do our design for the known case first; this has always been our method. For the known (a, b) case we want to have a very nice target system; we have always been doing this
too, choosing a nice target system. And what is a nice target system? If I want to drive e to zero asymptotically, it is e dot = -k e for some k > 0, the simplest possible linear, in fact exponentially decaying, system that I can think of, and we have already worked with a system like this. If I want to achieve it, what do I do? I introduce the -k e term in the error dynamics. Earlier we simply chose the control directly, but now we don't; bear with me, because of how we want to use the structure. So what we do is this: -k e is the nice term, so I introduce it in e dot, and then I also add +k e so that the two cancel out and I have not really changed anything on the right-hand side. With the -k e in place, I look at all the remaining terms together, because I want to cancel them as well. What do I do? I take the control gain b outside the bracket. This is a trick, and I would qualify it as a neat trick. A lot of nonlinear control design is about such neat little tricks, so one should not feel that a magic wand is being waved; these tricks are based on very good intuition. A lot of nonlinear control, in fact a lot of mathematics, is based on such neat little tricks; many proofs in mathematics are accomplished because somebody came up with a trick, so using the word trick is not derogatory. If you can actually come up with these tricks, you can be a very good nonlinear control theorist. All right, so this is a nice trick: we just add and subtract k e, and then the leftover terms are clubbed together. How do we club them? We take the control gain b outside the bracket, and then I look
at two different terms inside the bracket. There is one term where it is just a 1/b, because there is no unknown other than b in the k e - r dot terms; there is an additional unknown in the a/b multiplying f; and then there is just a u, because b was pulled out of the bracket. So the error dynamics are rewritten as e dot = -k e + b [ (a/b) f(x,t) + (1/b)(k e - r dot) + u ]. Pulling b out of the bracket is another trick, although technically you would have to do it even in the known case, because if I prescribe the control from this equation, b appears in the denominator of my control; we are essentially doing exactly that, just writing it out explicitly. We do this because we want to redefine the unknown parameters of the system, even though for now we are treating a and b as known. How do we do that? I now choose a/b as theta_1* and 1/b as theta_2*. So instead of a and b being my unknowns, I have two new unknowns, theta_1* and theta_2*, defined as such. Now remember that this is not over-parameterization: I had two parameters to begin with and I have two parameters now; I am simply doing a redefinition of the parameters. In fact, in this particular case (it may not always be so) it is very easy to see that if I exactly estimate theta_1* and theta_2*, then a and b can be uniquely computed, so it is not a big deal. All right, so we understand all this: some tricks and some redefinition of parameters. And what do we have? I choose my control in terms of these new parameters, because in the known case these are known: u = -theta_1* f(x,t) - theta_2* (k e - r dot), where the first term cancels the (a/b) f term and the second cancels the (1/b)(k e - r dot) term. Remember, I don't need
my control to introduce any new term, because I already have a nice negative term, -k e; all I need to do is cancel the remaining terms, and once I do that with the control I am left with e dot = -k e. So I achieve the target system for the known case. Note that this is exactly the same control expression you would have obtained if you had started from the original system and this target system directly; I have just rewritten the control in terms of the redefined parameters, and this is going to be very helpful subsequently. That is the whole reason for the redefinition. So this is the control, and with it I am left with only the target system. Of course I then choose a Lyapunov function; I have not written it here, but I will choose V = (1/2) e^2, and I will get V dot = -k e^2, which is negative definite. So the known case does not seem significantly different, except for some redefinition of parameters. Now we move on to the unknown parameter case. What do we do? We take the control law in equation 1.5 and replace the theta stars by the theta hats, because this is what the certainty equivalence principle has taught us and what we have been using successfully for so long; we don't want to change a successful formula. The theta hats are of course the estimates. Now what happens? I do not get the desired e dot = -k e, but I get that plus b times the tilde terms. This is very easy to see, and I hope it does not confuse you: the error dynamics are e dot = -k e + b [ theta_1* f(x,t) + theta_2* (k e - r dot) + u ], and
now if you plug in this control and keep the definitions in mind, it is very easy to see that the hats give you theta_1 tilde = theta_1* - theta_1 hat multiplying f, and theta_2 tilde = theta_2* - theta_2 hat multiplying (k e - r dot), and that's it: e dot = -k e + b [ theta_1 tilde f(x,t) + theta_2 tilde (k e - r dot) ]. Now the question is how to do the analysis. Well, we try our usual idea: we take the earlier Lyapunov candidate, (1/2) e^2, and add to it quadratic terms in the unknowns, each with an adaptation gain, V = (1/2) e^2 + 1/(2 gamma_1) theta_1 tilde^2 + 1/(2 gamma_2) theta_2 tilde^2; this is what we have been doing, and we want to continue doing something similar. Then we compute V dot very carefully. It contains e times e dot, where e dot is the expression above based on our control law, and then the derivatives of the quadratic terms, which come out to be -(1/gamma_1) theta_1 tilde theta_1 hat dot and -(1/gamma_2) theta_2 tilde theta_2 hat dot; the update laws are of course yet to be defined. Now, again, what do we do? We have the nice negative term -k e^2, but then we have the theta_1 tilde and theta_2 tilde terms, so of course we try to club the corresponding pairs together, which is what we have been doing all along: clubbing the theta tilde terms and using the theta hat dots to cancel them. In fact I will write it out, because it is not written here. The theta_1 tilde term is (theta_1 tilde / gamma_1) times (gamma_1 b e f(x,t) - theta_1 hat dot), where I take a gamma_1 in the denominator so that a gamma_1 appears inside the bracket, and similarly the theta_2 tilde term is (theta_2 tilde / gamma_2) times (gamma_2 b e (k e - r dot) - theta_2 hat dot). These are the terms I get in theta_1 tilde and theta_2 tilde. And the other
term is of course -k e^2, which is the nice term; nothing to do there. But I want to get rid of the two tilde terms, and if I do that by simply choosing theta_1 hat dot and theta_2 hat dot to cancel them, look at what happens; something rather bad, actually. I get theta_1 hat dot = gamma_1 b e f(x,t) and theta_2 hat dot = gamma_2 b e (k e - r dot). Everything looks nice except the b: this b is a problem term. Why? Because b is in fact an unknown, and theta_1 hat and theta_2 hat are being designed and defined precisely so that we can deal with this unknown b. Notice that theta_1 hat and theta_2 hat appear in the control, so these are not just for show; they actually have to be implemented. If you have a microcontroller and a robot, these update laws have to be integrated in the microcontroller, and they contain the unknowns themselves. So this is a circular problem: I am trying to identify the unknowns using theta_1 hat and theta_2 hat, but b appears in their derivatives, and that is not okay. This is not an adaptive controller that can be implemented, which is why I put a big cross. Whenever I get adaptive control solutions from, unfortunately, several naive students, the first thing I look for is whether the unknown appears in the update law or not. I don't have to look at the rest of the solution at all; my checking becomes very easy, because if you ended up with an unknown in your update law then you made a wrong design, a design that cannot be implemented. So please don't make my life easy; make it hard, and
design adaptation laws which do not contain the unknown in them. So this attempt is wrong, and the question is what went wrong. Did we design the controller wrong? Did we choose the Lyapunov candidate wrong? It seemed like we followed the same steps as before; the problem is that the steps we were using until now do not work for this situation. We have to make a modification, and the modification is not in the control law, which is still the certainty equivalence control law; we make the change in the Lyapunov function. Earlier we had chosen a Lyapunov function motivated by how we had been doing things until now, with just the quadratic terms and the adaptation gains. Now we do something different: we introduce an absolute value of b, V = (1/2) e^2 + |b|/(2 gamma_1) theta_1 tilde^2 + |b|/(2 gamma_2) theta_2 tilde^2. Everything else remains the same; just this new factor shows up. So what happens? The only difference is that, because of this absolute value of b, which is again a constant (all parameters are assumed constant, so there is no derivative with respect to it), the second and third terms of V dot, which were simply the tilde terms in equation 1.7, now carry an absolute value of b. When I again combine the corresponding terms, my update law does contain something about b, but it is not b itself, because we use the fact that b divided by |b| is simply signum(b), and b = |b| signum(b). So although b, which was absolutely not available, no longer appears, signum(b) appears here: theta_1 hat dot = gamma_1 sgn(b) e f(x,t) and theta_2 hat dot = gamma_2 sgn(b) e (k e - r dot). Everything else remains the same; instead of b appearing, which was not available, I have signum of b appearing here. This I
would say is a much more benign requirement. What is the requirement? The requirement is that I need to know the signum of b in order to implement this adaptation law. The remarkable thing is that in the entire adaptive control community there is no other known choice of Lyapunov function or control that lets us avoid knowing sgn(b) while implementing this control. The only solution available in the community is what I have shown you, and if you can come up with something that does not require the sign of b either, then you will have done some magic; as of now, that is not known to be possible. All right, so with knowledge of the sign of this control gain you can implement this adaptive controller, and you will get V dot = -k e^2, which is now not negative definite but only less than or equal to zero; it is only semi-definite, because there are two other states, theta_1 tilde and theta_2 tilde, that do not appear in it. You can therefore complete the rest of the proof with Barbalat's lemma and signal chasing, and in fact show that e(t) -> 0 as t -> infinity. The steps are standard. From V dot <= 0 you obviously have uniform stability in the sense of Lyapunov, and beyond that you can show that e is bounded, that is, e is in L-infinity. Since V is non-increasing and lower bounded, integrating V dot = -k e^2 on both sides shows that e^2 is integrable, so e is an L2 signal. Thus e is both L-infinity and L2, and e dot in L-infinity can also be easily shown; therefore, by the corollary of Barbalat's lemma, you will be able to claim that e goes to zero. Excellent. So what we have seen today is how to handle unknown gains on the
control in the adaptive control context. Until now we had been looking at unknowns connected only to the states, that is, in the drift terms, but now we have also seen how one can handle unknowns in the control vector field. This is something that occurs very naturally: if you have any actuator which is misaligned, or whose exact position or orientation you don't know, you get an unknown control gain. For example, if I have a spacecraft with a control moment gyroscope as an actuator to help it rotate about its axis, and I don't know exactly whether it is placed at the center of mass or oriented exactly along the body axis, then there will always be an unknown control gain. Or you can think of a quadrotor-type situation, where the thrust is some coefficient multiplied by the square of the angular velocity of the propeller; this coefficient may be an unknown or poorly measured quantity, and it too enters as an unknown control gain. So this is a very common problem, and we have given an adaptive control solution to control design in this context, so that we can still achieve tracking. All right, great. We will stop here, and I will see you again next time. Thank you.