Hello everyone, welcome to another session of our NPTEL course on nonlinear and adaptive control. I am Shrikan Sakuma from Systems and Control, IIT Bombay. We are into the ninth week of this course on adaptive control, and I think all of you would agree that we have by now seen enough analysis and design methods for you to pick up real, practical problems from your own field and directly apply what we have discussed. I would strongly encourage you to do so, to report your results to me or to the community, and to develop and design new technologies with these methods; that is essentially the aim and scope of what we want to do here.

What we were doing until last time: we were looking at a particular example of the unmatched design. We did the design for that vector system using both the standard adaptive integrator backstepping, which leads to overparameterization, and the extended matching design method, which avoids overparameterization but yields a control law containing the derivative of the parameter estimate. We wanted to alleviate this issue, and so we started looking at the tuning function method for adaptive design. The first piece we saw last time was the definition of global adaptive asymptotic stabilizability. For a system of the form 1.1, we ask for the existence of a feedback law depending on the state and the parameter estimate, together with a parameter update law built from a tuning function tau and an adaptation gain Gamma. If these two guarantee that x and theta hat remain globally bounded, and in addition x(t) goes to zero as t goes to infinity, then we say the system is globally adaptively asymptotically stabilizable.

So this is where we were, and we want to continue from here. I am going to mark this lecture as lecture 9.3, because I believe the previous one was 9.2. We want to talk about adaptive control Lyapunov functions, but before we can do that, we first need to know what a control Lyapunov function is. Those of you who have taken a nonlinear control course will know this notion; for those of you who have not, I want to talk a little bit about control Lyapunov functions. I am picking up from some notes I have for my nonlinear systems course. I will of course post these, and I will cover a little of the material, not the proofs, but this should be enough for you to follow what we are going to discuss.

The first object is the class of control affine systems. What is a control affine system? It is a nonlinear dynamical system whose dynamics are linear in the control; "control affine" means linear in the control. It is usually written as x dot = f0(x) + sum of ui fi(x), where i ranges from 1 to m, so there are m control inputs. Since x is in R^n, each of f0 and the fi also maps into R^n, and they are sufficiently smooth, C-infinity functions. The field f0, which does not multiply any control, is called the drift vector field, and the fi, which multiply the controls, are called the control vector fields. Notice that there are no unknown parameters here: this is standard nonlinear control, we are not talking about adaptive control and unknown parameters yet. Now, since for systems of this form 2.3 we want to talk about stability of the zero equilibrium, we need an equilibrium assumption.
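To fix the notation, here is a minimal sketch of a control affine system in code; the specific vector fields f0, f1, f2 are illustrative placeholders I have chosen, not from the lecture:

```python
# Control-affine dynamics: x_dot = f0(x) + sum_i u_i * f_i(x), with x in R^2, m = 2.
# The vector fields below are illustrative placeholders, not from the lecture.

def f0(x):                       # drift vector field (not multiplied by any control)
    return [-x[0]**3, x[0] - x[1]]

def f1(x):                       # control vector field for u_1
    return [1.0, 0.0]

def f2(x):                       # control vector field for u_2
    return [0.0, 1.0]

def xdot(x, u):
    """x_dot = f0(x) + u[0]*f1(x) + u[1]*f2(x), computed componentwise."""
    d = f0(x)
    for ui, fi in zip(u, (f1(x), f2(x))):
        d = [dk + ui * fk for dk, fk in zip(d, fi)]
    return d

print(xdot([1.0, 2.0], [0.5, -0.5]))   # -> [-0.5, -1.5]
```

The point of the structure is simply that the control enters linearly: for fixed x, xdot is an affine function of u.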
We assume there exists a u bar such that 0 is in fact an equilibrium, since we want to talk about stability of the zero equilibrium. Then we define a control Lyapunov function for this system as a function V, taking the state to the real numbers, assumed C-infinity (smooth), such that V(0) = 0 and V is positive definite for all x in some local domain around the origin; B_R is a ball of radius R around the origin. The first line is just positive definiteness of V. This is the standard requirement for a candidate Lyapunov function as well; I hope you all remember that even a candidate Lyapunov function needs sufficient smoothness and positive definiteness.

The second piece is something slightly different. It says: if the partial of V with respect to x times fi(x) is 0 for some x and for all i, then the partial of V with respect to x times f0(x) must be strictly negative. What am I saying? If I take the derivative of V along this system, I get del V del x times (f0 + sum of ui fi). The control terms contribute del V del x fi ui. If all of these control terms are zero for a particular value of the state, then I insist that the drift term del V del x f0 be strictly negative there. So whenever x is such that the control terms vanish, the drift term must be strictly negative. This is what it means to be a control Lyapunov function.

This is also equivalent to definition 2.5, the original definition of a control Lyapunov function. Let us not worry about that original form; the equivalent version is that the infimum over u of del V del x times (f0 + sum of ui fi) should be less than 0 whenever x is nonzero. I am not proving the equivalence here; the proof is in the notes I will post, and if you are interested you can look at it, but it is not required for the scope of our discussion. So the two statements are equivalent. Either you say that the infimum over the control of V dot, since del V del x times x dot is V dot, is strictly negative for all nonzero x, or you say, equivalently, that if the control terms contribute nothing, being exactly zero for a particular x, then the drift term must be negative at that x.

What is the difference from a standard Lyapunov function? For a standard Lyapunov function there was no mention of the control; the control never appeared in V dot. We always did Lyapunov analysis for systems of the form x dot = f(x), with no control. Here there is a control, so we have to account for what the control does to the system; hence the infimum in one version, and in the other version the condition that when del V del x times the fi is zero, del V del x times f0 must be strictly negative. Essentially the second condition is a definiteness condition on V dot, the same as for Lyapunov functions, but accounting for the contribution of the control, and that is what makes it a control Lyapunov function.
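Written out in symbols, following the notation above with V smooth, V(0) = 0, and V positive definite on B_R, the two equivalent conditions are:

```latex
% Condition in the form just discussed (definition 2.7):
\forall x \in B_R \setminus \{0\}:\quad
\frac{\partial V}{\partial x}(x)\, f_i(x) = 0 \ \ \text{for all } i = 1,\dots,m
\;\;\Longrightarrow\;\;
\frac{\partial V}{\partial x}(x)\, f_0(x) < 0.

% Equivalent infimum form (definition 2.5):
\inf_{u \in \mathbb{R}^m}\;
\frac{\partial V}{\partial x}(x)
\Big( f_0(x) + \sum_{i=1}^{m} u_i\, f_i(x) \Big) < 0
\quad \text{for all } x \in B_R \setminus \{0\}.
```

The quantity inside the infimum is exactly V dot along the closed-loop system, which is why the second form reads as a definiteness condition on V dot.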
Now, why do we care about control Lyapunov functions? There are very nice results by Sontag and Sussmann, and of course by Artstein, which say that it is possible to construct an almost smooth feedback if there exists a control Lyapunov function. So existence of a control Lyapunov function is equivalent to being able to construct a stabilizing feedback, and this is a very powerful result. But before we can state it, we also have to talk about the small control property. What is the small control property? It is stated in epsilon-delta form: for every epsilon > 0 there exists a delta > 0 such that for every nonzero x with norm smaller than delta, there exists a control within epsilon of the equilibrium control for which V dot is strictly negative.

What is the interesting thing here? We know that at the equilibrium the control is u bar. What we are saying is that close to the equilibrium there exists a control u, close to the equilibrium value of the control, which makes V dot negative definite. That is what it means to have the small control property: if I am starting close to my equilibrium, then with a small control I can make V dot negative, and V dot negative means I am moving toward the equilibrium. That is the philosophical idea of Lyapunov analysis: V dot negative definite means asymptotic convergence and asymptotic stability. So we are saying that with a control close to the equilibrium value, I can push states starting near the equilibrium, near the origin in this case, to the origin.

To see that small control is not always possible for stabilization, look at the system x dot = x + x^2 u. If you want to stabilize the origin, you have to get something like a minus x out of the control term to cancel the drift. But if x is small, then x^2 is really tiny, so u has to be very large. Look at the signs: if x is negative, u has to be positive, because it has to counter the drift; similarly, if x is positive, u has to be negative. So the sign of u is opposite to that of x near the origin. But as you go closer and closer to the origin, u has to be larger and larger: very close to the origin, u is almost minus infinity on one side and plus infinity on the other. So this is not at all a small control, and in fact it creates a discontinuous control: even starting very close to the origin, the control needed to push the states toward the origin blows up to almost plus or minus infinity. This is exactly the problem you want to avoid, and therefore we single this out as the small control property.
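As a quick numerical check of this failure, here is a minimal sketch. The target dynamics x_dot = -x, and hence the formula u = -2/x obtained from x + x^2 u = -x, are one illustrative choice of stabilizer, not the only one:

```python
# For x_dot = x + x**2 * u, demanding the target dynamics x_dot = -x means
# x + x**2 * u = -x, i.e. u = -2/x (valid only for x != 0).  Near the origin
# this blows up, which is exactly the failure of the small control property.

def required_u(x):
    """Control achieving x_dot = -x for the system x_dot = x + x**2 * u."""
    return -2.0 / x

for x in (1.0, 0.1, -0.1, 0.01, -0.01):
    print(f"x = {x:6.2f}  ->  u = {required_u(x):10.1f}")
```

The printout shows the sign of u opposite to that of x, with the magnitude growing without bound as x approaches 0: the control is forced to be discontinuous at the origin.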
I really would strongly suggest that you think carefully about this example, and about why the control becomes discontinuous when you do not have the small control property. The proposition that leads to what is called the Artstein-Sontag universal formula, which we look at soon, essentially says: if there exists a control Lyapunov function, then the system satisfies the small control property if and only if it admits an almost C-infinity stabilizer. An almost C-infinity stabilizer means a control u which is smooth everywhere except at the origin, where it is still continuous. That is still very nice. The proof of this proposition, the very famous Artstein-Sontag theorem, was done using an explicit control construction for the control affine system 2.3, and that construction is what is shown here. It relies, of course, on the control Lyapunov function V. We define a(x) = del V del x times f0(x), and b(x) as the vector containing the entries del V del x times fi(x). Whenever b(x) is nonzero, the control is defined, as on the slide, by u(x) = -[a(x) + sqrt(a(x)^2 + |b(x)|^4)] b(x) / |b(x)|^2, and when b(x) = 0 the control is simply defined to be zero. Note that b(x) = 0 means every entry of b is zero, that is, del V del x fi(x) = 0 for all i, which is essentially the situation in the control Lyapunov condition; b(x) nonzero means at least one entry is nonzero, so the division by the norm of b(x) is well defined. But when b(x) is zero the norm vanishes, the formula is not defined, and so the control is set to zero.

Artstein and Sontag actually show that this is a smooth controller away from the origin; there is a proof which I am not going to go through. But you can compute V dot in the standard way: V dot = del V del x times x dot for this control affine system, and if you plug in the control u, the vector of u1 through um, you get V dot = a(x) + b(x) transpose u. For this controller, that evaluates to -sqrt(a(x)^2 + |b(x)|^4), which is negative when b(x) is nonzero, and to a(x) when b(x) is zero, which is negative by existence of the CLF. So in both cases, whether b(x) is zero or nonzero, V dot is negative; therefore V dot is negative definite and you have asymptotic stability by the standard Lyapunov theorems.

So what happened is that the existence of a control Lyapunov function actually gives you a control design. In fact, if you remember, our control designs were also based on first designing a Lyapunov function, so indirectly we have in fact been designing control Lyapunov functions, although we did not call them that and we did not use the Artstein-Sontag formula; we were using the V to design a controller, and therefore designing a control Lyapunov function. I know this formula looks complicated, so let us look at a simple example: the system x dot = -x^3 + u. We can do the design in multiple ways. First, I can directly prescribe a controller: I cancel the -x^3 term and introduce a minus x, so that x dot = -x. What happens? Because of this structure, u is large when x is large: even for x = 10, u = x^3 - x takes the value 990. The second kind of design uses a V which is possibly a control Lyapunov function.
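Here is a minimal sketch of the universal formula as code; the function name sontag and the test values a = -1, b = (1, 0) are my own choices, and a(x), b(x) are assumed to be supplied from a given CLF:

```python
import math

# Artstein-Sontag universal formula for a control-affine system, given
# a(x) = dV/dx . f0(x) and the vector b(x) with entries dV/dx . f_i(x).

def sontag(a, b):
    """u_i = -(a + sqrt(a^2 + |b|^4)) * b_i / |b|^2 when b != 0, else u = 0."""
    b2 = sum(bi * bi for bi in b)        # |b(x)|^2
    if b2 == 0.0:
        return [0.0] * len(b)            # b = 0: the CLF condition gives a < 0
    k = (a + math.sqrt(a * a + b2 * b2)) / b2   # note b2*b2 = |b|^4
    return [-k * bi for bi in b]

# Along the closed loop, V_dot = a + b.u = -sqrt(a^2 + |b|^4) < 0 when b != 0.
a, b = -1.0, [1.0, 0.0]
u = sontag(a, b)
vdot = a + sum(bi * ui for bi, ui in zip(b, u))
print(u, vdot)   # here vdot == -sqrt(2)
```

The last two lines check the key cancellation from the lecture: plugging the formula into V dot = a + b transpose u collapses it to -sqrt(a^2 + |b|^4).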
Before I proceed with this example, notice carefully what the theorem says. It says that, with a control Lyapunov function, the small control property is equivalent to the existence of a stabilizer. So there are two ways to use a control Lyapunov function. One: I have a control Lyapunov function, and from it I use the Artstein-Sontag formula to get a controller. Two: I have a function V, and using that V I come up with a stabilizing controller u; if I can do that, then it means the V I chose was a control Lyapunov function. This is the important message to remember: if with a V I can design a stabilizing control u, then V is a CLF, by virtue of the if-and-only-if relationship. And this is what we have been doing: we have been picking a V and coming up with a stabilizing u, so all the V's we chose until now were control Lyapunov functions, without our explicitly saying so. The other direction is, of course, that I know V is a control Lyapunov function and I use an Artstein-Sontag type formula.

So let us look at this example in more detail and connect it to what we have been doing. The first design was a blindly chosen controller, picked because it gives the target system x dot = -x. Excellent; now let us do the second thing. Choose V = x^2/2 and compute V dot = x x dot = x(-x^3 + u). The first term is -x^4, which is a good term; I do not need to do anything about it. So I choose u = -x and I am done: I get a negative definite V dot. Great, so I know this V is a CLF, because with it I could choose a stabilizing control u. What is the value of this control u = -x at x = 10? It is -10. No problem, very good.

Now look again at V = x^2/2. I already know it is a CLF because of the previous case: it is positive definite, and there exists a stabilizing controller computed from it, so by the if-and-only-if relationship V is a CLF. Now what do I do? I apply the Artstein-Sontag formula and I get a more complicated expression for the control. What is the cool thing about this controller? When x is large, the control is almost zero: where the cancellation design gave u = 990 at x = 10, and the second design gave -10, this one comes out almost zero. And when x is small, the control is of the order of -x, the same as the second design. So the Artstein-Sontag formula, though the resulting controller looks complicated compared to what we chose intuitively, has better properties: it gives a small control value when the state is large, and when the states are small it acts almost like -x. This is an excellent controller. Typically when you do a control design, your control value is very large for large states; here it is actually very small for large values of the state, and when the state is close to the equilibrium the control is of the order of -x, which is also relatively small. So this is the idea of a control Lyapunov function.
I know this is, unfortunately, a very minimal lecture on control Lyapunov functions; as you can see, the notes I started from cover rather more. I only talked about definition 2.7, then the small control property, then the Artstein-Sontag formula; I did not show how to prove that the controller is smooth. But this is sufficient for our understanding. What do we need to remember? There are two aspects to a control Lyapunov function. One: it is a function such that if I take the infimum over all possible controls of V dot, the result is negative definite; that is what it takes for a function to be a CLF. Two: there is the small control property, which is critical for continuity of the control near the equilibrium, the origin in this case. Being able to choose a V and compute a stabilizing control from it means that this V was a control Lyapunov function; that is the converse implication. The forward implication is: if V is a control Lyapunov function, and you know it, then you can use the Artstein-Sontag formula to devise an almost smooth stabilizing control. The converse is what we have been doing all along: although we did not call it a control Lyapunov function, we have been taking a V and designing stabilizing controllers from it, and that implies the V we chose to begin with was a control Lyapunov function. So instead of that one particular stabilizing controller, for the same V = x^2/2 I might intuitively have chosen u = -x, but I need not have: once I can compute some stabilizing u, and thereby know that V is a CLF, I can go back and use the Artstein-Sontag universal controller instead. For the same V, I can get multiple control designs with different properties; in this case u = -x is larger in magnitude for large states, while what I got from the Artstein-Sontag formula is very small in magnitude for large states, which is rather nice.

Excellent. So what did we look at today? We started talking about control Lyapunov functions for nonlinear systems that are not uncertain, that have no unknown parameters, and we compared them with the Lyapunov function itself. Essentially, a control Lyapunov function includes the role of the control in the Lyapunov function and its derivative. The ideas of positive definiteness of V and negative definiteness of V dot still appear; it is just that the role of the control gets explicitly mentioned in the case of a control Lyapunov function. The cool thing about control Lyapunov functions, and why they are powerful, is that using the Artstein-Sontag universal formula you can actually design a controller corresponding to a control Lyapunov function. So a control Lyapunov function comes with a controller, and it is an almost smooth controller: smooth everywhere except at the origin, where it is at least continuous. So you have a rather nice result. The theory of control Lyapunov functions is a little bit involved, and of course if you read the Artstein and Sontag papers they are also somewhat involved and complicated, but the crux is this, and as I mentioned, we have already been using the idea of CLFs to design controllers; we have just not been calling them CLFs. All right, excellent. In the upcoming session we will start discussing adaptive control Lyapunov functions for uncertain nonlinear systems, and I hope you can join me there. Thanks.