So, let me quickly recap what we have covered on CLFs, control Lyapunov functions. We use the abbreviation CLF freely, so do not get confused; it just means control Lyapunov function. We started with a general nonlinear system, and this was the definition of a control Lyapunov function. The first condition requires that V be a valid Lyapunov candidate, and the second is the negativity condition. In the absence of control you had the negative definiteness condition; here all we are saying is that you can find some control u such that the derivative V̇ becomes negative at every state. That is really what a CLF is: the function is such that, at each state, it allows you to find a control for which V̇ is negative, and this is a pointwise computation. Notice that because you take the infimum only over the control, the state x is a fixed quantity; once you fix the state, you just have a function of u, and you are trying to find its infimum. In fact, you can think of this as an optimization problem, almost an unconstrained one; the only thing is that you want the optimal value to come out negative. If you run this optimization and the value turns out to be non-negative, then it is not a good V at all.
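The pointwise nature of the check can be sketched in code. This is a minimal illustration for a hypothetical scalar system ẋ = x + u with candidate V(x) = x²/2 (the system, the grid, and the helper names are my choices, not from the lecture); a crude grid search over u stands in for a real optimizer.

```python
# Hypothetical system: xdot = f(x, u) = x + u, with candidate V(x) = x^2 / 2.
# The CLF condition asks, for each fixed x != 0:  inf over u of Vdot(x, u) < 0.
def vdot(x, u):
    # dV/dx * f(x, u) = x * (x + u)
    return x * (x + u)

def inf_vdot_over_u(x, u_grid):
    # x is fixed; we search only over the control u, so this is just
    # minimizing a function of one variable (grid search as a stand-in).
    return min(vdot(x, u) for u in u_grid)

u_grid = [k / 100.0 for k in range(-1000, 1001)]  # u in [-10, 10]
for x in (-1.0, 0.5, 2.0):
    # a stabilizing control exists at each of these fixed states
    assert inf_vdot_over_u(x, u_grid) < 0.0
```

For this system u = −2x, say, already gives V̇ = −x² < 0, so the infimum is certainly negative at every nonzero x.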
So, the V is not a CLF. Of course, such a CLF is very useful in control design, as we will see, because it allows you to design a control which makes V̇ negative, and V̇ being negative definite is what you require for asymptotic stability; obviously, that is what you want. We then specialized this to the control-affine case. Why did we do that? Because the researchers who have worked on this for several years realized that with a general nonlinear system, where the control can appear in any form, not necessarily affine, it may be virtually impossible to design continuous controllers. So they specialized to systems that are linear in the control, hence called control-affine systems. This structure is the most widely used structure in nonlinear control: f0 is the drift vector field and the f_i's are the control vector fields. These are state-dependent vectors; at each value of the state they give you a velocity direction, and then you have a control scaling. So if you count these, I have f0 and then f1 through fm, that is m + 1 such vectors, potentially m + 1 velocities. Notice that m is typically less than n: the state space is n-dimensional, while the number of velocity vectors you have to play with is m. So you typically have fewer velocities than dimensions of the state space. The idea is: can I play around with these velocity vectors so that I go in the right direction?
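In symbols, the control-affine structure just described is

```latex
\dot{x} = f_0(x) + \sum_{i=1}^{m} u_i \, f_i(x),
\qquad x \in \mathbb{R}^n, \quad u \in \mathbb{R}^m, \quad m \le n,
```

where f_0 is the drift vector field, the f_i are the control vector fields, and each control component u_i simply scales its vector field.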
So, think about moving on the surface of a sphere, for example. Suppose my requirement is to stay on the surface, and I have velocity vectors in all three directions. If I specify the velocity in a bad way, I can instantaneously get thrown off the sphere, which is not acceptable. So the idea is: can I play with these vectors? If I draw a picture: on the surface of the sphere I may have one velocity vector this way, another this way, and a third pointing outward. As long as my actual velocity lies in the tangent plane to the sphere, I am more or less fine; I remain on the surface. But if I do anything in the outward, radial direction, I get thrown off the sphere. So what would my control try to do? The control is just a scaling: it scales each of these velocity directions, so in fact you are taking a linear combination of the velocity directions to produce the direction you want to go in. Honestly speaking, nobody designs controllers by thinking quite like this, it is very difficult to do, but this is the logic by which controllability of a system is defined: if you cannot reach all possible directions, you will have some issue with controllability. That is the idea. So, what we did was specialize to control-affine systems, and for that we defined the equivalent version of the control Lyapunov function.
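The "control as a linear combination of velocity directions" idea can be made concrete with a toy sketch (the point, the vector fields, and the scalings below are hypothetical numbers, chosen for illustration): at the north pole p of the unit sphere, two control vector fields f1, f2 span the tangent plane, and any scaling of them keeps the instantaneous velocity tangent, so you are not thrown off the surface.

```python
# Toy sketch: tangent-plane motion on the unit sphere at the north pole.
p  = (0.0, 0.0, 1.0)   # point on the unit sphere
f1 = (1.0, 0.0, 0.0)   # control vector field direction 1 at p
f2 = (0.0, 1.0, 0.0)   # control vector field direction 2 at p

def combine(u1, u2):
    # the control only scales and sums the available velocity directions
    return tuple(u1 * a + u2 * b for a, b in zip(f1, f2))

def radial_component(v):
    # dot product with p: a nonzero value means leaving the surface
    return sum(pi * vi for pi, vi in zip(p, v))

v = combine(0.6, -0.8)             # any scalings u1, u2 will do
assert radial_component(v) == 0.0  # tangent: stays on the sphere
```

A velocity with a nonzero third component, by contrast, has a radial part at p and would instantaneously push you off the surface.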
We already proved one side of the equivalence; the other side was supposed to be homework, which I will assign soon enough. So what does the definition say? The first condition is again exactly the same as before; the second changes a little, nothing significant: it just says that if the contributions of the control vector fields are zero, then the drift vector field has to push you in the negative direction, that is, it must make V̇ negative. If not, then in a sense the system cannot be made to behave well at all. That is the whole idea. So, we proved one side of the equivalence, and then we talked about the small control property, the final property required to design continuous control laws. What is the small control property? It formalizes, and we saw this with a very nice example, the problem that for some systems the control becomes larger and larger as you come closer to 0, and 0 is an equilibrium of the system. That is very bad, because if you try to reach the equilibrium, you end up giving larger and larger control efforts, which is ridiculous; you do not want to do that. This creates a discontinuity at the origin, and in order to prevent it, you assume that the system has the small control property. It basically says that if you start with small values of the state, that is, ||x|| < δ, then with small values of control, that is, control close to the equilibrium control, you can make V̇ negative.
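For reference, the small control property is usually stated like this (assuming, as in the lecture, that the equilibrium is at the origin with zero equilibrium control):

```latex
\forall\, \varepsilon > 0,\ \exists\, \delta > 0:\quad
0 < \|x\| < \delta
\;\Longrightarrow\;
\exists\, u,\ \|u\| < \varepsilon,\ \text{such that}\quad
\frac{\partial V}{\partial x}\Big( f_0(x) + \sum_{i=1}^{m} u_i\, f_i(x) \Big) < 0 .
```

That is, arbitrarily small controls suffice to make V̇ negative for states sufficiently close to the origin.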
So, basically it says, and this is essentially what we did not have before, that if you are close to the equilibrium, then small values of control should be enough to send you towards the equilibrium. That is the whole idea of the small control property. We also claimed that the small control property is stronger than the second condition of the CLF definition. Why is it a stronger requirement? Because if it holds, then even when all the control terms turn out to be zero, the drift term is still negative; so it implies the previous definition is satisfied. Hence the small control property is a stronger requirement than the CLF property. Once we have the small control property, this is where we were last time: Artstein and Sontag. It is primarily their work; they were the ones who started talking about the CLF, the small control property, and the corresponding control, and they gave a universal controller. One of the coolest things about this result is that, unlike a lot of other mathematical results, and this result, I tell you, is very mathematical, they actually give a constructive design of the control; we already saw this last time. The theorem says: if you have a control-affine system with a control Lyapunov function as per this definition, then the system admits the small control property if and only if it admits an almost-C∞ stabilizer. We clearly said what almost-C∞ means: smooth everywhere in a punctured neighborhood of the origin and continuous at the origin. That is, the control you obtain will be smooth, infinitely differentiable, everywhere except at the origin, where it is only continuous, not smooth.
So, this is what you can achieve in general. Remember, in specific cases, like the example we will look at, you may find controls that are smooth at the origin as well; but this is a result that covers all such control-affine systems, a very general result, so they are saying that in general you cannot claim you will always find a controller which is also smooth at the origin. What you can claim is almost-C∞; still, in the examples you will see, you might find smooth controls using this very formula. The proof of this result is based on the Artstein–Sontag universal formula, or the universal controller, whichever you wish to call it. It is defined by first defining two placeholders, a(x) and b(x). What is a(x)? It is the derivative of V along the drift vector field, a(x) = ∂V/∂x · f0(x), and b(x) stacks the derivatives of V along the control vector fields. We already discussed the dimensions last time: a(x) is a real value, in R, and b(x) is in R^m, an m-vector. Alright, great. So what is the universal formula? It looks slightly complicated, but this is the control: u(x) = −[a(x) + √(a(x)² + ||b(x)||⁴)] / ||b(x)||² · b(x) if b(x) is nonzero, and if b(x) = 0, the control itself is set to 0. You can see that b is a vector, so we are being careful: wherever a square of b appears, it is the fourth power of the norm of b, and here also we are dividing by the norm squared.
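The universal formula can be sketched directly in code. This is a minimal implementation under the notation above, applied to a hypothetical example system ẋ = x + u with V(x) = x²/2 (the example system and the helper name `sontag_control` are my choices for illustration).

```python
import math

# Artstein–Sontag universal formula.
# a = dV/dx . f0(x), a scalar; b = [dV/dx . f_i(x)] for i = 1..m, an m-vector.
def sontag_control(a, b):
    nb2 = sum(bi * bi for bi in b)   # ||b(x)||^2
    if nb2 == 0.0:
        # b(x) = 0: the CLF condition guarantees a(x) < 0, drift alone suffices
        return [0.0] * len(b)
    gain = -(a + math.sqrt(a * a + nb2 * nb2)) / nb2
    return [gain * bi for bi in b]   # the control points along b(x)

# Hypothetical example: xdot = x + u, so f0(x) = x, f1(x) = 1; with
# V(x) = x^2 / 2 we get a(x) = x^2 and b(x) = [x].
x = 2.0
a, b = x * x, [x]
u = sontag_control(a, b)
vdot = a + sum(bi * ui for bi, ui in zip(b, u))  # Vdot = a(x) + b(x)^T u
assert vdot < 0.0   # strictly negative whenever b(x) != 0
```

For this example the formula reduces to u = −(1 + √2)·x, a linear feedback that happens to be smooth at the origin as well, consistent with the remark that specific examples can do better than the general almost-C∞ guarantee.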
So, you see that this whole control is in the direction of b(x), because b is a vector in R^m. Notice the dimensions are correct: the control is also required to be in R^m, since we have m controls and m control vector fields, and b is also in R^m, as we just discussed. So dimension-wise there is no problem. What is the significance of b(x) being zero or nonzero? This is the CLF condition, because b(x) is the stacked column vector of ∂V/∂x · f_i(x) for i = 1 to m, and b(x) = 0 means ∂V/∂x · f_i(x) = 0 for all i. Under such circumstances, you know from the CLF condition that the drift vector field itself gives a negative V̇. So we apply no control: if the contributions of the controls are zero, putting any nonzero value of control is useless, because the control terms enter V̇ through u_i · ∂V/∂x · f_i(x), which vanish anyway. It is irrelevant, so we set the control to the zero vector, and otherwise we give it the particular value above. In order to verify that this is in fact a stabilizing controller, because that is what we want, we can take V, our control Lyapunov function, and compute V̇. We already know that a control Lyapunov function is a candidate Lyapunov function, so I take it as my V for the Lyapunov analysis and compute V̇.
What is V̇? It is exactly the partial of V with respect to x times the dynamics of the control-affine system, which is a(x) plus b(x)ᵀu. Is that clear? Because b(x) is this stacked vector, b(x)ᵀu is exactly those derivatives multiplied by the controls. Once I have that, all I do is substitute for the control from the formula, and again I get two cases depending on whether b is zero or nonzero. When b is nonzero, you can see what happens: I have a(x) plus b(x)ᵀ times the formula. The formula's scalar factor just moves out, and b(x)ᵀb(x) is ||b(x)||², so it cancels the norm squared in the denominator. I am left with a(x) minus [a(x) + √(a(x)² + ||b(x)||⁴)]; the a(x) terms cancel, and all that remains is V̇ = −√(a(x)² + ||b(x)||⁴), a scalar, as expected, since V is a scalar and so is V̇. The important thing to notice: this is strictly negative, because b(x) is not 0. Whatever a(x) is, it is irrelevant; just because b(x) ≠ 0, this quantity is strictly negative. And if b(x) is actually equal to 0, what happens? The control is just 0, so I am left with only a(x); but I already know by assumption that a(x) has to be negative, because V is a control Lyapunov function: a(x), defined as ∂V/∂x · f0(x), has to be negative when the control terms do not contribute, that is, when b(x) = 0. This is true only when b is zero, by the way; when b is nonzero, a can be negative, positive, whatever, but when b is zero, a has to be negative.
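The substitution just described, written out once for the b(x) ≠ 0 case:

```latex
\dot{V}
= a(x) + b(x)^{\top} u
= a(x) - b(x)^{\top}\,
  \frac{a(x) + \sqrt{a(x)^2 + \|b(x)\|^4}}{\|b(x)\|^2}\; b(x)
= a(x) - \Big( a(x) + \sqrt{a(x)^2 + \|b(x)\|^4} \Big)
= -\sqrt{a(x)^2 + \|b(x)\|^4} \;<\; 0,
```

using b(x)ᵀb(x) = ||b(x)||² to cancel the denominator; the square root is strictly positive whenever b(x) ≠ 0.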
And so, what have I shown? That V̇ is negative, and in fact this is true for all nonzero x. Remember, in the entire CLF definition, although we did not stress it much, every condition is stated for x ≠ 0. So it all works out nicely: V̇ comes out negative definite, and what does that mean by our Lyapunov theorems? Asymptotic stability. Done; the system is asymptotically stable.