So, we start focusing on control affine systems, ẋ = f_0(x) + Σ u_i f_i(x). Like you said, it is called affine because the control enters linearly but no control multiplies the drift term f_0, so it is not even linear in the sense that linear is usually defined. Because we are interested in the origin as the equilibrium, we assume that an equilibrium control exists: a ū such that f_0(0) + Σ ū_i f_i(0) = 0. Why did we do all this? Because we could not get the nice smooth feedback we were looking for: we wanted at least a C^1 feedback, but we only got one that was C^1 everywhere except at the origin, where differentiability failed. And it so turns out, you can try different controls for that particular example, that for that kind of system you will never get a feedback law u that is C^1 at the origin. So, in order to give conditions for controllers that are smooth also at the origin, and the origin is the point of interest since that is where we want to go, we had to specialize to control affine systems. That is why we are looking at them. We now state an equivalent version of the control Lyapunov function definition for control affine systems; we will prove one half of the equivalence, and the other half is an exercise. How do we redefine control Lyapunov functions? The first condition is exactly the same: it still requires the function to be a candidate Lyapunov function.
Now, the second condition is where things change for the control affine case. It basically says that if the contribution of the control terms to V̇ is zero, then the drift term has to make V̇ negative. When I compute V̇ I get ∂V/∂x f_0(x) + Σ u_i ∂V/∂x f_i(x), and the terms ∂V/∂x f_i are the ones connected to the control; they give you the movement due to the control. Now, if it so happens for some x that all these terms are zero, then the control cannot move the value of V at all. In that case we require that the drift itself moves the system towards the equilibrium: when the control terms do nothing, we need ∂V/∂x f_0(x) to be negative. How is this equivalent to the earlier definition? Like I said, I only prove one side: I will assume this condition and prove the previous one; the other way around is the exercise. So, choose a nonzero x̄ such that ∂V/∂x f_i(x̄) = 0 for all i; for this particular x̄ the assumption gives ∂V/∂x f_0(x̄) < 0. After that it is very easy, because in the first definition V̇ was ∂V/∂x f(x, u), which in this case is precisely ∂V/∂x f_0(x) + Σ u_i ∂V/∂x f_i(x). The control terms contribute nothing since they are zero, and by assumption ∂V/∂x f_0(x̄) is negative, so the whole expression is negative.
So, even though I am not actually taking any infimum over u, taking an infimum is pointless in this particular case: since ∂V/∂x f_i(x̄) = 0 for all i, the control has no effect at all, but the drift term gives me the negative outcome I need. Now the other case. There are only two possibilities: either ∂V/∂x f_i(x) = 0 for all i, or ∂V/∂x f_i(x) ≠ 0 for at least some i. So what happens if ∂V/∂x f_i is nonzero for some i? It is pretty straightforward. Just to illustrate, take a single control vector field; then the expression for V̇ at some x* expands to ∂V/∂x f_0(x) + (∂V/∂x f_1(x)) u_1. We have already covered the case where ∂V/∂x f_1(x) is zero, so now assume it is nonzero. Then I can invert it and choose the control u_1 = (−α − ∂V/∂x f_0(x)) / (∂V/∂x f_1(x)) for some α > 0: I inverted the coefficient, cancelled the drift term, and inserted a negative quantity −α. If you now compute V̇ as in the previous definition, ∂V/∂x f(x, u) turns out to be −α, which is negative. So I have actually given an expression for u. Even if you have multiple controls and multiple control vector fields, I know ∂V/∂x f_i is nonzero for at least one i; I will just make that u_i equal to this expression and keep all the other controls at zero. Then I have a negative contribution and I am done.
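To make the inversion step concrete, here is a minimal numeric sketch in Python. The choices f_0(x) = x, f_1(x) = 1, and V(x) = x²/2 are my own illustrative assumptions, not from the lecture; the point is only that inverting the nonzero term ∂V/∂x f_1 forces V̇ = −α.

```python
# Hypothetical scalar illustration (my own choice, not from the lecture):
# x_dot = f0(x) + u * f1(x) with f0(x) = x, f1(x) = 1, and V(x) = x**2 / 2.
def f0(x): return x
def f1(x): return 1.0
def dVdx(x): return x                 # gradient of V(x) = x^2 / 2

def clf_inversion_control(x, alpha=1.0):
    """Invert dV/dx * f1 (assumed nonzero here) to force V_dot = -alpha."""
    a = dVdx(x) * f0(x)               # drift contribution to V_dot
    b = dVdx(x) * f1(x)               # coefficient of u in V_dot
    return (-alpha - a) / b           # only valid where b != 0

x = 2.0
u = clf_inversion_control(x, alpha=1.0)
V_dot = dVdx(x) * (f0(x) + u * f1(x))
print(V_dot)                          # -1.0, i.e. exactly -alpha
```

Note that the division by b = ∂V/∂x f_1 is exactly the step that fails when the control terms vanish, which is why that case needed the drift assumption instead.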
So, this is how you can prove that this definition implies the earlier one for control affine systems. The other side, that the first definition implies this one for control affine systems, is the exercise you have to do; it should not be too difficult. Now, we have still not reached where we want to be: we still do not have a way of constructing nicer controls, controls that are smooth at the origin and things like that. For that we in fact need something more; we are already at control affine systems, and in addition we need what is called the small control property. Most of this work is due to Artstein and Sontag, and the references are also here; you can see the years, 1983 and 1989, so not too recent actually. They came up with this notion of the small control property, which is a strengthening of the control Lyapunov requirement for control affine systems. What does the small control property say? It basically says that if your state is close to the origin, then your control should also be small. A very reasonable requirement: it just says your system should not be so ridiculous that even very close to the equilibrium you need very big controls to bring it back. As usual, whatever we say in words we try to write in math, an epsilon-delta kind of statement: for all ε > 0 there exists δ > 0 such that for every nonzero x in the δ-ball there exists some control u, ε-close to the equilibrium control ū, for which V̇ is negative. I hope you see that this is stronger than the control Lyapunov condition. Why?
Because in this inequality it should be evident that even if all the control terms are zero, V̇ is still required to be negative, and that was exactly the control Lyapunov condition. (The second condition was the control Lyapunov condition; the first was just positive definiteness, which is there anyway.) So the small control property is stronger than the control Lyapunov condition: it implies it. We need this condition to state any result on nice controls. And like I said, it sounds like a very obvious requirement, but let us look at a very interesting example; these authors come up with very fun examples, and this one is actually a counterexample. Look at the system ẋ = x + x²u. Suppose I want to construct a stabilizing control close to the origin (forget V and so on for the moment; there is a V here, sure, but we are not talking about it). First of all, I will need the control term to contribute something negative, say −2x as one possibility. If x²u contributes −2x, then ẋ = −x and I know the system goes to the equilibrium; good to go. The only issue: if I set x²u = −2x and try to compute a control out of it, I cannot divide by x² all the time, and in any case I will still have a 1/x type of term in the control. Or look at it a different way: whatever control you apply here is scaled by x². When you are far from the origin, or even slightly far, the control effect is significantly amplified, but as soon as |x| becomes less than 1 the picture changes.
As soon as |x| becomes less than 1 and you come closer and closer to the origin, the effect of the control is significantly shrunk. So even if you try to produce a −2x, or anything that even tries to cancel the drift x with a −x, you will still have something like a 1/x in the control; the x² factor serves that very specific purpose. What does it mean? It means you will keep having to scale up your control as you get closer to the origin. I hope you are convinced. So, first: u is large for small x. Second: when x is negative you have to push in the positive direction, so ẋ has to be positive, so the control has to be positive; similarly, for positive x the control has to be negative. What have we concluded from these three points? The control is large for small x, positive on the negative side, negative on the positive side. So what happens as I come closer and closer to the origin is exactly what I have drawn here: a big positive control on one side, a big negative control on the other; get closer, bigger positive and bigger negative; even closer, very big positive and very big negative. You can see what is happening: this cannot be a continuous control at all, because as you get closer to the origin the control explodes in opposite directions. It does not even look very scary at first glance, but it is a very bad system for which you cannot design continuous controllers. So, this is sort of the example.
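A tiny script makes the blow-up visible numerically. Choosing u so that ẋ = −x forces u = −2/x, and the printed values show |u| growing without bound as x → 0; this is just a numeric sketch of the lecture's counterexample:

```python
# Counterexample from the lecture: x_dot = x + x**2 * u.
# Forcing x_dot = -x requires x**2 * u = -2*x, i.e. u = -2/x,
# which is unbounded as x approaches 0 (and undefined at x = 0).
def stabilizing_u(x):
    return -2.0 / x

for x in [1.0, 0.1, 0.01, 0.001]:
    u = stabilizing_u(x)
    x_dot = x + x**2 * u              # equals -x by construction
    print(f"x = {x:6}, u = {u:9.1f}, x_dot = {x_dot:8.4f}")
```

The same sign flip is visible too: stabilizing_u is negative for positive x and positive for negative x, so near the origin the control jumps between huge values of opposite sign.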
So, this sort of system does not satisfy the small control property: even if you are close to the equilibrium, you are not going to get small controls; it is impossible, you get very large controls, in fact unbounded controls as you get very close to the origin. Maybe that is one of the reasons why this seems like a reasonable assumption: if you are close to your equilibrium, you should require less effort; nothing very bad should be happening with the system. So it is a very reasonable control continuity assumption. Now, if you do have the small control property, then you have this very strong result called the Artstein-Sontag theorem, and it happens to be a constructive result, in fact one of the few constructive ways of coming up with a control law when you are given a control Lyapunov function. The corresponding control law is called the Artstein-Sontag universal formula. What does the theorem say? If you have a control affine system, just like we saw, and there exists a control Lyapunov function for it, then the system admits the small control property if and only if it admits an almost-C^∞ stabilizer with u(0) = ū. Why is this a very strong result? First of all, it is an if-and-only-if result, and in typical mathematics and applied mathematics if-and-only-if results are considered very strong because they are very tight: this implies that and that implies this; you cannot have one without the other. It basically says that the assumptions you made are the least required for you to have a control like this. So these are considered very good results.
The other thing is that it is constructive; we will look at this later, but it actually gives you an expression for the control. The only fine point to notice is that it says the system admits an almost-C^∞ stabilizer. You already know C^∞ means smooth, and a stabilizer just means a control that makes you asymptotically converge to the origin. So what does "almost" mean? It means that all the nice properties hold only in a punctured neighborhood of the origin; the origin itself is not included. At the origin, all you can get is continuity, and that is all this result will give you. For systems like the previous example, with no small control property, such a stabilizer does not exist, and you have already seen that it is a very tight result: no small control property, no stabilizer that is continuous at the origin. If you do not have the small control property, there is no possibility, which is sort of evident also, because the control usually flips direction at the equilibrium. Very natural, very intuitive: if you are on one side of the equilibrium you push one way, on the other side you push the other way; on the left you push right, on the right you push left. I can picture this for electromechanical systems with position and velocity as states, but the same thing can be thought of for electrical and biological systems: one side of the equilibrium, push one way; other side, push the other way. So this actually gives you a way of constructing an almost-C^∞ stabilizer, meaning it is smooth everywhere except at the origin, where it is continuous. That is what you can achieve with the Artstein-Sontag formula; I will just show you what the formula is very quickly and then we end.
So, in order to give the control, they use the elements that are given to us: the vector fields and the control Lyapunov function. You construct an a(x) = ∂V/∂x f_0(x), coming from the drift vector field, and a b(x), the vector whose components are ∂V/∂x f_i(x) for all the control vector fields. What are the dimensions here? There are n states, and V is scalar valued, so ∂V/∂x is 1×n; you can say n×1, but the typical convention is to think of partials as row vectors and write ∂V/∂x = [∂V/∂x_1, ∂V/∂x_2, …, ∂V/∂x_n]. And f_0 is just a vector field, n×1. So a(x) is a scalar, excellent. Similarly each ∂V/∂x f_i(x) is a scalar, so b(x) is an m-dimensional vector. With these, the Artstein-Sontag universal formula for the control, very cool, is u(x) = −b(x) (a(x) + √(a(x)² + ‖b(x)‖⁴)) / ‖b(x)‖² when b(x) ≠ 0, and u(x) = 0 when b(x) = 0. Many people just call it the universal formula. It is one of the few formulae that directly gives you a way of constructing a control if you have a control Lyapunov function; it will always work, and this control is C^∞ everywhere except at the origin, where it is continuous.
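As a quick sanity check on why this works (using the a(x) and b(x) just defined, and the standard form of the formula, u = −b (a + √(a² + ‖b‖⁴))/‖b‖² for b ≠ 0), plugging u into V̇ gives:

```latex
\dot V = a + b^{\top} u
       = a - \frac{b^{\top} b \left( a + \sqrt{a^{2} + \|b\|^{4}} \right)}{\|b\|^{2}}
       = a - \left( a + \sqrt{a^{2} + \|b\|^{4}} \right)
       = -\sqrt{a^{2} + \|b\|^{4}} \; < \; 0 \quad \text{for } b \neq 0 .
```

So whenever the control can act at all, the formula makes V strictly decrease, and when b = 0 the control Lyapunov condition on the drift takes over.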
So, this will always work: you can take any system, robotic, electromechanical, electrical, biological, with a model and a control Lyapunov function, and this will give you a stabilizing control, and that is something super strong; for an arbitrary system and a given V you basically have a formula. I do not think most of you know of any other nonlinear control formulas that you can just plug and play like this. It looks very ugly, for lack of another word, but it is actually quite nice: it is a well behaved controller, again because it is C^∞ everywhere and continuous at the origin; we will discuss it next time. You can see the formula uses both ‖b‖ and b itself: if you look at the expression, b is what defines the control direction. And there are two cases, b = 0 and b ≠ 0. The case b = 0 corresponds to ∂V/∂x f_i(x) = 0 for all i; that is how b is defined. If ∂V/∂x f_i = 0 for all i, there is no point in applying a control, because the control has no effect anyway; so apply zero control. When b ≠ 0, remember what we were doing earlier: if, say, ∂V/∂x f_2(x) was nonzero, we just inverted ∂V/∂x f_2 and created a controller. But that was too basic, because you do not know at which instant which term is going to be nonzero. This formula generalizes that idea: whichever component happens to be nonzero for that particular x, it will work, always.
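Here is a minimal Python sketch of the universal formula for a single-input planar system. The vector fields and the CLF V(x) = ½‖x‖² are my own illustrative choices, not from the lecture; only the formula inside `sontag_control` is the Artstein-Sontag construction.

```python
import numpy as np

# Illustrative system (my own choice): x_dot = f0(x) + u * f1(x), m = 1,
# with assumed CLF V(x) = 0.5 * (x1**2 + x2**2).
def f0(x):
    return np.array([x[1], -x[0]])       # drift vector field (a rotation)

def f1(x):
    return np.array([0.0, 1.0])          # control vector field

def dVdx(x):
    return x                             # row gradient of 0.5 * ||x||^2

def sontag_control(x):
    """Artstein-Sontag universal formula, specialized to m = 1."""
    a = dVdx(x) @ f0(x)                  # scalar a(x) = dV/dx f0(x)
    b = np.array([dVdx(x) @ f1(x)])      # m-vector b(x) of dV/dx fi(x)
    nb2 = float(b @ b)                   # ||b||^2
    if nb2 == 0.0:
        return np.zeros_like(b)          # control has no effect: apply 0
    return -b * (a + np.sqrt(a**2 + nb2**2)) / nb2

x = np.array([1.0, 1.0])
u = sontag_control(x)
V_dot = dVdx(x) @ (f0(x) + u[0] * f1(x))
print(V_dot)                             # -1.0: V decreases, as guaranteed
```

For m > 1 the same code works with b collecting all the ∂V/∂x f_i terms; the b = 0 branch is exactly the "control has no effect, apply zero" case discussed above.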
So, that is the idea. This is the universal formula, and we will stop here.