So, anyway, I will repeat it for the benefit of the audience: the small control property and the continuity of the control at the origin are connected by an if-and-only-if relationship. That means if the control is not continuous at the origin, you do not have the small control property, and if you do not have the small control property, the control is not continuous at the origin. It works both ways; that is the power of an if-and-only-if result. So the key thing is you now have a formula for a universal controller that works. It may look like a funny formula, but it gives you a stabilizing control that is smooth everywhere except at the origin, where it is still continuous. Still pretty good, I would say. Now, before going to the rest of the proof, which is a bit more mathematical and really just establishes that the controller we prescribed is smooth, I want to look at this example and at the fact that multiple controllers are possible, and so on. A very simple scalar example: ẋ = −x³ + u. This is the system we are looking at. One obvious control, without doing any analysis, is to simply prescribe u = x³ − x. Then you can see that the closed-loop system is ẋ = −x. This is basically what you would call a feedback linearizing control. We have not covered feedback linearization, but this control is, in a sense, linearizing the system. When you study feedback linearization, this is the kind of control you will design: you cancel, or try your best to cancel, the nonlinearity and then introduce a nice linear term. In this case you get a linear, exponentially decaying closed loop.
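As a quick numerical sanity check (a sketch; the step size, horizon, and function names are my choices, not from the lecture), the closed loop under u = x³ − x should match pure exponential decay:

```python
import math

def f(x, u):
    """Plant from the example: x' = -x**3 + u."""
    return -x**3 + u

def u_fl(x):
    """Feedback linearizing control: u = x**3 - x cancels the drift."""
    return x**3 - x

# Forward-Euler simulation from x(0) = 2; the closed loop is x' = -x,
# so after 5 seconds x should be close to 2*exp(-5).
dt = 1e-3
x = 2.0
for _ in range(5000):      # 5000 steps of 1 ms = 5 s
    x += dt * f(x, u_fl(x))

print(x, 2 * math.exp(-5))
```

The simulated state tracks the exact exponential solution up to the Euler discretization error.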
One thing that is obvious: when x is large the control is large, because of the x³ term, which dominates; when x is small the control is small. That seems intuitive: far from the origin, large control; close to the origin, small control. Now let us do what is typically called Lyapunov redesign. We do not design a control first, like we did here with the feedback-linearization-based control; instead we first choose a Lyapunov function for the system. In fact what we choose is a control Lyapunov function, but let us just call it a candidate Lyapunov function: V = x²/2, which is continuous, positive definite, and all the nice things. So this is a good function. I take V̇ = x ẋ = x(−x³ + u) = −x⁴ + x u; V̇ obviously contains the control, because I have not chosen it yet. Now, without going into any CLF theory or the universal formula, I just look at this expression to design a control. What will I do? I know that the −x⁴ term is not giving me anything bad; it is already negative, so I do not try to cancel it; I just introduce another negative term. So my control is simply u = −x; in fact it is a linear controller. Then V̇ = −x⁴ − x², which is again negative definite, so by the Lyapunov theorem I have asymptotic stability. Done. What is the nature of this control? Again, large when x is large, small when x is small. However, the important thing is that it is never going to be as large as the feedback linearizing control.
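The redesign step can be checked numerically; a minimal sketch (the function names and test points are mine, not from the lecture):

```python
import math

def vdot(x, u):
    """With V = x**2/2 and x' = -x**3 + u:  V' = x*x' = x*(-x**3 + u)."""
    return x * (-x**3 + u)

def u_redesign(x):
    """Lyapunov-redesign choice: keep the helpful -x**4 term, add -x**2 via u = -x."""
    return -x

# V' should come out to -x**4 - x**2, negative for every nonzero x.
for x in [-10.0, -1.0, -0.01, 0.01, 1.0, 10.0]:
    vd = vdot(x, u_redesign(x))
    assert math.isclose(vd, -x**4 - x**2) and vd < 0
```

Negative definiteness of V̇ on this grid is exactly what the Lyapunov argument uses.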
In fact, I have made a small computation for our reference: if I take x = 10, the feedback linearizing control comes out to magnitude 990, while the linear one comes out to −10. So one is +990 and the other is −10 when x is 10 units, whatever that unit is. Now let us look at the universal formula. We know that V = x²/2 is already a CLF. Why? Because if the control term is 0, i.e., if the control term goes away, I still have a negative term; that is all a CLF means: when the control terms vanish or are 0, the drift term is still negative, which it is here. What is the drift vector field in this case, by the way? If I ask you, what is the drift vector field and what is the control vector field (everything is scalar here, but still), you should be able to parse this: what are f₀ and f₁ if you want to apply these results? For this system there is of course just f₀ and f₁ and nothing more, because there is only one control; the number of control vector fields is the same as the number of controls. So f₀(x) = −x³ and f₁(x) = 1. Great. So V is a CLF because (∂V/∂x) f₀ is negative when the other part is gone. All right, I will start the universal formula computation.
So I first compute a(x), which is (∂V/∂x) f₀(x). What is ∂V/∂x? It is just x, and f₀(x) is −x³, so a(x) = −x⁴. What is b(x)? It is ∂V/∂x, which is x again, times f₁(x), which is just 1, so b(x) = x. Although I already checked that V is a CLF, it should be evident that the only way b(x) can be 0 is if x = 0, and remember that we have to check the CLF conditions only for nonzero x. Therefore condition 2 of the CLF definition is trivially satisfied: you can never get into a situation where the control vector fields do not contribute and x is nonzero. If the control vector fields do not contribute, then x = 0; that is the only possibility in this case. So the condition is trivially satisfied and we do not have to check anything; in this case you also see that the −x⁴ term is there, which is a nice helping term. So I use the universal controller. I do not have to write the b(x) = 0 case, because that means x is already at the equilibrium, so this case is irrelevant for us; if b(x) = 0 then x = 0, which means I am at the equilibrium, and obviously I am not applying any control there; it would be pointless. So I just compute the formula, populating the terms: u = −(a + √(a² + ‖b‖⁴)) b/‖b‖², where b here is a scalar, so the norm of b is just its absolute value.
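The computation can be sketched in code; this is a minimal check (function names and test points are mine) that the universal formula for this system simplifies to x³ − x√(x⁴ + 1), together with the x = 10 comparison of the three controllers:

```python
import math

def a(x):
    """a(x) = (dV/dx)*f0(x) = x*(-x**3) = -x**4, with V = x**2/2."""
    return -x**4

def b(x):
    """b(x) = (dV/dx)*f1(x) = x*1 = x."""
    return x

def u_sontag(x):
    """Sontag's universal formula: u = -(a + sqrt(a**2 + |b|**4)) * b / |b|**2."""
    if b(x) == 0.0:                 # b(x) = 0 only at x = 0, the equilibrium
        return 0.0
    return -(a(x) + math.sqrt(a(x)**2 + b(x)**4)) * b(x) / b(x)**2

def u_feedback_lin(x):
    """Feedback linearizing control: cancels the drift, leaves x' = -x."""
    return x**3 - x

def u_linear(x):
    """Lyapunov-redesign control."""
    return -x

# The universal formula should simplify to x**3 - x*sqrt(x**4 + 1).
for x in [-2.0, -0.5, 0.3, 1.0, 10.0]:
    assert math.isclose(u_sontag(x), x**3 - x*math.sqrt(x**4 + 1))

# Magnitudes at x = 10: 990 vs 10 vs almost zero.
print(u_feedback_lin(10.0), u_linear(10.0), u_sontag(10.0))
```

The printout reproduces the comparison made in the lecture: the universal controller's output at x = 10 is tiny next to the other two.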
So basically you can simplify it, and it comes out to u = x³ − x√(x⁴ + 1). That is the control. Notice that the expression for this control is significantly more complicated than both the other ones: one is the linear control −x, the other is x³ − x, simple expressions; this one is way more complicated. But it is a very nice controller, better than the other ones. Why? If x is large, the 1 under the square root can be ignored; it plays a very small role. Then the second term is approximately x³, and the control is almost 0: for large values of the state you apply almost no control. For small values of x, the cubic term and the x⁴ under the root are gone, and you are left with −x. So for small values of x it behaves like the linear controller, and for large values of x it applies almost no control. In fact you can compute it: if you take x = 10, the control is 1000 − 10√(10⁴ + 1), which is almost 0. It is evident anyway, because if I ignore the 1, both terms are x³. This is a very cool controller, because it applies very small control for large values of the state. Now, whenever I say something like this, it is your job to tell me: nothing comes for free. Remember the other two controllers: the feedback linearizing one gives you ẋ = −x, the Lyapunov redesign one gives ẋ = −x³ − x; both converge at an exponential or super-exponential rate, because ẋ = −x is already decaying like e^(−t), exponential decay, and ẋ = −x³ − x is even faster than that. So both of these converge rather fast. The universal controller comes with no such rate guarantee; it may converge slowly. So nothing is free.

The controllers we design this way are actually not useless; it is not like we did a shoddy job. In fact, more often than not we use this method of control design, guessing the control directly from the V̇ expression, rather than going to the universal formula. But the universal formula gives a nice control if you want to keep your control commands in check and you do not care how fast you converge. There are a lot of applications where, when the states are very large, you do not want to apply very large control. One of the most common applications is spacecraft detumbling. You have the launch vehicle; its stages get released; it leaves the earth's atmosphere; it gets close to the orbit; then the spacecraft is released from the top. When it is released it goes into orbit, but it is rotating at a crazy rate, very large rates of rotation. Remember that in this case the states are angular position and angular rate: angular position is between 0 and 360 degrees, it cannot be more than that, but the angular rates are very large, and they are also states of the system. So the states are very large. Now, for the detumbling maneuver, if you start firing your engines like crazy to stop the tumbling, you lose all your fuel in the first 10 minutes of your mission; then what will you do after that? So usually folks do not worry: the equipment is protected well enough that it will not get damaged even at large angular rates. This is important: if your equipment is going to get damaged, you had better detumble soon; but if your equipment is well secured and you have done a good enough design, rotating fast does not create a problem, and all you want is to make sure the spacecraft stops after, say, 2 days. So detumbling maneuvers are really done with very small intensity jets, or in fact a lot of times they try not to use jets at all: you will find a lot of results on using magnetic torquers for detumbling. In low earth orbit there is a small magnetic field at the satellite, and outside the atmosphere there is no real atmosphere to stop the rotation, so even these small magnetic torques are enough to slow the satellite down. So mostly your detumbling maneuvers will be done with magnetic torquers, very small torques. These are the kind of controllers you want in such a scenario: you do not care when you stop, you just want to stop. But of course, like I said, if you have a time-critical application, then you can see the trade-off: if you want to be fast, you had better have enough fuel or enough actuation ability, because you are burning control effort at the feedback linearizing controller's rate, which is huge compared to the universal controller's, which is almost zero. So there are many applications where you do not care about applying huge torques just to stop a vehicle; it stops when it stops. The only thing is, in orbit you still have to apply something, because nothing else will stop the rotation, and only then can you do a mission: if you want earth-pointing satellites, how will they point at the earth if they never stop rotating? So that is what is important. So I hope you understand that although the universal formula gives funny-looking controls, it is a very useful control design. Secondly, more often than not we do not use the universal formula; we directly guess the controller from this kind of Lyapunov redesign: we start with the CLF ideas, but then we design the control just by guessing at this stage. Now, one of the other things that you folks should also see is that I can actually keep the control at zero in this case; then the
system is still stable. This system is already asymptotically stable without any control; the purpose of designing a controller for such a system is to get a particular rate of convergence, to go at a particular speed. All right, so there is a nice exercise for you: find a control Lyapunov function and apply the universal formula to get a control. That is the exercise. Any questions? Okay. Absolutely, yes: everything is dependent on the V. Except for feedback linearization, there is no real method that gives you a control design without a V. Well, you have things like model predictive control, where you sort of obtain a control out of optimization, but in those cases the stabilization guarantees are not rigorous in the same sense; there are guarantees, but they are very conditional. How would a typical predictive controller work? It would basically say: I will discretize this problem and look at, say, a 20-time-step horizon. So now I have a discrete problem, meaning that if my state space is, say, 5th order, 5 states, then I can write the whole problem over 5 states and 20 time steps, stacking everything into vectors of dimension 100, and pose an optimization problem. What would the optimization objective be? Something like low fuel consumption, so one piece of the cost is uᵀu, where u is the stacked control over every time step: time step 1, time step 2, all the way to time step 20. The other piece could be that you want the state to shrink, so you put another cost xᵀx, where x is again the stacked state: 5 states over 20 time steps, so 100 entries. Now you have an optimization problem; you solve it, and if the solution is good, you get a control sequence for 20 time steps which, if applied, will hopefully reduce xᵀx, because that is what you posed as the optimization objective. Then you apply only the first two or three steps of that control and redo the computation for the next horizon; that is what receding horizon means: you shift the horizons, compute for this horizon, apply control only until this point, then compute for the next horizon, apply control there, and so on. You do not apply all 20 steps of control; you apply only the first two or so. There are proofs which show that you will converge to the origin, but those proofs are highly dependent on a lot of things; they actually depend on the existence of a stabilizing controller, which is a bit odd. These methods are more optimization-oriented; such a setup is way more useful if you want to put constraints on the states and things like that. But if you are looking for stabilization-type or tracking-type results, you have to rely on some V. And again, it is a nonlinear optimization problem, so you do not know what comes out of it; you put it into a solver and you do not know. Just like the learning algorithms a lot of you will use these days: it is as if I have a hammer and I keep hitting all my nails with it; you do not know whether you learnt well enough and whether it will perform well beyond the particular set of data. That is a bit of an issue. But anyway, that is a different
context. In this context we are looking at stabilizing controllers or tracking controllers, and it is hard to do that without a V; very difficult. Absolutely, yes. And yes, the small control property sort of means that close to the equilibrium you will have small control; it does not necessarily say what happens away from the equilibrium. Close to the equilibrium these controllers guarantee that your control magnitude will be small, provided the small control property holds; away from the equilibrium there are no such guarantees. All right. So what I want to do now is look at the proof a little bit. I am not sure how many of you will follow, but that is okay; it is a quick proof of why the controller is smooth. The way we do this is by invoking the implicit function theorem. If you do not know the implicit function theorem, I will just tell you in words, and you can look it up later. It basically says the following: suppose you have a smooth function F of two variables, or really two groups of variables, F(x, y), such that F(x̄, ȳ) = 0, and the partial derivative of F with respect to the y variable, ∂F/∂y, has maximal rank at (x̄, ȳ). I say maximal rank rather than full rank because ∂F/∂y may be a non-square matrix: F is not necessarily a scalar function; in general it is a vector-valued function of the two variables x and y, which need not have the same dimensions (in the square case, maximal rank just means invertible). Then, in a neighborhood of (x̄, ȳ), y can be written smoothly as a function of x: y = g(x) with g smooth. That is why it is called the implicit function theorem: the relationship between x and y is given only implicitly, by F(x, y) = 0, but the theorem says it is possible to write it explicitly, and smoothly. A lot of times that is not possible: if I give you an implicit relation between variables, you may not be able to write an explicit expression. Here it says it is possible, provided the partial of F with respect to y has maximal rank. And of course we started with F smooth; otherwise nothing is possible. All right.
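To make the statement concrete, here is a small numerical illustration (my own example, not from the lecture): for F(x, y) = y³ + y − x, the partial ∂F/∂y = 3y² + 1 is never zero, so the implicit function theorem gives a smooth y = g(x) with F(x, g(x)) = 0; we can recover g numerically and check the implicit-derivative formula g′(x) = −F_x/F_y against a finite difference.

```python
import math

def F(x, y):
    """F(x, y) = y**3 + y - x;  dF/dy = 3*y**2 + 1 > 0, so the IFT applies everywhere."""
    return y**3 + y - x

def g(x, tol=1e-12):
    """Solve F(x, y) = 0 for y by bisection (F is strictly increasing in y)."""
    lo, hi = -abs(x) - 1.0, abs(x) + 1.0   # F(x, lo) < 0 < F(x, hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(x, mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x0 = 2.0
y0 = g(x0)                       # the implicit solution near x0
assert abs(F(x0, y0)) < 1e-9

# IFT derivative: g'(x) = -F_x / F_y = 1 / (3*y**2 + 1); compare to a finite difference.
ift_slope = 1.0 / (3*y0**2 + 1)
fd_slope = (g(x0 + 1e-6) - g(x0 - 1e-6)) / 2e-6
assert abs(ift_slope - fd_slope) < 1e-5
```

Here the explicit g exists globally because F_y is invertible everywhere; in general the theorem only gives g on a neighborhood of (x̄, ȳ).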