Hello, welcome to lecture number 3 of the final week of this NPTEL course. So, we were talking about finite time stability, and we already looked at the conditions required to talk about finite time stable systems, considering that some non-Lipschitz property is needed to get finite time convergence. We have been looking at autonomous systems whose solutions are unique in forward time, and therefore the requirement is that the right-hand side be locally Lipschitz everywhere except at the origin. We then defined finite time stability, which involves a settling time function and a neighborhood on which this settling time function works. The settling time function is a function of the initial condition, and it tells us beyond what time the system trajectories reach 0 and stay at 0. We also had a proposition which says that beyond the settling time the trajectories remain at 0 for all time. Now, we obviously want a Lyapunov-like characterization, because that is what we have been doing in this entire course: we have been looking for some kind of Lyapunov characterization for almost all of our results, and that is what we want here as well. So, we stated the Lyapunov finite time theorem, which requires the existence of a continuously differentiable V, as before, that is positive definite; these two conditions make it a candidate Lyapunov function in the usual sense. Then we would like V dot to be negative definite, again on the domain with the origin removed, D minus the origin. Everything we state here is with the origin removed, because things are not uniquely defined at the origin.
The solution exists but is not unique at the origin, and therefore we remove the origin from the discussion. Further, we require not just negative definiteness but something much stronger: the condition V dot + k V^alpha <= 0 for some alpha between 0 and 1, and this stronger condition is what gives us finite time convergence instead of merely asymptotic convergence. Given these three conditions we have finite time stability, and on top of that the theorem itself gives an expression for the settling time — in fact an upper bound on the settling time, if you know the initial value V(x0). We actually saw how to compute it; it is pretty straightforward. You just integrate the differential inequality V dot <= -k V^alpha between the initial and final values of V and the initial and final times. Integrating from V(x0) down to 0 and collecting the time on the left-hand side, and noting that V is lower bounded by 0 — so if the right-hand side goes to 0 and V is nonnegative, V must be exactly 0 — you get a nice expression for the time, which is in fact the settling time bound: T(x0) <= V(x0)^(1-alpha) / (k (1-alpha)). This is used as an upper bound because of the typical conservativeness of Lyapunov analysis; there is already a less-than-or-equal in the inequality, so it is possible that you reach 0 faster than this time. But it is definitely an upper bound, and, as is typical for finite time stability, this time depends on the initial condition, because V0 is nothing but V(x0).
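The integration step above is easy to check numerically. The following is a minimal sketch, not part of the lecture; the particular values of V0, k, and alpha are illustrative choices only:

```python
# Minimal numeric check of the settling-time bound discussed above.
# From dV/dt <= -k * V**alpha with 0 < alpha < 1, integrating from
# V(0) = V0 down to V = 0 gives  T <= V0**(1 - alpha) / (k * (1 - alpha)).

def settling_time_bound(V0, k, alpha):
    """Upper bound on the settling time for dV/dt <= -k V^alpha."""
    assert k > 0.0 and 0.0 < alpha < 1.0 and V0 >= 0.0
    return V0 ** (1.0 - alpha) / (k * (1.0 - alpha))

def simulate_settling(V0, k, alpha, dt=1e-5):
    """Forward-Euler integration of the worst case dV/dt = -k V^alpha."""
    V, t = V0, 0.0
    while V > 0.0:
        V = max(V - dt * k * V ** alpha, 0.0)  # clamp: V can never go below 0
        t += dt
    return t

T_bound = settling_time_bound(V0=2.0, k=1.0, alpha=0.5)
T_sim = simulate_settling(V0=2.0, k=1.0, alpha=0.5)
print(T_bound, T_sim)  # the simulated time stays at or below the bound
```

Because the simulation uses the equality case, the simulated time essentially matches the bound; for a trajectory with strict inequality, convergence would be faster.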
So, that has already been highlighted: V0 is just notation for V(x0). Next we state a converse theorem. What does the converse say? It says that if the origin is FT, that is, finite time stable, and N is as in the finite time stability definition — recall that N was the neighborhood on which the settling time function works, nothing very special — then there exists a C^0, Lyapunov-like function V mapping N to the real numbers which satisfies all the conditions that were requirements in the previous theorem. What were those requirements? One, V is positive definite. Two, V dot is also C^0 and negative definite on N — V dot being continuous just means that V is C^1, exactly like in the previous theorem, where V was C^1 and positive definite with V dot negative definite; there we considered D with the origin removed, while here we consider the set N that comes from the finite time stability definition. Three, there exist k positive and alpha in the open interval (0, 1) such that V dot + k V^alpha <= 0 on N. So, the only difference between the converse theorem and the main finite time stability theorem is that there everything was stated on the domain — actually there is an error on the slide, it should also be the domain with the origin removed — whereas here everything is on N, because a priori we are not given a set N from the finite time stability definition; the set N actually comes out of the Lyapunov finite time stability theorem.
In the converse theorem, on the other hand, we already start from the assumption that the origin is finite time stable. Therefore, we have a settling time function and a set N on which the settling time function is valid, and that is why we use this set N everywhere. So, in some sense this is an if-and-only-if kind of condition: if you have finite time stability, such a Lyapunov function exists, and if such a Lyapunov function exists, then you have finite time stability — you just have to satisfy these three properties, which look very similar on both sides. Again, we are not going to prove this; you can look at the references if you want to see and understand the proof. We are simply giving an overview, so we are not really going to prove things here. What we are instead going to do is more fun: actually look at an example. The example is rather important: spacecraft angular velocity stabilization. You have already seen some spacecraft examples. The model is J omega dot = -omega x (J omega) + u, where J is the 3-by-3 symmetric inertia tensor, omega is the angular velocity in the body frame, and u is some external control, for example from thrusters — thrusters are the most commonly used external actuators on satellites. This is an angular velocity control problem, closely related to orientation control, and the thrusters are part of what is called the reaction control system, used to manage the orientation rates and so on.
Suppose we start at some speed omega_0 and we want to drive the speed to 0. You can see that if there is no control, then 0 is an equilibrium of the system, so it is fair to ask to go to the 0 equilibrium, and of course we usually do this via Lyapunov functions. If I wanted to do it in infinite time, a standard Lyapunov function would be V = omega^T J omega. You can recognize this as twice the kinetic energy of the system: (1/2) omega^T J omega is the kinetic energy, and we drop the 1/2 just for simplicity. Taking V dot we get 2 omega^T J omega dot, and J omega dot can be substituted from the model, which gives 2 omega^T (-omega x J omega + u). Now, the vector omega x (J omega) is orthogonal to omega. That is evident because the cross product is orthogonal to each of its factors: omega and J omega are two vectors in the body frame, their cross product is orthogonal to both, and the dot product of a vector with something orthogonal to it is 0. So omega^T (omega x J omega) = 0 — this is also a property of the scalar triple product, which vanishes whenever a vector is repeated. Therefore, the gyroscopic term drops out and V dot reduces to a rather simple expression: V dot = 2 omega^T u.
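The orthogonality argument can be sanity-checked numerically. This is a small sketch; the inertia matrix J and the angular velocity omega below are arbitrary illustrative values, not from the lecture:

```python
# Numerical check of the identity used above:
#   omega . (omega x (J omega)) = 0,
# a scalar triple product with a repeated vector, which is why the
# gyroscopic term drops out of V_dot and leaves V_dot = 2 omega^T u.
import numpy as np

J = np.diag([2.0, 3.0, 4.0])        # symmetric positive definite inertia tensor
omega = np.array([0.3, -1.2, 0.7])  # arbitrary body-frame angular velocity

gyro = np.cross(omega, J @ omega)   # the omega x (J omega) term
print(float(omega @ gyro))          # essentially 0, up to floating point
```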
Now, if I wanted exponential convergence — why not — I would simply plug in u = -(k/2) J omega. Let us do this carefully: V dot = 2 omega^T (-(k/2) J omega). Since -(k/2) is a scalar, I can move it anywhere, and the 2 cancels with the 1/2, so I get V dot = -k omega^T J omega = -k V. As you can see, this is exponential decay: I have obtained exponential convergence of the angular velocity dynamics to 0. However, you know that exponential convergence also takes infinite time, so obviously that is not what we are ultimately interested in. On the other hand, this control is rather nice: it is not just that I got infinite-time convergence and I am sad about it, because my control is also smooth — a nice, infinitely differentiable controller u. So that is the trade-off so far: infinite time convergence, but a smooth controller. Now, if I want finite time convergence, remember the property I am looking for, because I already have most of the others. I already have V to be C^1 — in fact the V I chose is C^infinity, infinitely differentiable — and V dot negative definite is also easy; even in this case V dot was negative definite.
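The exponential-decay claim above can be sketched in simulation. This is only an illustration under assumed values (the inertia matrix, gain, initial condition, and Euler step are all arbitrary choices):

```python
# Sketch: simulate J w_dot = -w x (J w) + u with the smooth feedback
# u = -(k/2) J w, for which V = w^T J w satisfies V_dot = -k V, i.e.
# exponential decay V(t) = V(0) * exp(-k t).
import numpy as np

J = np.diag([2.0, 3.0, 4.0])
J_inv = np.linalg.inv(J)
k, dt, steps = 1.0, 1e-3, 5000          # integrate up to T = 5

w = np.array([0.5, -0.4, 0.8])
V0 = float(w @ J @ w)
for _ in range(steps):
    u = -0.5 * k * (J @ w)                            # smooth feedback
    w = w + dt * (J_inv @ (-np.cross(w, J @ w) + u))  # forward Euler
V_end = float(w @ J @ w)
print(V_end, V0 * np.exp(-k * dt * steps))  # these should nearly match
```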
So, that also I have ensured. What I primarily need for finite time convergence, then, is V dot = -k V^alpha, or rather V dot less than or equal to that. In order to achieve this, I prescribe the controller u = -(k/2) (omega^T J omega)^(alpha - 1) J omega. It is an almost similar-looking controller — it still has the -(k/2) J omega — only now it is scaled by a scalar factor, and notice that this factor actually divides, because alpha - 1 is less than 0 (alpha is between 0 and 1). That is the important thing to remember: (omega^T J omega)^(alpha - 1) is effectively in the denominator. Now, if I do this, the V dot I get is, being careful again, V dot = 2 omega^T [-(k/2) (omega^T J omega)^(alpha - 1) J omega]. Notice that -(k/2) and (omega^T J omega)^(alpha - 1) are scalars; the only vector quantity I cannot move around is J omega, but the scalars can go anywhere. So I move them out, the 2 cancels with the 1/2, and I get V dot = -k (omega^T J omega)^(alpha - 1) (omega^T J omega) = -k (omega^T J omega)^alpha, because the exponents alpha - 1 and 1 add up to alpha. The exponent was smartly chosen exactly so that this product gives alpha, and because it is a scalar I could move it out, no problem. So, the important thing to note is that this is exactly -k V^alpha, as required.
So, I wanted V dot to be less than or equal to -k V^alpha, and I have in fact made it exactly equal to -k V^alpha, where alpha can be a number between 0 and 1. Now, the important thing, like I said, is that this control is not smooth, unlike before. Why? Because, as I said, there is a division: a division by (omega^T J omega)^(1 - alpha). So something funny happens at the origin, at omega = 0; everywhere else it is actually fine. In fact, what you can claim about this control is that it is locally Lipschitz everywhere except at the origin. This is exactly the kind of control we have been looking at, and it is of course continuous, C^0 everywhere — continuous everywhere, just locally Lipschitz everywhere except the origin. Why is it continuous at the origin? Because the numerator also goes to 0 as omega goes to 0 and, in fact, dominates the denominator — the control scales like |omega|^(2 alpha - 1), so, strictly speaking, continuity at the origin needs alpha greater than 1/2. So you have continuity, but you do not have the Lipschitz property at the origin. And this is exactly the kind of controller that we have studied, that we have been talking about, that gives finite time stability. You can see that we have exactly V dot = -k V^alpha, and therefore, from our finite time stability theorem, there is a time within which my states — in this case the angular velocity — will go to 0. That is pretty powerful: it says something rather nice, that you are going to reach 0 in finite time, and that is very important. So, that is what we wanted to discuss on finite time stability.
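The finite time behavior of this non-smooth controller can also be sketched in simulation. All numbers below are illustrative assumptions; eps is a numerical stand-in for "reached zero", since a discrete simulation cannot hit 0 exactly and the control blows up at omega = 0:

```python
# Sketch: the non-smooth feedback u = -(k/2) (w^T J w)^(alpha-1) J w gives
# V_dot = -k V^alpha for V = w^T J w, so the angular velocity reaches 0
# within T = V(0)^(1-alpha) / (k (1-alpha)).
import numpy as np

J = np.diag([2.0, 3.0, 4.0])
J_inv = np.linalg.inv(J)
k, alpha, dt, eps = 1.0, 0.5, 1e-4, 1e-6

w = np.array([0.5, -0.4, 0.8])
V0 = float(w @ J @ w)
T_bound = V0 ** (1.0 - alpha) / (k * (1.0 - alpha))

t = 0.0
while float(w @ J @ w) > eps and t < 2.0 * T_bound:  # time cap as a safety net
    V = float(w @ J @ w)
    u = -0.5 * k * V ** (alpha - 1.0) * (J @ w)       # division by V^(1-alpha)
    w = w + dt * (J_inv @ (-np.cross(w, J @ w) + u))  # forward Euler
    t += dt
print(t, T_bound)  # t should sit at or below the settling-time bound
```

Note the contrast with the smooth controller: there V only approaches 0 asymptotically, whereas here the simulation drops below any threshold before the bound T.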
What we want to do now is start a new notebook and talk about sliding mode control — a new notebook, since it is a new topic. So, this is sliding mode control. The interesting thing is that you will see a lot of similarity between what we spoke about in finite time control and what we see in sliding mode control. In fact, I would say sliding mode control is a kind of finite time control which involves sliding modes, and we will look at what those are. Again, because of our time constraints we are not going to look at a lot of proofs; we are going to motivate the idea of sliding mode control as much as our time permits, and we will do this mostly through examples and ideas — that is our aim. Sliding mode control has a lot of nice, deep history. Utkin is mostly credited with bringing sliding mode control into mainstream control; he has been active, writing a large number of papers and articles establishing the area, but there are many, many researchers in the area now. It falls under what are called variable structure controllers, because these controllers tend to change their structure depending on where they are operating. Anyway, we will explore different facets of sliding mode control, as I said, using examples. So, let us look at a simple second-order system first: x1 dot = x2 and x2 dot = u + F(x1, x2, t), with some initial conditions x1(0) = x10 and x2(0) = x20. These are all scalars: x1, x2 are in R, u in R is the control, and F(x1, x2, t), again real-valued, is essentially a nonlinear disturbance term.
So, that is important. However, we assume — and this is something I will highlight — that F(x1, x2, t) is bounded for all time; there is a uniform bound. The idea, or, put more formally as an objective: construct a disturbance-rejecting u such that x1, x2 go to 0 asymptotically. Notice that to begin with we do not require finite time convergence. Although I said that sliding mode control is sort of a method in finite time control, it has features beyond finite time control the way we have seen it. The aim is not immediately to achieve finite time convergence of both states; the aim is more to reject a bounded disturbance like this. You can think of this disturbance as a standard external disturbance, but it can also be a disturbance coming from reduced-order dynamics, model approximations, model truncations, and several other things — it could essentially comprise all the nonlinearities that you do not want to work with directly when designing the control, as long as you can say that it is bounded.
So, one typical, tough question that is very common when you are doing sliding mode control is how to deal with the unbounded cases, because what we are asking of this disturbance term is pretty heavy. As a nonlinear control theorist, you would immediately ask: why would a function of the states be bounded? What we are asking for is not just a bound but a uniform bound — this quantity has to be bounded, with one and the same bound, for all x1, x2, and t. That is a pretty serious ask, because even for a nonlinearity as simple as the polynomial F(x1, x2, t) = x1^2 + x2^2, or say t (x1^2 + x2^2), you can see that this is not uniformly bounded: it is bounded only if the states are bounded and, for the time-varying case, only over a finite time window. Even in the linear case, something like t x1 + x2 has the same property: it is bounded only if the states are bounded; otherwise it is unbounded. So this is one of the tougher critiques. However, there are of course modern answers to this: if you can a priori guarantee, with some control, that your states are not going to escape some bounded invariant set, then you do have these kinds of guarantees, and you are more than happy to go along
with that. However, in general it is not easy to satisfy this. But these are the assumptions that the very classical, standard sliding mode control methods work with, and that is what we are going to see as well. We will continue with this in the subsequent video.
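The uniform-boundedness issue discussed above can be made concrete with a tiny numeric check. The disturbance below and the sample points are arbitrary illustrative choices:

```python
# Illustration: F(x1, x2, t) = t * (x1**2 + x2**2) is bounded on any
# bounded set of states over any finite time window, but admits no
# single uniform bound over all (x1, x2, t).

def F(x1, x2, t):
    return t * (x1 ** 2 + x2 ** 2)

# On the region |x1| <= 1, |x2| <= 1, 0 <= t <= 10 the bound 20 works...
print(F(1.0, 1.0, 10.0))  # -> 20.0, the worst case on that region

# ...but letting t (or the states) grow escapes any fixed bound.
print([F(1.0, 1.0, t) for t in (1.0, 10.0, 100.0)])  # -> [2.0, 20.0, 200.0]
```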