Hello everyone, welcome to the second lecture of this week on finite time stability. What were we looking at last time? In the previous session we saw the need to look at non-Lipschitz systems, that is, systems which may have non-unique solutions at the origin. We formalized the idea that the Lipschitz property essentially guarantees that solutions exist and are unique. However, because we are after notions of finite time stability, it is not possible to have unique solutions from the origin, at least in backward time, and therefore we do not require uniqueness at the origin even in forward time. That is why we settled on the notion of uniqueness in forward time everywhere except at the origin, and this is the setting in which we define finite time stability. The system we work with is autonomous, with no explicit time dependence: it is of the form y' = f(y), where f is continuous on the entire domain D containing the origin, f is locally Lipschitz on D excluding the origin itself, and f(0) = 0, which is the standard assumption ensuring that 0 is an equilibrium of the system. Continuity guarantees that solutions exist for all initial conditions in the domain, and the local Lipschitz condition guarantees that solutions are unique for all initial conditions except the origin. We do not insist on uniqueness at the origin because, if we are demanding finite time stability, non-unique solutions there are unavoidable. We also defined the notion of finite time stability; how did we define it?
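As a concrete illustration (my own example, not one from the lecture), the scalar system x' = -sign(x)·|x|^(1/2) satisfies exactly these assumptions: the right-hand side is continuous everywhere, locally Lipschitz away from 0, vanishes at 0, and yet every solution reaches the origin in the finite time T(x0) = 2·sqrt(|x0|). A minimal Python sketch, using a hand-rolled forward-Euler integrator (the step size and the clamping rule near zero are my own choices):

```python
import math

def f(x):
    # Right-hand side: continuous everywhere, locally Lipschitz away from 0,
    # but NOT Lipschitz at the origin (the slope of |x|**0.5 blows up at 0).
    return -math.copysign(math.sqrt(abs(x)), x)

def simulate(x0, dt=1e-5, t_max=10.0):
    """Forward-Euler integration; returns the first time the state hits 0."""
    x, t = x0, 0.0
    while t < t_max:
        step = f(x) * dt
        if abs(step) >= abs(x):   # step would cross the origin: stop here
            return t
        x += step
        t += dt
    return None  # did not settle within t_max

x0 = 1.0
t_settle = simulate(x0)
t_exact = 2.0 * math.sqrt(abs(x0))   # closed-form settling time for this system
print(t_settle, t_exact)
```

For x0 = 1 the closed-form solution is x(t) = (1 - t/2)^2 up to t = 2 and 0 afterwards, so the numerical settling time should land very close to 2.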
We say that the origin is finite time stable if there exists a neighborhood N of the origin contained in the domain D and a settling-time function T mapping N excluding the origin into (0, infinity). The settling time is exactly what you would understand from the name: it maps an initial condition to the time at which the solution from that initial condition reaches 0. If you start at 0 you are already at the equilibrium, so you would expect the settling time to be 0, and accordingly we require that T(x0) converges to 0 as the initial condition x0 converges to 0. Notice that we never say "equal to" here, because our assumptions are made on sets that exclude the origin; we always talk of convergence to the origin. The second requirement of finite time stability is exactly finite time convergence: if the initial condition lies in this neighborhood excluding the origin, then there is a unique solution, depending on the initial condition, on the interval [0, T(x0)), closed on the left and open on the right. It is important to note that the solution is never 0 on this interval, but as t tends to T(x0) — again we do not say "equal to", since T(x0) is not part of the interval — the solution x(t) tends to 0. So we are carefully excluding 0 from the mix: we still want to converge to 0, but we never want to start there. Starting at 0 is the special case where you start at the equilibrium and stay at the equilibrium.
In that case the finite convergence time is 0 anyway, which is what is encapsulated here; otherwise we do not start at 0, and we are always looking at converging to 0, which is what the limit expresses. The last requirement is simply Lyapunov stability, which I am not going to repeat; we have already seen that Lyapunov stability is a requirement of asymptotic stability as well, so it is only natural that the finite time stability notion also includes Lyapunov stability among its requirements. Now that we understand finite time stability, a few conclusions can be obtained directly. I am not going to prove any of these results in this course; this is a very short introduction to finite time stability and sliding mode control, so we will simply state a few results. The first proposition, which is almost an immediate outcome of finite time stability, is this: if 0 is finite time stable for the system we have been considering, with the neighborhood N and settling-time function T from the definition, then the solution x(t, x0) is uniquely defined and, most importantly, x(t, x0) = 0 for all t greater than or equal to T(x0).
Said differently, the solution is uniquely defined for all x0 in the neighborhood N: once you fix an initial condition, you have a uniquely defined path. The important thing, already implicit in the second piece, is that you reach the origin at time T(x0). What happens beyond time T(x0)? You are already at the equilibrium, so you are not going to move unless there is some disturbance; but we are not considering disturbances here, since finite time stability is defined for the disturbance-free, noise-free situation. Once you reach the origin, which is an equilibrium, you never move again: that is what is being said, that having reached the origin you stay there for all later time, which makes sense; it is not a complicated notion. The second assertion is that the solution x(t, x0) is continuous in x0 for all x0 in a neighborhood of the origin, uniformly in t. So the first assertion says the solutions go to 0 and remain there forever, and the second says the solutions are continuous in the initial conditions, which is important. The final assertion of this proposition is that the settling-time function T(x0) is unique and, again, continuous in x0. These are the important outcomes of the definition of finite time stability. Now, what do we want to do?
We have always seen that these definitions of stability never really help us verify anything by themselves, so what we really want is Lyapunov-like conditions to characterize finite time stability as well, and that is what we are moving towards. I will call this part Lyapunov-like characterizations of finite time stability. I am going to jump directly into the simpler case, and we already know the notion of a Lyapunov function V. As before, we define V'(x) = (∂V/∂x) f(x) for V in C¹. So if you have a continuously differentiable function V, you can make this kind of definition for V', and this works even in our special setting where solutions are unique in forward time everywhere except at the origin; the Lyapunov-derivative idea is still valid, and V' along solutions is exactly this expression. The important point is that V' is well defined on D excluding the origin, because our solutions are well defined on D excluding the origin. So what is the main result? I will just call it the Lyapunov finite time theorem. What does it say?
Suppose there exists a V which is continuously differentiable and has the following properties. (a) V is positive definite; this is a standard requirement, V is just seen as a function of x, so we verify positive definiteness of V in the usual way, which we already know how to do. (b) V' is negative definite on the deleted neighborhood of the origin; note that V' being continuous is already evident from V being C¹. If you remember, negative definiteness has two conditions to verify: the function is 0 at 0, and it is strictly negative for all nonzero values. On the deleted neighborhood you only need to check that it is strictly negative for all nonzero values of the state, and that is it. (c) Finally, we need the special condition: there exist k > 0, an exponent α in (0, 1), and a neighborhood 𝒱 of the origin inside D such that V' + k·V^α ≤ 0 on 𝒱 excluding the origin. If you have these properties, then 0 is finite time stable, and moreover you can upper-bound the settling-time function: T(x0) ≤ (1/(k(1 − α)))·V(x0)^(1−α) for all x0 in N, where N is as defined in the finite time stability definition. It is important to try to understand this result. The Lyapunov finite time stability theorem is not too different from the typical asymptotic stability theorem; in fact, the first and second conditions look rather similar.
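As a quick sanity check of the bound (again my own example, not from the lecture): for the system x' = -sign(x)·|x|^(1/2), the candidate V(x) = x² gives V' = 2x·f(x) = -2|x|^(3/2) = -2·V^(3/4), so condition (c) holds with k = 2 and α = 3/4, and the theorem's bound evaluates to 2·sqrt(|x0|), which for this particular system happens to equal the exact settling time. A short Python check:

```python
import math

# For xdot = -sign(x)*|x|**0.5 with the candidate V(x) = x**2:
#   Vdot = 2*x*xdot = -2*|x|**1.5 = -2*(x**2)**0.75 = -2*V**0.75
# so condition (c) of the theorem holds with k = 2 and alpha = 3/4.
K, ALPHA = 2.0, 0.75

def settling_bound(x0):
    """Theorem's bound: T(x0) <= V(x0)**(1-alpha) / (k*(1-alpha))."""
    v0 = x0 ** 2
    return v0 ** (1 - ALPHA) / (K * (1 - ALPHA))

def exact_settling_time(x0):
    """Closed-form settling time for this particular system."""
    return 2.0 * math.sqrt(abs(x0))

for x0 in (0.25, 1.0, 4.0):
    print(x0, settling_bound(x0), exact_settling_time(x0))
```

For generic systems the bound is conservative; the tightness here is a special feature of this example, where the inequality V' ≤ -k·V^α holds with equality.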
So we require that V is positive definite and V' is negative definite on the deleted neighborhood, and on top of that we have this funny convergence-type condition: there exist a positive scalar k, an exponent α strictly between 0 and 1 — never 0 and never 1 — and a neighborhood 𝒱 of the origin inside D such that V' + k·V^α ≤ 0. With these three conditions we are in fact claiming finite time stability. From (a) and (b) you would immediately be able to claim Lyapunov stability; I am not giving a proof of this theorem, only indicating why it might work. So from (a) and (b) it is not difficult to see that you have Lyapunov stability, and the only thing left is to conclude finite time convergence, for which we focus on the third condition. Looking at this third condition a bit more carefully, it basically says V' ≤ -k·V^α. This is a scalar differential inequality — not a differential equation, but a differential inequality — yet it is not too difficult: you can deal with it in exactly the same way as you would deal with the corresponding equation. Because V is positive definite and scalar-valued, we can separate variables and integrate both sides.
Let me do this carefully. Dividing through by V^α, which is fine because V is strictly positive away from the origin, gives V^(-α)·dV/dt ≤ -k. Integrating both sides from time 0 to time t, the left-hand side is a definite integral from V0 to V(t), where V0 = V(x0), and it evaluates to (V(t)^(1−α) − V0^(1−α))/(1 − α); note the exponent is 1 − α, not 1 + α, because α enters the integrand with a negative sign. So we obtain V(t)^(1−α) ≤ V0^(1−α) − k(1 − α)·t. We know that 1 − α is strictly positive by assumption, because α lies strictly between 0 and 1; in fact one can very easily see that 1 − α also lies strictly between 0 and 1. Now suppose the right-hand side equals 0. V is positive definite, so it can never be less than 0; hence at that instant V is exactly 0, and by positive definiteness the state is exactly 0 as well.
So both exponents here are 1 − α, and we must not miss that. From the condition V0^(1−α) − k(1 − α)·t = 0, the time we need is exactly t = V0^(1−α)/(k(1 − α)), and that is precisely the bound in the theorem; note it is a function of x0, through V0 = V(x0). This is of course conservative, because all Lyapunov analysis is conservative, as we already understand, and that is why we say the settling time is upper-bounded: you almost never arrive at the exact value of the settling-time function, but you almost always arrive at an upper bound for it. So I hope you now see that from the first two conditions you get Lyapunov stability, which was the first requirement, and the next requirement, finite time convergence, you get from the third condition; in fact the expression for the settling-time bound is rather easy to obtain just by carefully integrating this scalar differential inequality, keeping track of the value at time t and the value at the initial time 0, each raised to the power 1 − α and divided by 1 − α.
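The whole integration argument can be written out compactly, in the same symbols used in the lecture:

```latex
\dot V \le -k\,V^{\alpha}
\;\Longrightarrow\;
V^{-\alpha}\,\frac{dV}{dt} \le -k
\;\Longrightarrow\;
\int_{V_0}^{V(t)} s^{-\alpha}\,ds \le -k\,t
\;\Longrightarrow\;
\frac{V(t)^{1-\alpha} - V_0^{1-\alpha}}{1-\alpha} \le -k\,t,
```

so that V(t)^(1−α) ≤ V0^(1−α) − k(1−α)·t; the right-hand side hits zero at t = V0^(1−α)/(k(1−α)), after which V, and hence the state, must be exactly zero.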
If you are being careful, the definite integral runs from V0 to V(t) on the left and from 0 to t on the right, which is how the -k·t appears on the right-hand side. Once you have that, you equate the right-hand side to 0: since V can never be less than 0, this is the moment V is exactly 0, and the settling time can then be calculated as above. That leaves us with a little bit of the converse theorem, and we will look at a simple example in the next session.