Hello everyone, welcome to our course on non-linear control. We are into the final week of this course, and today we start the material on finite-time stability. We have already covered notions of asymptotic stability and asymptotic convergence, seen many methods to guarantee asymptotic stability for several non-linear systems, and identified both analysis methods and methods for constructing controllers. Now we turn to notions of finite-time convergence, because until now we have always focused on convergence that is asymptotic, that is to say, in infinite time. However, there is a lot of value and importance in studying finite-time convergence as well. One obvious reason is that many real application problems do not have infinite time in which to perform a specific task; there is a certain fixed, finite amount of time available, and therefore studying notions of finite-time stability makes sense. It is also known that these kinds of finite-time convergent controllers help reject disturbances, and this is something we will look at to an extent as part of this week's lectures. The notion of finite-time stability was very nicely introduced by a prominent researcher from India, Sanjay Bhat, whom I am glad to say I know very well. During his PhD, a long time ago, he worked on establishing some very valuable notions of finite-time stability. There were others before him who did a little work on finite-time stability, but the notions were not properly formalized, and I would say that Sanjay Bhat can be credited with formalizing finite-time stability the way we look at it now. So it is quite good to know that one of our very own researchers has been the originator of a particular area of control.
These are the two references that we will be using; as you can see, both are by Sanjay Bhat along with his PhD advisor at the time, Dennis Bernstein, who is still an active researcher at Michigan, Ann Arbor. Sanjay Bhat is of course in India, part of TCS Research. These are the two articles we will focus on to formalize these notions. The first is "Lyapunov analysis of finite-time differential equations", in the Proceedings of the American Control Conference, 1995, so already quite a while ago. The second, more formal journal article is "Finite-time stability of continuous autonomous systems", published in the SIAM Journal on Control and Optimization in 2000. Quite old material in some sense, but it has been the basis of a lot of new developments, and this is what we are going to start talking about today. Before we even start talking about finite-time stability, we want to understand a few differences between the kind of systems we were looking at before and the kind we are looking at now. Until now, we have always been looking at Lipschitz dynamics. What is Lipschitz dynamics? I am not even going to consider a control system; I am going to consider a dynamical system, and an autonomous non-linear dynamical system at that. Suppose you have a system of the form ẋ = f(x), where the function is defined on some neighborhood.
So f maps vectors in ℝⁿ to ℝⁿ, and we make the typical assumption that f is Lipschitz on a domain that we denote D, with the origin 0 contained in D, because we are usually interested in the stability of the origin. There is of course some initial condition; because the system is autonomous, there is no explicit time dependence, so we lose nothing by taking the initial time to be zero. What does this Lipschitz condition give us? We have not looked at it formally, but this condition guarantees the existence of unique solutions. There are two things here: existence of a solution, and uniqueness of a solution. It is rather important to understand that it is not just one or the other: solutions exist, and they are also unique. On the other hand, it is well known that if f is merely C⁰, that is, continuous, then solutions exist but are not necessarily unique. Of course, whenever I say something is continuous or Lipschitz, I am always talking about the domain D; these are non-linear functions, so they may not be nicely behaved everywhere. I hope one thing is clear: f Lipschitz implies f is C⁰. And because what we can claim from the Lipschitz property is more than what we can claim from mere continuity of the right-hand side, you can expect that the requirement of being Lipschitz is strictly stronger than continuity.
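To see the continuity-without-uniqueness point concretely, here is a small Python sketch of the classic scalar example y′ = 3y^(2/3), y(0) = 0 (this specific example is not from the lecture; it is a standard illustration). The right-hand side is continuous but not Lipschitz at 0, and both y(t) ≡ 0 and y(t) = t³ solve the same initial-value problem:

```python
# Two distinct solutions of y' = 3*y**(2/3), y(0) = 0. The right-hand
# side is continuous but not Lipschitz at 0, so uniqueness fails there.
def f(y):
    return 3.0 * abs(y) ** (2.0 / 3.0)

def y_zero(t):   # trivial solution: stay at the equilibrium forever
    return 0.0

def y_cubic(t):  # nontrivial solution that leaves the origin
    return t ** 3

# Verify both satisfy the ODE by comparing a central difference of y(t)
# against f(y(t)) at a few times t > 0.
h = 1e-6
for y_sol in (y_zero, y_cubic):
    for t in (0.5, 1.0, 2.0):
        dydt = (y_sol(t + h) - y_sol(t - h)) / (2.0 * h)
        assert abs(dydt - f(y_sol(t))) < 1e-4

print("both y(t)=0 and y(t)=t^3 solve y' = 3*y^(2/3) from y(0)=0")
```

Both candidate solutions pass the numerical check, so the single initial condition y(0) = 0 genuinely spawns more than one trajectory.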
So what is this Lipschitz condition? It says that there exists some positive constant L, the Lipschitz constant, such that ‖f(x) − f(y)‖ ≤ L‖x − y‖ for all x and y in the domain D. As you can see, this looks like a smoothness condition, almost like a differentiability condition, because of how differentiability is defined: the derivative df/dx at a point x is the limit as h → 0 of (f(x + h) − f(x))/h. The Lipschitz bound says exactly that such difference quotients, ‖f(x + h) − f(x)‖/‖h‖, stay bounded by L. To be precise, Lipschitz continuity does not quite imply differentiability everywhere (think of |x| at the origin), but it does bound every difference quotient, so it is a smoothness-type condition, and certainly more than continuity. Until now we have always been looking at systems like these, where the right-hand sides are Lipschitz, even though I may not have explicitly stated it everywhere, and the reason is that this guarantees the existence of unique solutions, which is rather important for us. Now we want to switch our focus to something a little different: there is a shift of focus to non-unique solutions. Why would that be? Let us think about this. Suppose I do have finite-time convergence; I am not going to prove anything, I am just going to argue it.
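The gap between Lipschitz and mere continuity can be made concrete by looking at difference quotients numerically. The following Python sketch (my own illustration, not from the lecture) compares sin(x), which is globally Lipschitz with L = 1, against √|x|, which is continuous but not Lipschitz on any neighborhood of 0:

```python
import math

# Difference quotients |f(x) - f(y)| / |x - y|: bounded by L for a
# Lipschitz function, unbounded near 0 for sqrt(|x|).
def quotient(f, x, y):
    return abs(f(x) - f(y)) / abs(x - y)

sqrt_abs = lambda x: math.sqrt(abs(x))

# sin is globally Lipschitz with constant L = 1, since |cos| <= 1.
assert all(quotient(math.sin, x, 0.0) <= 1.0 + 1e-12
           for x in (1e-6, 1e-3, 1.0))

# For sqrt(|x|), the quotient against the origin is 1/sqrt(x), which
# blows up as x -> 0, so no single L works near 0.
qs = [quotient(sqrt_abs, x, 0.0) for x in (1e-2, 1e-4, 1e-6)]
print(qs)  # grows roughly like 1/sqrt(x): ~10, ~100, ~1000
```

So √|x| is exactly the kind of "continuous but non-Lipschitz at the origin" right-hand side that the rest of the lecture is concerned with.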
So suppose I have finite-time convergence. Recall what asymptotic stability says: you have stability, obviously, which I will not go into, and then you have convergence. The convergence part says that there exists a δ-ball around the origin from which, if you start, all your solutions converge to the origin; all points within that open neighborhood of the origin converge to the origin. That is the claim of asymptotic convergence. As you can imagine, finite-time convergence makes a similar claim: not just from one point, but from a set of initial conditions around the origin, you converge to the origin, and the important difference is that this happens in finite time. Earlier, everything was happening in infinite time. You already understand that the real numbers do not contain infinity; the convergence was happening as t → ∞, so at any finite time you are always very close to the origin but not actually at the origin, and that is why it was asymptotic. However much you increase time, you never actually reach 0 itself; you only get somewhere very close to 0. For practical purposes we are fine, since we can get as close to the origin as we like, but mathematically you are never at the origin, and this is a very subtle difference that we need to understand.
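This asymptotic-versus-finite-time contrast can be checked in closed form on a standard scalar example (my own illustration, not taken from the lecture): ẋ = −x converges only asymptotically, while the non-Lipschitz system ẋ = −sign(x)√|x| reaches the origin exactly at time T = 2√|x₀|:

```python
import math

# Asymptotic convergence: x' = -x has solution x0*exp(-t), which is
# never exactly 0 for x0 != 0, at any finite time.
def x_asymptotic(t, x0):
    return x0 * math.exp(-t)

# Finite-time convergence: x' = -sign(x)*sqrt(|x|) has the closed-form
# solution sign(x0)*(sqrt(|x0|) - t/2)**2 up to T = 2*sqrt(|x0|),
# after which the state stays at the origin.
def x_finite_time(t, x0):
    T = 2.0 * math.sqrt(abs(x0))
    if t >= T:
        return 0.0
    s = math.sqrt(abs(x0)) - t / 2.0
    return math.copysign(s * s, x0)

x0 = 4.0
T = 2.0 * math.sqrt(x0)                # settling time, = 4.0 here
print(x_asymptotic(100.0, x0))         # tiny but strictly positive
print(x_finite_time(T, x0))            # exactly 0.0 at the settling time
```

Differentiating the finite-time solution for x > 0 gives d/dt (√x₀ − t/2)² = −(√x₀ − t/2) = −√x, so it really does satisfy ẋ = −sign(x)√|x| up to the settling time.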
On the other hand, if you are converging in finite time to the origin, then it actually happens: now t goes to a finite real number, and at that finite real time you exactly reach the origin. So suppose I draw a two-dimensional state space, a phase plane, just as we are used to doing. On this phase plane I may have many different initial conditions, all starting from some neighborhood of the origin, and all of them actually converge to the origin in finite time; I hope you understand that this is a phase-plane portrait. Now what does it mean? As I flow forward in time, starting from different initial conditions, I reach the origin. If I look at these trajectories backward in time, that is, I replace t by −t in the differential equation, then starting from the origin there are many different paths I can take. This was not possible in the asymptotic case, because there you never reach the origin at all; however much I zoom in, an asymptotic law brings me very close to but never exactly onto the origin, so there was no question of starting from the origin and, in backward time, going out along different trajectories. But here, because I reach the origin in finite time, the backward-in-time trajectories start at the origin and go to different points. This is exactly non-uniqueness of trajectories: starting at the same initial condition, I can get multiple trajectories. So what is the point? Finite-time convergence necessarily leads to non-unique system trajectories in backward time; in backward time you are guaranteed non-uniqueness. The important thing is that this is not allowed by a Lipschitz f(x): if the right-hand side of the differential equation is Lipschitz, this is not possible. Therefore our focus is on non-Lipschitz right-hand sides. Even there, we do not want right-hand sides that are badly non-Lipschitz everywhere; we still restrict our attention, at least for this treatment, to very special cases. So what is our requirement? That is the important thing we need to characterize and understand carefully. First, the notation for the differential equation we are dealing with: ẏ(t) = f(y(t)). What do we require, in words, before we formalize it mathematically? We require uniqueness in forward time everywhere but at the origin. The issue was in backward time: in backward time I definitely cannot have uniqueness, because I am looking at finite-time convergence, but I can require uniqueness in forward time. So I need some kind of Lipschitz property, but I cannot have this Lipschitz property at the
origin, which is the equilibrium; by default, all our discussions revolve around the zero equilibrium. So uniqueness in forward time everywhere but at the origin is what we require, and that is what we now formalize. The system is: f is again a map from ℝⁿ to ℝⁿ, C⁰ on D, which contains the origin; f is locally Lipschitz on D with the origin removed, that is, on D \ {0}; and of course f(0) = 0, to guarantee that the origin is in fact an equilibrium. We already know that solutions exist, by continuity of f on the entire domain. So what are the guarantees from the above assumptions? Solutions exist for all initial conditions x₀ in D, and further, solutions are unique for all x₀ in D \ {0}. The existence of solutions for all initial conditions comes from the continuity assumption, and the uniqueness only on D with the origin removed comes from the local Lipschitz assumption. I hope that is clear. Once we have this setup, we are in a good place to define the notion of finite-time stability. It says that 0 is finite-time stable for the system (1) above if the following hold: there exists a neighborhood N of the origin inside our domain D — again, let me be careful, 0 is contained in both — and a function T, called the settling-time function, defined on this neighborhood, such that the following are true. (a) The settling-time function evaluates to 0 when you plug in 0 as the argument: the settling time is a function of the initial condition, and if the initial condition is 0, we are at the equilibrium and there is nothing to do, so T(0) = 0; moreover, T(x₀) → 0 as x₀ → 0. It does not say T is monotonic or anything, but if your initial conditions go to 0, it is but natural to expect the settling time to go to 0 as well. (b) For all initial conditions in this neighborhood with the origin removed, the solutions, denoted x(t, 0, x₀) with initial time 0 and initial condition x₀, are unique on the interval [0, T(x₀)), which is open on the T(x₀) side; further, x(t, 0, x₀) ≠ 0 for all t in [0, T(x₀)), but as t → T(x₀), x(t, 0, x₀) → 0. In words: for initial conditions not at the origin — because we do not have uniqueness from the origin — the solutions are unique and never 0 on [0, T(x₀)), but as time goes exactly to T(x₀), the solution goes precisely to 0. (c) The last condition is exactly the Lyapunov stability condition; it has nothing to do with finite versus infinite time. It says that for all ε > 0 there exists δ > 0 such that if your initial condition lies in a δ-neighborhood of the origin, with the origin removed, then the solution remains in an ε-neighborhood of the origin for all t in [0, T(x₀)). So the notion of settling time justifies its name: it is exactly the time in which you go to 0. Notice that finite-time stability contains the notion of Lyapunov stability as well. And you see the only differences from typical asymptotic stability are two things: first, t does not go to infinity, it only goes to T(x₀), the settling time; and second, in every part of this definition the origin, where the solution is not unique, is excluded. Everywhere, the origin is excluded from the definition, and therefore whenever the solution goes to the origin, it is expressed as a limit, because without uniqueness from the origin you cannot really talk about these notions at the origin itself. So that is the definition of finite-time stability, and we will continue to do more in this direction.
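As a quick sanity check of the definition, the three conditions (a), (b), (c) can be verified numerically for the standard scalar example ẋ = −sign(x)√|x| (again my own illustration, not from the lecture), whose settling-time function is T(x₀) = 2√|x₀|:

```python
import math

# Settling-time function for x' = -sign(x)*sqrt(|x|).
def T(x0):
    return 2.0 * math.sqrt(abs(x0))

# Closed-form forward solution from x(0) = x0.
def x(t, x0):
    if t >= T(x0):
        return 0.0
    s = math.sqrt(abs(x0)) - t / 2.0
    return math.copysign(s * s, x0)

# (a) T(0) = 0, and T(x0) -> 0 as x0 -> 0.
assert T(0.0) == 0.0
assert T(1e-8) < 1e-3

# (b) the solution is nonzero on [0, T(x0)) and exactly 0 at t = T(x0).
x0 = 0.25                                  # so T(x0) = 1.0
assert all(x(t, x0) != 0.0 for t in (0.0, 0.5, 0.99))
assert x(T(x0), x0) == 0.0

# (c) Lyapunov stability: |x(t)| never exceeds |x0| along this solution,
# so delta = epsilon works in the epsilon-delta condition.
assert all(abs(x(t, x0)) <= abs(x0) for t in (0.0, 0.3, 0.7, 1.0))

print("settling time T(0.25) =", T(0.25))
```

All three assertions pass, which matches the definition: the state is strictly nonzero right up to the settling time and lands exactly on the origin at t = T(x₀).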