Welcome everyone to the sixth lecture on nonlinear dynamical systems. We will continue with existence and uniqueness of solutions to differential equations. In the last lecture, we saw that under a locally Lipschitz condition on f, we are assured of a solution only on an interval from 0 to delta. This delta may be very large or very small, but it is only finite; that is all the locally Lipschitz property of f guarantees. We also saw the example x dot is equal to x squared. What is important about it? The function f of x equal to x squared is locally Lipschitz; in fact, it is differentiable at every point, whether x is 100 or minus 2000, and hence it is locally Lipschitz at every point. Yet we also saw that for every initial condition x naught greater than 0, the solution exists only for a finite duration, from 0 to some delta max. The interval is what is called semi-open: closed at 0 and open at delta max. So for whatever positive, nonzero x naught we choose, the solution exists only for a finite duration. This delta max could be very large, for instance when x naught is very small, but it cannot be guaranteed to be infinite; unfortunately, only a finite duration is possible. Because the function is locally Lipschitz, we have a finite duration of time for which the solution exists and is unique, but we cannot conclude global existence. So the question arises: can solutions exist for all time t greater than or equal to 0? In other words, can we place conditions on f such that the solution exists for all t from 0 to infinity? After all, for a linear system this is true.
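The finite escape time of x dot = x squared can be seen directly from its closed-form solution. A minimal sketch (the function names are ours, chosen for illustration): separating variables gives x(t) = x0 / (1 - x0*t), which blows up as t approaches delta max = 1/x0.

```python
# Finite escape time for x' = x^2, x(0) = x0 > 0.
# By separation of variables, the exact solution is
#   x(t) = x0 / (1 - x0 * t),
# which blows up as t approaches delta_max = 1 / x0.

def solution(x0, t):
    """Exact solution of x' = x^2; valid only for t < 1/x0."""
    return x0 / (1.0 - x0 * t)

def delta_max(x0):
    """Right endpoint of the maximal interval [0, 1/x0)."""
    return 1.0 / x0

x0 = 0.5
T = delta_max(x0)          # 2.0: the solution exists only on [0, 2)
print(solution(x0, 0.0))   # 0.5  (the initial condition)
print(solution(x0, 1.9))   # 10.0 (growing without bound as t -> 2)
```

Note how a smaller x0 gives a larger delta max = 1/x0, matching the remark above, but no positive x0 gives an infinite interval.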
Given that it is true for the linear system x dot = Ax, where A is an n by n matrix, we would like to ask: under what conditions on f, somewhat more general than linearity, can we have solutions that exist globally on the interval 0 to infinity? Here we have a theorem on global existence and uniqueness. Suppose f from Rn to Rn is globally Lipschitz. What is globally Lipschitz? It is this particular inequality: the norm of f(x) minus f(y) is less than or equal to some L times the norm of x minus y, for all x and y in Rn. There is one constant L that works for all vectors x and y in Rn: no matter which x and y in Rn we put in, the same constant L works. This is the globally Lipschitz property of f that we saw earlier. If it is satisfied, then the theorem states that for every initial condition x0 in Rn, the state equation x dot = f(x) with x0 as the initial condition has a unique solution defined over the interval 0 to infinity. So as soon as we assume there is one constant L that works in the Lipschitz inequality for all x and y in Rn, that is enough to assure us of a solution on the interval 0 to infinity, and moreover that solution is unique. Because we have the 0-to-infinity interval, we have called this the global existence and uniqueness theorem. Now consider the linear system x dot = f(x) = Ax, where A is an n by n constant matrix. Since all the entries of A are bounded, in fact constant, there exists a number L such that the norm of Ax is at most L times the norm of x; some constant L ensures this inequality for all x. What are candidates for this L? We could take, for example, the maximum singular value of the matrix A when we are dealing with the two-norm, the Euclidean norm.
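A small numerical sketch of this Lipschitz constant for a linear system, using only plain Python. Instead of the maximum singular value (the tightest L for the two-norm), we use the Frobenius norm of A, an easy-to-compute upper bound on it, which therefore also serves as a (possibly conservative) Lipschitz constant. The matrix and vector here are arbitrary illustrative choices.

```python
# For a linear system x' = A x, any constant L with ||A x|| <= L ||x||
# for all x is a global Lipschitz constant for f(x) = A x.  The
# smallest such L (Euclidean norm) is the maximum singular value of A;
# the Frobenius norm of A is an upper bound on it, so it works too.
import math

def frobenius_norm(A):
    """sqrt of the sum of squares of all entries: a valid L."""
    return math.sqrt(sum(a * a for row in A for a in row))

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def norm(x):
    return math.sqrt(sum(xi * xi for xi in x))

A = [[0.0, 1.0],
     [-2.0, -3.0]]
L = frobenius_norm(A)                      # sqrt(14) ~ 3.742
x = [1.0, -4.0]
print(norm(matvec(A, x)) <= L * norm(x))   # True, for every x
```

The same check passes for any x, which is exactly the global Lipschitz property specialized to f(x) = Ax.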
We have this norm, and in general L depends on the particular norm; whichever norm you take, there will be a constant L such that the inequality is satisfied for all x in Rn. So for linear systems x dot = Ax, the function f(x) = Ax is globally Lipschitz, and hence we have existence and uniqueness of the solution over the interval 0 to infinity. More generally, even without a linear system, the globally Lipschitz property is sufficient to assure existence and uniqueness of solutions on the interval 0 to infinity. Let us now quickly summarize what we have seen so far. Let x naught be an equilibrium point of the system x dot = f(x); we have seen various properties of the solutions. Local existence: yes for linear f, yes for locally Lipschitz f, and yes for globally Lipschitz f; for non-Lipschitz f we are not able to say anything. Local uniqueness: again yes, yes, yes, and nothing for non-Lipschitz. Global existence of solutions: yes for linear; for locally Lipschitz we are not able to assure it; for globally Lipschitz, yes, as we just saw. What about finite escape time? Is it possible that solutions exist only for a finite duration of time, beyond which they go to infinity? This is what we call escape time. For a linear system this cannot happen. For locally Lipschitz f we are not able to say. For globally Lipschitz f it cannot happen, because we have already shown that solutions exist over the interval 0 to infinity, so they cannot become unbounded in finite time. For non-Lipschitz f, again we are not able to say anything. The next important question is: is it possible to come out of an equilibrium point? Here x naught is the equilibrium point.
So, if a solution starts at an equilibrium point, is it possible that at some time instant it comes out of it? This is not possible for linear f, not possible for locally Lipschitz f, and not possible for globally Lipschitz f. Why? Because globally Lipschitz is also locally Lipschitz, the solution is unique on some interval, and hence it cannot come out. But for non-Lipschitz f this is possible, as we have already seen. Is it also possible to come into an equilibrium point? That is, a solution that is initially away from the equilibrium point comes, at some finite time instant, and merges with the solution that always sits at the equilibrium point. This is not possible for linear f, not possible under the locally Lipschitz property, and, even though the slide shows a question mark here, not possible for globally Lipschitz f either. For non-Lipschitz f it is possible. What is the significance of this? We might require reaching an equilibrium point in finite time; for example, a steady state or set point that we want to reach in finite time. This is not possible for linear systems, nor under the locally Lipschitz property of f, nor for globally Lipschitz f. You might require a non-Lipschitz dynamical system if you want to reach the equilibrium point in finite time. So by coming out of and coming into an equilibrium point, we mean in finite time. We have also seen some examples: a linear system, non-Lipschitz unstable, non-Lipschitz stable, locally Lipschitz but not globally Lipschitz unstable, and locally Lipschitz but not globally Lipschitz stable. We will now proceed to one other theorem about global existence and uniqueness that does not assume a globally Lipschitz condition on f. Before we see that theorem, we will briefly analyze the theorem on global existence and uniqueness under the globally Lipschitz property of f.
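A concrete non-Lipschitz instance of "coming into an equilibrium point in finite time" can be written down in closed form. This is an illustrative example of our own choosing: x dot = minus square root of x for x >= 0, whose right-hand side fails to be Lipschitz at 0.

```python
# Non-Lipschitz example: x' = -sqrt(x) for x >= 0 (f is not Lipschitz
# at the equilibrium point 0).  Separation of variables gives
#   x(t) = (sqrt(x0) - t/2)^2   for t <= 2*sqrt(x0),
#   x(t) = 0                    afterwards,
# so the trajectory reaches 0 at the FINITE time t* = 2*sqrt(x0),
# something a locally Lipschitz f can never do.
import math

def x(t, x0):
    t_star = 2.0 * math.sqrt(x0)   # finite arrival time
    if t >= t_star:
        return 0.0
    r = math.sqrt(x0) - t / 2.0
    return r * r

x0 = 4.0
print(x(0.0, x0))   # 4.0
print(x(4.0, x0))   # 0.0  (t* = 2*sqrt(4) = 4: equilibrium reached)
print(x(10.0, x0))  # 0.0  (and it stays there afterwards)
```

Note that past t*, the solution merges with the constant solution sitting at 0, which is exactly the loss of backward uniqueness discussed above.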
What were the drawbacks of that theorem? For linear systems, f happens to be globally Lipschitz, and hence the solution exists for all time t greater than or equal to 0. For nonlinear systems, the globally Lipschitz condition rules out several common examples; it is too much to ask of a function. Locally Lipschitz, of course, is satisfied by many examples, and this is something we would like to retain. So it is perhaps possible to have existence and uniqueness of solutions over the interval 0 to infinity without requiring f to be globally Lipschitz. The condition we saw, that f is globally Lipschitz, was only a sufficient condition for existence of solutions from 0 to infinity; maybe there are weaker conditions on f under which we still have existence and uniqueness over the interval 0 to infinity. This requires us to review compact sets, and open, closed and bounded sets, which we will do quickly. A subset S of a set X is called compact if S is both bounded and a closed subset of X. When do we call a subset S open in X? A subset S is called open in X if for every point x in S, one can find some epsilon neighborhood of x such that the whole neighborhood is contained in the set S. This neighborhood is defined as we have seen so far: the set of all points of X that are less than epsilon distance away from the point x, with epsilon greater than 0. So a set S is called open if, no matter which point x in S you take, after choosing x you are able to find some epsilon greater than 0 such that the epsilon neighborhood of x is contained not just in X but in S. When do we call a subset S of X a closed subset of X? We call it closed in X if the complement of S in X is open.
There is another, equivalent way to define this, by saying that all the boundary points are contained inside the subset S; but we have seen that before and will not review it now. Finally, when do we call a set S bounded? If all its elements are bounded in norm by some number R. So a set S is bounded if there is some R greater than 0 such that the norm of every element x of S is at most R; all elements of S are no more than distance R away from the origin. We also need the notion of an invariant set. A set is said to be invariant with respect to some operation; in our case, with respect to the dynamics of x dot = f(x). There is a small mistake on the slide here: this S should be replaced by M. So a set M is said to be an invariant set with respect to x dot = f(x) if, whenever the initial condition starts inside M, the trajectory is inside M for all t greater than or equal to 0. In other words, if the solution is in M at some time instant, then it remains in M for all future time. That is the definition of a set M being invariant; so please replace the S on the slide with M. Finally, we have another condition for existence and uniqueness of solutions over the interval 0 to infinity. Let f(x) be locally Lipschitz on a domain D; please note that we are assuming only locally Lipschitz. Let W be a compact subset of D, with some initial condition x naught in W. Suppose it is known that every solution of x dot = f(x) with initial condition x naught lies entirely in W, and that this is true for every initial condition x naught in W.
If this is known, then there is a unique solution defined over the interval 0 to infinity; the solution exists and is also unique on 0 to infinity. Notice that we have only a locally Lipschitz condition on f, but we have the additional property that there is some compact subset W such that whenever the solution starts inside W, the whole trajectory, for whatever interval it is defined, remains inside W. In other words, W is an invariant set: for whatever time interval the solution exists, it does not leave the set W, so W is invariant under the dynamics of f. If somebody gives us such a compact subset W, invariant under the dynamics of f, and f is just locally Lipschitz, then we have a solution defined not just over an interval 0 to delta, but in fact over 0 to infinity. This completes our study of existence and uniqueness of solutions: we saw the locally Lipschitz property, the globally Lipschitz property, and finally that if there is a compact set that is invariant, then also the solutions can be assured to exist and be unique over the interval 0 to infinity. We will now move on to the notion of stability. What do we want to say about the stability of an equilibrium point? Of course, we know that a solution starting at an equilibrium point remains there, but what about nearby initial conditions? Can we say that solutions starting nearby also remain nearby? That is stability: solutions starting near an equilibrium point remain nearby. We are going to try to quantify this nearby and this nearby.
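A small numerical sketch of the compact-invariant-set idea, on an illustrative system of our own choosing: x dot = x - x cubed is locally but not globally Lipschitz (its derivative 1 - 3x squared is unbounded), yet W = [-1, 1] is compact and invariant, since f vanishes at both endpoints; so the theorem still gives a unique solution on 0 to infinity. We check the invariance with a crude forward-Euler integration, chosen only for illustration.

```python
# x' = x - x^3: equilibria at 0, +1 and -1.  f(-1) = f(1) = 0, so a
# trajectory starting inside W = [-1, 1] can never cross the boundary
# (by uniqueness), and W is an invariant compact set.

def simulate(x0, dt=0.001, steps=20000):
    """Forward-Euler integration of x' = x - x^3 from x0."""
    x = x0
    traj_max = abs(x)
    for _ in range(steps):
        x = x + dt * (x - x ** 3)          # Euler step
        traj_max = max(traj_max, abs(x))
    return x, traj_max

x_final, m = simulate(0.9)
print(m <= 1.0)                    # True: trajectory stayed inside W
print(abs(x_final - 1.0) < 1e-3)   # True: it approaches the equilibrium 1
```

The same run from any x0 in [-1, 1] stays inside W; this numerically mirrors the hypothesis of the theorem, though of course it is not a proof.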
One should also note that the definition of stability itself has evolved, much as solutions of a dynamical system evolve: the notion of stability has evolved over the last few decades and has finally converged to what we will see on the next slide. This particular definition is best understood as a challenge: somebody proposes a challenge, and somebody facing the challenge tries to meet it. So what is this proposer-facer definition of stability? Consider the nonlinear system x dot = f(x), in which at any time t, x(t) is an element of Rn; x has n components. Let 0 be an equilibrium point. For convenience, we are assuming that the origin itself is an equilibrium point; if the equilibrium point we are studying is not the origin, we can just shift coordinates so that in the new coordinates the origin is again the equilibrium point. So let 0 be an equilibrium point, that is, f(0) = 0. Then the equilibrium point 0 is called stable if for every epsilon greater than 0 there exists some delta greater than 0 such that for every initial condition x naught inside the ball centered at 0 of radius delta, we have x(t) inside the other ball, centered at 0 of radius epsilon, for all t greater than or equal to 0. So when do we call the equilibrium point stable? Somebody proposes an epsilon and asks: can you find a delta? The equilibrium point is called stable if, no matter what epsilon somebody proposes, we are able to find a delta greater than 0 such that as long as the initial condition is inside the delta ball, the whole trajectory lies inside the epsilon ball.
So, this star condition here, where the epsilon comes in, is a very important part of the definition. The star can also be stated as: whenever x naught is inside the open ball of radius delta centered at the origin, the whole trajectory is guaranteed to be inside the other, epsilon ball. Epsilon is what somebody else proposes to us, and delta is what we are able to calculate and find. We can also replace the star by: for every initial condition x naught inside the ball B(0, delta), we have x(t) inside the ball B(0, epsilon) for all times from 0 to infinity. So, as I said on the previous slide, the challenger is a person who gives an epsilon and asks: can you ensure that the whole trajectory remains inside B(0, epsilon)? After some calculation, the facer of the challenge, the person who meets it, says: yes, take this delta; if you start inside the ball B(0, delta), you are guaranteed to be inside the other ball B(0, epsilon). The fact that we are allowed to do some calculation means that delta is allowed to depend on epsilon. If you are able to meet this challenge for every epsilon, no matter how small, that is when you will call the equilibrium point stable. A smaller epsilon might mean a smaller delta; hence delta is shown as dependent on epsilon in the definition on the previous slide. We should be seeing a figure here: a time axis, where for the purpose of the figure x has only one component and 0 is the equilibrium point. Of course, starting at 0 we have the constant solution; the solution always remains at 0. But somebody proposes an epsilon ball, the band from minus epsilon to plus epsilon shown here, and by our convention the epsilon ball is an open ball: in other words, we should not touch the boundary at epsilon.
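The challenge can be played out numerically on a simple illustrative system where the facer's calculation is trivial: for x dot = -x, the exact solution x(t) = x0 e^(-t) satisfies |x(t)| <= |x0| for all t >= 0, so the answer delta = epsilon always works. The function names below are our own.

```python
# Epsilon-delta stability check for x' = -x, whose exact solution is
# x(t) = x0 * exp(-t).  Since |x(t)| <= |x0|, answering delta = epsilon
# keeps every trajectory from the delta ball inside the epsilon ball.
import math

def trajectory_sup(x0, t_max=50.0, n=5000):
    """sup of |x(t)| over a time grid, with x(t) = x0*exp(-t)."""
    return max(abs(x0) * math.exp(-k * t_max / n) for k in range(n + 1))

def meets_challenge(epsilon):
    delta = epsilon                      # the facer's proposed delta
    # sample initial conditions inside the open delta ball
    samples = [0.99 * delta, -0.5 * delta, 0.1 * delta]
    return all(trajectory_sup(x0) < epsilon for x0 in samples)

print(meets_challenge(1.0))    # True
print(meets_challenge(0.01))   # True: works however small epsilon is
```

For a general nonlinear system the calculation of delta from epsilon is of course the whole difficulty; this sketch only illustrates the shape of the definition.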
So, somebody proposes this epsilon ball and asks: can you ensure that the trajectory remains inside it? After some calculation, we come up with this delta, so that as long as the initial condition starts inside the delta ball, the trajectory, which might of course leave the delta ball, will still remain inside the epsilon ball. Here is another figure with x; this is 0. The epsilon shown large here could also be very small; the challenger decides how small epsilon should be. Once this epsilon is given, we might find that the trajectories have to start within this very small interval, the interval within which they should start so that they are guaranteed to remain inside the bigger ball. (What I am showing in the figure is the diameter; this is the radius.) For the trajectory to remain inside the bigger ball, it is possible that we must ensure the initial condition lies inside this smaller ball. As long as it begins from there, from the solutions to the differential equation we know that it remains inside the epsilon ball; another solution here also remains inside the epsilon ball. Of course, this further solution might also remain inside the epsilon ball, but it is possible that not every solution starting within the larger distance remains inside it; to ensure that it does, we may be forced to make this delta very small. But the fact that delta is greater than 0 is what defines this equilibrium point as stable: no matter what epsilon somebody gives us, we are able to do some calculation and propose a delta such that if the initial condition starts inside the delta ball, the whole trajectory remains inside the epsilon ball. So, once this epsilon is specified, is this delta unique?
Suppose, after a lot of calculation, we have found this delta, and somebody else does a similar calculation but obtains a different delta. Can we say that one of the two deltas is wrong because delta should be unique, or is it possible that for the same epsilon there are many deltas? This is a question we can answer without too much effort, because the delta is only a guarantee. Suppose this is our epsilon ball and, after a lot of calculation, one person finds this delta ball. If we start inside this interval, the trajectory is guaranteed to remain inside the epsilon ball; but the same guarantee is automatically satisfied for this smaller band also. If the initial condition starts inside the smaller ball, then it too is guaranteed to remain inside the epsilon ball. Why? Because once we are sure that starting anywhere inside the original initial-condition band keeps the solution inside the epsilon ball, then starting inside a smaller band certainly does too. Of course, the smaller initial-condition ball is a more conservative one, but this shows that delta is not unique: once we have found a delta greater than 0, we can take another delta that is strictly smaller and still positive, and the condition in the definition of stability is satisfied for that smaller, positive delta also. This is only to note that delta is not unique. One could ask: can we make delta larger and larger, so that for each epsilon there is a unique largest possible delta? In that sense it might be unique. It is also clear that when epsilon is made smaller, we might have to make delta smaller. One could ask: in general, what is the relation between delta and epsilon?
Our figure appears to show that delta is smaller than epsilon, but in general, must such a relation hold, with delta smaller than epsilon or delta greater than epsilon, or need no such relation exist at all? Please note that this definition is called stability in the sense of Lyapunov. It is just a definition; it is not Lyapunov's theorem on stability. The reason we have spent so much discussion on the definition of stability is that it is a difficult concept, and understanding the definition properly is very important for understanding the theorems on stability. So, before we proceed to Lyapunov's theorem on stability, we note that what we have seen so far is the definition of stability in the sense of Lyapunov. Having seen stability, what do we mean by asymptotic stability? In the definition of stability, once we are given an epsilon, we were required to find a delta that meets a certain condition. If, in addition to the condition required in the definition of stability, delta can also be chosen to satisfy the further condition that x(t) goes to 0 as t goes to infinity, and what was 0? the equilibrium point, so that x(t) converges to the equilibrium point as t goes to infinity, then the equilibrium point 0 is said to be not just stable but asymptotically stable. So we will call the equilibrium point asymptotically stable if it is stable, and for it to be stable we already know that for every epsilon we must be able to find a delta such that all initial conditions starting inside the delta ball are guaranteed to keep the entire solution inside the epsilon ball.
This delta, which was chosen to satisfy the stability condition, must in addition be choosable to satisfy the further condition that the solution converges to 0, the equilibrium point, as t tends to infinity; then the equilibrium point is not just stable but asymptotically stable. It was already stable because delta satisfied the condition that the definition of stability required; in addition, delta satisfies this further condition, and hence the equilibrium point is asymptotically stable. For every initial condition inside the delta ball, we also have x(t) going to 0 as t tends to infinity. Solutions starting close by not only remain close by, which is what x(t) being contained inside the epsilon ball means, but in fact converge to the equilibrium point. We had assumed that 0 is the equilibrium point, so the solution should converge to the equilibrium point for every initial condition starting inside the delta ball. If this is satisfied, then we say the equilibrium point is asymptotically stable. Asymptotic stability naturally implies that the equilibrium point is also stable, but not vice versa: for mere stability we do not require that the solutions converge to 0, only that they remain inside an epsilon neighborhood. So we now come to Lyapunov's theorem on stability. Having seen the definitions of stability and asymptotic stability in the sense of Lyapunov, we are now going to see Lyapunov's theorem on stability. Let x = 0 be an equilibrium point and let D be a domain that contains this equilibrium point. Let v be a function from D to R. The domain D is a subset of Rn, and v is scalar valued: v does not take vectors as its values, only scalars, hence the codomain R; at any point x, v(x) has only one component. Let v be a continuously differentiable function.
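The gap between stable and asymptotically stable can be seen on two classic closed-form examples (illustrative choices of ours): x dot = -x, whose solutions decay to 0, versus the harmonic oscillator, whose solutions circle the origin at constant norm forever.

```python
# Stable vs asymptotically stable, via closed-form solutions.
#  - x' = -x gives x(t) = x0*exp(-t): trajectories stay close AND
#    converge to 0, so the origin is asymptotically stable.
#  - the harmonic oscillator x1' = x2, x2' = -x1 gives, for
#    x(0) = (x0, 0), the solution (x0*cos t, -x0*sin t): its
#    Euclidean norm stays exactly |x0| forever, so the origin is
#    stable (take delta = epsilon) but NOT asymptotically stable.
import math

def decay(x0, t):
    return x0 * math.exp(-t)

def oscillator_norm(x0, t):
    x1, x2 = x0 * math.cos(t), -x0 * math.sin(t)
    return math.hypot(x1, x2)

print(abs(decay(0.5, 20.0)) < 1e-8)                   # True: -> 0
print(abs(oscillator_norm(0.5, 20.0) - 0.5) < 1e-12)  # True: norm constant
```

Both equilibria meet the epsilon-delta definition of stability; only the first meets the extra convergence condition.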
That is, v itself is continuous and its derivative is also continuous; that is the meaning of a continuously differentiable function. v must satisfy some conditions: v(0) = 0 at the equilibrium point 0, and inside the domain, at every other point, v is positive. v is allowed to be 0 only at the equilibrium point; at other points it is positive. Secondly, v dot is less than or equal to 0 (a 0 has been missed on the slide here) in the domain D. Now, v was a function of x, but the dot here means a derivative with respect to time; this I will clarify very soon. So the rate of change of v with respect to time at every point is less than or equal to 0. If we are able to find such a v, which is continuously differentiable, positive everywhere except at 0 where it is allowed to equal 0, and whose v dot is non-positive in D, then the equilibrium point 0 is stable. What is important to clarify is this: v was not a function of time, it was a function of x, and x took its values in Rn; so how do we go ahead and differentiate v with respect to time? This is an important point that requires clarification. v was a map from Rn to R. Why? Because x at any time instant was in Rn; our domain D was a subset of Rn, and for the time being we assume D equals Rn, hence our function v was a map from Rn to R. If v is not a function of time, how do we differentiate it with respect to time? This is something we will quickly see. We evaluate v at different points x, but through each point there is a trajectory that evolves with respect to time. So v actually depends on x, which itself depends on time: because x is changing with respect to time, as x moves to another point the value of v also changes. In this sense, v depends on time also.
Suppose this is our phase space, with axes x1 and x2, and here is some point and a trajectory evolving with respect to time: at some time t1 it was here, and at another time t2 it has moved to this point, because x itself is changing. Along this trajectory we can see how the function v changes: v has some value at this point, some value a little further along, and as x evolves along the trajectory, the value of v also changes. In that sense, v is a function of time, because we are evaluating v along a trajectory x. So let us see what it means to differentiate v with respect to time. v depends on x, and x itself depends on time t; this is what we call a composite function. So d by dt of v(x(t)) equals the partial derivative of v with respect to x, times dx by dt. It is a partial derivative because v depends on many variables, x1 up to xn, and hence this derivative is not an ordinary derivative; and x itself depends on only one independent variable, time, hence dx by dt. So if v depends on x1, x2, x3, then this is nothing but del v by del x1, del v by del x2, del v by del x3, times x1 dot, x2 dot, x3 dot. Throughout this course, we reserve the dot for the derivative with respect to time; for a derivative with respect to x or some other variable, we just write del by del x of v. So the v dot that we saw on the slide is to be understood like this: v was a function of x, in fact of x1, x2 and x3, and x1, x2 and x3 were each functions of time.
Hence, differentiating v with respect to time is nothing but del v by del x1 times x1 dot, plus del v by del x2 times x2 dot, plus del v by del x3 times x3 dot. This is the meaning of v dot. Thus, at each point in the phase space (for this figure, x has only two components), v has some value: v is a scalar function. At each point x, x is a vector, but the value of v is a scalar, like the temperature of a room at each point; the temperature itself has only one component. And as the trajectory moves through each point, in the direction in which the x trajectory moves, and given that these trajectories are all well defined, we can associate with each point the rate of change of v with respect to time. That rate of change can be defined because the trajectories through each point themselves have a rate of change defined for them. For those more interested in this topic, this brings the use of Lie derivative techniques into dynamical systems; it is not required for this course. As far as we are concerned, we want to understand the meaning of v dot, even though v was a function of x and not of time. So at each point x, not just v but also v dot is defined, because we have a differential equation x dot = f(x); that also remains to be written on this slide: x = 0 is an equilibrium point of the dynamical system x dot = f(x). With respect to that dynamical system, v dot of x is defined. What is v dot of x? In general, it is del v by del x times f(x). Why? Because this is nothing but del v by del x times x dot, and that is d by dt of v(x), which we have denoted by v dot. This dot we reserve only for the rate of change with respect to time.
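The formula v dot = (del v by del x) times f(x) can be computed directly. A minimal sketch, on illustrative choices of ours: f(x1, x2) = (x2, -x1 - x2) and v(x1, x2) = x1 squared + x2 squared; by hand, v dot = 2 x1 x2 + 2 x2 (-x1 - x2) = -2 x2 squared, which is non-positive everywhere.

```python
# v_dot(x) = grad(v)(x) . f(x), evaluated along the vector field
#   f(x1, x2) = (x2, -x1 - x2),   v(x1, x2) = x1^2 + x2^2.
# By hand: v_dot = 2*x1*x2 + 2*x2*(-x1 - x2) = -2*x2^2 <= 0,
# so v qualifies as a Lyapunov function and the origin is stable.

def f(x1, x2):
    return (x2, -x1 - x2)

def grad_v(x1, x2):
    return (2.0 * x1, 2.0 * x2)      # gradient of v = x1^2 + x2^2

def v_dot(x1, x2):
    g, fx = grad_v(x1, x2), f(x1, x2)
    return g[0] * fx[0] + g[1] * fx[1]

samples = [(1.0, 2.0), (-3.0, 0.5), (0.0, -1.0), (4.0, 0.0)]
print(all(v_dot(a, b) <= 0.0 for a, b in samples))   # True
print(v_dot(1.0, 2.0))                               # -8.0 = -2*2^2
```

Note that v dot is computed without ever solving the differential equation; that is the whole point of Lyapunov's method.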
So, at every point x there is a v dot defined, and if it is less than or equal to 0 (a 0 is also missing here on the slide), then the equilibrium point 0 is stable. Further, if the function v is continuously differentiable, equal to 0 only at the equilibrium point 0 and positive at every other point, and if v dot is strictly less than 0 at all points of the domain D except 0, then the equilibrium point is not just stable but asymptotically stable. This is Lyapunov's theorem on stability. Please note that this is only a sufficient condition for stability. When do we call the equilibrium point 0 of the dynamical system x dot = f(x) stable? We have a definition of stability. One way to prove stability is to find some function v that is continuously differentiable, whose rate of change is less than or equal to 0, and which equals 0 at the equilibrium point and is positive at every other point. If v satisfies these three conditions, we call it a Lyapunov function, and that Lyapunov function helps us prove that the equilibrium point is stable. If we pick a function v and it does not satisfy these three conditions, we are not able to conclude that the equilibrium point is not stable; it just means that perhaps v should have been chosen more judiciously. There might exist another function v that satisfies the three conditions and helps prove that the equilibrium point is stable. This is only one sufficient condition for proving stability. We will see these things in more detail in the following lectures. We also saw a sufficient condition for proving asymptotic stability.
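The asymptotic-stability part of the theorem can be illustrated on a scalar system of our own choosing: x dot = -x cubed with candidate v(x) = x squared. Then v dot = 2x times (-x cubed) = -2 x to the fourth, strictly negative for every x not equal to 0.

```python
# Asymptotic stability via Lyapunov's theorem on x' = -x^3.
# Candidate v(x) = x^2: v(0) = 0, v(x) > 0 elsewhere, and
#   v_dot(x) = dv/dx * f(x) = 2x * (-x^3) = -2*x^4 < 0 for x != 0,
# so the origin is not just stable but asymptotically stable.

def f(x):
    return -x ** 3

def v(x):
    return x * x

def v_dot(x):
    return 2.0 * x * f(x)            # equals -2*x^4

samples = [-2.0, -0.5, 0.3, 1.7]
print(all(v(x) > 0 and v_dot(x) < 0 for x in samples))   # True
print(v(0.0) == 0.0 and v_dot(0.0) == 0.0)               # True
```

Note that f(x) = -x cubed is not globally Lipschitz, yet Lyapunov's theorem still applies; the conditions are checked pointwise on the domain, with no Lipschitz bound required.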
This particular function v is called a Lyapunov function, and we will see such functions in more detail, with some examples, in the following lecture. Thank you.