Welcome everyone to Lecture 8 of Nonlinear Dynamical Systems. We will see some further extensions of Lyapunov's theorem in different contexts, beginning with global asymptotic stability. So, what is global about the asymptotic stability of an equilibrium point? If an equilibrium point is asymptotically stable, then we know that inside some small neighborhood it is in fact the only equilibrium point. We cannot make such a statement for merely stable equilibrium points, but for an asymptotically stable equilibrium point there is a neighborhood such that all trajectories beginning within that neighborhood converge to it. Hence, it is the only equilibrium point inside that neighborhood, and it is locally asymptotically stable. But we are curious whether it is a globally asymptotically stable equilibrium point. In other words, no matter where the initial condition is, do all trajectories, starting from arbitrary initial conditions, converge to the same equilibrium point asymptotically? That would make it a globally asymptotically stable equilibrium point. Can Lyapunov's theorem tell us something about this? For this purpose we need the notion of a region of attraction. So, suppose this is an equilibrium point and trajectories inside some neighborhood converge to it. We would like to make this neighborhood larger and larger. In the context of an asymptotically stable equilibrium point, we can speak of a region of attraction: no matter where the initial condition starts inside this region, the equilibrium point is attractive; it attracts all these initial conditions, so that the trajectories converge to the equilibrium point. So, we would like to look for a larger set such that all initial conditions inside this set converge to the equilibrium point.
So, we can speak of the region of attraction, which is the largest set of initial conditions such that, starting anywhere in this set, trajectories converge to the equilibrium point. The region of attraction being the whole space is what makes that equilibrium point globally asymptotically stable. Of course, this rules out any other equilibrium point: when there are more equilibrium points, the equilibrium point cannot be globally asymptotically stable. The region of attraction is genuinely difficult to calculate, but Lyapunov's theorem can give us some estimates. So, let us go back to that particular proof of Lyapunov's theorem. This was our equilibrium point and we had taken this set Omega beta: for a value of beta, we had taken all those points where the Lyapunov function has value at most beta. We also saw that if we take a larger beta value, then this Omega beta will be larger, because it contains the lower sublevel set: since beta 1 is less than beta 2, the Omega beta 2 set contains the Omega beta 1 set. We also saw that this Omega beta set is positively invariant, and hence trajectories starting inside it remain inside. Moreover, if the Lyapunov function is strictly decreasing at all points except the equilibrium point, then from anywhere inside this Omega beta set the trajectory converges. So, we could consider taking larger and larger values of beta and the corresponding Omega beta sets. This particular Greek letter is capital Omega, unlike the lowercase omega we have seen; this capital Omega is what we also use for ohms.
So, if we take a beta 3 value that is very large, then the corresponding Omega beta 3 set could look something like this, which is itself another estimate of the region of attraction. So, can we take beta larger and larger and construct these Omega beta sets? That is the next question. Unfortunately, this cannot be done for all Lyapunov functions. But whenever we can find a beta value such that the Omega beta set is a bounded set (it is automatically closed by definition, but boundedness is not guaranteed for large values of beta), then that particular set is guaranteed to be an estimate of the region of attraction. So, let us see what can happen for large values of beta: why would Omega beta not be bounded for larger values also? Here is the equilibrium point, its interior, and this particular contour is what bounds the Omega beta set. A larger value of beta will have a contour that encloses this one, but for a beta value a little larger still, it is possible that the contour does not close: it could open up and become very large. We will see some problems in which the contour sets corresponding to larger beta values do not close. This is a possibility that can happen, because of which our region-of-attraction estimate cannot be obtained from the Omega beta set. So, suppose this contour was for beta equal to 1 and this one for beta equal to 4, while for beta equal to 8 it turns out that this particular contour does not close in this direction nor in this direction. If it is not a closed contour, then that Omega beta set would not be a compact set, and we will not be able to use our result that this compact set is positively invariant and hence that the solution exists for all future time. Due to these difficulties we are not able to get a good estimate for the region of attraction.
So, what do we do in this case? Can we have a Lyapunov function that rules out such contour sets? That is the next question we will ask. So, what is the problem, before we formulate the property of such Lyapunov functions? Notice that when we go along this direction, further and further from the equilibrium point, the value of the Lyapunov function is not increasing past beta. For example, if this contour does not close for beta equal to 8, then no matter how far you go along this direction, the value 8 is never reached; similarly, along this other direction the value 8 is not reached even when we go arbitrarily far. However, when we go along this particular radial direction, the value does increase; it could become arbitrarily large along this direction, but not along the others. This is what motivates us to define a property called radial unboundedness. So, as I said, level sets of the Lyapunov function give us some estimates of the region of attraction. These could possibly be conservative, in the sense that the region of attraction might be larger, but the level set gives us only a smaller, conservative estimate. So, the next question is: can we ensure that the Lyapunov function has some property such that these contour sets will automatically close, that is, will be bounded? This is what we will call radially unbounded. We will call a Lyapunov function V radially unbounded if along every radial direction V becomes unbounded. In other words, whenever the norm of x becomes very large, which means that we are going very far from the origin (and hence very far from any other finite point also), the Lyapunov function value also becomes arbitrarily large.
So, this property, that the norm becoming large implies that the Lyapunov function also becomes large, is called radial unboundedness: V is unbounded along every radial direction. This is precisely the problem we had seen in the previous contour set, because of which contours were not closing. However, if we ensure that the Lyapunov function is radially unbounded, then it is ruled out that some contour fails to be a closed curve: non-closed contours for any level set are all ruled out. So, what is the Lyapunov theorem on global asymptotic stability? This is again a sufficient condition. Let x equal to 0 be an equilibrium point for the differential equation x dot equal to f of x, and let V be a function from R^n into R which is continuously differentiable. Continuously differentiable functions are also called C^1; why? Because they are differentiable once and the derivative is continuous. Suppose there exists such a function V with V of 0 equal to 0 and V of x greater than 0 for all nonzero x; with V dot of x strictly less than 0 for all points other than 0, like we saw before; and with the important third property that V is also radially unbounded. Then the point x equal to 0 is globally asymptotically stable. That it is asymptotically stable was already implied by the first two of these three conditions; by including the radial unboundedness condition, it is also globally asymptotically stable. Let us now come to the case of linear systems. Why? Because this is one situation where an asymptotically stable equilibrium point is in fact globally asymptotically stable. Also, this is a situation where the condition in Lyapunov's theorem is not just sufficient but also necessary.
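As a quick numerical illustration of why radial unboundedness matters, here is a minimal sketch with a specific function chosen by me as an assumption (it does not appear in the lecture): V(x) = x1^2/(1 + x1^2) + x2^2 is positive definite but not radially unbounded, because along the x1-axis V never exceeds 1, so sublevel sets for beta greater than 1 do not close along that direction.

```python
# A hedged sketch (this specific V is an illustrative assumption):
# V(x) = x1^2/(1 + x1^2) + x2^2 is positive definite but NOT radially
# unbounded: along the x1-axis, V stays below 1 no matter how far we go.
def V(x1, x2):
    return x1**2 / (1.0 + x1**2) + x2**2

# Along the radial direction (x1, 0), V remains bounded below 1 ...
along_x1 = [V(r, 0.0) for r in (1e2, 1e4, 1e6)]
# ... while along (0, x2), V grows without bound.
along_x2 = [V(0.0, r) for r in (1e2, 1e4, 1e6)]

print(max(along_x1) < 1.0)   # so the sublevel set {V <= 2} is unbounded
print(along_x2[-1] > 1e11)
```

So along one radial direction the level-8 contour is never reached, exactly the non-closing-contour situation described above.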
So, a square matrix A is called Hurwitz if all its eigenvalues have negative real part, that is, if all the eigenvalues are in the open left half complex plane. Now, x dot equal to A x certainly has the origin as an equilibrium point. This origin is an asymptotically stable equilibrium point if and only if the matrix A is Hurwitz; this is a standard result for linear systems. For linear systems, what happens to Lyapunov's theorem? We will very soon state a theorem that says that Lyapunov's theorem is necessary and sufficient. So, what is the result? Consider the system x dot equal to A x, where A is an n by n matrix. Then the matrix A is Hurwitz if and only if there exists a continuously differentiable function V from R^n into R such that V is 0 at the equilibrium point, positive everywhere else, and its rate of change is strictly negative; in other words, the Lyapunov function is strictly decreasing along every trajectory. So, please note that we have "if and only if" here, which means that the equilibrium point is asymptotically stable if and only if such a Lyapunov function exists. In fact, Lyapunov's theorem can be utilized in an even nicer way, where the rate of decrease can be prescribed. For linear systems, take V of x equal to x transpose P x for some symmetric matrix P. A symmetric matrix P is called positive definite if x transpose P x is greater than 0 for all nonzero vectors x. So, V of x is a positive definite function if and only if the matrix P is a positive definite matrix. So, what is the rate of change? For linear systems, where the Lyapunov function is a quadratic function coming from a symmetric matrix, it turns out that the rate of change also comes from another symmetric matrix: it equals x transpose Q x.
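The Hurwitz condition is easy to check numerically. Here is a small sketch; the two matrices are illustrative assumptions of mine, not from the lecture:

```python
import numpy as np

# A is Hurwitz iff every eigenvalue has strictly negative real part.
def is_hurwitz(A):
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable = np.array([[0.0, 1.0],
                     [-2.0, -3.0]])   # char. poly s^2 + 3s + 2: eigenvalues -1, -2
A_unstable = np.array([[0.0, 1.0],
                       [2.0, 1.0]])   # char. poly s^2 - s - 2: eigenvalues 2, -1

print(is_hurwitz(A_stable))    # True
print(is_hurwitz(A_unstable))  # False
```

For the stable matrix, the origin of x dot = A x is (globally) asymptotically stable; for the unstable one it is not, since one eigenvalue sits in the right half plane.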
This we will verify very quickly. So, it turns out that for linear systems, for every prescribed rate V dot of strict decrease of energy, there in fact exists an energy function V that decreases at this prescribed rate; such a Lyapunov function V exists if and only if the linear system is asymptotically stable. So, this is about the solvability of the so-called Lyapunov equation. Consider the system x dot equal to A x for an n by n matrix A. Then A is Hurwitz if and only if for every symmetric matrix Q less than 0 there exists a matrix P such that P is symmetric and positive definite, that is, P greater than 0, and A transpose P plus P A equal to Q. So, what was "less than 0"? We will quickly review the definition: less than 0 is just the negative of greater than 0; Q being negative definite just means that minus Q is positive definite. Please note that we have used the words positive definite only for symmetric matrices; that is the convention here. So, condition 1 says that the Lyapunov function is positive, which is required for a Lyapunov function, and the second condition says that the rate of change of the Lyapunov function equals x transpose Q x, which is guaranteed to be negative for all nonzero x because Q was a negative definite matrix. So, this is something we will quickly verify. Consider the system x dot equal to A x, in which the Lyapunov function is defined as x transpose P x, where P is positive definite and we already assume that P is symmetric, which means P equal to P transpose.
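The solvability claim can be exercised directly with SciPy's Lyapunov-equation solver. A hedged sketch follows: the matrices A and Q below are illustrative assumptions. Note that SciPy's solver handles the form a X + X a^H = q, so to solve A transpose P + P A = Q we pass A transpose as the first argument.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # Hurwitz: eigenvalues -1 and -2
Q = -np.eye(2)                 # prescribed negative definite rate of decrease

# Solve A^T P + P A = Q (pass A^T because SciPy solves a X + X a^H = q).
P = solve_continuous_lyapunov(A.T, Q)

print(np.allclose(P, P.T))                     # P is symmetric
print(bool(np.all(np.linalg.eigvalsh(P) > 0))) # P is positive definite
print(np.allclose(A.T @ P + P @ A, Q))         # Lyapunov equation holds
```

Since A here is Hurwitz, the theorem guarantees a symmetric positive definite solution P for this (indeed any) negative definite Q, and the checks confirm it.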
So, what is the meaning of P equal to P transpose in the context of this function? Because x transpose P x is a scalar, just a real number, its transpose is the same as the number itself. What happens when we take the transpose of such a product? We take the transpose of each individual factor and reverse the order: first we write this vector x and put a transpose on it, then we write the matrix P with a transpose, and finally this vector x transpose, transposed again, which leaves only x. So we get x transpose P transpose x. Hence, even if P were not symmetric, it appears that, as far as the effect on x transpose P x is concerned, whether you take P or P transpose it is the same. So we could as well assume without loss of generality that P is symmetric; otherwise we could have considered its so-called symmetric part. So, this is our Lyapunov function; we have already assumed P to be symmetric. Let us see what happens when we differentiate the Lyapunov function with respect to time. By the product rule, when we have a product of functions depending on x, we differentiate each factor: this is nothing but (d by dt of x) transpose times P x, plus x transpose P times d by dt of x. In the first term we have differentiated the first x; in the second term, the last x. Whether you take the derivative first and then the transpose, or the transpose first and then differentiate, it is the same. Now, d by dt of x is nothing but A times x; this is our dynamical system. So this is nothing but (A x) transpose P x plus x transpose P times A x.
So, we see that this can be simplified as x transpose (something in the middle) x: you can pull out the x, and the first term evaluates to x transpose A transpose P x, while the second is x transpose P A x. So, the rate of change of this Lyapunov function, which we have prescribed as equal to x transpose Q x, is exactly this: Q is precisely A transpose P plus P A. If P is symmetric, you can check that this matrix inside the brackets is also symmetric: take the transpose of the whole thing and you get back the same two terms. So it is guaranteed to be symmetric, and it is precisely V dot of x, equal to x transpose Q x. Now, to prescribe V dot of x equal to x transpose Q x with the rate of change negative for all nonzero x is nothing but the definition of Q less than 0, Q a negative definite matrix. If this is prescribed, it means that A transpose P plus P A is prescribed to be equal to Q, and of course the dynamical system A is already given. Can we find P such that this equation is satisfied, once we are given A and Q? To be given A is just to be given the dynamical system, while to be given Q is to be given a prescribed rate of decrease of the Lyapunov function. So, given A and Q in this matrix equation, each of the three matrices A, P and Q is n by n, with P and Q symmetric. If A and Q are given, can we find a P such that this equation is satisfied? That is the first question. This particular equation is called the Lyapunov equation, motivated of course by Lyapunov's theorem. It is a matrix equation, and we are interested in its solvability: given A and Q, find a P.
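The identity just derived, that (A x) transpose P x plus x transpose P (A x) equals x transpose (A transpose P plus P A) x, can be sanity-checked numerically. A minimal sketch, with random A, P and x as illustrative assumptions:

```python
import numpy as np

# Numeric check of the identity derived above:
#   (A x)^T P x + x^T P (A x) = x^T (A^T P + P A) x
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
S = rng.standard_normal((n, n))
P = S + S.T                        # any symmetric P works for the identity
x = rng.standard_normal(n)

lhs = (A @ x) @ P @ x + x @ P @ (A @ x)   # V_dot computed term by term
rhs = x @ (A.T @ P + P @ A) @ x           # V_dot as x^T Q x
print(np.isclose(lhs, rhs))
```

This is only the algebraic identity; the nontrivial part of the theorem is that, for A Hurwitz, the equation A transpose P plus P A equal to Q is solvable with P positive definite for every negative definite Q.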
Moreover, we are also interested in whether we can ensure that the P we find also satisfies P positive definite; that is what we want from Lyapunov functions: they should also be positive definite functions. This particular result says that you can find such a P if and only if the matrix A is Hurwitz. So, let us look at this particular slide again. Consider the system x dot equal to A x for an n by n matrix A. Then A is Hurwitz, which means all its eigenvalues are in the open left half plane, if and only if for every prescribed V dot, in other words for every negative definite Q, there exists a positive definite matrix P satisfying the Lyapunov equation, in other words A transpose P plus P A equal to Q. So, we are now back to nonlinear systems. We have seen what happens with Lyapunov's theorem for linear systems: there it is necessary and sufficient. Here, in the situation of nonlinear systems, it is only sufficient. So, we take a candidate V of x greater than 0 and we check whether, along the trajectories of x dot equal to f of x, V dot of x is less than 0. In case V dot of x is not less than 0, there are two possibilities. One is that the equilibrium point is not asymptotically stable. The other possibility is that we did not choose our candidate V of x carefully, which means that we should look for another V. We might have some physical reason, some intuition, why the equilibrium point of the differential equation is in fact asymptotically stable, but our candidate Lyapunov function did not come out strictly negative. Hence, we should not go ahead and conclude that it is not asymptotically stable, because the Lyapunov function rate not being negative definite implies one of these two possibilities, possibly both. Whether it can be both or not is a good exercise to think about.
So, because the Lyapunov theorem is only a sufficient condition for either stability or asymptotic stability, it can be either of these two cases. So, an important question is: what about the converse? If we knew for sure that the system is asymptotically stable, can we say that there should exist a Lyapunov function? For the case of linear systems we have already seen this. So, under some conditions on f, and knowing by some other means that the equilibrium point is stable or asymptotically stable, can we guarantee the existence of a Lyapunov function? One way to do it is this: since we already know that we can find Lyapunov functions for linear systems, the linear system obtained by linearizing the nonlinear system at an equilibrium point can also be studied, for studying the stability of that equilibrium point. So, suppose a is an equilibrium point, which means that f evaluated at a equals the zero vector, and define the matrix A as the Jacobian matrix of f at the point x equal to a. We will very soon see the big matrix definition of this del f by del x evaluated at x equal to a.
So, consider the linear system z dot equal to A z. What are the equilibrium points for this? z equal to 0 is certainly one equilibrium point, and we have seen that it is the only equilibrium point if and only if A is a nonsingular matrix. If A is Hurwitz, then all its eigenvalues are in the open left half plane; in particular, no eigenvalue can be at the origin, and hence A is nonsingular. Then the linear system has only one equilibrium point, z equal to 0, and we have already seen that this equilibrium point is asymptotically stable if and only if A is Hurwitz. Then we can use a Lyapunov function for the linear system for the nonlinear system also. So, let us see this in a little more detail. f is a map from R^n to R^n, and A is the so-called Jacobian matrix of f evaluated at a. What is the Jacobian matrix? The first row differentiates the first component of f, first with respect to x1, then x2, and so on. The second row of the matrix A is the derivative of f2 with respect to x1, x2, etc. Similarly, we construct the last row of the matrix A as the derivatives of fn: because f is a map from R^n to R^n, f itself has n components, which we call f1 up to fn. This Jacobian matrix is itself a function of x1 up to xn; we evaluate it at x equal to a, upon which it becomes a constant matrix, and for this constant matrix we consider its eigenvalues and check whether the matrix is Hurwitz or not. For the linear system we need z dot equal to A z (I am sorry about the mistake on the slide here). The state z of the linear system is like the deviation from a: z equal to x minus a. So, A Hurwitz implies that the point x equal to a is an asymptotically stable equilibrium point for the nonlinear system x dot equal to f of x.
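The Jacobian construction just described can be sketched numerically with central differences. The vector field f below is my own illustrative assumption (any smooth f with f(0) = 0 would do); for it, the analytic Jacobian at the origin is the matrix with rows (-1, 0) and (1, -1), which happens to be Hurwitz.

```python
import numpy as np

# Assumed example vector field with equilibrium at the origin: f(0) = 0.
def f(x):
    x1, x2 = x
    return np.array([-x1 + x2**2, x1 - x2])

# Estimate the Jacobian del f / del x at point a by central differences:
# column j holds the partial derivatives with respect to x_j.
def jacobian(f, a, h=1e-6):
    n = len(a)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(a + e) - f(a - e)) / (2 * h)
    return J

a = np.zeros(2)                 # the equilibrium point
A = jacobian(f, a)
print(np.allclose(A, [[-1.0, 0.0], [1.0, -1.0]], atol=1e-6))
print(bool(np.all(np.linalg.eigvals(A).real < 0)))   # A is Hurwitz here
```

Since this A is Hurwitz, the linearization theorem lets us conclude that the origin is an asymptotically stable equilibrium point of the nonlinear system as well.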
What about a Lyapunov function for the nonlinear system? A Lyapunov function for the linear system already exists, we know, because for linear systems the Lyapunov theorem is necessary and sufficient. So, for the nonlinear system also we will consider using this Lyapunov function. The Lyapunov function for the linear system, whose existence was guaranteed, is also a Lyapunov function for the nonlinear system. This is a great feature: we linearize, and if the linearized system is asymptotically stable, then the nonlinear system is also asymptotically stable, and we can in fact use the Lyapunov function from the linear system as a Lyapunov function for the nonlinear system also. Thus, the linearized system being asymptotically stable implies that the nonlinear system is asymptotically stable at that equilibrium point. At another equilibrium point we might consider linearizing again, and the eigenvalues there may or may not be in the left half plane. An equilibrium point at which the linearization gives a matrix A that is Hurwitz is what we will call exponentially stable. This is a good type of asymptotically stable equilibrium point, which is automatic for linear systems: whenever a linear system is asymptotically stable, its equilibrium point is also exponentially stable. We will see this in a little more detail later. So, as I said, this Lyapunov theorem is only a sufficient condition. If one particular candidate fails to be less than or equal to 0, or strictly less than 0, then we might consider another Lyapunov candidate. But after some time, when various candidates do not work, we could consider trying to prove instability using one or more of these candidates. So, here we have a sufficient theorem for instability: what should that function V satisfy so that we conclude instability?
After we have failed to conclude stability for some time, we could consider proving instability instead, since none of our candidates worked. This is again a sufficient condition. Let x equal to 0 be an equilibrium point for the differential equation x dot equal to f of x, and let V be a map from R^n to R that is continuously differentiable with V of 0 equal to 0. Suppose V of x naught is greater than 0 for some point x naught of arbitrarily small length: no matter how small a positive length somebody specifies, we can find a point x naught whose length is smaller than that specification and such that V of x naught is greater than 0. Construct the set U of all points inside a ball B_r where the function V is positive, for some small enough radius r greater than 0. Suppose it turns out that V is increasing inside this set U. Then x equal to 0 is an unstable equilibrium point. So, if we can show that, inside some set U that comes arbitrarily close to the equilibrium point 0, the function V and its rate of change are both positive, then this particular equilibrium point, which we initially aimed to prove stable, is in fact unstable. This is a sufficient condition to show that an equilibrium point is unstable. Note that there is nothing very important about V and V dot being positive: both could be negative also, because then we could have taken minus V instead of V. So, if at some points arbitrarily close to an equilibrium point a function V and its derivative V dot have the same sign and are both nonzero, then that equilibrium point is unstable. This is the same theorem restated in plain words. So, that completes the Lyapunov theorem and its various extensions.
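To make the instability test concrete, here is a hedged sketch on the simplest assumed example, the scalar system x dot = x: taking V(x) = x^2/2 gives V dot = x times x dot = x^2, which is positive for every nonzero x, and V(x0) > 0 for arbitrarily small x0, so the theorem says the origin is unstable. The simulation only illustrates the conclusion; the step size and horizon are arbitrary choices.

```python
# Assumed example: x_dot = x, V(x) = x^2 / 2, so V_dot = x^2 > 0 for x != 0.
# The instability theorem then says the origin is unstable; we watch a
# trajectory from an arbitrarily small initial condition grow.
def simulate(x0, dt=1e-3, steps=5000):
    x = x0
    for _ in range(steps):    # forward Euler on x_dot = x
        x += dt * x
    return x

x0 = 1e-6                     # arbitrarily small initial condition
xT = simulate(x0)
V0, VT = 0.5 * x0**2, 0.5 * xT**2

print(VT > V0)                       # V increases along the trajectory
print(abs(xT) > 100 * abs(x0))       # the trajectory leaves small neighborhoods
```

However small we pick x0, the trajectory eventually escapes any fixed small ball, which is exactly the failure of stability.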
There is one important extension, called LaSalle's invariance principle, that we will now see. Before we come to this particular principle, we will see an example. So, consider this particular system; it comes, for example, when studying the differential equation of a pendulum. Suppose there is this pendulum, there is this deflection theta, and this is how gravitation acts here. Then we have two states: call x1 equal to theta and x2 equal to theta dot, so that x2 is nothing but x1 dot. Notice that this equation is precisely the first equation written here. Also, this is a pendulum with some friction: whenever it is moving at some velocity theta dot, there is some deceleration coming from the frictional force. The term b times x2 has the interpretation that there is some deceleration because of friction. In other words, the second differential equation says that this pair of equations comes from just one second-order differential equation. So, this particular system is what we are going to study now, in the context of Lyapunov's theorem and also of the next principle that we will see. From physical principles we can consider a Lyapunov function: this is nothing but the potential energy plus the kinetic energy. Assume for now that the kinetic energy is just taken as theta dot square; whether this is a good Lyapunov function or not we will decide soon. Of course, I have written theta and theta dot here; in terms of x1 and x2 it is 1 minus cos of x1, plus x2 square. So, let us see what happens to V dot of x, the rate of change of this with respect to time. This is nothing but d by dt of (1 minus cos of x1) plus d by dt of x2 square, and the derivative of cos of something is minus sin of that.
So, this becomes plus sin x1 times x1 dot (x1 itself is a function of time and we are differentiating with respect to time) plus 2 times x2 times x2 dot, the derivative of the second term. Now we use x1 dot and x2 dot from our dynamical system equations and put them here: sin of x1 times x2 (x1 dot is nothing but x2) plus 2 times x2 times (minus sin of x1 minus b x2), which was x2 dot. When we evaluate this, the term sin x1 times x2 appears here also, but with a factor 2, and it is not cancelling well; so we are not able to conclude. The whole thing equals minus x2 sin x1 minus 2 b x2 square. Of course, the frictional coefficient satisfies b greater than 0, so the second term is helping us show that this is less than or equal to 0. Unfortunately, is the first term guaranteed to be always negative? Can we say that x2 times sin of x1 is always greater than 0? If we could say this, then the whole expression would always be negative, and we would show that the pendulum has (x1, x2) equal to (0, 0) as an asymptotically stable equilibrium point. In fact, we are not able to say this. Of course, that could only mean that this Lyapunov function was not a good candidate. But then we could also go back and see that this was not really the kinetic energy: while the first part is the potential energy, the second is not the kinetic energy; we could consider dividing by 2. So, let me make a change on the same slide with a different color pen: here we have this divided by 2, and we have this again divided by 2. This cancels off this 2, and this cancels off this 2 also, because of which we now have a perfect cancellation here; the 2 is gone, but that is not a problem. We will see the same example in a different context also. But at least here it appears that V of x equal to 1 minus cos of x1 plus x2 square is not a Lyapunov function.
But V of x equal to 1 minus cos of x1 plus x2 square by 2 has V dot of x equal to minus b x2 square, which is less than or equal to 0. Hence, the equilibrium point (0, 0) is stable. Of course, we should have checked at the start whether (0, 0) is an equilibrium point to begin with; one can verify in the system of equations that (0, 0) is indeed an equilibrium point, which is why we are studying whether it is stable, unstable, or asymptotically stable. So, the pendulum with some friction has been shown to have a stable equilibrium point. Is it asymptotically stable? Can we say that V dot is strictly less than 0 whenever (x1, x2) is not the point (0, 0)? Can we say this for the second candidate? (The first one was not a Lyapunov function at all.) We would say that the rate of change of the Lyapunov function is negative definite if the only point where it equals 0 is (0, 0). But if minus b x2 square equal to 0 (this is the question mark here), that only implies x2 equal to 0; we cannot say anything about x1, so x1 can be arbitrary. In other words, whenever the pendulum is not moving, whenever it is stationary, at all those points the rate of change of the Lyapunov function equals 0; the Lyapunov function is not strictly decreasing there, yet that does not imply that the x1 component is also 0; only the x2 component is 0. So, there are various points, not just the equilibrium point, where the rate of change of the Lyapunov function becomes 0, and hence this Lyapunov candidate does not help us prove asymptotic stability. However, do we think that the equilibrium point is asymptotically stable? We could consider linearizing at the equilibrium point and checking.
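The corrected energy computation above can be checked numerically. This is a small sketch; the value b = 0.5 and the sampling are illustrative assumptions, while V and the dynamics are exactly the ones from the lecture: V = (1 - cos x1) + x2^2/2 along x1 dot = x2, x2 dot = -sin x1 - b x2 should give V dot = -b x2^2 identically.

```python
import numpy as np

b = 0.5   # assumed friction coefficient, b > 0

# V_dot by the chain rule: (dV/dx1) * x1_dot + (dV/dx2) * x2_dot
# with dV/dx1 = sin(x1), dV/dx2 = x2.
xs = np.random.default_rng(1).uniform(-np.pi, np.pi, (100, 2))
x1, x2 = xs[:, 0], xs[:, 1]
vdots = np.sin(x1) * x2 + x2 * (-np.sin(x1) - b * x2)

# The sin(x1)*x2 terms cancel exactly, leaving -b * x2^2 <= 0.
print(np.allclose(vdots, -b * x2**2))
```

Note that V dot vanishes wherever x2 = 0, not only at the origin, which is exactly why this candidate proves stability but not asymptotic stability.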
So, the equilibrium point is (x1, x2) equal to (0, 0). We can differentiate and construct our matrix A. The first function is not a function of x1, hence we have 0 here; it is a function of x2, and its partial derivative with respect to x2 is equal to 1. The second function is our f2 (the first one is f1 of (x1, x2) and the second is f2 of (x1, x2)); we are going to use the big matrix we saw on the slide. The entry that comes here is the partial derivative of f2 with respect to x1, which equals minus cos of x1, and what comes next is the partial derivative of f2 with respect to x2, which equals minus b. This is, as expected, a function of x1 and x2, so we evaluate at (x1, x2) equal to (0, 0): cos of 0 equals 1, so that entry becomes minus 1, and we get the matrix with rows (0, 1) and (-1, -b). So, what about this matrix? Does it have eigenvalues in the open left half plane? At least our Lyapunov function could not prove asymptotic stability; let us investigate the eigenvalues of the linearized system, the eigenvalues of A, assuming that the friction coefficient b itself is positive (only then is it a deceleration due to friction). The eigenvalues of A can be found from the determinant of sI minus A, in other words the determinant of the matrix with rows (s, -1) and (1, s + b). This equals s times (s + b) plus 1, which is s square plus b s plus 1. So, what can we say about the roots of this polynomial? The roots of this polynomial are precisely the eigenvalues of the matrix A: we have constructed the characteristic polynomial by finding the determinant of the sI minus A matrix, and we need the roots of s square plus b s plus 1. Of course, we could use the quadratic formula and find the roots.
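Before going through the root analysis by hand, the linearization can be checked numerically: for A with rows (0, 1) and (-1, -b), the eigenvalues are the roots of s^2 + b s + 1 and should lie in the open left half plane for any b > 0. A short sketch; the b values tried are arbitrary assumptions covering both the complex-root and real-root cases:

```python
import numpy as np

# Linearized pendulum at the origin: A = [[0, 1], [-1, -b]],
# characteristic polynomial s^2 + b s + 1.
for b in (0.1, 2.0, 5.0):            # complex roots, repeated, and real roots
    A = np.array([[0.0, 1.0], [-1.0, -b]])
    eigs = np.linalg.eigvals(A)
    roots = np.roots([1.0, b, 1.0])  # roots of s^2 + b s + 1
    print(np.allclose(np.sort_complex(eigs), np.sort_complex(roots)))
    print(bool(np.all(eigs.real < 0)))   # Hurwitz for every b > 0
```

Both checks print True for every sampled b, matching the hand computation that follows.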
Using that b is positive, it turns out that the roots are in the left half complex plane. Why? Note first that the product of the roots equals +1. By the quadratic formula, the roots are (-b plus or minus the square root of b^2 - 4) divided by 2, since the coefficients a and c are both equal to 1. The roots need not be real; they could be real or complex. Would we have complex roots? Yes, when the discriminant is negative, in other words when b^2 < 4; that is when the quantity under the square root sign becomes negative. In that case, the imaginary part is positive for one root and negative for the other, but the real part of both is -b/2, and we already assumed b is positive; hence, clearly the roots are in the open left half complex plane, and they are complex. But what about when the roots are real, that is, when b^2 is greater than or equal to 4? In that case, can we say that the square root of b^2 - 4 is strictly less than b? Yes: b^2 - 4 is b^2 with 4 subtracted, so its square root cannot even equal b; it is strictly less than b. In other words, taking the positive square root and adding it to -b, the quantity -b + square root of b^2 - 4 is still strictly less than 0, because the square root of b^2 - 4 is strictly less than b. So this root is also negative.
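Both cases of the argument, complex roots when b^2 < 4 and real roots when b^2 >= 4, can be exercised directly on the characteristic polynomial. A small sketch with assumed sample values of b straddling b^2 = 4:

```python
import numpy as np

for b in (1.0, 2.0, 5.0):  # b^2 < 4, b^2 = 4, b^2 > 4
    roots = np.roots([1.0, b, 1.0])  # roots of s^2 + b*s + 1
    print(b, roots)
    # Open left half plane: every root has strictly negative real part
    assert all(r.real < 0 for r in roots)
    # Product of the roots equals c/a = +1
    assert abs(np.prod(roots) - 1.0) < 1e-9
```

For b = 1.0 the roots come out as a complex-conjugate pair with real part -b/2; for b = 5.0 they are two distinct negative reals.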
Of course, with the negative square root the result is only further negative. So, as long as b is positive, whether the roots are real or complex, they are in the left half complex plane; again, roots in C minus. We have investigated both cases, so we know that the eigenvalues of that matrix are in the open left half plane. Open means not on the imaginary axis either: strictly inside the left half complex plane. So, what did we conclude from this? The linearized system for this differential equation has the origin as an asymptotically stable equilibrium point. Our Lyapunov function had been constructed from physical principles; what physical principle? We used the Lyapunov function as a notion of energy, adding the potential energy and the kinetic energy, and in the kinetic energy term the number 2 is important: dividing the velocity squared by 2 was required. In spite of constructing the energy function from physical principles, we could not prove that the origin is an asymptotically stable equilibrium point; we could only show that it is stable. But the linearized system has eigenvalues in the open left half complex plane, because of which the origin for the linearized system is asymptotically stable. So, there should exist a Lyapunov function; this particular symbol just says "there exists". As soon as the linear system is asymptotically stable, that matrix A is Hurwitz. Hence, as we already saw, there should exist a Lyapunov function, because for linear systems Lyapunov's theorem is not just sufficient but also necessary. So, if we know that the origin is an asymptotically stable equilibrium point, then there is a guarantee that there is a Lyapunov function, and we can in fact construct it as x transpose P x. This x transpose P x is also a Lyapunov function for the non-linear system.
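The construction of x transpose P x amounts to solving the Lyapunov equation A transpose P + P A = -Q for a prescribed Q > 0. A minimal sketch, assuming b = 1 and the prescribed rate of decrease Q = I:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

b = 1.0  # assumed sample friction coefficient
A = np.array([[0.0, 1.0],
              [-1.0, -b]])

# Solve A^T P + P A = -Q for a prescribed rate of decrease Q > 0
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

# Since A is Hurwitz and Q > 0, P is symmetric positive definite,
# so V(x) = x^T P x is a valid Lyapunov function for the linear system
print(P)
print(np.linalg.eigvalsh(P))  # both eigenvalues positive
```

This is the exercise hinted at in the lecture: prescribe the decrease Q, solve for P, and check that P is positive definite.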
So, we know that there should exist some Lyapunov function, even though it could not be motivated from physical energy principles, which can in fact prove asymptotic stability. How do we actually find it? This is something we can do as an exercise, because for linear systems we can prescribe the rate of decrease and still be able to find the energy function that has precisely that decrease. This energy function is also guaranteed to be positive definite simply because the matrix A is Hurwitz. So, we will see an example of such a prescribed rate of decrease and solve the Lyapunov equation in an exercise. The next thing to do is to consider using that same Lyapunov function, which could prove only stability but not asymptotic stability, to arrive at the conclusion of asymptotic stability by using the so-called LaSalle's invariance principle. LaSalle's invariance principle helps us, for example, in this situation where Lyapunov's theorem could prove only stability but not asymptotic stability. It helps in other situations also, as we will now see. For example, there could be a situation where we want convergence not to an equilibrium point but to a set. What are equilibrium sets? What if the trajectories are converging not to a point but to a set? We could speak about stability for sets also; so far, we spoke only of equilibrium points, but we could also consider sets. In this context, there is LaSalle's invariance principle. Let Omega be a compact set that is positively invariant, that is, positively invariant with respect to the dynamics of the function f. We are considering the system x dot = f(x) here, and in that context we have a compact set Omega. Suppose you could find a function V from the domain D to R, and suppose this function V is C1.
C1, as we already saw, means continuously differentiable, and this function V satisfies the property that its rate of change is less than or equal to 0: we are not assuming it is strictly decreasing, only non-increasing on Omega. Now consider the set E of all those points in Omega where V dot of x is in fact equal to 0. Inside this set E, we look for the largest invariant set. Suppose we construct the set M, the largest invariant set sitting inside E. Then LaSalle's invariance principle says that every solution that starts in Omega approaches M as t tends to infinity. (C1, as I said, stands for at least once continuously differentiable.) So, what does it mean for a solution to approach a set M? This is something we have not seen yet, so we will see it quickly before we end today's lecture. "Every solution starting in Omega approaches M as t tends to infinity" means: for every initial condition x0 in Omega, x(t) tends to M as t tends to infinity. But x(t) tending to M means converging to a set, and for that purpose we are yet to define convergence to a set M. We can speak of the distance of a point p, not just from another point, but from a set: it is the shortest distance of p with respect to the various points q you can take in the set M. So, consider the set M; the distance of the point p from the set M is the infimum (for the time being, you can think of the infimum as a minimum) of the norm of p minus q.
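The point-to-set distance can be sketched concretely. A minimal illustration, assuming M is approximated by a finite sample of points (here, hypothetically, points on the x1-axis):

```python
import numpy as np

def dist_to_set(p, M_points):
    """Distance from point p to a set M, approximated over sampled points:
    dist(p, M) = inf over q in M of ||p - q||."""
    p = np.asarray(p, dtype=float)
    return min(np.linalg.norm(p - np.asarray(q)) for q in M_points)

# Example: M sampled as points (q, 0) on the x1-axis
M = [np.array([q, 0.0]) for q in np.linspace(-3.0, 3.0, 601)]

print(dist_to_set([0.5, 0.0], M))  # point lies in M: distance ~ 0
print(dist_to_set([0.5, 1.0], M))  # nearest point is (0.5, 0): distance ~ 1
```

This matches the remark in the lecture: a point sitting inside M is at distance 0 from M, because it is at distance 0 from itself.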
You take different points q, find the distance between p and q, and look at the smallest such value as you vary the point q across the set M. So, we take the point q in M which is closest to p, and the distance between p and that q is what we call the distance of p from the set M. To converge to a set M then means that the distance of the vector x(t) from the set M, which is a nonnegative real number, goes to 0 as t tends to infinity. You can check that if the point p is sitting inside the set M, this distance is 0; why? Because p is at distance 0 from itself, which is already inside the set. So, it is with respect to this notion of the distance of a point p from a set M, and this distance going to 0, that we speak of a trajectory approaching a set M. Using this, we look back at LaSalle's invariance principle. Suppose the conditions are satisfied: you can find a compact set Omega that is positively invariant with respect to the dynamics of f, and a function V that is continuously differentiable and non-increasing, its rate of change less than or equal to 0, on Omega. Inside Omega, you construct the set E of all points where V dot equals 0, and inside E you construct the largest invariant set M. LaSalle's invariance principle then says that every solution starting in Omega approaches M as t tends to infinity. We will see that this invariance principle allows us to use that same Lyapunov function, motivated by physical principles, to conclude in fact asymptotic stability. This we will see in the following lecture. Thank you.
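As a numerical preview of what the invariance principle will give us, we can simulate the damped pendulum and watch the trajectory's distance to the origin shrink. This is a sketch under assumptions: the dynamics x1' = x2, x2' = -sin(x1) - b x2, a sample value b = 0.5, and an initial condition (1, 0); the set M here is just the single point {(0, 0)}:

```python
import numpy as np
from scipy.integrate import solve_ivp

b = 0.5  # assumed friction coefficient

def pendulum(t, x):
    # Damped pendulum: x1' = x2, x2' = -sin(x1) - b*x2
    return [x[1], -np.sin(x[0]) - b * x[1]]

sol = solve_ivp(pendulum, (0.0, 60.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)

# Distance of x(t) from the set M = {(0, 0)} is just the norm of x(t);
# it should have decayed to nearly 0 by the final time
print(np.linalg.norm(sol.y[:, -1]))
```

Even though the energy-based V dot vanishes on the whole line x2 = 0, the trajectory still converges to the origin, which is exactly what LaSalle's principle will let us prove in the next lecture.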