Hello, everyone. Welcome to yet another session of our NPTEL course on Non-Linear and Adaptive Control. I'm Srikant Sukumar from Systems and Control, IIT Bombay. We are, as usual, in front of our very cool representative image of a rover on Mars, motivating us to study the design and analysis of algorithms that will be driving systems such as these.

Last time, we looked at a couple of examples illustrating the different notions of stability we had defined: asymptotic, globally asymptotic, uniformly asymptotic, globally uniformly asymptotic, exponential, and globally exponential stability. I do hope all of you remember the acronyms (AS, GAS, UAS, GUAS, ES, GES), which are easier to remember than the full-length names. The important thing to keep in mind is that we almost always need just two properties: stability and attractivity. Once we have a good handle on these two notions, all the other definitions and notions follow from there.

After that, we looked at two examples. The first was a system that is attractive but not stable. The second was the standard pendulum dynamics, rewritten in state-space form; that system is globally uniformly asymptotically stable (GUAS), and its phase-plane portrait looks like a curve spiraling in toward the origin in the (x1, x2) plane.

That is where we were last time, and today we want to look at a few more examples. This is lecture 3.5, the fifth lecture of week three, so we continue our excursion into examples illustrating the different notions of stability.

The first example is ẋ = −σx³ with some σ > 0. Because this is a scalar system, x ∈ ℝ, we can actually compute the solution explicitly. Assuming x(t0) = x0, the solution can be written in the compact form

x(t) = ± x0 / √(2σ x0² (t − t0) + 1),

with the sign matching that of x0. The conclusion, written right there, is that the origin is globally asymptotically stable; but let us look at it a little more carefully. The first question is that of stability. Taking absolute values, which simply removes the ± sign, we can also write

|x(t)| = 1 / √(2σ (t − t0) + 1/x0²).

We can use either form; it does not matter, they are the same expression.
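The closed form is stated in the lecture without a derivation; for completeness, here is a short separation-of-variables sketch (my addition, not part of the original lecture) that recovers it.

```latex
\dot{x} = -\sigma x^{3}
\;\Longrightarrow\;
\int_{x_0}^{x(t)} \frac{d\xi}{\xi^{3}} = -\sigma \int_{t_0}^{t} ds
\;\Longrightarrow\;
-\frac{1}{2x(t)^{2}} + \frac{1}{2x_0^{2}} = -\sigma (t - t_0),
```

and rearranging,

```latex
\frac{1}{x(t)^{2}} = 2\sigma (t - t_0) + \frac{1}{x_0^{2}}
\;\Longrightarrow\;
x(t) = \pm \frac{x_0}{\sqrt{2\sigma x_0^{2} (t - t_0) + 1}}.
```

The sign is fixed by x0, since a solution of this equation never crosses the origin.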
Now, what do we want? For stability, we usually want |x(t)| < ε, because 0 is the equilibrium: it should be obvious that xe = 0 here, so even without doing any change of coordinates, the equilibrium is directly at the origin. So we want |x(t)| < ε for all t ≥ t0, and we want to find some δ such that |x0| < δ guarantees this. To do that, let us simplify the expression.

Here is what I know for sure: as soon as t becomes greater than t0, the term 2σ(t − t0) is a positive number. It adds a positive quantity to the denominator, the denominator becomes larger, and therefore the whole fraction becomes smaller. So the largest value this entire quantity can take is at t = t0 exactly. And if I plug in t = t0, the 2σ(t − t0) term goes away, and all I am left with is |x0|. So one thing that should be evident to you immediately is that

|x(t)| = 1 / √(2σ(t − t0) + 1/x0²) ≤ |x0| for all t ≥ t0.

That is the cool thing: if I ensure |x0| < ε, then |x(t)| ≤ |x0| < ε as well. So what does it mean? It means I can choose δ exactly equal to ε. If the initial condition starts within the ε-ball, then the solution, which never exceeds the initial condition in magnitude, also stays within the ε-ball. Note that this δ is independent of t0; we did not even have to work for it. And what does that mean? It means the origin is uniformly stable.

Now, what can I say about attractivity? Attractivity is rather straightforward: as I keep increasing t, the 2σ(t − t0) term in the denominator keeps growing, no matter what initial condition I choose. Sure, x0 contributes to the denominator through the 1/x0² term.
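Before finishing the attractivity argument, here is a quick numerical sanity check, a minimal sketch of my own with σ = 1, x0 = 2, t0 = 0 chosen arbitrarily: it integrates ẋ = −σx³ numerically, compares against the closed-form solution, and confirms both the bound |x(t)| ≤ |x0| and the decay toward 0.

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, x0, t0 = 1.0, 2.0, 0.0  # arbitrary illustrative choices

def closed_form(t):
    # x(t) = x0 / sqrt(2*sigma*x0^2*(t - t0) + 1); the sign of x0 is preserved
    return x0 / np.sqrt(2.0 * sigma * x0**2 * (t - t0) + 1.0)

ts = np.linspace(t0, t0 + 20.0, 201)
sol = solve_ivp(lambda t, x: [-sigma * x[0]**3], (t0, ts[-1]), [x0],
                t_eval=ts, rtol=1e-10, atol=1e-12)

print("max |numeric - closed form| :", np.max(np.abs(sol.y[0] - closed_form(ts))))
print("|x(t)| <= |x0| for all t?   :", bool(np.all(np.abs(sol.y[0]) <= abs(x0) + 1e-9)))
print("|x(t)| at t = 20 (decaying) :", abs(sol.y[0][-1]))
```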
But irrespective of the size of that 1/x0² contribution, the 2σ(t − t0) term is definitely going to blow up to infinity, and the solution is going to converge to the origin. So for attractivity, it is in fact obvious that the origin is globally attractive. The fact that it is uniformly attractive is also evident from the solution depending on t and t0 only through the difference t − t0: the initial time does not matter. Moreover, the bound on x0 is global, and whenever attractivity is global, uniformity does not have to be thought about separately, because δ is all of ℝⁿ (all of ℝ in this case). The whole idea of uniformity is whether the bound δ depends on t0; but if the bound is all of ℝ, then it depends on nothing. So the origin is globally uniformly attractive.

Now, if I combine these two properties, globally uniformly attractive and uniformly stable, what do I get? Globally uniformly asymptotically stable. Not just globally asymptotically stable, but globally uniformly asymptotically stable. So this is a rather nice example: first of all, we could actually solve it, and it has a rather good set of properties. The origin is globally uniformly asymptotically stable for this system.

Excellent. Let us look at the next example; we are going through a bunch of examples to get a good feel for what a stable system, an asymptotically stable system, an attractive system, and so on look like. This one is a second-order system:

ẋ1 = x2,
ẋ2 = −x1 − (2 / (1 + t)) x2.

Assuming initial conditions x1(t0) = x10 and x2(t0) = x20, the solutions contain sinusoidal terms scaled by 1/(1 + t). I have to look at this example again and will verify the exact expressions later on, but a system of this sort is non-uniformly asymptotically stable. It should be evident that it is definitely converging: 1 + t goes to infinity as t → ∞, and therefore all those terms go to 0, so attractivity is rather easy. It also has stability; we are not really proving it here, because it is not going to be very easy to prove in this particular case. But if you try to draw the phase-plane portraits, you will find that the convergence is not uniform in the initial time: the system is asymptotically stable, but not uniformly asymptotically stable. This is another special case, and it tends to happen when there is a time dependence on the right-hand side; an explicit time dependence tends to introduce some kind of non-uniformity in the asymptotic stability properties. This is pretty standard.
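The deferred verification invites a quick experiment. Here is a minimal simulation sketch of my own (initial state and time window chosen arbitrarily) that starts the system from the same initial state at different initial times t0; if the non-uniformity claim holds, the decay over a fixed elapsed time t − t0 should get slower as t0 grows.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    # x1' = x2,  x2' = -x1 - (2/(1+t)) x2
    return [x[1], -x[0] - 2.0 / (1.0 + t) * x[1]]

x_init = [1.0, 0.0]   # same initial state in every run
window = 50.0         # fixed elapsed time t - t0

for t0 in (0.0, 10.0, 100.0):
    sol = solve_ivp(rhs, (t0, t0 + window), x_init, rtol=1e-9, atol=1e-12)
    print(f"t0 = {t0:6.1f}: ||x(t0 + {window:.0f})|| = "
          f"{np.linalg.norm(sol.y[:, -1]):.4e}")
```

The final norm growing with t0, even though the elapsed time is identical in every run, is exactly the failure of uniform attractivity described above.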
All right. So what about the next system? This is everybody's favorite linear scalar example, very simple: ẋ = −kx with k > 0, where x ∈ ℝ and x(t0) = x0. The solution is of course very well known to everybody: x(t) = x0 e^(−k(t − t0)). And we claim that this is exponentially stable. Just look at the definition; in fact, since this is a linear system, let us look at global exponential stability directly. Global exponential stability requires the existence of constants a, b > 0 such that the solution satisfies

|x(t)| ≤ a |x0| e^(−b(t − t0)) for all t ≥ t0 ≥ 0 and for all initial conditions.

You can see that choosing a = 1 and b = k satisfies the global exponential stability condition, so the origin is globally exponentially stable.

Linear systems in general, and linear time-invariant systems in particular, have some really nice properties here. The first, for general linear systems, is that asymptotic stability equals stability plus attractivity, and both pieces can be characterized through the state transition matrix. In case you have forgotten the notation: suppose I have a system of the form ẋ = A(t)x with some initial condition x(t0) = x0. This is a linear system; time-varying, but linear. Its solutions are written using the state transition matrix as x(t) = Φ(t, t0) x0, where Φ is what is called the state transition matrix. All of you are expected to have seen this; if you have not, please revise the terminology and notation for linear systems. All linear system solutions can be written in this form. Evidently, if x ∈ ℝⁿ, then Φ(t, t0) ∈ ℝ^(n×n): it is an n × n matrix that maps the initial condition to the current value of the state, so as you plug in different times t, you get the state at that particular time.

For asymptotic stability, recall that we require stability and attractivity, and for linear systems there are equivalent characterizations of both in terms of Φ:

stability: ‖Φ(t, t0)‖ ≤ k(t0) for all t ≥ t0,
attractivity: lim (t → ∞) Φ(t, t0) = 0.

The first is the stability part, and the second is the attractivity part. These are simpler conditions, if you want to call them that, for linear systems: stability is no longer a generic epsilon-delta condition, and neither is attractivity; both properties are codified in terms of the state transition matrix.
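These Φ-based conditions are easy to probe numerically. Below is a sketch of my own (not from the lecture) that computes Φ(t, t0) for the time-varying example above by integrating the matrix differential equation Φ̇ = A(t)Φ with Φ(t0, t0) = I, a standard property of the state transition matrix, and then inspects ‖Φ(t, t0)‖ for boundedness and decay.

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # System matrix of the earlier time-varying example:
    # x1' = x2,  x2' = -x1 - (2/(1+t)) x2
    return np.array([[0.0, 1.0],
                     [-1.0, -2.0 / (1.0 + t)]])

def phi_ode(t, phi_flat):
    # Matrix ODE for the state transition matrix: Phi' = A(t) Phi
    return (A(t) @ phi_flat.reshape(2, 2)).ravel()

def transition_matrix(t, t0):
    # Phi(t0, t0) = I; integrate forward to time t
    sol = solve_ivp(phi_ode, (t0, t), np.eye(2).ravel(), rtol=1e-9, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

for t0 in (0.0, 10.0):
    norms = [np.linalg.norm(transition_matrix(t0 + dt, t0), 2)
             for dt in (1.0, 10.0, 100.0)]
    print(f"t0 = {t0:5.1f}: ||Phi|| at t - t0 = 1, 10, 100 ->", np.round(norms, 4))
```

We expect the norms to stay bounded and shrink as t grows, matching stability and attractivity; a bound k(t0) that worsens with t0 is another view of the non-uniformity noted earlier.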
I will write x(t0) = x0 just for the sake of it. In this case, the solutions can be written as x(t) = e^(A(t − t0)) x0, where e^(A(t − t0)) is the exponential of the matrix A(t − t0); I hope all of you know what a matrix exponential is. Since e^(A(t − t0)) is exactly the state transition matrix here, uniform asymptotic stability is equivalent to the two conditions

‖e^(A(t − t0))‖ ≤ k  and  lim (t → ∞) ‖e^(A(t − t0))‖ = 0,

where the bound k can now be taken constant, because the matrix exponential depends on t and t0 only through the difference t − t0.

This is not really a linear-systems-intensive course as such, but it is not difficult to show, and you should also know from your typical frequency-domain background, that these conditions hold if and only if the real parts of all eigenvalues of A are strictly negative; and in that case all solutions decay exponentially. There is a lot of linear systems theory that comes in here, so let me sketch it for those interested. Any matrix A can be written in its (real) Jordan form A = P Λ P⁻¹, where Λ is the matrix of Jordan blocks. Then e^(A(t − t0)) = P e^(Λ(t − t0)) P⁻¹, and Re λi(A) < 0 is equivalent to Re λi(Λ) < 0, because eigenvalues do not change under this sort of similarity transformation, again something you should know from linear systems theory. So all solutions look like x(t) = P e^(Λ(t − t0)) P⁻¹ x0, and if you re-denote the states as z = P⁻¹ x, the solution is exactly z(t) = e^(Λ(t − t0)) z0.

Just to keep the discussion simple, consider the case where Λ is diagonal, which happens, for instance, when A is diagonalizable with no complex eigenvalues: Λ is just the diagonal matrix of eigenvalues. In that case,

z(t) = diag( e^(λ1(t − t0)), …, e^(λn(t − t0)) ) z0.

Even in the non-diagonalizable case, only the real parts matter; the complex parts just contribute sinusoids and do not change the magnitude of the solution. Since all the λi have negative real parts by our assumption, each of these terms is an exponential decay, and z converges exponentially to 0, as you can see from the expression.
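As a quick numerical illustration (my own sketch, with an arbitrarily chosen Hurwitz matrix), we can check that ‖e^(At)‖ indeed decays exponentially at a rate set by the largest real part among the eigenvalues of A.

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary example matrix with eigenvalues -1 +/- 2j (all real parts negative)
A = np.array([[-1.0,  2.0],
              [-2.0, -1.0]])

rate = max(np.linalg.eigvals(A).real)  # dominant decay rate, here -1
print("eigenvalues:", np.linalg.eigvals(A), "| max real part:", rate)

for t in (1.0, 5.0, 10.0):
    norm = np.linalg.norm(expm(A * t), 2)
    print(f"t = {t:4.1f}: ||exp(A t)|| = {norm:.3e},  e^(rate*t) = {np.exp(rate*t):.3e}")
```

For this particular (normal) matrix the two printed columns agree exactly; in general, the decay rate of ‖e^(At)‖ can be taken arbitrarily close to max Re λ, at the cost of a larger constant in front.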
And by the transformation x = Pz, this implies that x also goes to 0 exponentially, because x is just z scaled by the constant matrix P. Therefore, we are done. To recap the chain: we started from uniform asymptotic stability, took the convergence property of the state transition matrix, concluded that the real parts of all eigenvalues of A must be negative, reduced A to its Jordan form, and saw that z = P⁻¹x decays exponentially since all the λi have negative real parts; and then x = Pz decays exponentially as well, because all of these terms are exponential decays. So uniform asymptotic stability for LTI systems is actually just exponential stability: LTI systems cannot do anything but decay exponentially.

Excellent. So what we saw today was a few more examples of these notions of stability. At the end, we also tried to understand how these conditions simplify, or become a little more specific, for linear systems and linear time-invariant systems, where solutions can be written using the state transition matrix. We will continue this discussion of stability next time, and we stop our discussion for today here. Thank you.