Hello, everyone. Welcome to yet another session of our course on nonlinear and adaptive control. I am Srikant Sukumar from Systems and Control, IIT Bombay. We are, as always, in front of our motivating image of this rover on Mars, for which we hope to design algorithms that can drive such systems autonomously. So last time, we started our discussion on the Lyapunov stability theorems. As we stated already, these are the seminal results in the field of nonlinear control, and without them it would have been impossible to prove stability of nonlinear systems such as these rovers on Mars, quadrotors, drones, electrical power grids, and so many other networks of oscillators. So these are very critical and fundamental results. We saw, of course, the first two of these results. We said that we first start with what is called a candidate Lyapunov function. And what is a candidate Lyapunov function? It is a function with two primary properties: first, it is a C1 function, because we need to take first partial derivatives and we want them to be continuous; and second, it is positive definite in some domain around the origin. If these two properties hold, then we said the function is a candidate Lyapunov function. And for candidate Lyapunov functions, the two bullet points say the following. The first one says that if the derivative V dot, which is defined using the standard directional derivative that we saw, is less than or equal to 0, that is, V dot is negative semi-definite, then the origin is stable. And on top of V dot being negative semi-definite, if V is also decrescent, which is the third function property that we had discussed, then the origin is uniformly stable. And then we, of course, started to look at some examples.
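In symbols, the two statements recalled above read as follows (my compact summary, for dynamics x dot = f(t, x)):

```latex
% My summary; V is a candidate Lyapunov function, i.e. C^1 and
% positive definite near x = 0, for the dynamics \dot{x} = f(t,x).
\begin{align*}
\dot{V}(t,x) &= \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x}\, f(t,x), \\
\dot{V} \le 0 \ \text{(negative semi-definite)} &\implies x = 0 \ \text{stable}, \\
\dot{V} \le 0 \ \text{and} \ V \ \text{decrescent} &\implies x = 0 \ \text{uniformly stable}.
\end{align*}
```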
We first looked at the simple standard harmonic oscillator, x1 dot equals x2 and x2 dot equals minus x1. We saw that it has a phase plane portrait which just circles around the origin. And we proved, using V equals x1 squared plus x2 squared, all divided by 2, that V dot turns out to be exactly 0, and hence it is negative semi-definite. And this is essentially enough to claim that the origin is, in fact, stable. We also saw that this V was motivated by the phase plane portrait: the trajectories are essentially circles, so if you start at any point, you just follow the circle through that point around the origin. Then we started to look at a slight modification of the system, in fact a very slight modification, where x2 dot becomes minus x1 over 1 plus t. And we started to encounter some serious issues. The first thing to observe was that we tried a couple of different candidate Lyapunov functions. The second choice was V equals x1 squared plus 1 plus t times x2 squared, all divided by 2. But we saw that with this also, we were getting V dot equal to x2 squared by 2, which is positive semi-definite. It is not positive definite, remember, because it does not contain x1, and we had very explicitly mentioned that if all the states of the system do not appear in the function, then it cannot be definite. But the real problem is worse than that: we need V dot to be negative semi-definite or negative definite, and here it is positive. So we were not able to claim anything, because this was probably not the correct Lyapunov function; we do not even know. Now, it turns out that even for this simple modification of the harmonic oscillator, which is just a division by a function of time, it is really not easy to solve the system analytically in order to conclude stability either. So you can see how life can become really complicated, even with simple systems.
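As a quick symbolic sanity check (my own, not shown in the lecture), both V dot computations above can be reproduced mechanically, using the directional derivative plus the explicit time partial:

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2', real=True)

def v_dot(V, f):
    """V-dot along x1' = f[0], x2' = f[1]: explicit time partial plus grad(V).f."""
    return sp.simplify(sp.diff(V, t) + sp.diff(V, x1)*f[0] + sp.diff(V, x2)*f[1])

# Harmonic oscillator x1' = x2, x2' = -x1 with V = (x1^2 + x2^2)/2:
print(v_dot((x1**2 + x2**2)/2, (x2, -x1)))                   # 0

# Time-varying modification x2' = -x1/(1+t) with V = (x1^2 + (1+t)x2^2)/2:
print(v_dot((x1**2 + (1 + t)*x2**2)/2, (x2, -x1/(1 + t))))   # x2**2/2
```

The first result is identically 0 (negative semi-definite), while the second is x2 squared by 2, which is positive, so the test fails exactly as described.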
And hence the rather difficult question of analyzing nonlinear systems for stability. This has always been a rich area of research and continues to be, simply because every single nonlinear system poses a new challenge, as far as control design is concerned and as far as stability analysis is concerned. And because of this, it continues to remain a rather interesting challenge for researchers such as us. Now, what can be said about this system is the following. If I look at this system very carefully, and in particular at the term minus x1 over 1 plus t, then as time becomes really large, the denominator becomes really large. Therefore, irrespective of the value of x1, this quantity inches closer and closer to 0 for very large values of time. And what is this quantity? It is actually the derivative of x2. So let us look at the phase plane plot; I have made a picture here. On the x-axis is x1 and on the y-axis is x2, which is how we have always drawn the phase plane portrait. Now, what happens for very large time is that x2 dot becomes 0. These lines that I have drawn are essentially the velocity lines; this is actually called the vector field. Essentially, it plots the right-hand side of the equation, and this right-hand side indicates how the states are going to move. In fact, let me first mark this lecture so I know that I am restarting here: this, I believe, is lecture 4.5. So if you look at this vector field, what we are trying to do is plot, in the phase plane portrait, the directions in which the states are going to change. This essentially gives you the velocity lines to see how the states will move.
Now, what I have done is to plot it only for large values of time, because at large values of time this term becomes 0. So the derivative of x2 is 0, which means x2 does not change, while the derivative of x1 is exactly equal to x2. And if you look at this picture, that is exactly what it shows. On the top side, x2 is positive, therefore the x1 velocity is positive, so all the x1 velocity arrows point in the positive direction; and as you go further and further up, these velocity lines get longer and longer. But x2 dot is 0, so there is no change in the vertical direction. Similarly, if I go downwards, x2 is negative, therefore all the velocity lines point the other way, and again there is no change in x2. Now, what does this indicate? It indicates that as time goes to infinity, your states keep moving in one of these horizontal directions. Depending on how far you are from the origin in the vertical direction, you move correspondingly faster, but you keep moving horizontally. So if I start here, I just move away from 0; if I start there, I again move away from 0. So it is clear from this particular phase plane construction that the system is not stable. But again, this alone is not conclusive evidence, because, like I said, the phase plane should not be used for concluding stability, since I cannot possibly consider all the cases. However, in this case, I have sufficient evidence that it is not stable, because I can see at least some initial conditions, some states, which end up at a nonzero x2 and then keep moving towards x1 equal to infinity. So in the x1 direction, they keep moving to infinity, in the positive or the negative direction, and therefore the system is not stable.
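To back up this picture numerically (my own check, not from the lecture), a simple classical RK4 integration of x1 dot = x2, x2 dot = minus x1 over 1 plus t shows the oscillation amplitude in x1 growing without bound; the step size and horizon here are arbitrary choices:

```python
# Numerical sanity check: the peak |x1| along a trajectory of
# x1' = x2, x2' = -x1/(1+t) keeps growing, consistent with the
# vector-field argument that the origin is not stable.

def f(t, x1, x2):
    return x2, -x1 / (1.0 + t)

def peak_abs_x1(x1, x2, h=0.01, t_end=2000.0):
    """Integrate from t = 0 with classical RK4; return the largest |x1| seen."""
    t, peak = 0.0, abs(x1)
    while t < t_end:
        k1 = f(t, x1, x2)
        k2 = f(t + h/2, x1 + h/2*k1[0], x2 + h/2*k1[1])
        k3 = f(t + h/2, x1 + h/2*k2[0], x2 + h/2*k2[1])
        k4 = f(t + h, x1 + h*k3[0], x2 + h*k3[1])
        x1 += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        x2 += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += h
        peak = max(peak, abs(x1))
    return peak

print(peak_abs_x1(1.0, 0.0))  # several times the initial |x1| = 1
```

The growth is slow (roughly like the fourth root of t, by a WKB-style estimate), which is exactly why simulation alone, like the phase plane, only provides evidence rather than a proof.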
Because whenever your system states start to go to infinity in any direction, the system cannot be stable; here they remain bounded in the x2 direction, but in the x1 direction they move to infinity. Because if I give you any epsilon, you need to be able to find a delta such that you remain within the epsilon ball. In this case, if I draw any epsilon ball, like this, say, I will not remain within it, because my states are trying to go outwards like this. And this is true for large time. For small time something else happens, but we do not even care what happens for small time, because I can always keep increasing time and see that the bound does not hold anymore at large time. So the system is not stable. However, I could not find any conclusive evidence through the Lyapunov construction, and it is not very easy to solve this analytically. So even for a really, really tiny modification, and this is not even a nonlinear system, it is in fact a linear time-varying system, we were really stuck. So this should help you understand the enormity of the stability question. So next, we constructed a concocted, modified version of this system. And what is this modified version? In this case, you have x1 dot is x2, and x2 dot is the same minus x1 over 1 plus t, but I have now added minus x2 over twice 1 plus t. This is a sort of trick, if you may. And I choose the same candidate Lyapunov function as before. What happens? I take the derivative: V dot is x1 x1 dot plus 1 plus t times x2 x2 dot plus x2 squared by 2, where the last term comes from differentiating the explicit 1 plus t with respect to time. Now if I substitute for the derivatives from my dynamics, this is x1 x2 plus 1 plus t times x2 times minus x1 over 1 plus t minus x2 over twice 1 plus t, plus x2 squared by 2. The 1 plus t factors cancel out, so I am left with x1 x2 minus x1 x2 from these terms.
And finally, minus x2 squared by 2 from the damping term, and then plus x2 squared by 2, so everything adds up to exactly 0. So I have V dot less than or equal to 0, and from this I definitely have stability: the equilibrium x equals 0 is stable, because I took a candidate Lyapunov function. And how is this a candidate Lyapunov function? Notice that V is positive definite; it is not difficult to see why. For all t greater than or equal to 0, if I plug in any non-zero state, so if x is non-zero, then at least x1 or x2 is non-zero, and therefore V is always positive. So this is a positive definite function. So V is positive definite, it is C1, and V dot is negative semi-definite; I have satisfied all the requirements for stability. So now the question is: is the system uniformly stable at the origin? We notice that V is not decrescent. Why? Notice that the term 1 plus t times x2 squared is a continuously increasing function of time. So I can never claim that V is less than or equal to phi of norm x, where phi is a class K function. Why? Because phi is just a function of the states. So if you give me any such phi, what I will do is, for that fixed phi, fix a state x, some very small value, it does not matter which. Once I fix x, the right-hand side becomes a constant. But notice that the left-hand side is an increasing function of time: x1 and x2 are held constant, just as on the right-hand side, but V itself keeps growing with t. So I will keep pushing up time so that the left-hand side cannot stay below the right-hand side, because the right-hand side is some constant, and it does not matter how large it is.
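As before, the cancellation in this derivative can be verified symbolically (my own check, not part of the lecture):

```python
import sympy as sp

# Verify that V-dot is identically 0 for the damped, time-varying system.
t, x1, x2 = sp.symbols('t x1 x2', real=True)

V  = (x1**2 + (1 + t)*x2**2) / 2              # same candidate as before
f1 = x2                                        # x1' = x2
f2 = -x1/(1 + t) - x2/(2*(1 + t))              # x2' with the damping term added

V_dot = sp.simplify(sp.diff(V, t) + sp.diff(V, x1)*f1 + sp.diff(V, x2)*f2)
print(V_dot)  # 0
```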
I can always push t larger and larger to achieve V greater than phi of norm x. Therefore, V is not decrescent, and therefore I cannot conclude that x equals 0 is uniformly stable. This is a rather interesting result. We have a system here which has been proven to be stable. It is a modification of the harmonic oscillator, not just with the time-varying term, but with an additional state-dependent term as well. And it turns out that this is stable, but we cannot claim uniform stability, at least not with this particular Lyapunov function. So let us be careful: we cannot prove uniform stability with this particular Lyapunov function. It may be possible to prove it with something else, but in general it is not too difficult to see that the stability will not be uniform. Because what happens is this: without the additional term, the system was unstable, so the stability is obtained due to this additional term. It is like a damping term; until this term was added, the system was unstable, so this damping sort of helps us get stability in this case. Now, this damping term becomes smaller and smaller as time increases, and therefore the stability property is, in some sense, time dependent; it will not be possible in general to prove uniform stability in this case. So anyway, that is an aside, but the point is that with our Lyapunov theorems, we cannot prove uniform stability; we can only prove stability for this particular system. Excellent. So I think we have some idea of stability and uniform stability, and we have seen some examples. So now we can move forward to the next set of results. In fact, I should write here again: lecture 4.5.
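The decrescence argument can also be seen numerically (my own illustration, not from the lecture): freeze one small nonzero state and push t up, and V grows without bound, so no class K function phi of norm x can dominate it:

```python
# V(t, x) = (x1^2 + (1+t) x2^2)/2 at a FIXED nonzero state grows
# roughly linearly in t, so V is not decrescent.

def V(t, x1, x2):
    return (x1**2 + (1 + t)*x2**2) / 2

x1, x2 = 0.0, 0.1                               # fix a small nonzero state
print([V(t, x1, x2) for t in (0, 10, 1000)])    # strictly increasing in t
```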
So the next result talks about local asymptotic stability, or just asymptotic stability. The next two results are local asymptotic and local uniform asymptotic stability. So what do you require? Earlier we had only talked about semi-definiteness of V dot; here we make it more stringent. So we have stronger and stronger properties as we go downwards in these bullet points. In this case, we require that V dot is negative definite. So if V dot is negative definite for a candidate Lyapunov function, then the origin is locally asymptotically stable. Similarly, if I add the decrescence property to the negative definiteness, then I get uniformity: if V dot is negative definite and V is decrescent, then the origin is locally uniformly asymptotically stable. We also very commonly use acronyms to denote these properties, because these get to be rather long strings of words that we do not always want to write out: we call uniform stability US, asymptotic stability AS, and uniform asymptotic stability UAS. So we have specialized from stability to asymptotic stability by requiring negative definiteness in place of negative semi-definiteness, and similarly from uniform stability to uniform asymptotic stability; just the negative semi-definiteness gets replaced by negative definiteness. So this is the difference. So let us see some examples again. Let me try to construct a nice example, which will, of course, exhibit some nice properties. Let us see. So suppose I have a system which looks like this: x1 dot is x2, and x2 dot is minus x1 minus x2. And I want to take an interesting Lyapunov function: V of x1, x2 equal to, say, x1 plus x2, the whole squared, by 2, plus x2 squared. No, I think it should be plus x1 squared by 2. So then I take the derivative of this.
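Before hunting for the right Lyapunov function, a quick simulation suggests what we are trying to certify (my own sketch, assuming the example system is the damped oscillator x1 dot = x2, x2 dot = minus x1 minus x2, which is my reading of the board work): trajectories do not merely stay near the origin, they decay to it, which is the behaviour asymptotic stability captures.

```python
# Classical RK4 with an arbitrary step size; assumes the example system
# is x1' = x2, x2' = -x1 - x2 (hypothetical reading of the board work).

def step(x1, x2, h=0.01):
    def f(x1, x2):
        return x2, -x1 - x2
    k1 = f(x1, x2)
    k2 = f(x1 + h/2*k1[0], x2 + h/2*k1[1])
    k3 = f(x1 + h/2*k2[0], x2 + h/2*k2[1])
    k4 = f(x1 + h*k3[0], x2 + h*k3[1])
    return (x1 + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x2 + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x1, x2 = 1.0, 0.0
for _ in range(3000):          # integrate to t = 30
    x1, x2 = step(x1, x2)
print(abs(x1) + abs(x2))       # tiny: the state has decayed towards the origin
```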
So it should be evident that this is positive definite. Why is it positive definite? Let us look at where this function can be zero. It can be exactly zero if and only if x1 is exactly zero and x1 plus x2 is exactly zero, which means x2 is exactly zero. So the origin is the only place where this function can be exactly zero; everywhere else, it is strictly positive. Therefore, this is a positive definite function as per our definitions. So if I compute V dot, I will get x1 plus x2 times x1 dot plus x2 dot, plus x1 x1 dot. Substituting x1 dot is x2 and x2 dot is minus x1 minus x2, this becomes x1 plus x2 times minus x1, plus x1 x2. Let me see if this works. What I will do is write x2 as x1 plus x2 minus x1, so that I can club terms into x1 plus x2 and try to pull out a negative definite expression. Okay, so let us see; I want to make this slightly different. I will make my life a little bit easier; I mean, I can always change the candidate function here. I think I should have taken something like half x1 squared, or some weighting; that would have helped. So what I will do is make a slight modification here: I will add a constant K in the candidate function and see where all the K's propagate. K is here, K is here; similarly, K is here, K is here; and the K continues here and here. I am adding this K just so that I have a little bit more control on what shows up in V dot. So, in this case, what I will do is take a K here and subtract a K here, and then I will have a K again here.
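The positive definiteness argument above can be spot-checked numerically (my own check, assuming the candidate under discussion is V = (x1 + x2) squared by 2 plus x1 squared by 2):

```python
import random

# V vanishes only at the origin: V = 0 forces x1 = 0 and x1 + x2 = 0,
# hence x2 = 0. At randomly sampled states, V should be positive.

def V(x1, x2):
    return (x1 + x2)**2 / 2 + x1**2 / 2

print(V(0.0, 0.0))  # 0.0
random.seed(0)
print(all(V(random.uniform(-5, 5), random.uniform(-5, 5)) > 0
          for _ in range(10_000)))  # True (hitting the origin has probability zero)
```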
So this becomes K x1 plus x2 times minus x1 minus x2. Okay, so this is still a bit of a problem, because I still want a negative coefficient on the x1 term; I would still like a negative there. So I am just trying to manipulate some of these numbers so that we can get an appropriate equality here. Okay, so that is the idea. So suppose I make this, I apologize, say one by four; then this becomes one by two, this becomes one by two, and this becomes divided by two, so this also becomes divided by two. So what I get here is K x1 plus x2 times minus x1 by two minus one minus K times x2. And I want this to actually be of the form K x1 plus x2, so let me write this as one minus K times x1 plus x2 times minus x1 over twice one minus K plus x2. All I now need is that K be less than one, so that the leading factor is positive, and that this coefficient one over twice one minus K be equal to K. So can I actually satisfy these two conditions? That is the question. This gives two K minus K squared equal to one, which implies K squared minus two K plus one equal to zero. But this is not good, because this is going to give me K equal to one, which violates K less than one. So this is not good yet. Anyway, what I will say is that this construction is right in spirit; you can see that it is right in spirit. The only thing is that I have to be careful about choosing these constants. So what we will do is continue next time and actually choose these constants appropriately, so that we can claim asymptotic stability. So anyway, what we did today was to look at the next two definitions, which are asymptotic stability and uniform asymptotic stability.
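The dead end with the constant can be confirmed in one line (my own check of the constraint as stated on the board):

```python
import sympy as sp

# The blackboard constraint 2K - K^2 = 1 has K = 1 as its only (double) root,
# which clashes with the requirement K < 1 -- so this choice of constants fails.
K = sp.symbols('K', real=True)
roots = sp.solve(sp.Eq(2*K - K**2, 1), K)
print(roots)  # [1]
```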
And we are in the process of working out the example to prove asymptotic stability using a Lyapunov construction, all right? This is what we will continue next time. Thank you very much.