Hello. Welcome to yet another session of our NPTEL course on non-linear adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are again in front of our motivational image. We are already well into learning the tools and techniques that will help us analyze algorithms that drive autonomous systems such as this rover on Mars. So let's see what we spoke about last time. Until last time we were beginning to look at the notions of stability in the sense of Lyapunov. I hope I was able to impress upon you that this contribution by Aleksandr Lyapunov, a Russian mathematician, around the turn of the twentieth century changed the world of non-linear control. Honestly, there would not have been such a serious curriculum in non-linear systems and control if it were not for the contributions of this gentleman. So what did we do? We began by looking at what the system in consideration looks like. We carefully stated that this system will always be defined using a differential equation such as this, along with an initial condition at a given initial time. This was important to specify the system completely. We of course also spoke about the assumption of existence of unique solutions. We gave some examples where we saw that a solution may fail to exist for a given differential equation with initial condition beyond a certain point in time, or that the solution may be non-unique for a given initial condition, so that there could be multiple solution trajectories, as in this picture here, for a given differential equation and a particular initial condition. We wanted to avoid these pitfalls, at least in the discussion we are having in this course. So we make use of a rather nice assumption which helps us evade these pitfalls, if you may. The assumption is that the function f is Lipschitz continuous in the state x and continuous in time.
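To make the Lipschitz assumption concrete, here is a minimal numerical sketch (not part of the lecture; the function name and sampling scheme are illustrative). It estimates the largest difference quotient of a function on an interval: a smooth function like x² shows a bounded slope on a compact set, while the cube-root function, which appears in the classic non-uniqueness examples, shows a blow-up near the origin.

```python
import numpy as np

def lipschitz_estimate(f, lo, hi, n=2000, seed=0):
    """Estimate a Lipschitz constant of f on [lo, hi] by sampling point pairs.

    A bounded estimate on a compact interval is consistent with (local)
    Lipschitz continuity; an estimate that blows up as the interval shrinks
    toward a bad point indicates the condition fails there.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n)
    y = rng.uniform(lo, hi, n)
    mask = x != y  # avoid division by zero for coincident samples
    return np.max(np.abs(f(x[mask]) - f(y[mask])) / np.abs(x[mask] - y[mask]))

# f(x) = x**2 is locally Lipschitz: on [-1, 1] every chord slope |x + y| <= 2.
L_smooth = lipschitz_estimate(lambda x: x**2, -1.0, 1.0)

# The odd cube root has unbounded slope near 0, so the estimate explodes
# on a tiny interval around the origin.
L_rough = lipschitz_estimate(np.cbrt, -1e-6, 1e-6)

print(L_smooth, L_rough)
```

The contrast is the point: the bound 2 for x² is the maximum of the derivative on the interval, while no finite constant works for the cube root near zero, which is exactly why such right-hand sides can produce non-unique solutions.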
So this is the assumption we ended up with. We even saw that it is a sort of sublinearity assumption, and I hypothesized that the problematic cases we considered do not fall under this Lipschitz assumption. I asked all of you to verify whether that is the case, and I hope at least some of you put in the effort to check whether those functions satisfy the Lipschitz condition or not. So now that we have the system setup in place, that is, the differential equation with an initial condition, we are able to speak about what an equilibrium is. An equilibrium is any state x_e such that f(t, x_e) is identically equal to 0 for all time t greater than or equal to t0. I will repeat: an equilibrium is a particular state x_e such that f(t, x_e) is identically 0 for all t greater than or equal to t0. Now why is this a valuable or important point? Because if f(t, x_e) is equal to 0, what do I have? I have x dot equal to 0 for all t greater than or equal to t0. And if this happens, what does it mean? It means that my state never moves from x_e. So if I start my initial condition at x_e, then I remain at x_e. This is to say that the point x_e is in fact a solution of the system, a trivial solution if you may. And therefore this is called an equilibrium. Makes sense. A lot of you might have actually seen the notion of equilibrium in physics. You talk about the equilibrium of a pendulum: it settles in the downward position, and this is an equilibrium. Why? Because if it starts here, it never moves from here. That is what it means to be in an equilibrium. So you've definitely seen equilibria in your high school physics classes. This is the notion of an equilibrium.
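The "start at the equilibrium, stay at the equilibrium" property can be checked numerically. Here is a small sketch (not from the lecture; the normalization g/l = 1 is an assumption for illustration) that integrates the pendulum from its downward equilibrium with a standard RK4 step: since the right-hand side vanishes there, the state never moves.

```python
import math

def f(x):
    # Normalized pendulum: x[0] = angle, x[1] = angular rate, with g/l = 1.
    return (x[1], -math.sin(x[0]))

def rk4_step(x, h):
    # One classical fourth-order Runge-Kutta step of size h.
    k1 = f(x)
    k2 = f((x[0] + h / 2 * k1[0], x[1] + h / 2 * k1[1]))
    k3 = f((x[0] + h / 2 * k2[0], x[1] + h / 2 * k2[1]))
    k4 = f((x[0] + h * k3[0], x[1] + h * k3[1]))
    return (x[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            x[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Start exactly at the downward equilibrium (0, 0): f vanishes there,
# so every stage of every step is zero and the trajectory never leaves it.
x = (0.0, 0.0)
for _ in range(1000):
    x = rk4_step(x, 0.01)
print(x)  # stays at the equilibrium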
Not very different from what you've seen in physics. It is a state from which the system does not move, ever, unless there is a disturbance. We didn't assume any disturbance; therefore the state never moves from the equilibrium, and that is what this condition is aimed at ensuring. Excellent. Then there is the notion of an isolated equilibrium. An equilibrium is said to be isolated if no other equilibrium exists arbitrarily close to it. This is from Definition 3.4.2 in the book by Ioannou and Sun, which is one of your references. Now I can of course make it more formal. If I want to make it more formal, what would I say? I would say that there exists epsilon positive such that for all x belonging to B(x_e, epsilon) — we know the notion of a ball in a metric space, so this is the ball of radius epsilon around x_e — with x not equal to x_e, f(t, x) is not identically equal to 0 for all t greater than or equal to t0. What does it mean when I say something like this? Let me be careful: I of course want to exclude the point x_e itself from this ball, because x_e is of course an equilibrium. What am I saying? I am saying that any other x in this small ball around x_e is not an equilibrium; the system moves away from it. So there is a ball of radius epsilon around the equilibrium, and within this ball every point other than x_e is not an equilibrium. That is what it means to be an isolated equilibrium. It is very easy to construct examples of isolated equilibria, and I will actually give you one such example. Let us see; I can construct something on the fly. Take x1 dot equal to x2 and x2 dot equal to minus x1.
If you do not like the linearity of this, I will just make it minus x1 squared. No problem. So if you look at this system, what is the equilibrium? If I write it formally, the equilibrium set x_e is the set of all (x1, x2) in R2 such that (x2, minus x1 squared) is equal to (0, 0). There is no time involved here, so I am not saying "for all time greater than or equal to t0"; no time appears on the right-hand side, so whatever holds, holds for all time. Now if I equate these two components separately to 0, this is equivalent to (x1, x2) equal to (0, 0). Therefore the equilibrium is just the point (0, 0) in R2. This is an example of an isolated equilibrium, because there is no other equilibrium anywhere nearby. If I want to make it slightly more complicated — because here it looks like there is only one equilibrium, so why even talk about isolated versus non-isolated — let's look at one more example. Let's say x1 dot is x2 and x2 dot is sine of x1. What are the equilibria here? It's again the set of (x1, x2) in R2 such that (x2, sine x1) is (0, 0). What is this? This is not that obvious. Sine of x1 is 0 at all integer multiples of pi. So if you actually compute, sine x1 is 0 for x1 equal to n pi. Therefore the equilibria are of the form (n pi, 0) with n an integer: for any integer n, sine of n pi is 0, so all of these points are allowed. But you see, these are still isolated. Why are these isolated? It should be obvious to you. If I draw this: the second coordinate is 0, so all the equilibria are on the x1 axis. One of the equilibria is here at 0; the second one is at pi.
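The two claims here — every (n pi, 0) is an equilibrium, and each one is isolated — can be checked with a few lines of code. This is a sketch added for illustration, not from the lecture; the grid resolution is an arbitrary choice.

```python
import math

def f(x1, x2):
    # Right-hand side of x1' = x2, x2' = sin(x1).
    return (x2, math.sin(x1))

# Every point (n*pi, 0) zeroes both components, hence is an equilibrium
# (up to floating-point residue in sin(n*pi)).
max_residual = max(abs(f(n * math.pi, 0.0)[1]) for n in range(-3, 4))

# Isolation check around (0, 0): on a grid over the square |x1|, |x2| <= 1
# (which sits inside a ball of radius sqrt(2) < pi, so the nearest other
# equilibria (+-pi, 0) are outside), no point except the origin makes both
# components vanish.
others_are_equilibria = any(
    f(x1, x2) == (0.0, 0.0)
    for x1 in (i / 50 for i in range(-50, 51))
    for x2 in (j / 50 for j in range(-50, 51))
    if (x1, x2) != (0.0, 0.0)
)
print(max_residual, others_are_equilibria)
```

A grid check is of course not a proof, but it matches the picture: away from x1 = n pi the second component sin(x1) is nonzero, and away from x2 = 0 the first component is nonzero.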
Another one is at minus pi, and so on. So that ball does exist — this mythical ball of radius epsilon — within which there are no other equilibria. None of these nearby points are equilibria except this one. So (0, 0) is an isolated equilibrium, (minus pi, 0) is an isolated equilibrium; all of these are in fact isolated equilibria. And this is nice; we like this property. Now let's look at the other case, an example of a non-isolated equilibrium. What is it? It's a system which looks like x1 dot is x1 x2 and x2 dot is x2 squared. If I write it out carefully, x_e is the set of (x1, x2) in R2 such that (x1 times x2, x2 squared) is (0, 0). Now, in order for this to be satisfied, from the second component I definitely need x2 to be 0. But if x2 is 0, I also see that x1 x2 is 0 irrespective of what x1 is. So x1 is arbitrary: because x2 is 0 thanks to this second condition, x1 x2 is always 0 no matter what x1 is. So a single point is not the correct answer here. Your x_e will be of the form (alpha, 0), and all alpha in the reals are allowed. Now what is the problem? Let's look at this set x_e, because x_e is not one point but a set. What does it look like? So I have the axes: this is the x1 axis and this is the x2 axis in the notation of this example. And what do my equilibria look like? They have the second coordinate equal to 0, and the first coordinate can be anything.
So my equilibria actually look like this: they span the entire x1 axis. And this is not isolated. Why is this not isolated? Because the equilibria exist arbitrarily close to each other. There is no way I can separate them with a ball of radius epsilon. You cannot draw a ball of any size epsilon and miss the other equilibria, because you will always catch other equilibria on the x1 axis. And this is a problem. Why do we care about isolated equilibria? Because all of our Lyapunov stability definitions are for isolated equilibria only. This is the critical point: for isolated equilibria only. And why is that? See, all the notions of stability are defined using norms in metric spaces. Suppose I have an equilibrium, say denoted by x_e, and I take a comparison point x and form the norm of x minus x_e — all the notions of stability are defined using norms, so that we can compare points and figure out how far they are from each other. That's why we needed the notion of norms. So stability notions are based on norms. And what does this mean? It means that whenever I evaluate the norm of x minus x_e, and the equilibrium is not isolated, then you can very well see that the norm of x minus (x_e plus epsilon) — where x_e plus epsilon is a neighboring equilibrium at an arbitrarily small offset epsilon — is almost equal to the norm of x minus x_e. If epsilon is arbitrarily small, these two norms are almost the same. And so what happens is that my equilibria are really close to each other; in fact, there is no space between them.
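The non-isolated example can also be checked directly. This sketch (added for illustration, not from the lecture) confirms that every point on the x1 axis is an equilibrium, so no matter how small a ball you draw around the origin, it contains other equilibria:

```python
def f(x1, x2):
    # Right-hand side of x1' = x1*x2, x2' = x2**2.
    return (x1 * x2, x2 * x2)

# Every point (alpha, 0) on the x1 axis is an equilibrium.
axis_points = [(-2.0, 0.0), (-0.5, 0.0), (0.0, 0.0), (1e-9, 0.0), (3.0, 0.0)]
all_equilibria = all(f(*p) == (0.0, 0.0) for p in axis_points)

# Hence no ball of radius eps around (0, 0) excludes other equilibria:
# (eps / 2, 0) is a distinct equilibrium strictly inside every such ball.
inside_every_ball = all(f(eps / 2, 0.0) == (0.0, 0.0) for eps in (1.0, 1e-3, 1e-9))
print(all_equilibria, inside_every_ball)
```

This is exactly the failure of the isolation condition: for every candidate epsilon, the ball B((0, 0), epsilon) catches another equilibrium.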
There is no way I can talk about stability of a particular equilibrium, because there is another equilibrium really close to it. When I try to compare x with some point here and some point there, the norms are almost the same; in fact, they will be the same if your epsilon is zero, and nearly so if your epsilon is, say, 10 to the minus 10 or 10 to the minus 20. So you can see that in the absence of an isolated equilibrium, it is difficult to even make sense of which equilibrium I am comparing with. Therefore Lyapunov stability notions do not work here, and we always assume that there exists an isolated equilibrium for the system. I hope this makes sense to you. Systems with equilibria like these are very difficult to define stability for. We will see immediately what the stability definitions look like, and we will see that norms appear; and because norms appear, there is no choice but to assume some kind of isolated equilibrium. All right, so let's see the first stability definition. This is called Lyapunov stability, or stability in the sense of Lyapunov. What does it require? It is a test condition: I get something and then I give something in return. So what is this test condition? For all epsilon positive. So the user gives me an epsilon. Remember, I cannot choose epsilon; epsilon is given to me by the user. This is a common confusion a lot of students have when they try to prove Lyapunov stability.
Epsilon is given by the user and hence arbitrary — that is what the symbol "for all epsilon positive" says. I must then be able to find a delta, which can depend on the initial time t0 and on this epsilon, and which is also positive, such that for all initial conditions within delta distance of the equilibrium, the solutions remain within epsilon distance of the equilibrium for all time. So let me draw a picture to help us understand. This is again a phase plane portrait — do get used to phase plane portraits. These are my axes. Now let me draw two circles: the red one is the epsilon circle and the blue one is the delta circle. So what does stability in the sense of Lyapunov require? It requires that the epsilon circle is already given to me; this red ball is already given. This is the phase plane: this is x1 and this is x2. Just for illustration we are showing it in two dimensions, because that is what we can draw easily; the same logic works in any dimension. So what does it say? If you are given the epsilon ball, that is, this red-colored ball, then as an answer to the challenge I have to be able to give you a delta ball. This delta has to be positive, of course, so it is an actual metric ball containing some points of the state space, and it is allowed to depend on t0 and epsilon.
So what are we saying? Any trajectory which starts in the delta ball stays within the epsilon ball for all time; it never escapes the epsilon ball. If I start in the delta ball, I remain in the epsilon ball. That's the whole idea. This magenta curve is a potential trajectory of the system. Any trajectory which starts inside the delta ball must remain inside the epsilon ball, and this is what it means for a system to be stable in the sense of Lyapunov. Now again, let's go back to this issue of non-isolated equilibria, because we said that we cannot deal with them, and let's try to see why. We are talking about this x_e — it should be obvious that the point in the center here, the origin in this case, is x_e, the equilibrium. Now, in the case of a non-isolated equilibrium, you can see that very close to x_e I can have another equilibrium, and another, very close. And if it's this close, what do you expect will happen? You can see that I can replace x_e by any of these points in pink on the sides, and nothing much will change. Of course, the circle has to be re-centered on the other point, but because it's a norm, these values are not going to change significantly, especially as the pink point comes closer and closer to the blue point. And that is a problem, because I don't even know which equilibrium I am talking about: I may be talking about x_e or I may be talking about these pink points. I don't even know which equilibrium I am trying to test the stability of. So this is why dealing with non-isolated equilibria is a rather difficult challenge.
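The epsilon-delta challenge can be exercised numerically on the earlier linear example x1' = x2, x2' = -x1, whose solutions are rotations of the initial condition, so the norm of the state is constant in time and the candidate answer delta = epsilon works. This is a sketch added for illustration (the finite time horizon and sample counts are arbitrary choices; "for all time" is of course only approximated):

```python
import math
import random

def solution(x0, t):
    # Exact solution of x1' = x2, x2' = -x1: a clockwise rotation of x0.
    c, s = math.cos(t), math.sin(t)
    return (c * x0[0] + s * x0[1], -s * x0[0] + c * x0[1])

eps = 0.5
delta = eps  # candidate answer to the epsilon-challenge
random.seed(0)

stays_inside = True
for _ in range(100):
    # Random initial condition strictly inside the delta ball around (0, 0).
    r = delta * random.random()
    th = 2 * math.pi * random.random()
    x0 = (r * math.cos(th), r * math.sin(th))
    # Sample the trajectory over a finite horizon and check the norm bound.
    for k in range(200):
        x = solution(x0, 0.1 * k)
        if math.hypot(x[0], x[1]) >= eps:
            stays_inside = False
print(stays_inside)
```

For a general non-linear system delta would typically have to be strictly smaller than epsilon; the rotation example is special because trajectories are circles.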
We do, of course, deal with it in different ways; we'll talk about that later, when we need it. Right now, just remember that we require the equilibrium to be isolated. Now, one thing that should be evident to you is that delta always has to be less than or equal to epsilon. The way this definition is made, you require delta to be less than or equal to epsilon. What happens otherwise? Let's do a thought experiment: if delta were greater than epsilon, what could happen? What I am saying is that the norm of x0 minus x_e is less than delta, but delta is greater than epsilon. Now, I require the norm of x(t) minus x_e to be less than epsilon for all t greater than or equal to t0, and this of course has to be true at t0 also, because it has to hold for all t greater than or equal to t0; if I plug in t equal to t0, it still has to hold. But notice, I only said that the initial distance is less than delta, which is larger than epsilon. So there is no guarantee that at the initial time this distance is less than epsilon at all. It may be, but it may not be. Therefore the challenge can fail at the initial time itself if delta is greater than epsilon. So if by mistake you get an answer which says that delta is greater than epsilon, you can safely assume that it was the wrong answer. Great. So what did we talk about today? We ventured forward to look at what an equilibrium is, what isolated and non-isolated equilibria are, and why an isolated equilibrium is rather critical for talking about stability in the sense of Lyapunov. And finally we saw the first stability definition, stability in the sense of Lyapunov, and with it the first epsilon-delta definition in this class.
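The thought experiment about delta greater than epsilon fits in three lines of arithmetic. This sketch (added for illustration; the specific numbers are arbitrary) exhibits an initial condition that is inside the delta ball yet already outside the epsilon ball at the initial time, so the stability requirement fails before the trajectory even moves:

```python
import math

# A (wrong) candidate answer with delta > epsilon.
eps, delta = 0.5, 1.0
x_e = (0.0, 0.0)           # the equilibrium
x0 = (0.7, 0.0)            # ||x0 - x_e|| = 0.7: less than delta, but >= eps

dist0 = math.hypot(x0[0] - x_e[0], x0[1] - x_e[1])
print(dist0 < delta, dist0 < eps)  # True False: the bound fails at t = t0
```

Since the epsilon bound must hold for all t >= t0, including t0 itself, no behavior of the trajectory afterwards can rescue such an answer.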
So this is where we'll stop, but in future lectures we will of course see further stability definitions and more epsilon-delta definitions. All right. Thank you, folks.