Hello folks. Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. This is Srikanth Sukumar from Systems and Control, IIT Bombay. We are back with our motivational image of a rover on Mars, and we use this background image to remind ourselves that the algorithms we are developing and analyzing are eventually meant to support systems such as these being driven autonomously.

So without any further delay, let me give you a short recap of what we were doing until last time and what we plan to do now. Before today, we were talking about Barbalat's lemma. We gave two different versions of this result: we spoke about the integrability condition, and we also stated a corollary in terms of L-infinity and Lp functions. Then, to illustrate how useful this tool is, we worked through the spring-mass-damper example. The important thing to note in that example, and in the way we did the analysis, is the standardization of the steps: starting from step number one, these are very standard steps, and as I reiterated again and again, they remain identical every single time we use Barbalat's lemma to do the signal-chasing analysis. That is what I hope all of you will remember for sure. We also saw how to move from proving that the terms appearing in V dot go to zero, to proving that the other terms, the ones not appearing in V dot, also go to zero, using the original Barbalat's lemma. So we claim that this is a very general-purpose theorem for proving convergence; this was another thing we stated very carefully. At the end of this analysis we prove two things: our signals converge to zero, and our signals remain bounded. We have not spoken about stability yet, but we will do so in subsequent lectures. We also said that this approach is more generally applicable than something like the LaSalle invariance principle in its classical form. To illustrate this, I have given exercise number four, where the system now has time-varying gains: Barbalat's lemma is still applicable to such systems to prove convergence and boundedness of signals, whereas systems such as (3.25) are not analyzable using the LaSalle invariance principle. Keep in mind that Barbalat's lemma was not the only result we saw: we also had two other rather critical lemmas, one on the existence of bounds of functions and one on the uniform continuity of functions.
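For reference, the version of Barbalat's lemma we have been leaning on can be stated as follows (this is the standard textbook form; the notation is generic and not tied to any particular system from the course):

$$\text{If } \phi : [0, \infty) \to \mathbb{R} \text{ is uniformly continuous and } \lim_{t \to \infty} \int_0^t \phi(\tau)\, d\tau \text{ exists and is finite, then } \lim_{t \to \infty} \phi(t) = 0.$$

The corollary in terms of L-infinity and Lp functions is built on this statement, the key ingredient being that a bounded derivative gives uniform continuity.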
Now that we have seen Barbalat's lemma and its companion results, we want to start looking at notions of stability. Please don't worry: this says V3 lectures, but we are still only in V2, and it doesn't matter. We are advancing a little into the V3 lectures now because we anticipate the material getting more involved later on, so there is no issue if we move ahead into the later weeks' lectures.

What we want to talk about now is the notion of stability in the sense of Lyapunov. Before we actually delve into stability, we want to set up what kind of systems we are talking about. These notions are due to A. M. Lyapunov, who in the late 1800s (his famous stability thesis appeared in 1892) came up with these stability notions and also gave useful tests for stability. This contribution is rather critical: without Lyapunov, our entire field of nonlinear systems would be in a very primitive state. Without these results we would not have been able to analyze the stability of nonlinear systems, we would not have been able to talk about tracking, and there would be no question of trying to understand how systems perform under feedback. Any nonlinear systems course, and certainly an adaptive systems course, is incomplete without the study of Lyapunov stability notions and theorems. That is of course what we intend to look at.

The typical system under consideration is a standard state-space system, x dot = f(x, t), where f is the vector field determining how the system evolves over time, and, as you would expect, it is a function of the state x as well as the time t. The system is not fully specified unless I also give an initial condition. What is the meaning of an initial condition? It means that I specify, at an initial time t0, an initial state x0, that is, x(t0) = x0. Without this initial condition, the system is not considered fully specified. The other thing to remember is that we always assume that this differential equation, with this initial condition, has a unique solution. We have not discussed this here at all, so what does it mean? First, given this initial condition, a solution exists; it should not be that I take an initial condition and there is no solution beyond a certain time. That would be a problem. Secondly, this solution is unique. This is rather important. So what is the condition for existence of a unique solution? We need f(x, t) to be Lipschitz in the state x and continuous in t. Actually, Lipschitz in x and piecewise continuous in t is fine, but I am going to say continuous in time to keep the statement simple.
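Before looking at what can go wrong, here is how such an initial value problem is typically posed numerically (a minimal sketch assuming SciPy is available; the vector field f below is just a placeholder example, not a system from the course):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder vector field; solve_ivp passes arguments in the order (t, x).
def f(t, x):
    return -x  # globally Lipschitz in x (with constant L = 1), continuous in t

t0 = 0.0                            # initial time
x0 = np.array([1.0])                # initial state
sol = solve_ivp(f, (t0, 10.0), x0)  # the solution depends on t, t0, and x0
print(sol.y[0, -1])                 # approximately exp(-10)
```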
So we want to understand where things can go wrong. Consider an example: a system of the form x dot = x^2, with an initial condition x(t0) = x0. Let's try to integrate and see what happens. Separating variables gives dx / x^2 = dt, and I integrate from t0 to t on the right-hand side and, correspondingly, from x0 to x(t) on the left-hand side. Using the standard integration formula for 1/x^2, the left-hand side gives -1/x(t) + 1/x0, and the right-hand side gives t - t0. Solving for x(t):

x(t) = 1 / (1/x0 - (t - t0)).

Now look at what happens to this system. If t - t0 = 1/x0, then x(t) blows up; bad things start to happen at t - t0 = 1/x0. This is called finite-time escape. The solution exists as long as the elapsed time t - t0 is less than 1/x0, but at t - t0 = 1/x0 the solution blows up, so the solution does not exist beyond that.

Now, if I change this a slight bit, say to x dot = x^3 instead of x^2, with some initial condition as before, and follow the same sequence of steps, then integrating gives -1/(2 x(t)^2) + 1/(2 x0^2) = t - t0, and solving,

x(t) = sqrt( 1 / (1/x0^2 - 2(t - t0)) ).

Why did I want to look at this slightly different case? Because in the x^2 case, if t - t0 is larger than 1/x0, the formula again produces real values; it fails only at the single point t - t0 = 1/x0. But as soon as I change the power to x^3, I get a square root, and in this case there is no solution beyond t - t0 = 1/(2 x0^2): once t - t0 becomes larger than 1/(2 x0^2), the quantity under the square root becomes negative, and you start getting an imaginary number. So the solution exists only until that point and nothing beyond.
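A quick numerical check of the finite escape for x dot = x^2 (a minimal sketch assuming SciPy; the tight tolerances and the 1e-3 offset from the escape time are arbitrary illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

x0, t0 = 1.0, 0.0
t_escape = t0 + 1.0 / x0   # escape time derived above: t - t0 = 1/x0

# Integrate x' = x^2 up to just before the escape time.
sol = solve_ivp(lambda t, x: x**2, (t0, t_escape - 1e-3), [x0],
                rtol=1e-10, atol=1e-12)

# Compare with the closed form x(t) = 1 / (1/x0 - (t - t0)).
t_end = sol.t[-1]
print(sol.y[0, -1], 1.0 / (1.0 / x0 - (t_end - t0)))  # both near 1000 and growing
```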
So this is a problem system; we do not want to consider systems of this kind. This is a case where solutions do not exist for all time, or in other words, where the system has what is called a finite escape time. We don't like these kinds of systems.

Another class of systems we don't like is the ones that have non-unique solutions. The previous case was non-existence; now we look at non-uniqueness. Let me take just one example: x dot = sqrt(x). There I had a square; here I have a square root. And here I give x(0) = 0, that is, the initial time is exactly 0 and the initial state is also 0. Let's see what happens. Again I integrate as always, from 0 to t on the right and from 0 to x on the left. The integrand on the left is x^(-1/2), so when I integrate I get x^(1/2) divided by 1/2, that is, 2 sqrt(x), and the right-hand side gives t (using the zero initial time and zero initial state). This implies

x(t) = (t/2)^2 = t^2 / 4.

So this is definitely one solution, found by legitimately integrating the equation. However, notice that x = 0 is also a solution: x staying exactly at 0 satisfies the equation. Let me make a picture to illustrate. On the horizontal axis I have time t, and on the vertical axis I have x(t). There are two possible solutions: one is the x = 0 solution, and the other is t^2/4, which is a parabola. So you have two solutions, x = 0 and x = t^2/4, from the same initial condition. This is an example of non-uniqueness of solutions.

Both of these cases are rather bad. In the earlier case there was no existence of a solution beyond a finite time, which is of course very bad. In this case, what's happening is that starting from the same initial condition the system can possibly follow two different paths. You can imagine that analyzing stability for such systems is really complicated, really difficult. In the first case, I don't even know if solutions exist beyond a certain time, so I cannot guarantee anything about system properties beyond that time. In the second case, I cannot even talk about stability because, from a single initial condition, the paths seem to diverge; I don't know how many paths there are or what happens along them.
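A pointwise check that both candidates solve x dot = sqrt(x) with x(0) = 0 (a minimal sketch; the derivatives are computed by hand rather than numerically, and the sample grid is arbitrary):

```python
import numpy as np

t = np.linspace(0.0, 5.0, 11)

# Candidate 1: x(t) = 0.      Derivative: 0.     sqrt(x(t)): 0.
r1 = np.zeros_like(t) - np.sqrt(np.zeros_like(t))

# Candidate 2: x(t) = t^2/4.  Derivative: t/2.   sqrt(x(t)): t/2 for t >= 0.
r2 = t / 2.0 - np.sqrt(t**2 / 4.0)

# Both residuals x'(t) - sqrt(x(t)) vanish: two distinct solutions
# from the same initial condition.
print(np.max(np.abs(r1)), np.max(np.abs(r2)))  # 0.0 0.0
```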
Yes, there are modern notions of stability which do deal with systems like these, but in our course we do not consider such cases. So what do we need? Like I said, we need the Lipschitz condition. What is the Lipschitz condition? It is a kind of sublinearity condition. I am going to state only the global version; the local version you can look up. We require that there exist a positive constant L such that

||f(x, t) - f(y, t)|| <= L ||x - y||   for all x, y and all t.

This is what it means for a function to be Lipschitz in the state argument x, and this is called the Lipschitz condition. Since it is a sublinearity condition, it is not very difficult to see that it is violated in both of the examples above. I would strongly encourage you to check this for the examples we have spoken about; a small numerical illustration follows below.
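As a hint for that exercise, here is one way to probe the Lipschitz condition numerically (a minimal sketch; the sample points are arbitrary, and a bounded ratio on a few samples proves nothing, but an unbounded one exhibits the violation):

```python
import numpy as np

def slope(f, x, y):
    """Difference quotient |f(x) - f(y)| / |x - y|; bounded by L when f is Lipschitz."""
    return abs(f(x) - f(y)) / abs(x - y)

# f(x) = x^2: the slope grows like 2x, so no single L works globally.
for x in [1.0, 10.0, 100.0]:
    print(slope(lambda s: s**2, x, x + 1.0))   # roughly 2x + 1

# f(x) = sqrt(x): the slope blows up near x = 0, so f is not Lipschitz at the origin.
for eps in [1e-2, 1e-4, 1e-6]:
    print(slope(np.sqrt, eps, 0.0))            # 1 / sqrt(eps)
```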
Excellent. So what is the setup? We have a nonlinear system, we have an initial condition, and we usually denote our solutions by x(t). This is, I would say, a strong abuse of notation, if I may. Solutions in general should be denoted phi(t, t0, x0). Why? Because the solution depends on time, which is obvious, but it also depends on the initial time and the initial state that were specified: if I change the initial time or the initial state, the solution definitely changes. However, to keep our notation under control, we just call it x(t), exactly as we did in all the examples above. In fact, if you noticed, in all those cases the solution was a function of t, the initial time, and the initial state; it was a function of all three quantities, but we suppress that in our expressions just to keep things succinct. Please do remember, though, because it is very critical: we are always talking about a solution with a pre-specified initial time and initial state, and the solution definitely depends on both. There are no two ways about it.

Great. Let me summarize what we spoke about today. What we discussed today was really just the setup for discussing nonlinear stability. Stability of systems in the sense of Lyapunov is a rather critical notion, and we have just started laying out the setup for it. It took us some time because it is rather important. In this setup, the idea is that when you specify a system, first of all you need to provide an initial time and an initial state, that is, the initial conditions. The second critical point is that we need to presuppose that there exist unique solutions to the system given this initial time and state. We saw counterexamples: systems which either do not have a solution beyond a finite time, or which do not have unique solutions from a given initial condition. We want to avoid both kinds of systems, and therefore we assume Lipschitz continuity of the vector field f(x, t) in the state x, and continuity in time. Both of these conditions together are what give us existence of unique solutions. Of course, we have not proven why this is the case; that is slightly outside the purview of what we want to do. You can always look at standard references, such as Khalil's Nonlinear Systems, for a proof of the fact that Lipschitz continuity gives existence of unique solutions. But we simply go ahead and assume it, and this will let us talk about equilibria and stability in the sense of Lyapunov in the upcoming session. All right, so this is where we will stop. Thank you all for attending, and let's meet again soon. Bye.