Hello everyone. Welcome to yet another session of our NPTEL course on non-linear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are again in front of this nice motivating image for our course, this SpaceX satellite orbiting the earth, and the algorithms we are designing and analyzing are going to be applicable to autonomous flight of systems such as these. We are in week number 5, and what we were doing was discussing a new notion, persistency of excitation, which says, roughly, that even if the outer product of a vector signal is not full rank at each instant, we can integrate it over a certain window of time to get full rank, in fact to get positive definiteness. This is rather useful in system identification, which precedes adaptive control. One thing I would like to point out is that it is also possible to define this similarly for matrix signals; we can have an identical definition and nothing really changes. We also saw some examples last time. What we want to do today is connect persistence of excitation to stability, so this is where we begin. This is lecture 5.3. So let us look at a very simple scalar problem: a system of the form ẋ(t) = -a²(t) x(t), where we are told that a and ȧ are bounded. We are also told, and indeed irrespective of what a(t) is, it should be evident to you that a²(t) ≥ 0. Now, notice that I only said greater than or equal to zero. I did not say anything about it being strictly positive.
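For reference, the scalar setup just described can be written compactly (the label (3.1) follows the lecture's own numbering), together with the persistence-of-excitation condition that will be assumed shortly:

```latex
\dot{x}(t) = -a^2(t)\,x(t), \qquad x(t_0) = x_0 \qquad (3.1)

\exists\, T, \mu > 0 \;:\quad \int_{t}^{t+T} a^2(\tau)\,d\tau \;\ge\; \mu
\quad \text{for all } t \ge t_0 .
```

The key point is that the lower bound μ is uniform: it may not depend on the window start t.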
If a²(t) was in fact strictly positive, say bounded below by a positive constant, then claiming exponential stability is not difficult, because I can simply bound the integrand, compute the integral, and get a standard exponential stability result. But that is not what we are assuming. We will only say that a² is non-negative, and in addition that a(t) is persistently exciting, PE for short, which is the standard acronym. So a is not necessarily non-zero at all instants of time, but it is persistently exciting: on a moving window average, something nice is happening. Great. Now, to evaluate the stability of this system, what will I do? I will just integrate it. It is a scalar system, not difficult to integrate at all. Separating variables, taking x to one side and integrating over time, I get x(t) = x(t0) exp( -∫ from t0 to t of a²(τ) dτ ). Very standard integration, nothing too difficult or unusual. Now, what do we need for an exponential stability result? We need |x(t)| ≤ γ |x(t0)| e^(-α(t - t0)) for some γ, α > 0 and all t ≥ t0. This is the exponential stability definition applied to system (3.1). What we do next is break the integral into windows of length T, because we know that a is persistently exciting over windows of length T. So we try to break the interval from t0 to t into such windows. But you can imagine I can take only so many whole windows, and then I will be left with some remainder δ which is less than T.
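As an illustrative numerical check (the choice a(t) = sin t is mine, not from the slides), we can verify that the windowed integral of a² is bounded below by a μ > 0 that does not depend on the window start t:

```python
import numpy as np

# Illustrative check: verify numerically that there is a mu > 0, independent
# of the window start t, with  integral_{t}^{t+T} sin(tau)^2 dtau >= mu
# for the window length T = pi.

def window_integral(a, t, T, n=4000):
    # trapezoid rule for the integral of a(tau)^2 over [t, t + T]
    tau = np.linspace(t, t + T, n + 1)
    f = a(tau) ** 2
    return float(np.sum((f[1:] + f[:-1]) * (tau[1] - tau[0]) / 2.0))

starts = np.linspace(0.0, 20.0, 201)
mu = min(window_integral(np.sin, t, np.pi) for t in starts)
print(mu)  # approximately pi/2 for every window start
```

With T = π the window integral equals π/2 exactly for every t, so sin t is persistently exciting even though it passes through zero infinitely often.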
So it should be clear that I can break the overall time from t0 to t into k windows of length T, and then I am left with a small remainder, and that is this δ. Then I try to evaluate the integral, and it is in fact greater than or equal to the sum over the k whole windows. Why is that? What I have done is ignore this last piece, the remainder. But what do I know? The integrand a² is non-negative. Therefore, if I drop a piece from the summation, I can only possibly reduce the value, not increase it. So the inequality in equation (3.4) holds. Now, if we assume persistence of excitation as per our definition, what do we have? We have that the integral from t to t + T of a²(τ) dτ is greater than or equal to μ, and this has to hold for all t ≥ t0. We are going to use it here, because each of these pieces is a window of length T, and this is where the uniform bound helps us: μ does not depend on the small t, otherwise we would be in trouble. So the left-hand side, the integral from t0 to t, is greater than or equal to k times μ, because each of the k windows contributes at least μ, and summing over k gives k times μ. And for this k·μ, I substitute for k using the splitting t = t0 + kT + δ, that is, k = (t - t0 - δ)/T. All right? Great. And why did I substitute that way? So that I get something proportional to t - t0. So, what do I have here?
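Written out, the chain of inequalities just described (splitting t - t0 = kT + δ with 0 ≤ δ < T) is:

```latex
\int_{t_0}^{t} a^2(\tau)\,d\tau
\;\ge\; \sum_{i=0}^{k-1} \int_{t_0 + iT}^{\,t_0 + (i+1)T} a^2(\tau)\,d\tau
\;\ge\; k\,\mu
\;=\; \frac{t - t_0 - \delta}{T}\,\mu .
```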
If I substitute this in equation (3.2), which is the solution formula, then here I have a negative sign, so the signs flip: it was greater than or equal to, but inside the exponential of a negative quantity it becomes less than or equal to. So from here I have -∫ from t0 to t of a²(τ) dτ ≤ -((t - t0 - δ)/T) μ, and e to the power of the left side is less than or equal to e to the power of the right side, because the exponential is an increasing function. All right? Great. So what did I obtain? I obtained |x(t)| ≤ e^(μδ/T) |x(t0)| e^(-(μ/T)(t - t0)): I take the δ, μ and T piece outside separately, and keep the t - t0 piece inside. The factor e^(μδ/T) starts to look like a γ, and μ/T starts to look like an α, and I have the very well-established exponential stability definition coming out here, with this γ and this α. Pretty straightforward. So what have I proved? I started with a scalar system ẋ = -a²(t)x, where a is not necessarily bounded away from zero. I could in fact have something like a(t) = sin t, which goes through zeros and is not always non-zero, and therefore I cannot use my conventional integration bound to conclude exponential stability. But what I am saying is that if a is not necessarily strictly non-zero, but is in fact persistently exciting for some window length T, then this system can be shown to be globally exponentially stable. This is just the scalar case, of course; I have only shown it for a scalar example. So we have connected persistence of excitation to stability for a scalar example.
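To make the conclusion concrete, here is a small sketch (again with my illustrative choice a(t) = sin t, t0 = 0) checking the exponential envelope numerically against the closed-form solution:

```python
import numpy as np

# Illustrative simulation: the solution of x' = -sin(t)^2 x, x(0) = x0, is
#   x(t) = x0 * exp(-t/2 + sin(2t)/4),
# since the integral of sin^2 from 0 to t equals t/2 - sin(2t)/4.
# With T = pi and mu = pi/2 the construction above gives alpha = mu/T = 1/2;
# the closed form gives the sharper gamma = e^{1/4}, so
#   |x(t)| <= gamma * |x0| * exp(-t/2).

x0 = 3.0
gamma, alpha = np.exp(0.25), 0.5
t = np.linspace(0.0, 30.0, 3001)
x = x0 * np.exp(-t / 2 + np.sin(2 * t) / 4)
bound = gamma * abs(x0) * np.exp(-alpha * t)
print(bool(np.all(np.abs(x) <= bound + 1e-12)))  # True: the envelope holds
```

Note that the decay rate α = μ/T is exactly what the window-splitting argument in the lecture predicts.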
Obviously, we want to extend this to the vector case, which is the more general case, and for that we need a few additional results on exponential stability. That is what we are going to look at now. This is rather interesting, though: using this new definition of persistence of excitation, we are already able to conclude stability, and we want to see how we can do the same for vectors, that is, for standard state-space dynamical systems. Like we said, in order to proceed in this direction, we need alternate versions of stability theorems. So here is the alternate exponential stability theorem. What is it? We have already seen a standard Lyapunov theorem for exponential stability. I am not going to write it again; I will just go back to where it is. This is the local version, no problem; the global version just adds radial unboundedness. If you have a Lyapunov function which is decrescent, with class-K functions φ1, φ2, φ3 of the same order such that V is lower and upper bounded by φ1 and φ2 and V̇ is upper bounded by -φ3, then you have local exponential stability. So notice: the lower bound is V being positive definite, the upper bound is V being decrescent, and the bound on V̇ is V̇ being negative definite. The only additional requirement here is that φ1, φ2, φ3 are all of the same order of magnitude; that is what is critical for exponential stability. So this is the result we have already seen for exponential stability.
We now want to give what looks like a weaker result, but is not: an alternate exponential stability theorem. What is this alternate exponential stability theorem? Essentially, suppose I have a non-autonomous system ẋ = f(t, x) with some initial condition. What I want is a candidate Lyapunov function V and constants α1, α2, α3, δ > 0 such that for all x in a local domain, three conditions are satisfied. The first is α1‖x‖² ≤ V(t, x) ≤ α2‖x‖². This is pretty much like the first statement we saw, except that here we write the structure of φ1 and φ2 explicitly as quadratics; it gives me positive definiteness and decrescence, very similar. The second condition is where things start to differ: V̇ ≤ 0, that is, V̇ is only negative semi-definite. We do not require V̇ to be negative definite, unlike the standard theorem. But then we need an additional condition, (4.4), which says that although V̇ itself need not be negative definite, the integral of V̇ over a sliding window of length δ has to be negative definite. So you can think of this as a relaxed exponential stability theorem: V̇ itself does not need to be negative definite, but a moving window average of V̇ needs to be negative definite, and that is enough for exponential stability. It should philosophically make sense anyway, because after all, what are we saying? We are analyzing a system over infinite time.
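In symbols, the three conditions just listed can be sketched as follows (my notation, matching the lecture's equation numbers; in particular the precise quadratic form of the windowed condition (4.4) is my reconstruction of "the window integral of V̇ is negative definite"):

```latex
\begin{aligned}
&\text{(4.2)}\quad \alpha_1 \|x\|^2 \;\le\; V(t,x) \;\le\; \alpha_2 \|x\|^2,\\
&\text{(4.3)}\quad \dot{V}(t,x) \;\le\; 0,\\
&\text{(4.4)}\quad \int_{t}^{t+\delta} \dot{V}\bigl(\tau, x(\tau)\bigr)\,d\tau
\;\le\; -\alpha_3 \,\|x(t)\|^2 ,
\end{aligned}
```

and under (4.2)-(4.4) the origin is (locally) exponentially stable.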
Therefore, even if at a particular instant in time I do not get a dip in my state, that is, I do not get my state moving towards the origin, over a window of time I guarantee that it goes down. One possibility is that V decreases all the time, but that is very unusual; I will almost never have a signal like that. The other possibility is that V may be increasing at some instants, but over a window of time it dips. And this is what is codified here in a more formal language: V̇ itself is only negative semi-definite, but the integral over a moving window, that is, a moving window average, is negative definite. A very nice, very interesting and very powerful result; this is the alternate exponential stability theorem. Great. The important thing to remember is that if I was looking only at equations (4.2) and (4.3) using the standard Lyapunov theorems, I would have just uniform stability and nothing more, because V is positive definite and decrescent and V̇ is only negative semi-definite. But of course, because of (4.4), I get exponential stability. So notice: we are not using Barbalat's lemma, we are not using LaSalle's invariance, yet we are able to show, with a merely negative semi-definite V̇, that you have exponential stability. Remember, we had several techniques for working with non-strict Lyapunov functions. What is a non-strict Lyapunov function? It is a candidate Lyapunov function which leads to a negative semi-definite V̇, even though we know that the system is stable.
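The scalar system from earlier already illustrates this window-decrease idea. In this sketch (my construction, reusing the illustrative choice a(t) = sin t), V̇ vanishes at isolated instants, yet V drops by a fixed factor over every window of length π:

```python
import numpy as np

# Illustrative example: for x' = -sin(t)^2 x with x(0) = 2, take V = x^2.
# Then Vdot = -2 sin(t)^2 x^2, which vanishes whenever sin t = 0, so Vdot is
# only negative semi-definite along the trajectory.  Over any window of
# length pi, however, V(t + pi) = V(t) * exp(-pi)  (the integral of
# 2 sin^2 over a window of length pi is exactly pi), so
#   V(t + pi) - V(t) <= -(1 - e^{-pi}) V(t)  for every t.

def V(t, x0=2.0):
    x = x0 * np.exp(-t / 2 + np.sin(2 * t) / 4)  # closed-form solution, t0 = 0
    return x ** 2

delta = np.pi
ts = np.linspace(0.0, 10.0, 50)
drops = np.array([V(t + delta) - V(t) for t in ts])   # window integrals of Vdot
ok = drops <= -(1 - np.exp(-np.pi)) * np.array([V(t) for t in ts]) + 1e-12
print(bool(np.all(ok)))  # True
```

So the windowed decrease is uniform in t, exactly the kind of condition the alternate theorem asks for.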
For example, for the pendulum case or the simple harmonic oscillator with damping, if you took V = (1/2)(x1² + x2²), you know the system is in fact a stable system, but you get V̇ to be only negative semi-definite; that was a non-strict Lyapunov function. So for non-strict Lyapunov functions of this kind, we had two techniques until now for proving asymptotic stability. One was LaSalle's invariance, which is the classical method, of course. The second was Barbalat's lemma, which is what we stress significantly more in this course, and you will continue to see uses of Barbalat's lemma in this course over and over again. But here we have a third way: Theorem 4.1 actually provides a third way of proving exponential stability without invoking Barbalat's lemma or LaSalle's invariance. Of course, we are not going to prove it. The proof is available in the book by Sastry, and you can always refer to it if you are interested in a detailed proof of this very interesting result. All right. So another notion that we want to use is uniform complete observability. Remember, we showed the connection between persistence of excitation and stability for the scalar case, and now we are trying to generalize it to the vector case. The question is how to do it, and we are moving progressively in steps. The first thing we saw was the new exponential stability result, and the next thing we are going to look at is the notion of uniform complete observability. So what is uniform complete observability? I hope all of you have already done a linear systems course; you are expected to have that background, and you would have seen the notion of observability.
You basically look at the observability matrix, which is something like C, CA, CA squared... let's see, did I get this correct? It should be the stacked matrix with rows C, CA, CA², and so on up to CA^(n-1). That is the observability matrix. But we also had the observability Gramian, which gave us a Lyapunov-like or Riccati-like equation corresponding to observability. If you have not seen it, I strongly encourage you to look up what an observability Gramian is. You will of course see it here, but it used to be something like ∫ from 0 to tf of Φᵀ(τ, 0) Cᵀ C Φ(τ, 0) dτ. And the equivalent condition for observability is that this matrix is full rank, or equivalently, positive definite. These were the equivalent conditions for observability of linear time-invariant systems. Of course, the Gramian was also valid for time-varying systems, so it is a little more general a notion. So we now look at a slightly stronger notion, if you may, and that is uniform complete observability. What is that? We are now specializing to linear systems, so let us look at this linear time-varying output system. There is no input, because for the notion of observability we do not really care about the input as such. It is ẋ(t) = A(t) x(t), y(t) = C(t) x(t). This is the output system (5.1), where the state is in Rⁿ, the output is in Rᵖ, and A, C are piecewise continuous. So when do we say that the system is uniformly completely observable?
The system (5.1) is called uniformly completely observable, and the acronym of course is UCO, if there exist positive constants β1, β2 and δ such that for all t ≥ 0, the Gramian integral, which is almost exactly the one we wrote before, is lower and upper bounded: β1 I ≤ ∫ from t to t + δ of Φᵀ(τ, t) Cᵀ(τ) C(τ) Φ(τ, t) dτ ≤ β2 I. The left-hand side indicates positive definiteness and the right-hand side just indicates boundedness. Here capital Φ is of course the state transition matrix corresponding to A(t). Again, if you look at the integrand, it is a symmetric product, so it is positive semi-definite at each instant of time. What are the dimensions? The state transition matrix is in R^(n×n), C is in R^(p×n), so the whole product is in R^(n×n), correct? But because p is typically less than n, C is not full rank, and therefore this product is not necessarily full rank at any single instant; it is only possibly positive semi-definite. But, just like in persistence of excitation, and there is a lot of similarity with the persistence definition, in fact we will connect the two, you are requiring that the moving window integral be strictly positive definite. So it is again some kind of excitation condition, just like before. Very quickly, we want to see the difference from conventional observability. The first thing is, in conventional observability, you take the range from the initial time 0 to some finite final time tf. That is your conventional observability Gramian: you just say that there is a finite time tf such that the integral from 0 to tf is strictly positive definite. Here, we do not have 0 to tf.
We say that there is a sliding window [t, t + δ] of size δ such that the Gramian is positive definite and bounded on all these windows. So this is significantly different. There are, of course, bounds on both ends, and very importantly, just like persistence of excitation, the bounds are in fact independent of the small t; it is a uniform bound. These bounds cannot depend on the small t, and this is again rather critical. So this is different from conventional observability, and therefore uniform complete observability is in fact a stronger notion. Why? Because if we put t equal to, say, 0, then we obtain observability. And why is it complete observability? Because it is valid for all t, all the way to infinity. And why is it uniform? Because these bounds are independent of t. So uniform complete observability is a notion which is stronger than observability itself, and this is the notion that will help us prove stability of parameter identification systems. Excellent. So let us summarize what we did today. First of all, we made a connection between stability of a scalar system and persistence of excitation. Then, in order to extend this notion to vector systems, we are looking at different tools. The first was an alternate exponential stability theorem, which seems to be a very interesting alternative to Barbalat's lemma and LaSalle's invariance. Then we looked at the definition of uniform complete observability, which is again a notion stronger than conventional observability of linear time-varying systems. So, for linear time-varying systems, we are subsequently going to connect UCO, the exponential stability theorem, and persistence of excitation.
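As a small sketch of the Gramian idea (my illustrative LTI example, not from the slides), consider the undamped harmonic oscillator with position measurement only: CᵀC is rank 1 at each instant, yet the Gramian over a window is positive definite, consistent with the rank condition on the observability matrix:

```python
import numpy as np

# Illustrative LTI example: harmonic oscillator with position output,
#   A = [[0, 1], [-1, 0]],  C = [1, 0].
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0]])

def Phi(s):
    # state transition matrix exp(A s) for this particular A (a rotation)
    return np.array([[np.cos(s), np.sin(s)], [-np.sin(s), np.cos(s)]])

# The integrand C^T C is rank 1, hence only positive semi-definite ...
print(np.linalg.matrix_rank(C.T @ C))          # 1

# ... but the Gramian over a window of length delta is positive definite.
delta, n = 2.0, 4000
s_grid = np.linspace(0.0, delta, n + 1)
h = s_grid[1] - s_grid[0]
W = sum(Phi(s).T @ (C.T @ C) @ Phi(s) for s in s_grid) * h
print(np.linalg.eigvalsh(W))                   # both eigenvalues strictly positive

# Cross-check with the rank test: [C; CA] has full rank n = 2.
print(np.linalg.matrix_rank(np.vstack([C, C @ A])))   # 2
```

Since this A is time-invariant, Φ(τ, t) depends only on τ - t, which is why a single window [0, δ] suffices here; for a genuinely time-varying A(t) the uniform-in-t bound is the extra content of UCO.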
And all this nice mix is going to give us the stability that we desire for vector systems with persistently exciting gains, in some sense. Great. So this is where we stop today. I'll see you again next time. Thank you.