Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are again in front of this nice motivational image of the SpaceX satellite orbiting the Earth, and we are well on the way to being able to analyze, and soon design, algorithms that will drive systems such as these to perform autonomously without intervention from the Earth. We are almost at the end of our discussions in week 5, where we have been looking at persistence of excitation and how it is connected to the stability of time-varying linear systems specifically. The context for these time-varying linear systems, as I mentioned, comes from parameter identification algorithms. We are of course yet to see these parameter identification algorithms themselves, but we will do so rather soon. So let's look at what we were doing last time. At the end of the last session we had already proved, for the vector case, that for this dynamical system, if phi is persistently exciting then the origin is in fact globally exponentially stable, with uniformity included. So we got this rather nice result for this dynamics, where phi is now a vector in R^n. We also stated, without completing the proof (I encourage you to complete it), a nice exponential stability result for a system of this kind, which is a very standard structure in model reference adaptive control, where e is the tracking error and theta tilde is the parameter estimation error. We usually get a system of this kind in the model reference adaptive control context, and under certain assumptions we claim that this system is in fact uniformly globally exponentially stable if and only if this signal phi is persistently exciting. So this is where we were until last time.
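Before moving on, here is a quick numerical sanity check of the result just recalled; this is my own illustrative sketch, not from the lecture slides. The signal phi(t) = [sin t, cos t]^T is persistently exciting, so the trajectories of x-dot = -phi(t) phi(t)^T x should decay exponentially toward zero. The particular choice of phi and the forward-Euler discretization are my assumptions.

```python
import numpy as np

def phi(t):
    # An example persistently exciting vector signal (my choice, not from the lecture)
    return np.array([np.sin(t), np.cos(t)])

def simulate(x0, dt=1e-3, t_final=30.0):
    """Forward-Euler simulation of x' = -phi(t) phi(t)^T x; returns ||x(t)|| history."""
    x = np.array(x0, dtype=float)
    norms = [np.linalg.norm(x)]
    t = 0.0
    while t < t_final:
        p = phi(t)
        x = x - dt * p * (p @ x)   # x' = -phi (phi^T x)
        norms.append(np.linalg.norm(x))
        t += dt
    return np.array(norms)

norms = simulate([1.0, -2.0])
print(norms[0], norms[-1])
```

Running this, the norm is non-increasing and has decayed by several orders of magnitude at t = 30, consistent with the PE-implies-exponential-stability claim.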
So today we want to state a general integral lemma for nonlinear parameter-varying systems. That's where we are going to begin, so let me mark this as lecture 5.6. What is this integral lemma? If you remember, we have already seen an alternate version of the exponential stability theorem. That one is restricted to systems which have state and time dependence, which is already pretty general, so one would wonder what more you could require. But if you start to look at what are called nonlinear parameter-varying systems, that is, systems which have a structure like the one written here, with a parameter lambda appearing in the dynamics, you need something more. Why are these called parameter-varying? Because of this lambda, which is a parameter. A parameter is just some constant value: you keep plugging in different values of the parameter lambda and you get a slightly modified dynamical system each time. This parameter could be anything. For example, when we looked at the van der Pol oscillator, if you remember, there was a parameter which affected its behavior. Similarly, there are many, many parameter-varying systems, especially in adaptive control. So it is rather critical to be able to talk about stability of these parameter-varying systems for a certain range of parameters. I don't want to claim stability of a parameter-varying system for one particular value of the parameter; I want to be able to claim that for a certain range of parameter values I am guaranteed nice stable behavior. And this integral lemma says something like that: a stability result for parameter-varying systems, along the lines of the alternate exponential stability theorem. Why do we say that?
So let's look at this integral lemma first. It says that if there exist constants r, c, p, all positive, such that the maximum of the L-infinity norm and the Lp norm of the solution, for some p and for every x0 in B_r, is upper bounded by c times the norm of x0, then x = 0 is lambda-uniformly locally exponentially stable. And further, if r is infinity, that is, if such a c exists for all x0 in R^n, then it is lambda-uniformly globally exponentially stable. So when do you say lambda-ULES? What is the purpose of the lambda? The point is that uniform local exponential stability holds for all lambda in some domain D. So for a certain range of values of lambda, uniform local exponential stability holds; it is lambda-independent in some sense, and therefore it's called lambda-ULES. Now why do I say this is similar to our alternate exponential stability theorem? All you need to do is recall what the L-infinity norm, or rather the Lp norm, was. L-infinity is just the supremum bound in some sense. And the Lp norm is basically the integral from zero to infinity of the norm of x(t) to the power p, the whole thing raised to 1/p; you have to put the time argument here because the inner norm is just the vector norm. So this is the Lp norm for some value of p, and you would remember that imposing a bound on this is some sort of integral condition on the signal; therefore this is called the integral lemma. And similarly, when we look at the alternate exponential stability theorem, there too you have something like an integral requirement, some kind of bound on the integral of V-dot, which is a function of the state and time. So this integral lemma is to be seen as an extension of that alternate exponential stability characterization,
for parameter-varying systems. So that is the result. There's a very nice exercise that I want you to attempt. If you remember, we already looked at the simple harmonic oscillator in one of the previous lectures, and we in fact proved its stability: we constructed a Lyapunov function in order to prove exponential stability, and we could indeed prove exponential stability for that case. Now what we are saying is: let's modify the system and add a parameter. In fact, two parameters, lambda 1 and lambda 2. Notice lambda here can be of any dimension; it is not necessarily scalar. Here lambda is basically a vector in R^2, and further we are saying that we have a domain for lambda: each component, lambda 1 and lambda 2, is strictly positive. What we are asking you to do is to use the integral lemma to prove that this system is in fact lambda-UGES or lambda-ULES, whichever one you can manage. This is rather nice. Intuitively it all makes sense: if these gains take any positive values, you are fine. But what we want you to do is prove this for all positive values of lambda 1 and lambda 2, and therefore we want you to invoke the integral lemma. Now, associated with this integral lemma, there are also a couple of other results, which we state here, and then I'll work a problem, so not in the exact same sequence. We state these results here because we want to show the utility of this new integral lemma also in proving convergence of parameter identification systems.
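To help you get started on the exercise, here is a hedged numerical sketch. I am assuming the parameterized oscillator takes the form x1-dot = x2, x2-dot = -lambda1 x1 - lambda2 x2; the slide may place lambda1 and lambda2 differently, so treat this purely as an illustration that decay holds across a grid of strictly positive parameter values (a simulation is of course not a substitute for the integral-lemma proof being asked for).

```python
import numpy as np

def simulate_oscillator(lam1, lam2, x0=(1.0, 1.0), dt=1e-3, t_final=60.0):
    """Euler simulation of x1' = x2, x2' = -lam1*x1 - lam2*x2 (assumed form).

    Returns the final state norm, which should be near zero when lam1, lam2 > 0.
    """
    x1, x2 = x0
    for _ in range(int(t_final / dt)):
        x1, x2 = x1 + dt * x2, x2 + dt * (-lam1 * x1 - lam2 * x2)
    return float(np.hypot(x1, x2))

# Decay should hold for every strictly positive (lam1, lam2) pair we try
finals = [simulate_oscillator(l1, l2)
          for l1 in (0.5, 1.0, 3.0) for l2 in (0.5, 1.0, 3.0)]
print(max(finals))
```

Every combination on the grid decays to (numerically) zero, which is consistent with the lambda-uniform claim you are asked to prove for the whole positive orthant.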
So as always, this entire discussion on persistence this week has been about using persistence to claim some kind of stability for parameter identification systems. We want to do the same for parameter-varying systems too; of course we have very good motivation for that, and we will talk about it soon. Then I want to define another notion and also talk about one more lemma for parameter-varying systems. So let's look at this: lambda-uniform persistency of excitation. We have already defined persistence of excitation, but now, for parameter-varying systems, we are defining a new version of it: the lambda-uniform persistence of excitation, shortened as lambda-uPE. What is it? It's pretty straightforward. You have the same kind of function, only now the function also depends on a parameter, because your system depends on the parameter. And we have this outer product of this phi, which is lower bounded by mu times the identity. Notice we don't use the upper bound here. I had already mentioned, when I gave the definition of persistence of excitation, that a lot of people don't use the upper bound, because that is really just codifying the boundedness of the signal, which is sort of okay. So we are not using the upper bound here; the lower bound is the critical one, the one that cannot be avoided, and that is what is most critical for persistence. This expression looks exactly the same as persistence of excitation, barring the lambda parameter appearing here. So what are we saying? We are saying that we satisfy a similar condition for all t, over a sliding window of time capital T, and for all lambda in the domain. So this is the additional requirement here:
that this happens for all possible values of lambda in the domain, and therefore it is a lambda-uniform property. Notice that mu doesn't depend on small t, as always. So that is lambda-uniform persistency of excitation: just an extension of the persistence of excitation property to parameter-varying systems. Now, once we have this lambda-uPE property, there is something called a measure lemma, which says that if the function phi is also upper bounded, then the length of time for which it exceeds a certain magnitude is lower bounded by a positive quantity. This is sort of intuitive again. If you think of a persistent signal, whatever it may be, a sinusoid or something like that, and you take any window of length capital T larger than the period, then that becomes a persistent window. If I keep sliding that window over time t, there is some persistence of excitation, that is, this kind of condition will be satisfied. Now the important thing to remember is that if a signal is persistent and it is also bounded, as this one is, then the signal cannot remain very, very small for the entire length of the window; otherwise you cannot have a uniform persistence type property. That's what we're saying: there exists some finite lower bound on this length of time, which we denote I_{mu,T}, if you may.
I_{mu,T} is actually defined in terms of the measure of a set, but for the purposes of our discussion, I_{mu,T} is just a length of time. The reason we don't simply call it a length of time is that I_{mu,T} could be split into several pieces. The idea is this: for this signal phi (the y-axis here is phi), if you draw a horizontal line at a certain height and look at when phi is greater than that value, you get several sub-intervals: this one, this one, this one, and so on. The claim is that the total length of time over which it exceeds that value, this piece plus this piece plus this piece along the x-axis, is in fact lower bounded by a strictly positive quantity, in fact given by this value here. So basically we are saying that if you draw a horizontal line above which you want your curve to lie, and the signal is persistently exciting and upper bounded, then the length of time over which it exceeds that value has to be strictly positive; otherwise the signal could not be persistently exciting. This is a one-way result, of course: if the signal is lambda-uPE, then there is a strictly positive length of time over which it exceeds a certain value. You can choose a different threshold and you will get a different answer here, but as long as you choose some value, the resulting length of time will be strictly positive. This is important, because this is the property that we can actually use to prove the asymptotic convergence of a system like this one. Notice this is our standard scalar system that we have been considering until now, but the important difference is the presence of this parameter lambda.
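To make the measure-lemma picture concrete, here is a small numerical check of my own construction (the signal phi(t, lambda) = (1 + lambda) sin t, the window length, and the thresholds are all assumptions, not from the lecture): over every window of length T, the PE integral stays above a fixed mu, and the total time within the window where phi squared exceeds a threshold stays above a fixed positive amount, uniformly over lambda in [0, 1].

```python
import numpy as np

T = 2 * np.pi            # window length: one period of the example signal
dt = 1e-3
t_grid = np.arange(0.0, T, dt)

def phi(t, lam):
    # Assumed example of a lambda-dependent persistently exciting signal
    return (1.0 + lam) * np.sin(t)

pe_integrals = []
exceed_times = []
for lam in np.linspace(0.0, 1.0, 11):
    for t0 in np.linspace(0.0, 10.0, 5):         # slide the window start
        vals = phi(t0 + t_grid, lam) ** 2
        pe_integrals.append(vals.sum() * dt)      # integral of phi^2 over the window
        # total time within the window where phi^2 exceeds the threshold 0.25
        exceed_times.append((vals > 0.25).sum() * dt)

print(min(pe_integrals), min(exceed_times))
```

Both minima are strictly positive, no matter the window start or the parameter value, which is exactly the lambda-uniform flavor of the two properties being discussed.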
One might ask again: why do we care? Why do we have a parameter? In parameter identification systems, it is very common to have this kind of setup, where the term connected to x, or basically to theta tilde, the parameter error, also has a parameter-dependent gain attached to it, not just a time-dependent one. And we would like some kind of stability property which is uniform in this parameter: it cannot depend on the parameter. This is the usual requirement; we will try to point it out again when we get to it. Therefore it is rather important for us to also look at the stability of these systems. So how do we go about it? As usual, we are saying this gain is greater than or equal to zero, and we are also assuming that a(t, lambda) is lambda-uniformly persistently exciting, not just PE, but lambda-uniformly. Earlier, we actually integrated the dynamics in order to prove nice properties. Now we don't integrate; we take a Lyapunov function, or a Lyapunov-like function if you may: V equal to one half x squared. If you take V-dot, it's pretty simple: you get minus a squared x squared, which is negative semidefinite. And now I want to start invoking the integral lemma. What does the integral lemma require? Simply that the max of the L-infinity norm and an Lp norm should be upper bounded by some scaling of the initial condition norm. The right-hand side carries the vector norm of x0, the initial condition, while the left-hand side carries signal norms. If you have such a bound, you have uniform local or uniform global exponential stability. So what do we have? We have that V(t) is less than or equal to V(0), because V-dot is less than or equal to zero; therefore V is non-increasing.
From that, I can simply write x squared of t is less than or equal to x squared of zero, and therefore the absolute value of x(t) is less than or equal to the absolute value of x(0). And what is the infinity norm? The infinity norm is just the largest absolute value. We already know this holds for all t, which means the largest possible |x(t)|, the infinity norm or the supremum norm if you may, is also less than or equal to |x(0)|. So one part is done: I have bounded the infinity norm by a scaling of the initial condition. I can write it as an absolute value or as a norm; it's a scalar quantity, so it doesn't matter. The next thing I want to do is prove that some Lp norm is also upper bounded by the initial condition. How do I do that? We start using our Barbalat's-lemma-type arguments. We know that V is lower bounded and non-increasing; therefore V at infinity exists and is finite. So I integrate both sides of this equation, just like one of the steps in our signal-chasing analysis (I hope all of you remember it; if not, please go revise). If I integrate both sides from zero to infinity, the left-hand side yields V at infinity minus V at zero. I can do this only because V at infinity exists; otherwise I could not integrate out to infinity. And the right-hand side, which I have just reproduced here, is minus the integral from zero to infinity of a squared of (t, lambda) times x squared of t, dt. Then I'm going to invoke the measure lemma and the integral lemma. So what do I know? Looking at this quantity, I know that I can split this integral over time into intervals of length capital T.
That's what I do: I take the summation over k from one to infinity of the integral from (k-1)T to kT of the same integrand. Why do I do that? Because I know that my persistence property holds on each such interval, and k of course goes to infinity, so the summation covers all of time. Now let me be careful and expand this so we can read it better. Here I invoke my measure lemma. The measure lemma says that, by the lambda-uPE property, this a squared will be larger than a fixed threshold for at least a fixed length of time within each interval. Because all the quantities here are non-negative, I can keep only part of the integration; I don't need the entire integral. So I integrate only over that part, and on that part I can pull out a constant lower bound on a squared, which is this quantity (I have just taken a square here), together with the corresponding length of time. Let me actually think: is this correct? What I can say is that a squared is larger than the square of this threshold on a certain subset of the interval, and outside that subset I simply lower bound the integrand by zero, because the integrand is non-negative and I don't really care about that part. It might as well be zero.
I'm just wondering whether I can still write the remaining integral from (k-1)T to kT, or whether I have to restrict it to this subset. I think you cannot keep the full interval anymore: the integral will have to be over the set I_{mu,T}, and there will no longer be a separate multiplication by its length, because that length is already accounted for in the bound. So this will be the integral over I_{mu,T} for the k-th interval (I can even index the set with k) of x squared of t, dt. What I'm claiming is that, after summing over k, this is lower bounded by alpha times the square of the L2 signal norm of x, where alpha is this constant here. There is a little more work to be done to justify that step precisely, but let me continue with what happens. You see that I take this integral only over that subset of each window; I still needed to answer whether it should be from (k-1)T to kT or just this piece, and it looks like it has to be just this piece. If, in the limit, this equals alpha times the square of the L2 signal norm of x (the summation can go back inside, because k appears only there), then I can plug this back into the inequality here: the negative sign moves across to give V(0) minus V at infinity, and this is greater than or equal to alpha times the square of the L2 norm of x. So therefore alpha times the square of the L2 norm of x is less than or equal to V(0) minus V at infinity.
The square of the L2 norm of x is thus less than or equal to (V(0) minus V at infinity) divided by alpha, and since V at infinity is non-negative, this is less than or equal to V(0) over alpha. And what is V(0) over alpha? It is x(0) squared over two alpha. So I am done: taking square roots on both sides, I have bounded the L2 norm by some constant multiplying |x(0)|, and I had already bounded the L-infinity norm by a constant multiplying |x(0)|. That's what we need for the assumptions of the integral lemma to be satisfied: both norms are upper bounded by some constant multiplying the initial condition, so just take the larger constant and you are done, with p equal to two. The only thing I still need to verify, which I will do at a later stage, is how the summation over intervals translates exactly to the L2 norm; it's not difficult, because we are taking the limit as k goes to infinity, so I think this is fine. But this is how you can prove that, for the parameter-varying case too, you have lambda-uniform global or lambda-uniform local exponential stability. We also have another small result on output injection, a parameter-dependent version; it is part of the notes, but we don't use it directly at this stage. It is useful when trying to prove the vector version of stability for a system with a phi phi transpose type structure, like we did for the purely time-dependent case, but I'm going to leave it be for now; if we need it later, we will invoke it. Great. So this was the conclusion of our lectures for week number five, where we looked in detail at persistence of excitation and how it is connected to the stability of parameter identification systems.
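The proof just completed can also be cross-checked numerically. The sketch below is my own illustration, assuming the lambda-uniformly persistently exciting gain a(t, lambda) = (1 + lambda) sin t (an assumption, not the lecture's example): it simulates x-dot = -a(t, lambda)^2 x and verifies that both the L-infinity and the L2 signal norms of x are bounded by a constant times |x(0)|, uniformly over a grid of lambda values, which is exactly the hypothesis the integral lemma needs with p = 2.

```python
import numpy as np

def a(t, lam):
    # Assumed lambda-uniformly PE gain (illustrative choice, not from the lecture)
    return (1.0 + lam) * np.sin(t)

def norms_for(lam, x0=1.0, dt=1e-3, t_final=50.0):
    """Simulate x' = -a(t, lam)^2 x and return the (L-infinity, L2) signal norms of x."""
    t = np.arange(0.0, t_final, dt)
    x = np.empty_like(t)
    x[0] = x0
    for k in range(len(t) - 1):
        x[k + 1] = x[k] - dt * a(t[k], lam) ** 2 * x[k]
    linf = np.max(np.abs(x))                 # supremum norm of the signal
    l2 = np.sqrt(np.sum(x ** 2) * dt)        # approximate L2 signal norm
    return linf, l2

results = [norms_for(lam) for lam in np.linspace(0.0, 1.0, 6)]
for linf, l2 in results:
    print(linf, l2)
```

For every lambda on the grid, the L-infinity norm equals |x(0)| (the solution is non-increasing in magnitude, matching the V(t) <= V(0) step) and the L2 norm stays bounded by a modest constant times |x(0)|, mirroring the two bounds derived in the proof.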
We learned a lot of technical tools: alternate exponential stability theorems, uniform complete observability, and UCO under output injection. We also looked at a more general integral lemma and the corresponding definitions for parameter-varying systems. So it was a technically intense week, I would say. I would really urge all of you to carefully go through the lectures from this week and try to understand as much as possible. All right, I'll see you folks again soon. Thanks.