Welcome to yet another session of our NPTEL course on non-linear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are into the fifth week of this course, and I really hope that our journey together has been interesting. We have this very nice new background image of a satellite orbiting the Earth, and as we have mentioned before, the algorithms that we derive and analyze in this course are frequently used to drive systems such as these. What we have been doing this week, and in the last class as well, is looking at persistence of excitation and trying to connect it with stability of parameter identification algorithms. We have not yet seen any parameter identification algorithms, but I assure you that the dynamics we see in these lectures appear in such systems very frequently; the parameter identifier structures are very similar to the sort of systems we are going to see. So we established the definition of persistence of excitation, and we looked at an alternate exponential stability theorem. Then, for the scalar case, we connected persistence of excitation to stability; there was a very neat connection, so it was relatively easy to establish in the scalar case. But then we moved on to doing something similar for the vector case, and in order to do that we started by stating an alternate exponential stability theorem, which is a nice variation on LaSalle's invariance principle and Barbalat's lemma, the tools we have been using to analyze stability when we have a non-strict Lyapunov function. Here too there was a non-strict Lyapunov function, but because of an additional integral condition, we could in fact claim exponential stability.
We then went on to establish more definitions which we are going to use subsequently. The first is that of uniform complete observability (UCO). Note that we have now specialized to linear time-varying systems, because the definition uses state transition matrices and such. Uniform complete observability is defined through something that looks like an observability Gramian, but it is a stricter requirement than observability itself. Using this, we want to state an exponential stability theorem for linear time-varying systems. That is where we were last time: the final thing we talked about was uniform complete observability and how it is connected to observability itself, and we want to use it to analyze time-varying linear systems. So this is where we start today; let me mark this as lecture 5.4. We want to look at an exponential stability theorem for linear time-varying systems, assuming a UCO-type condition on the system. We again start with the same state-space system, where x is the state and y is the output. The claim is that the origin is exponentially stable if and only if, for some C such that the pair (A, C) is UCO, there exists a symmetric matrix P(t) such that the two equations 5.4 and 5.5 are satisfied. What is 5.4? It simply asserts boundedness and positive definiteness of P.
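For intuition, here is a small numerical sketch of the time-invariant special case of equation 5.5, namely A^T P + P A = -C^T C. The matrices below (a diagonal Hurwitz A and a row vector C) are my own toy choices, not from the lecture; for a stable LTI pair, the solution P is the classical observability integral, which we approximate by quadrature and then verify.

```python
import math

# Toy LTI example (my own choice): A = diag(-1, -2) is Hurwitz, C = [1, 1].
# For the time-invariant case of (5.5), P = \int_0^inf e^{A^T t} C^T C e^{A t} dt
# solves A^T P + P A = -C^T C.
lam = (-1.0, -2.0)          # eigenvalues of the diagonal A
Cvec = (1.0, 1.0)           # C as a row vector

# Numerical quadrature of the observability integral (tail beyond t = 20
# is negligible for these decay rates).
P = [[0.0, 0.0], [0.0, 0.0]]
dt = 1e-4
for k in range(200000):
    t = k * dt
    for i in range(2):
        for j in range(2):
            P[i][j] += math.exp((lam[i] + lam[j]) * t) * Cvec[i] * Cvec[j] * dt

# Entrywise check of A^T P + P A = -C^T C (A diagonal, so the (i,j) entry
# of the left side is (lam_i + lam_j) P_ij).
ok = all(abs((lam[i] + lam[j]) * P[i][j] + Cvec[i] * Cvec[j]) < 1e-3
         for i in range(2) for j in range(2))
print(ok)  # True

# P is positive definite: leading principal minors are positive (5.4).
print(P[0][0] > 0 and P[0][0] * P[1][1] - P[0][1] * P[1][0] > 0)  # True
```

This is only the constant-P special case; in the time-varying setting of the lecture, the extra P-dot term appears in 5.5.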
If you remember, for symmetric matrices we had connected the positive definiteness of the function x transpose P x with the positive definiteness of the matrix P; we did this as an example. That is exactly what is happening here when we specialize to linear systems, because all our Lyapunov candidates look like x transpose P x. We no longer talk about positive definiteness of the function itself; we just look at positive definiteness of the matrix P, because for this construction the two are equivalent. So equation 5.4 gives us positive definiteness, and it also speaks to decrescence. Equation 5.5 says something about the derivative. Why? Look at this carefully: if I take V dot of x — in fact V dot of (t, x), because P is a function of time — I get x transpose P x dot plus x dot transpose P x plus x transpose P dot x. Substituting the dynamics x dot = A(t) x (I have suppressed the time arguments, but they are there), this becomes x transpose P A x plus x transpose A transpose P x plus x transpose P dot x, which is simply x transpose (P A plus A transpose P plus P dot) x. So the left-hand side of 5.5 is exactly what appears in V dot. What we are essentially saying is that we can talk about the definiteness of V dot not by looking at the entire function, but by looking only at the matrix in between.
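The computation just described can be written out compactly:

```latex
\begin{align*}
V(t,x) &= x^{\top} P(t)\, x \\
\dot V(t,x) &= \dot x^{\top} P x + x^{\top} P \dot x + x^{\top} \dot P x \\
            &= x^{\top} A^{\top} P x + x^{\top} P A x + x^{\top} \dot P x
               \qquad (\text{substituting } \dot x = A(t)\,x) \\
            &= x^{\top}\bigl( P A + A^{\top} P + \dot P \bigr)\, x ,
\end{align*}
```

so the matrix in the middle is exactly the left-hand side of equation 5.5.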
So the left-hand side of the second relation comes from the derivative of V, as you can imagine, and the right-hand side is just minus the product of the output matrix with itself: minus C transpose C. Notice that typically y is in R^p, so C is a p-by-n matrix with p less than n. Then C transpose C, which is an n-by-n matrix, is going to be singular: it has rank at most p, so it is at best positive semi-definite, and its negative is at best negative semi-definite. Using this in 5.5, what we have is that V dot (t, x) is less than or equal to minus x transpose C transpose C x, which is at best negative semi-definite, not negative definite: C has rank at most p, p is usually less than n, so this quadratic form is singular; but being a quadratic form with a negative sign, it is still at best negative semi-definite. So what we are claiming is that just with this negative semi-definite structure, we can conclude exponential stability. And how do we claim this? We invoke our alternate exponential stability theorem. Notice that all that theorem needs is negative semi-definiteness of V dot, together with a nice integral condition: the integral of V dot from t to t plus delta should be negative definite, that is, bounded above by minus alpha 3 times norm x squared.
It has to be less than minus alpha 3 times norm x squared. But that is not too difficult. Why? Let us actually write this out properly, so I save some space. We want the integral to be negative definite. How do I claim this? I simply integrate both sides: the integral from t to t plus T of V dot (s, x(s)) ds — I need to use a different integration variable — equals the integral from t to t plus T of minus x transpose (s) C transpose (s) C(s) x(s) ds. Now, using the state transition matrix, x(s) = Phi(s, t) x(t), so I can write this as minus x transpose (t) times the integral from t to t plus T of Phi transpose (s, t) C transpose (s) C(s) Phi(s, t) ds, times x(t). If you look at this integral here, it is exactly the Gramian in the UCO definition; you can verify by comparing with the definition. And what do I know from the UCO condition? That this Gramian is greater than or equal to beta 1 times the identity; that is, it is positive definite. So I simply use that: because of the negative sign, the greater-than becomes less-than-or-equal, and the whole expression is less than or equal to minus beta 1 x transpose (t) x(t), which is minus beta 1 norm x(t) squared. And this is exactly what we wanted for proving exponential stability: even though V dot is only negative semi-definite, which is a weak property, its integral over a window is negative definite, upper bounded by minus beta 1 norm x(t) squared. This is a rather cool result, and it was possible only because we assumed that the pair (A, C) is UCO, so we could compare with the definition and use it.
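Written out, the chain of equalities and the final bound used above is:

```latex
\begin{align*}
\int_{t}^{t+T} \dot V\bigl(s, x(s)\bigr)\, ds
  &= -\int_{t}^{t+T} x^{\top}(s)\, C^{\top}(s)\, C(s)\, x(s)\, ds \\
  &= -\, x^{\top}(t) \left[ \int_{t}^{t+T} \Phi^{\top}(s,t)\, C^{\top}(s)\, C(s)\, \Phi(s,t)\, ds \right] x(t) \\
  &\le -\,\beta_1\, x^{\top}(t)\, x(t) \;=\; -\,\beta_1\, \|x(t)\|^2 ,
\end{align*}
```

where the second line uses $x(s) = \Phi(s,t)\,x(t)$, and the bracketed matrix is the UCO Gramian, bounded below by $\beta_1 I$.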
And of course, the other piece is the alternate exponential stability theorem. So I hope you appreciate that we are actually using every piece that we have introduced in order to arrive at the result that we want. Great. With this you have exponential stability under a slightly weakened requirement: in the usual linear-system Lyapunov equation, you would want the right-hand side to be negative definite, but here it is only negative semi-definite. The next ingredient we want to introduce, which again we will use subsequently, is UCO under output injection. The basic result says that if I inject a linear function of the output into the state dynamics, it does not change the UCO property; the UCO property is invariant under output injection. That is what the theorem claims. It says that if you have some time-varying gain matrix K(t) — time-varying of course, because we are looking at a time-varying linear system, though all of this also holds for linear time-invariant systems, where these bounds are obviously easy to obtain — and if its integral over every moving window remains bounded, then (A, C) being UCO is equivalent to (A + KC, C) being UCO. You see that this is indeed an output injection: if you look at equation 6.2, the dynamics now contains an injection of the output. So what are we saying? If the pair (A, C) alone is UCO, and K has this nice boundedness property, then the injected system is also UCO. This is rather nice.
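As a rough numerical illustration of this invariance (the PE signal phi, the small constant gain K, and the window length below are all my own choices), take the pair A = 0, C(t) = phi transpose (t): injecting K C into the dynamics gives the transition matrix of A + KC, and the Gramian of the injected pair should still be positive definite over a window.

```python
import math

# Illustrative choices (not from the lecture): phi(t) = (sin t, cos t),
# constant gain K = (0.02, 0)^T, window [0, 2*pi]. A constant K trivially
# satisfies the moving-window boundedness assumption of the theorem.
def phi(t):
    return (math.sin(t), math.cos(t))

K = (0.02, 0.0)
T = 2 * math.pi
N = 20000
dt = T / N

# Euler integration of the transition matrix of A + K C = K phi^T:
# Phi' = (K phi^T) Phi, Phi(0) = I, accumulating the UCO Gramian
# G = int Phi^T phi phi^T Phi d tau along the way.
Phi = [[1.0, 0.0], [0.0, 1.0]]
G = [[0.0, 0.0], [0.0, 0.0]]
for k in range(N):
    p = phi(k * dt)
    # v = Phi^T phi, i.e. the row C(t) Phi stored as a vector
    v = (Phi[0][0] * p[0] + Phi[1][0] * p[1],
         Phi[0][1] * p[0] + Phi[1][1] * p[1])
    for i in range(2):
        for j in range(2):
            G[i][j] += v[i] * v[j] * dt
    # Phi <- Phi + dt * K (phi^T Phi); note that phi^T Phi is exactly v^T
    Phi = [[Phi[i][j] + dt * K[i] * v[j] for j in range(2)]
           for i in range(2)]

# Positive definiteness of the symmetric 2x2 Gramian: G00 > 0 and det > 0.
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
print(G[0][0] > 1.0 and det > 1.0)  # True: injected pair still has a PD Gramian
```

With K = 0 this Gramian reduces to the plain PE integral of phi, so the check shows the small injection has not destroyed positive definiteness, consistent with the theorem.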
Of course, we just use different notation for the bounds: the bounds for (A, C) were denoted beta 1 and beta 2 in equation 5.1, while here we write beta 1 bar and beta 2 bar, just to differentiate, because the bounds are going to be different — but both pairs are still UCO. And here we are assuming basic regularity, for example that the A and C matrices are piecewise continuous, and so on. Great. So this is essentially all the machinery you need in order to study exponential stability of parameter identification — at least for the sort of linear parameter identifier systems that we will be dealing with soon enough. What are the ingredients? One, we have this nice exponential stability theorem for linear time-varying systems, which is a weakened version of the typical exponential stability theorem, which would require a negative definite right-hand side. Two, we have the notion of uniform complete observability under output injection: if you have a gain K that is bounded in the moving-average sense, then injecting the output through that gain into the dynamics does not alter the UCO property. This is what is rather nice. So we will try to do one piece of this proof today, and then wrap it up in the next session, because we may not have enough time today to complete the entire proof. But anyway, let us first state the result. What do we say? Let phi be a vector signal, taking time and mapping to R^n.
In this case R^n is exactly the dimension of the dynamics, because of the structure of equation 7.1. So suppose phi maps time to R^n, the dimension of the state space, and is piecewise continuous — basic. If phi is persistently exciting, as per our definition, then the origin is globally exponentially stable for this system, where alpha is just some positive constant gain and x0 is the initial condition. The first thing to notice is that equation 7.1 is not too different from the scalar example: there we had just minus phi squared because it was scalar, whereas now we have minus alpha phi phi transpose, just because it is the vector case. Notice that phi phi transpose is an n-by-n matrix now, and instantaneously it has rank at most 1. Why? Because phi itself is a vector, so the outer product cannot have rank more than its constituent factors, which is 1. So phi phi transpose is an n-by-n matrix — imagine I have 100 states — but it is just rank 1. Remember this. This is why PE is such a nice condition: if you remember, the PE condition says that if you integrate phi(tau) phi transpose (tau) d tau from t to t plus T, the result is bounded on both sides by constants mu 1 and mu 2 times the identity. So although instantaneously it is obviously only rank 1 — and therefore there are possibly many eigenvalues at 0 —
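Both halves of this observation — rank one at every instant, full rank after integrating over a window — are easy to check numerically. The test signal phi(t) = (sin t, cos t) and the window length 2*pi below are my own illustrative choices:

```python
import math

# Illustrative PE signal (my choice): phi(t) = (sin t, cos t) in R^2.
def phi(t):
    return (math.sin(t), math.cos(t))

def outer(v):
    # 2x2 outer product v v^T
    return [[v[i] * v[j] for j in range(2)] for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Instantaneously, phi(t) phi(t)^T is rank 1: its determinant vanishes.
M_inst = outer(phi(0.7))
print(abs(det2(M_inst)) < 1e-12)   # True: rank deficient at each instant

# But the windowed integral over T = 2*pi is pi * I: full rank.
N = 100000
T = 2 * math.pi
dt = T / N
G = [[0.0, 0.0], [0.0, 0.0]]
for k in range(N):
    Pk = outer(phi(k * dt))
    for i in range(2):
        for j in range(2):
            G[i][j] += Pk[i][j] * dt

print(abs(G[0][0] - math.pi) < 1e-3 and abs(G[1][1] - math.pi) < 1e-3)  # True
print(det2(G) > 1.0)  # True: the windowed integral is positive definite
```

So this phi satisfies the PE condition with, for example, mu_1 slightly below pi and mu_2 slightly above pi for this window.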
but if I integrate it over this moving window, then we are saying the rank becomes full. All right, great. So now we understand the setup of the problem. The next thing we want to do is obviously to analyze stability, and how do we do that in this course? We take a candidate Lyapunov function — a candidate, not yet a Lyapunov function. Why is it a valid candidate? We should remind ourselves, every time we see a function V, that it is a valid candidate. One, it is C1, once continuously differentiable: no problem, it is just a quadratic, and we only have to use vector calculus to take the derivative. Two, it is positive definite, and in fact radially unbounded, because again it is just a square: whatever x squared signifies in scalar systems, x transpose x signifies the same in vector systems, when x is in R^n. Then we carefully take the derivative along trajectories, and it is very simple: I get twice x transpose x dot, and if you substitute for x dot, which is minus alpha phi phi transpose x, you get minus 2 alpha times the square of phi transpose x, because it is again an inner product — of phi transpose x with itself. So I can write it in this nice compact form. Now we write the integral, because we want to use the alternate stability theorem, so we want to compute the integral of this quantity. Notice that we are still in the regime of linear systems.
This is still a linear system: it is nonlinear in phi, but phi is just a function of time, and the system is linear in the state. Whenever we talk about linearity of a state-space system, we mean linearity with respect to the state; so this is a linear system, maybe nonlinear in time, but we do not care about that. Great. So now we just compute the integral, because the alternate exponential stability theorem requires us to do so. If I integrate V dot d tau from t to t plus T — I think there is a bracket missing here — we just write the expression right here. And all our subsequent analysis is essentially going to be bounding this particular quantity. Why? Again, the alternate exponential stability theorem requires a nice upper bound on this quantity on the left; that is exactly why we are going to bound it. How do we begin? We begin from the PE assumption. Because phi is PE, the system (0, phi transpose (t)) is UCO. What is this system? Whenever I write this kind of notation, the first piece indicates the A matrix and the second piece the C matrix; so the system is x dot = 0 with y = phi transpose (t) x, and we are saying that this system is uniformly completely observable. Why? Well, we know the PE condition on phi — I am going to mark it nicely — so what does UCO of this system look like?
The UCO condition requires the integral from t to t plus T of Phi transpose (tau, t) C transpose (tau) C(tau) Phi(tau, t) d tau, where Phi is the state transition matrix and C(tau) is phi transpose (tau). Unfortunately, we are using similar-looking notation for the state transition matrix and the PE signal, so please do not get confused: the small phi is the signal we are saying is PE, and the capital Phi denotes the state transition matrix, which is standard notation. We could have gone with something else, but it is done now, so bear with it. Now, the question is: what is Phi(s, t) for the system x dot = 0? Well, x(t) = x(t0) for all t greater than or equal to t0, therefore the state transition matrix is in fact the identity. I hope all of you see this; if not, please revise what the state transition matrix is. So the state transition matrix is the identity, and if I plug that in, I simply get the integral from t to t plus T of phi(tau) phi transpose (tau) d tau. This is exactly the matrix involved in the UCO definition — the UCO Gramian — and it is exactly identical to the PE integral. So by virtue of persistence of excitation, I have UCO of this system. All right, great. So what have we looked at today?
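Before closing, here is a numerical sanity check of the theorem we have started proving. The choices of alpha, the PE signal phi, the initial condition, and the crude forward-Euler integration are all mine, purely for illustration: we simulate equation 7.1, x dot = -alpha phi(t) phi transpose (t) x, and watch the norm of the state decay.

```python
import math

# Illustrative choices: alpha = 1, phi(t) = (sin t, cos t) (a PE signal),
# x(0) = (3, -4). Forward-Euler simulation of x_dot = -alpha phi phi^T x.
alpha = 1.0

def phi(t):
    return (math.sin(t), math.cos(t))

def step(x, t, dt):
    p = phi(t)
    s = p[0] * x[0] + p[1] * x[1]          # s = phi^T x  (a scalar)
    return (x[0] - dt * alpha * p[0] * s,  # x <- x - dt * alpha * phi * s
            x[1] - dt * alpha * p[1] * s)

def norm(x):
    return math.hypot(x[0], x[1])

x = (3.0, -4.0)          # ||x(0)|| = 5
dt = 1e-3
norms = []
for k in range(40001):   # simulate 40 seconds, sampling ||x|| every 10 s
    if k % 10000 == 0:
        norms.append(norm(x))
    x = step(x, k * dt, dt)

# The norm decreases across the checkpoints and ends up tiny, as global
# exponential stability predicts.
print(all(norms[i + 1] < norms[i] for i in range(len(norms) - 1)))  # True
print(norms[-1] < 1e-3 * norms[0])                                  # True
```

Note that at each instant the state only loses energy along the current direction of phi, since V dot = -2 alpha (phi transpose x) squared; it is the rotation of phi over the window, i.e. persistence of excitation, that drives every component of x to zero.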
So what we looked at today are a few more concepts: UCO under output injection, and an exponential stability theorem for linear time-varying systems. And now we are starting to connect these with our proof of exponential stability of parameter identifier systems under persistence of excitation. We have just started the proof, and of course we are going to finish it in the upcoming sessions. All right, that is where we stop. Thank you.