Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikant Sukumar from Systems and Control, IIT Bombay. We are now in the fifth week of this course, which I hope all of you are finding interesting and useful. Starting this week we have a new motivating image, that of a spacecraft from SpaceX orbiting the Earth. The algorithms that we seek to design also drive systems such as these, so that they can reorient and move autonomously in space. So, starting today, we will look at a new set of course notes, on material that precedes adaptive control. The adaptive control literature was preceded by the identification literature, and it was rather well known that for any parameter identification to happen successfully, there was a certain requirement of richness of signals. This richness of signals was regularly codified in terms of persistency of excitation. Until the last lecture, we completed our discussion of the Lyapunov theorems and their different variants, and today we are ready to delve into persistency of excitation. Like I said, this chronologically came before the adaptive control literature. Of course, you will also see that there is a healthy connection to stability, because we are talking about parameter convergence, which is a sort of stability problem, and so we will look at some alternate exponential stability theorems. This is what we will discuss for the rest of the week. So, let me warn you: the topics are getting more and more mathematically involved.
So, I would expect you to spend more time understanding the material, going through it, and becoming comfortable working with it in different problems and application contexts. All right? Great. So, let us begin. The first thing we talk about is the definition of persistency of excitation. A vector signal phi of t is said to be persistently exciting — this is first a definition in words, and then of course we will define it more mathematically. The notion of persistency of excitation is that the signal wobbles around enough in every window of length capital T. So, if I have a sliding window of time, and I keep sliding the window forward in time, then over each such window of length capital T, I want the signal to wobble sufficiently, to move around sufficiently. It is made a little more formal by saying that the integral of the dyad associated with the signal is positive definite. Again, you will see a more mathematically precise version just below, so we do not have to worry too much about understanding this yet, but a dyad is an operator built from two vectors — in this case, it is referring to an outer product. So, what are the elements here in the definition? There is a sliding window of time of constant width capital T, and on each such window, we are expecting the signal to wobble sufficiently, so that some kind of positive definiteness of the integrated dyad is achieved. Again, not all elements of this are very clear just from the definition in words.
So, we actually look at the more formal mathematical definition for our purposes, that is, when we are looking at vector signals in Rn. If there is an integrable signal phi from R plus to Rn, then it is called persistently exciting if there exist three constants, mu 1, mu 2 and capital T, all of them positive, such that a certain integral inequality holds — and this has to hold for all times small t. This is what it means to have a sliding window: as I change small t, the window slides, because the width of the range of integration remains capital T; it is just that I am changing the initial time from which I begin. Okay. So, what does this inequality mean? Let us try to interpret it. The first thing to remember is that the matrix phi phi transpose belongs to Rn cross n; it is an n cross n matrix. The second thing is that it is in fact positive semi-definite. First of all, it is symmetric, because it is the product phi phi transpose: if I take the transpose, it is still phi phi transpose. And it is like a quadratic — because it is phi times phi transpose, it can never have negative eigenvalues. We already know that if a matrix is symmetric, then its eigenvalues are real, so we can talk about positive and negative eigenvalues; and this matrix cannot have negative eigenvalues because it is like a square. If you have any confusion in understanding this, you can just multiply on both sides by some constant vector v: v transpose phi of tau phi transpose of tau v.
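The integral inequality described in words above can be recorded compactly; this is my own rendering in symbols of the condition the lecture states:

```latex
% Persistency of excitation (PE): an integrable signal
% \varphi : \mathbb{R}_+ \to \mathbb{R}^n is persistently exciting if
% there exist constants \mu_1, \mu_2, T > 0 such that
\[
  \mu_1 I \;\le\; \int_{t}^{t+T} \varphi(\tau)\,\varphi(\tau)^{\top}\, d\tau \;\le\; \mu_2 I
  \qquad \text{for all } t \ge 0,
\]
% where the inequalities are in the sense of symmetric matrices
% (A \le B means B - A is positive semi-definite) and I is the
% n x n identity matrix.
```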
Suppose I do this — I can move v in and out of the integral — then you see that the integrand is actually a square: the integral from t to t plus capital T of (phi transpose of tau times v) squared d tau. This is a norm squared. It is something like an x transpose x, which is the two-norm squared, and a norm can never be negative. Therefore, this entire quantity has to be greater than or equal to 0. Now, you know that if the quadratic form is non-negative, then the matrix has to be at least positive semi-definite. So that is really the argument: it is like a square, therefore it is positive semi-definite at least. But we are claiming something more. We are saying that it is in fact lower bounded by mu 1 times the identity, with mu 1 strictly greater than 0. So, we are somehow saying something about the eigenvalues of this integral. If you remember, for any symmetric matrix A we had this nice inequality: lambda min of A times norm x squared, less than or equal to x transpose A x, less than or equal to lambda max of A times norm x squared. This means that the quadratic form is lower and upper bounded by the smallest and largest eigenvalues. Therefore, all the lower bound is saying is that the smallest eigenvalue of this matrix — the integral, of course — has to be greater than or equal to mu 1. And the smallest eigenvalue of a symmetric matrix being greater than or equal to a positive constant means what? It means that this is a positive definite matrix. The other side is rather simple: it just says the matrix is bounded, that there is an upper bound on the eigenvalues of this integral. It is a simpler condition.
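The eigenvalue sandwich invoked here is the standard Rayleigh-quotient bound; writing it out once, in my own transcription, to make the step explicit:

```latex
% For any symmetric matrix A = A^\top and any vector x:
\[
  \lambda_{\min}(A)\,\|x\|^2 \;\le\; x^{\top} A x \;\le\; \lambda_{\max}(A)\,\|x\|^2 .
\]
% Applied to A = \int_t^{t+T} \varphi(\tau)\varphi(\tau)^\top d\tau,
% the lower PE bound \mu_1 I \le A says exactly that
% \lambda_{\min}(A) \ge \mu_1 > 0, i.e. A is positive definite;
% the upper bound \mu_2 I only says the eigenvalues stay bounded.
```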
In fact, many textbooks define persistency of excitation without this right-hand side. So, I would not put too much stress on that side; it is more a question of how you want to do the math. So, what is the idea? First, we know that the integrand is greater than or equal to 0. What else? We know that phi phi transpose is singular for each t. Why is that? See, phi itself is just a vector, and what is the maximum rank of a vector? 1. Now, if I take a product of two matrices, then the rank of the product is less than or equal to the rank of each of the factors. A vector is also a matrix, so the rank of this product is at most 1. So, this is rather interesting: I am saying the rank of the product inside the integral is at most 1, but when I integrate this quantity over the window of time, the rank in fact becomes n. Because if the rank does not become n, the integral cannot become positive definite, which is what is indicated by this inequality. Or, even simpler: the inequality says that the smallest eigenvalue is greater than or equal to mu 1, which means all eigenvalues have to be strictly positive. So, although the integrand has rank at most 1, the integral over the time window of capital T has maximum rank. This is the rather cool property that we are looking for, and this is the property of persistency of excitation.
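This rank jump is easy to see numerically. Here is a minimal sketch (my own illustration, not from the lecture): I pick the vector signal phi(t) = [sin t, cos t]^T, check that the outer product at a single instant has rank 1, and then check that its integral over a window of length T = 2*pi has full rank 2 with both eigenvalues near pi.

```python
import numpy as np

# At each instant, phi(t) phi(t)^T is a rank-1 (singular) 2x2 matrix.
def phi(t):
    return np.array([np.sin(t), np.cos(t)])

M_instant = np.outer(phi(1.0), phi(1.0))
rank_instant = np.linalg.matrix_rank(M_instant)  # expect 1

# Approximate the window integral with a Riemann sum over [t0, t0 + T].
t0, T, N = 0.0, 2.0 * np.pi, 10000
taus = np.linspace(t0, t0 + T, N)
dt = T / N
Phi = np.stack([np.sin(taus), np.cos(taus)], axis=1)  # N x 2 samples
M_window = Phi.T @ Phi * dt  # sum of outer products times dt

rank_window = np.linalg.matrix_rank(M_window)  # expect 2 (full rank)
eigs = np.linalg.eigvalsh(M_window)            # both eigenvalues near pi
```

For this choice of phi, the integral comes out close to pi times the identity, so the signal is persistently exciting with mu 1 approximately equal to mu 2 approximately equal to pi.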
So, as mentioned in the note, the matrix phi phi transpose itself is singular for all time. The PE condition somehow requires that phi rotates sufficiently, such that the integral of phi phi transpose is uniformly positive definite. Why do we say uniform? Because these bounds do not depend on small t: mu 1 and mu 2 are independent of small t. You remember that in all our definitions, uniformity has always had to do with the time argument; in this case, that time argument is small t. If mu 1 and mu 2 are independent of it, then you can keep sliding the window, and no matter where you are, your bounds remain the same. If they do not — if you do not have a uniform bound — then you will not be able to complete the sort of analysis that we try to do. Excellent. So: we start with a vector, we form the outer product phi phi transpose. We know it is symmetric, we know it is positive semi-definite, and we know it is singular with rank at most 1. But the cool feature we are looking for is that when I integrate it over a window of time capital T, it becomes positive definite. Of course, I also expect it to remain bounded — that is fine, not a big deal, we are not asking for much there. Great. So, that is what we are saying: we create an outer product, the condition has to be valid for all small t as we slide a window of size capital T, and while phi phi transpose is symmetric positive semi-definite, its moving-window average is positive definite. This is rather strong. So, let us look at some examples of what kind of signals are in fact persistently exciting.
So, the first example, obviously, is a scalar signal. Most of our examples are scalar signals, but you can construct the vector counterparts without too much trouble. The very easiest example is a constant signal, phi of t equal to c, which is of course trivially persistently exciting. If you integrate c squared d tau from t to t plus capital T, you will always get c squared times capital T, irrespective of what your small t is, and therefore the lower and upper bounds can both be taken to be exactly c squared times capital T — I can mark this as my mu 1 and mu 2. One of the things to remember is that these mu 1s and mu 2s can depend on capital T. So, a lot of times, the definition is taken with a 1 over capital T in front, to do a sort of averaging, so that capital T does not appear in these constants. This is also standard, definitely not non-standard. With the 1 over capital T, you average over the window of length capital T; without it, it is not an average, it is just a summation — if you think of breaking up the integral, of course. With the 1 over capital T, it is a sort of average, which is why the phrase moving-window average gets used. So, if I do the 1 over capital T here as well, then mu 1 and mu 2 are in fact c squared — c squared, not c, because it is phi phi transpose. Great. So, like we said, this is trivially persistently exciting, no problem.
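A quick numerical check of the constant case, as a sketch (the values c = 2 and T = 5 are my own choices): the window integral of phi squared equals c squared times capital T for every starting time small t, so mu 1 = mu 2 = c squared times capital T works uniformly.

```python
import numpy as np

c, T = 2.0, 5.0  # constant signal phi(t) = c, window length T

def window_integral(t0, n=100000):
    # Left Riemann sum of phi(tau)^2 = c^2 over [t0, t0 + T];
    # exact here, since the integrand is constant.
    taus = np.linspace(t0, t0 + T, n)
    dt = taus[1] - taus[0]
    return np.sum(np.full(n - 1, c) ** 2) * dt

# The integral is the same for every window start: c^2 * T = 20.
integrals = [window_integral(t0) for t0 in (0.0, 3.7, 100.0)]
```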
So, let's look at another example — this time a signal that can actually hit zero. The constant was trivially persistently exciting because it never hits zero anyway, so we are not worried about that signal so much. The next one is a periodic signal whose period is capital T itself, the window size. What is the function? It is the max of sine t and zero, and of course we take capital T to be 2 pi, because that is the period of this signal. The purpose of the max with zero is basically to make sure the signal does not go below zero, that's all — not that it matters so much. In fact, I could instead just take f of t equal to sine t, and with capital T equal to 2 pi, I would be computing the integral from t to t plus 2 pi of sine squared tau d tau. So, let us see if I can integrate this using a trigonometric formula. We have 1 minus cosine 2 tau equal to 2 sine squared tau, because cos 2 tau is cos squared tau minus sine squared tau, so 1 minus cos 2 tau becomes sine squared tau plus sine squared tau. So, sine squared tau is one half of 1 minus cos 2 tau, and if I integrate this, I get tau over 2 minus one quarter sine 2 tau, evaluated from t to t plus 2 pi.
The first term will just give me pi, and the second term will be minus one quarter of sine of 2 t plus 4 pi minus sine 2 t. What is sine of 2 t plus 4 pi? Since sine is periodic with period 2 pi, this is just sine 2 t, so the second term cancels and I am left with pi. And if I take the 1 over capital T average — 1 over 2 pi here — this is pi over 2 pi, which is equal to one half. So, this signal is also persistently exciting. Pretty straightforward: I just integrated the square of the signal, because if the signal is a scalar, then the outer product is simply the square of the signal, and it is not difficult to compute that this is persistently exciting. Now, look at this kind of signal: f of t equal to e to the power minus t times sine t. We are not going to compute the integral, but let us see how the signal evolves. It will really start to decay very fast. If we draw the envelope e to the power minus t, the sinusoid will lie inside this envelope, and it will decay very fast. Now, this kind of signal is not persistently exciting. I am not actually computing the integral itself — I will leave it to you folks to try to compute it — but it is not going to be persistently exciting. Why? Because the signal is decaying, and decaying exponentially at that. What happens is that the integral over each window of capital T — I make a window, then I move the window forward, again and again — picks up smaller and smaller amplitudes.
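The sine-squared computation can also be checked numerically; this is a sketch of my own confirming that the integral of sine squared over any window of length 2 pi equals pi, independent of the window start, so mu 1 = mu 2 = pi (or one half after the 1 over capital T averaging).

```python
import numpy as np

T = 2.0 * np.pi  # window length = period of sin(t)

def window_integral(t0, n=200000):
    # Left Riemann sum of sin(tau)^2 over [t0, t0 + 2*pi].
    taus = np.linspace(t0, t0 + T, n)
    dt = taus[1] - taus[0]
    return np.sum(np.sin(taus[:-1]) ** 2) * dt

# Same value (pi) for every window start t0: the signal is PE.
vals = [window_integral(t0) for t0 in (0.0, 1.0, 10.0)]
averaged = [v / T for v in vals]  # each close to 1/2
```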
And in fact, this amplitude is scaled by something like e to the power minus t. Because I get smaller and smaller amplitudes, the mu 1 and mu 2 for each particular window start to get smaller and smaller. And the thing is, I get to choose only one mu 1 and one mu 2: they have to be uniform in time; mu 1 and mu 2 cannot depend on small t. So, if I keep moving the window and the amplitude becomes smaller, so that the corresponding mu 1 and mu 2 come out smaller for each of these windows, then the only thing I can choose is the smallest mu 1 and the largest mu 2. The largest mu 2 is not a problem, because I will just choose it from the first window. But for the smallest mu 1, if you keep going on and on, you can see that it is going to become really, really tiny. The smallest mu 1 I will be able to pick — it is just like that uniform stability argument — is in fact 0. And if the mu 1 that we pick is 0, then this is not persistently exciting, because we require mu 1 to be strictly positive. You already know the outer product is semi-definite — we already noted that, no magic there. What we need is that the integral of the outer product over the window of time capital T be strictly positive definite, with a uniform mu 1, independent of small t. So, this kind of signal is not persistently exciting. For the purposes of identification, these are not good signals. These are not good for identification.
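The shrinking-window argument can be seen numerically as well; a sketch of my own: for f(t) = e^(-t) sin t, the window integrals of f squared decrease toward zero as the window slides forward, so no single positive mu 1 works for all windows.

```python
import numpy as np

T = 2.0 * np.pi  # window length

def window_integral(t0, n=200000):
    # Left Riemann sum of (exp(-tau) * sin(tau))^2 over [t0, t0 + T].
    taus = np.linspace(t0, t0 + T, n)
    dt = taus[1] - taus[0]
    f = np.exp(-taus[:-1]) * np.sin(taus[:-1])
    return np.sum(f ** 2) * dt

# Each successive window contributes less; the infimum over t0 is 0,
# so there is no uniform mu1 > 0 and the signal is not PE.
vals = [window_integral(t0) for t0 in (0.0, 5.0, 10.0, 20.0)]
```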
So, anyway, let me summarize what we did today and what we are going to do subsequently. What we started today was a discussion on persistency of excitation, which is a rather important notion in parameter identification, and we will try to connect it in basic ways to parameter identification in the subsequent lectures. Today we saw the definition of persistency of excitation, and we also saw what kinds of signals are in fact persistently exciting. It really just involved computing a simple integral of an outer product, and in fact, for the more complicated cases, it is not difficult to verify the condition numerically as well. So, like I said, subsequently we will start looking at how this is connected to adaptive identification. Great. This is where we stop. Thank you.