Hello everyone, welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are well into the 11th week of our lectures on nonlinear and adaptive control, and by now we have learned quite a bit of adaptive control. We have covered a large breadth of the theory of adaptive control design and analysis, and I really hope all of you have learned enough by now to be able to design algorithms to drive autonomous systems such as the SpaceX satellite you see in our background. This week we have started to look at a new paradigm in adaptive control. Until the week before this, we always relied on persistence of excitation for learning of parameters. We never really promised parameter convergence in our typical analysis; we were more or less concerned only with tracking of the system states to the desired states. But parameter learning is also a key aspect, and we did spend a decent amount of time on persistence of excitation, uniform complete observability, and integration lemmas, which are what help us prove convergence in the persistence of excitation setting. Starting this week, we have been looking at an initial excitation based adaptive controller, where you have essentially two layers of filters on your regressor and your control. Interestingly, because of this, the update law design becomes simpler and also has nice negative terms in theta tilde itself, which is not usually present in typical PE-based adaptive control. Typical certainty equivalence adaptive control does not contain any theta tilde term. Of course, there is a theta hat term in the sigma- and e-modification adaptive controllers, but you know that those have deteriorated performance even in the absence of disturbances.
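Since the two-layer filtering is only described verbally here, a minimal numerical sketch may help fix ideas. This is an illustrative assumption, not necessarily the exact construction from the lecture's reference: a first-order low-pass filter on the regressor as the first layer, and a running integral of the outer product of the filtered regressor as the second layer.

```python
import numpy as np

# Hypothetical illustration of the "two layers of filtering" idea:
# layer 1: a first-order low-pass filter on the regressor y(t),
# layer 2: a running integral of the outer product of the filtered regressor.
# The filter structure and gain beta are illustrative assumptions.

def simulate_filters(y, dt=0.01, beta=1.0):
    """Euler-integrate yf' = -beta*yf + y and Yif' = yf yf^T."""
    n = y.shape[1]                  # regressor dimension
    yf = np.zeros(n)                # filtered regressor (layer 1)
    Yif = np.zeros((n, n))          # integrated outer product (layer 2)
    for yk in y:
        yf = yf + dt * (-beta * yf + yk)
        Yif = Yif + dt * np.outer(yf, yf)
    return yf, Yif

t = np.arange(0.0, 5.0, 0.01)
y = np.column_stack([np.sin(t), np.cos(t)])   # a 1x2 regressor over time
yf, Yif = simulate_filters(y)
# If y is exciting, the smallest eigenvalue of Yif becomes positive,
# which is exactly the quantity the initial excitation condition bounds.
print(np.min(np.linalg.eigvalsh(Yif)))
```

The smallest eigenvalue of the accumulated matrix Yif is the natural scalar measure of how much excitation has been collected so far.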
All right, so this is rather nice, rather powerful. You can also show that V dot becomes negative definite in the presence of initial excitation. Of course, if there is no initial excitation and no persistence of excitation, then you land in the same kind of trouble in the presence of disturbances. So I wanted to make a few comments on the results we have; this is where we start, and I will mark this as lecture 11.3. The first comment written here is that initial excitation is a weaker condition than persistence of excitation. That should be obvious from the definitions themselves; we spoke about it in the previous lecture. For persistence of excitation you need excitation to happen on all sliding windows of time, whereas here the excitation is required only at the initial time. The second comment is that the original regressor y being initially exciting is sufficient for the filtered signal yf to be initially exciting. Therefore, you do not need to verify the initial excitation property on the filtered signal; verifying it on the original signal is sufficient, which is rather useful. A few other comments; like I said, I am going to add a page here. One: in the absence of any excitation, robustness is still an issue. So please do not think that just because we have a new method that replaces persistence with initial excitation, robustness comes for free. Yes, initial excitation is easier to obtain than persistent excitation, and if you have some excitation initially, that is more than okay. But if you do not have even that, then you still have a robustness issue in the presence of disturbances.
Robustness is still an issue here in the presence of disturbances, so you will have to resort to the same kinds of remedies as before; you will not have a big advantage here. This becomes more evident if you go back and look at the derivative expression. Yes, there are theta tilde terms in this equation, which is nice, just like in sigma- and e-modification. But these terms are scaled by Yif and yf transpose yf, and those matrices are only positive semi-definite at best if you do not have excitation. Therefore, these terms may not contribute to robustness. So if you look at the Lyapunov analysis, you get the strictly negative term only after you have excitation. If you do not have excitation, you are left with a term that is only negative semi-definite, so you still have a non-strict Lyapunov function, which means you still have a robustness issue in the presence of disturbances. You have not resolved robustness per se. Great, I hope you understand that. Now, the second point is that initial excitation is an easier condition. For example, let me draw some axes and two signals. The first signal, call it f1 of t, is nonzero for some initial interval and then dies out. The second signal, f2 of t, is the same initially, but after that it still continues. It should be obvious to you that f1 of t is initially exciting but not persistently exciting.
And f2 of t is both IE and PE. So it should also be evident that a larger class of signals is initially exciting: persistent excitation implies initial excitation, but initial excitation does not imply persistent excitation. This is important. It indicates that initial excitation is a weaker requirement and therefore easier to satisfy; we just need some excitation at the initial time, and that is enough to identify the parameters. A few more things. If you look at this condition for initial excitation, and this holds even for persistent excitation, the condition is written as a function of time. But in reality, your yf is obtained by filtering or integrating y, and y contains the states. In fact, I have even written the expression for yf, where h also depends on the state. So in reality, and this is the case for most problems you will ever consider, yf(t) is actually yf(x, x dot, t): it depends on the state and its derivative. Or, if you remove the derivative using integration by parts, it is a function of the state and time. And yet, if you see the definition here, just like in the persistence of excitation case, it is written only as a function of time. So it begs the question: what happens when yf is a function of the state? We come back to the same issue we discussed with the integral lemma: we need notions like lambda-uniform initial excitation. So what is lambda-uniform initial excitation?
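The f1/f2 picture above can be checked numerically. Below is a small sketch with my own illustrative signals, not taken from the lecture: f1 is a sinusoid switched off after 2 seconds, f2 is the same sinusoid continuing forever. The IE condition asks for the excitation integral over one initial window to be positive; PE asks for it over every sliding window.

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 10.0, dt)
f1 = np.where(t < 2.0, np.sin(2 * np.pi * t), 0.0)  # dies out: IE only
f2 = np.sin(2 * np.pi * t)                           # persists: IE and PE

def window_energy(f, t0, T):
    """Integral of f(t)^2 over [t0, t0+T] (scalar excitation measure)."""
    mask = (t >= t0) & (t < t0 + T)
    return np.sum(f[mask] ** 2) * dt

T = 1.0
# Initial excitation: energy over the first window only.
ie_f1 = window_energy(f1, 0.0, T)
ie_f2 = window_energy(f2, 0.0, T)
# Persistent excitation: energy must stay positive over EVERY window;
# check a late window, where f1 has already died out.
pe_f1_late = window_energy(f1, 8.0, T)
pe_f2_late = window_energy(f2, 8.0, T)
print(ie_f1, ie_f2, pe_f1_late, pe_f2_late)
```

Both signals pass the initial-window test, but only f2 keeps the excitation integral positive over late windows, which is exactly the IE-versus-PE distinction drawn on the board.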
It essentially says that there exist T and sigma_1 positive such that for all lambda in some domain, the integral from 0 to T of yf(t, lambda) transpose times yf(t, lambda) dt is greater than or equal to sigma_1 times the identity. So basically we need lambda-agnostic initial excitation. Why can we talk about a parameter lambda rather than the state? Because if I plug in the closed-loop solution, yf actually becomes a function yf(x0, t0, t): a function of the initial time, the initial state, and time. And the initial time and state are parameters, because they are constant values. So whatever we talked about until now is a non-uniform result: we have to plug in a particular (x0, t0), then verify the integral condition, and then the positive definiteness of Yif and the whole Lyapunov analysis go through. But that is not very nice. The right way to go is to define the notion of lambda-uniform initial excitation, which looks like this, very similar to when we defined lambda-uniform persistence of excitation for the integral lemma. If you have lambda-uniformity, it means the condition is satisfied for a whole set of initial times and states, and then you get nice parameter convergence along with the tracking results, uniformly. So this is a very, very important point. Remember: what we did up to here is not uniform with respect to the initial conditions and time. If you want uniformity with respect to initial conditions and time, you have to define the initial excitation differently.
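Reconstructing the spoken definition, the condition being stated is the following, where D denotes the domain of the parameter lambda, here the set of initial times and states:

```latex
% lambda-uniform initial excitation:
\exists\, T > 0,\ \sigma_1 > 0 \ \text{such that}\ \forall\, \lambda \in D:
\quad
\int_{0}^{T} y_f(t,\lambda)^{\top}\, y_f(t,\lambda)\, dt \;\ge\; \sigma_1 I .
```

The key point is that T and sigma_1 do not depend on lambda, which is what makes the resulting stability properties uniform in the initial conditions.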
You have to define it as lambda-uniform initial excitation, because in this case, and in almost every case you will ever encounter, your regressor invariably depends on the states; this is almost unavoidable. So this is another very, very important point. Then, notice that there is a state-dependent term here, which did not exist before this. How to deal with it is, of course, an interesting question. The way the authors propose to deal with it is by adding this gain lambda, which appears only in the analysis and nowhere else, so that you can dominate this zeta. But zeta is a time- and state-varying quantity, so what does it mean to dominate it? How do you even talk about its boundedness? This is a somewhat circular question, but it works out fine; it is not a big issue. If you choose a large enough lambda to begin with, then this is a nice negative term, so V dot is negative semi-definite and V is non-increasing; therefore the states are bounded. From that you get a bound on the states, from that a bound on z, and using that bound on z you get a value of lambda. That is how the lambda is chosen. In fact, let me put it more precisely here: how to choose lambda. It may not be very important for you, but it is important for any theoretician or analyst to understand how this choice is made. Remember, what you end up doing by choosing lambda is proving that V(t) is less than or equal to V(0), which bounds one half of (norm e squared plus lambda norm theta tilde squared) by its initial value; and in this case e is just a scalar.
So I am going to just write e squared: one half of (e(t) squared plus lambda norm theta tilde(t) squared) is less than or equal to one half of (e(0) squared plus lambda norm theta tilde(0) squared). Now with this I want to get a bound on e. If I look at it conservatively, since the sum is bounded, each term has to be bounded by the same quantity, so e(t) squared is less than or equal to e(0) squared plus lambda norm theta tilde(0) squared; taking square roots, the absolute value of e(t) is less than or equal to the square root of (e(0) squared plus lambda norm theta tilde(0) squared). I have taken a conservative value, because that is the best you will be able to do in this situation. Now that I have this bound, what do I know about z in this particular case? z is basically (0, x), so the norm of z is in fact the absolute value of x, which is less than or equal to the absolute value of e plus the bound on the reference, because r is the trajectory I am trying to track. If the trajectory is bounded with upper bound rm, then norm z(t) is less than or equal to the square root of (e(0) squared plus lambda norm theta tilde(0) squared), plus rm. That is my bound on norm z. And now what do I need? I need my lambda to dominate this; that is the interesting thing here. I need lambda times nu_if times sigma_1 to be greater than norm z(t) squared divided by 2, for all t. Since this bound on norm z does not depend on time, it is evident that if the inequality holds with the constant bound on the right-hand side, it holds for all time t.
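The chain of bounds just described can be written compactly as follows (reconstructed from the spoken derivation; r_m denotes the assumed bound on the reference trajectory, and nu_if, sigma_1 the excitation constants):

```latex
% error bound from V(t) <= V(0), then the state bound:
V(t) \le V(0)
\;\Longrightarrow\;
|e(t)| \le \sqrt{e(0)^2 + \lambda\,\|\tilde{\theta}(0)\|^2}
\;\Longrightarrow\;
\|z(t)\| \le \sqrt{e(0)^2 + \lambda\,\|\tilde{\theta}(0)\|^2} + r_m .
% The gain condition to be enforced for all t is then implied by:
\lambda\,\nu_{if}\,\sigma_1 \;>\;
\frac{\bigl(\sqrt{e(0)^2 + \lambda\,\|\tilde{\theta}(0)\|^2} + r_m\bigr)^2}{2}
\;\Longrightarrow\;
\lambda\,\nu_{if}\,\sigma_1 \;>\; \frac{\|z(t)\|^2}{2}\quad \forall t .
```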
So obviously, you want lambda nu_if sigma_1 to be greater than the quantity (square root of (e(0) squared plus lambda norm theta tilde(0) squared) plus rm), the whole thing squared, divided by 2. Now, I cannot keep this entirely simple, to be honest; there is a little bit of a complication here. It is not that a solution does not exist; the solution does exist, and you should be able to find it. The only thing is that there is a lambda on both sides of this inequality, so it is like solving a quadratic: not super obvious how to get this lambda, but such a lambda does exist. So remember this slightly complicated situation we land ourselves in with this choice of lambda. Now, if initial excitation is not there, then we are in a rather big soup, because then we are left with this kind of term, and it is not even positive definite. Let us remember that: without initial excitation, this term is not positive definite, and then it is not very clear how such a term can be dominated. To be honest, the behavior in the absence of excitation is not evident. If there is no excitation, I am left with this kind of Lyapunov candidate function derivative, where this term is not positive definite, so I cannot really use it to dominate anything. I am then left with a term where the e part can be managed, sure, but there is still some kind of term in theta tilde, and so at best I will get some kind of residual-set result; it starts to look like the sigma- and e-modification type of situation. Unless I can somehow smartly use this term, which is not very clear, to be honest.
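Since lambda appears on both sides, one way to see that a valid lambda exists is to look at the margin g(lambda) = lambda*nu_if*sigma_1 minus the right-hand side and find where it turns positive. Note that both sides grow linearly in lambda for large lambda, so a valid lambda exists only when nu_if*sigma_1 exceeds norm theta tilde(0) squared over 2; that existence condition is my reading of the inequality, not stated explicitly in the lecture, and all numbers below are illustrative.

```python
import math

def min_valid_lambda(e0, th0_norm, rm, nu_if, sigma1, hi=1.0):
    """Find a lambda satisfying
       lambda*nu_if*sigma1 > (sqrt(e0^2 + lambda*th0^2) + rm)^2 / 2
    by bracketing and bisecting on the margin g(lambda)."""
    def g(lam):
        z_bound = math.sqrt(e0**2 + lam * th0_norm**2) + rm
        return lam * nu_if * sigma1 - z_bound**2 / 2.0

    # Both sides grow linearly in lambda, so a root exists only if the
    # left side's slope wins (assumption of this sketch):
    assert nu_if * sigma1 > th0_norm**2 / 2.0, "no valid lambda exists"
    while g(hi) <= 0:          # grow the bracket until the margin is positive
        hi *= 2.0
    lo = 0.0
    for _ in range(100):       # bisection on [lo, hi], keeping g(hi) > 0
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) <= 0 else (lo, mid)
    return hi                  # (approximately) smallest valid lambda

lam = min_valid_lambda(e0=1.0, th0_norm=1.0, rm=1.0, nu_if=1.0, sigma1=2.0)
print(lam)
```

Any lambda above the returned threshold satisfies the gain condition, which matches the remark that the "quadratic-like" equation does have a solution even though it is not obvious in closed form.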
It is not very clear whether that would be possible. It is obvious that this Yif is connected to this theta tilde term, because Yif is built from Y, but it is not super evident how to exploit this. Of course, if you retain the other term as well, you still have a semi-definite term in Y, which again may have some connection to z, so it may help you dominate this even in the absence of excitation; but it is not evident how, and it is a more complicated proposition. Excellent. So, as always, when you get something, you also give something. You already gave something in the form of added dynamics: you increased the order of the system. The size of yf is essentially the size of the regressor, and the size of y is basically the size of theta; in this case, one parameter, plus one additional filtered component, so yf became one by two, and therefore Yif, built from yf transpose yf, became a two by two matrix. So you added a significant number of states to your system. On top of that, there are things that may still be a little complicated, especially the choice of lambda, and what happens in the absence of initial excitation. In the persistence of excitation setting, even in the absence of persistent excitation, you still know that you get tracking convergence, that is, zero tracking errors. In this case, it is not very clear whether such a thing happens; it may be possible for large values of K and lambda and such, but it is not clear. So ideally, my recommendation would be to combine the usual certainty equivalence adaptive law with this law.
But of course, I have not done that analysis or shown it here; I leave it to you folks to try it out. Excellent. So what we looked at in this session: we had completed the analysis of the initial excitation based adaptive controller for a single integrator in the previous session, and here we discussed its properties. We looked at some rather interesting features: the positives, namely that it requires only initial excitation and not persistent excitation, and that the original signals being initially exciting implies the filtered signals are as well, though not vice versa. But we also saw some drawbacks, such as how to choose lambda and what happens in the absence of initial excitation. So it is rather important to keep these in mind when using the initial excitation based design. At the end, like I said, I proposed that you use a combination of the CE (or PE) adaptation law and the IE adaptation law; but again, that is not something I have shown in this discussion, and I leave it to you folks to try it out and see how it works. In the upcoming session, we will look at backstepping extensions of this, to integrate it with what we have learned, and I hope you will see that it is not too difficult to do. The most important thing to remember is that in the initial excitation based adaptive control design, the parameter update law is completely decoupled from the dynamics; the dynamics plays almost no role, because everything is in terms of Y, Yf, and Yif, and whatever is inside these is what captures the dynamics. The structure of the parameter update law remains the same irrespective of the dynamics, and therefore it is very easy to stretch this to any dynamical system.
And so it has found use in many different dynamical systems and that's rather nice, all right. So great. I hope you enjoyed this session and I hope you will join me again next time. Thank you.