Hello everyone, welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are into the eighth week of our lectures, and we are already well underway designing algorithms that can drive systems such as the spacecraft orbiting the earth that we see in the background.

Last time we started to look at model reference adaptive control. We introduced the problem in the previous week along with the assumptions and, of course, the matching conditions, which are very critical. Subsequently we started the design of the adaptive controller in the usual way: starting with the known-parameter case and then moving on to the unknown case via the certainty equivalence approach. The difference, of course, is how we chose the target system. The target system was in this case chosen in a very clever way, inspired by the reference model itself. That is what helped us design the controller for the known case, and going from the known to the unknown case is always easy, because we simply apply the certainty equivalence principle. That is really the trick.

The difference that now becomes apparent is that the parameters are not scalars or vectors but matrices: K tilde, L tilde, K star and L star are all matrices. We have already made the assumption that we know the sign of L star: L star is symmetric and definite, and we know whether it is positive definite or negative definite. Using this fact, we define a matrix Gamma which is symmetric positive definite and which will appear frequently in the Lyapunov analysis. Using this Gamma, we write our closed-loop dynamics in a particular form. This is exactly the form we would have obtained if there were no parameter error, that is, for the case when we do know the parameters; but because we do not know the parameters, there are additional terms in the parameter errors, which is what you commonly have in adaptive control. We then choose a corresponding Lyapunov function using the fact that A_m is a Hurwitz matrix. Just as you would use a squared norm in the vector case, or the square in the scalar case, we use a weighted norm for the matrix case, defined via the trace of K tilde transpose Gamma K tilde, and similarly L tilde transpose Gamma L tilde.

So this is where we were, and I am going to mark the beginning here as lecture 8.2. We already saw last time that this is in fact a valid Lyapunov function; it can very easily be shown to be strictly positive definite as well, so there should be no doubts there. We also know what the trace operation is: it is the sum of the diagonal elements of a matrix. And we have certain nice properties of the trace. For instance, it does not matter whether you take the inner product or the outer product, the trace remains the same, and the two matrices do not even have to be the same. You can change the order of the factors and the trace does not change, and that is really what is important here.
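Since the slides themselves are not reproduced in this transcript, here is a reconstruction, from the verbal description above, of the closed-loop error dynamics and the Lyapunov function being discussed. The notation (tracking error e, plant state x, reference input r, parameter errors K tilde and L tilde, and Gamma built from the inverse of L star) is inferred, so treat this as a sketch of the setup rather than a copy of the slide:

    \dot{e} = A_m e + \operatorname{sgn}(L^\ast)\, B_m \Gamma \,(\tilde{K} x + \tilde{L} r),
    \qquad
    \Gamma = \operatorname{sgn}(L^\ast)\,(L^\ast)^{-1} = \Gamma^\top \succ 0,

    V(e, \tilde{K}, \tilde{L}) = e^\top P e
        + \operatorname{tr}(\tilde{K}^\top \Gamma \tilde{K})
        + \operatorname{tr}(\tilde{L}^\top \Gamma \tilde{L}),

where P = P^\top \succ 0 solves the Lyapunov equation P A_m + A_m^\top P = -Q for some chosen Q = Q^\top \succ 0, which exists precisely because A_m is Hurwitz.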
And we will in fact exploit this property. One of the key properties of the trace, which I will write in a different color, is that the trace of A B transpose equals the trace of B transpose A. Basically, the trace is agnostic to the order in which I multiply: I can compute A B transpose or B transpose A, and the trace does not change. This is one of the key properties that we will exploit soon.

Now let us take the derivative, just like we always do. Let us diligently take the derivative and plug in the closed-loop dynamics. We are yet to choose the update laws for K hat and L hat, and that is really what we are trying to do with this Lyapunov analysis. Great. So, V dot. Remember these are vector and matrix operations, so we have to be careful. I take the derivative of all the terms: first e transpose P e dot, and then, by the product rule, e dot transpose P e. For the trace terms, I directly write twice the trace of the term with the derivative. Honestly speaking, I could have written this out in full: d/dt of the trace of K tilde transpose Gamma K tilde is the trace of K tilde dot transpose Gamma K tilde plus the trace of K tilde transpose Gamma K tilde dot. I could have done that. But the point is that each of these is a scalar, so I am free to take a transpose, and, as I hope is evident to you, the trace of M equals the trace of M transpose; the trace does not change under transposition. So if I take the transpose of the first trace term, I get the trace of K tilde transpose Gamma K tilde dot, which is the same as the second one. That is why I directly write this as twice the trace.

In fact, I could have done the same thing with the first two terms, because P is a symmetric matrix: I could have taken the transpose of e dot transpose P e and obtained e transpose P e dot, and written the whole thing as twice e transpose P e dot. I am writing the two terms separately only because I am going to use the Lyapunov equation. That is the only reason; I am not doing anything magical here. Why is taking the transpose allowed? Because V is a scalar, so each term is a scalar, and the transpose of a scalar is the same scalar. Therefore I am free to take the transpose of any term. This is a trick we use regularly in all Lyapunov analyses for vector equations. So if I take the transpose of e dot transpose P e, I get e transpose P transpose e dot, and because P is symmetric, that is e transpose P e dot.
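For reference, here is the differentiation step just described, written out under the notation assumed above; the factor of two comes from tr(M) = tr(M^\top) together with the symmetry of Gamma:

    \frac{d}{dt}\operatorname{tr}(\tilde{K}^\top \Gamma \tilde{K})
      = \operatorname{tr}(\dot{\tilde{K}}^\top \Gamma \tilde{K})
      + \operatorname{tr}(\tilde{K}^\top \Gamma \dot{\tilde{K}})
      = 2\,\operatorname{tr}(\tilde{K}^\top \Gamma \dot{\tilde{K}}),

    \dot{V} = e^\top P \dot{e} + \dot{e}^\top P e
      + 2\,\operatorname{tr}(\tilde{K}^\top \Gamma \dot{\tilde{K}})
      + 2\,\operatorname{tr}(\tilde{L}^\top \Gamma \dot{\tilde{L}}).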
So e transpose P transpose e dot is the same as e transpose P e dot, which is the same as this term, and I could very well have written this as twice e transpose P e dot. I am simply choosing not to write it that way because I want to use the Lyapunov equation.

So what do we do next? We substitute for e dot on both sides. The trace terms remain as they are for a while; I just use the fact that K tilde dot is equal to minus K hat dot, and that is what shows up here. Again, I have flipped the order inside the trace here and here; do not mind that, I can flip as many times as I want, because the trace of M equals the trace of M transpose, so it changes nothing. These terms remain as they are, just with a negative sign, because K tilde dot has been replaced by minus K hat dot, and similarly L tilde dot has been replaced by minus L hat dot. That is what brings in the negative sign.

Then we plug in the closed-loop error dynamics for e dot: A_m e plus sgn(L star) B_m Gamma times the parameter-error terms, and similarly the transposed version in the other term. Now the two terms e transpose P A_m e and e transpose A_m transpose P e combine to give e transpose (P A_m + A_m transpose P) e, which, by the Lyapunov equation corresponding to A_m (a Hurwitz, that is, stable matrix), equals minus e transpose Q e. Excellent. That is how we get this nice negative definite term, the one we are always looking out for in a Lyapunov analysis. And why do we already have it? Because we chose a nice target system, inspired by the reference model, with A_m a Hurwitz matrix.

If you now look at all the terms that remain, they are all parameter-error terms. Again, pairs of them can be combined: this term with this term, and similarly that term with that term, and they can be written as 2 sgn(L star) e transpose P B_m Gamma K tilde x, plus the analogous expression in L tilde and r. Both members of each pair are exactly the same scalar, again by the logic that V is a scalar, so every term is a scalar, and I can take transposes as many times as I want. That is where the factor of two comes from.

Now we have a slightly funny situation. Earlier, when we designed parameter update laws in the scalar case, we would always get a theta tilde common on the left-hand side, because everything was scalar and we could move theta tilde anywhere. The problem we have here is that the K tilde, which we want to factor out, sits somewhere in the middle, and these are all matrices, so I cannot move it to the left or right arbitrarily. This is important to note.
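Putting the last two steps together, that is, substituting the error dynamics along with K tilde dot = -K hat dot and L tilde dot = -L hat dot, and then applying the Lyapunov equation, V dot should take the following form; this is again a reconstruction under the notation assumed above:

    \dot{V} = -e^\top Q e
      + 2\,\operatorname{sgn}(L^\ast)\, e^\top P B_m \Gamma\, (\tilde{K} x + \tilde{L} r)
      - 2\,\operatorname{tr}(\tilde{K}^\top \Gamma \dot{\hat{K}})
      - 2\,\operatorname{tr}(\tilde{L}^\top \Gamma \dot{\hat{L}}),

where the -e^\top Q e term comes from P A_m + A_m^\top P = -Q.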
So K tilde appears somewhere in the middle, whereas ideally you would want it on the left or the right, so that you can take things common and cancel the rest using K hat dot, which is what we have been doing all along. Now what do we do? We use nice tricks from the trace properties.

This is where we see another trace property. We already know that the trace of M equals the trace of M transpose, and that the trace of A B transpose equals the trace of B transpose A. From these two, I can derive the following: the trace of u transpose v, where u and v are vectors of the same dimension (they must both live in the same R^k, otherwise the inner product is not defined), equals the trace of u v transpose. In other words, the inner product u transpose v equals the trace of the outer product u v transpose. Notice that u transpose v is a scalar while u v transpose is a k by k matrix, but because we take the trace, both sides are scalars. We should verify this for our own sanity, and then we can try to use it. Let's try; maybe it works, maybe it doesn't.

The verification goes like this. The scalar u transpose v equals the trace of u transpose v, because a scalar can be seen as a trivial one-by-one matrix whose diagonal sum is just the scalar itself. That, in turn, equals the trace of v transpose u, because transposing a scalar gives the same scalar and the trace is invariant under transposition. Finally, I am allowed to flip the order of the factors inside the trace, using the property above, which gives the trace of u v transpose. So this is not a new property per se; it is derived from the two properties we already had. Those two are the key ones: if you remember them, you can derive most of the other important trace identities.

Great. So now we have this very nice trace equality: the inner product equals the trace of the outer product of vectors. And that is exactly what we are going to apply to these two cross terms. Let us look at each of them. For the first term, we set aside the 2 sgn(L star) factor and look at the rest: e transpose P B_m Gamma K tilde x. I know this is a scalar. So what do I do? I think of the entire piece e transpose P B_m Gamma K tilde as my u transpose, and x as v. So u is K tilde transpose Gamma B_m transpose P e, and the inner product u transpose v is the same as the trace of the outer product u v transpose.
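In symbols, the identity being derived and its application to the first cross term are as follows (with u, v \in \mathbb{R}^k):

    u^\top v = \operatorname{tr}(u^\top v)
             = \operatorname{tr}(v^\top u)
             = \operatorname{tr}(u v^\top),

and, taking u = \tilde{K}^\top \Gamma B_m^\top P e and v = x,

    e^\top P B_m \Gamma \tilde{K} x
      = \operatorname{tr}(\tilde{K}^\top \Gamma B_m^\top P e\, x^\top).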
So I get something like this. We do the same thing with the other term, which is similar. Ignoring the scalar factor, which can be moved around freely, the second term is e transpose P B_m Gamma L tilde r. Similarly, I think of everything before the r as my u transpose, so this is again u transpose v, which is the trace of u v transpose.

What does this buy us? Notice that in the original expression K tilde sits somewhere in the middle, which is of no use to us, but in the new expression K tilde is all the way to the left, and a trace has appeared; earlier there was no trace either. Why do we need the trace? Because the term we want to compare against, the derivative term in K tilde, sits inside a trace, so I obviously need the trace function to appear here too, and I want K tilde on the left because that is where it appears there. Similarly for L tilde: originally it is in the middle with no trace, and after the manipulation it appears on the left inside a trace. That is exactly what happens: one term appears inside a trace with K tilde transpose at the beginning, and the other inside a trace with L tilde transpose at the beginning.

We now substitute this back into the V dot equation. I am again left with the nice negative definite term. Then, since sgn(L star) is a scalar, I can move it wherever I want, so I move it inside the trace. I cannot change the order of the remaining matrices, so that sequence stays as it is. Notice there is a K tilde transpose Gamma common to both trace terms in K tilde; taking it out, I am left with sgn(L star) B_m transpose P e x transpose minus K hat dot inside the trace. Similarly, there is an L tilde transpose Gamma common inside the other pair; taking it out, and minding the negative sign, I am left with sgn(L star) B_m transpose P e r transpose minus L hat dot inside that trace. This is just very careful bookkeeping: once we have applied the trick that produces a trace and pulls K tilde and L tilde to the leftmost position, we can take K tilde transpose times Gamma common and L tilde transpose times Gamma common.

Once I have this, I am in good shape, because now I can use K hat dot to cancel the one bracket and L hat dot to cancel the other, which gives exactly these update laws. They bring in, of course, the sgn(L star), which is why, as we have already discussed, we do need to know the sign of L star, that is, whether it is positive definite or negative definite. This requirement is exactly the analogue of knowing the sign of b in the scalar case; it is something we cannot avoid. So then you get rid of these bracketed terms, and all you are left with is the negative definite term.
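Collecting everything, the update laws and the resulting derivative described above should read as follows. This is inferred from the verbal description; the exact signs depend on whether the error is defined as x - x_m or x_m - x, so take the signs as the lecture's convention:

    \dot{\hat{K}} = \operatorname{sgn}(L^\ast)\, B_m^\top P e\, x^\top,
    \qquad
    \dot{\hat{L}} = \operatorname{sgn}(L^\ast)\, B_m^\top P e\, r^\top,

which cancel the bracketed terms and leave

    \dot{V} = -e^\top Q e \le 0.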
And because Q is positive definite, V dot is negative semi-definite. Not negative definite, because we have the additional states K tilde and L tilde, which do not appear in V dot, so obviously it is only negative semi-definite. So what can we show? We can show that the error signal e goes to zero asymptotically as t goes to infinity. This we can do by signal chasing and using Barbalat's lemma. Of course, we can also show that all the signals are bounded, no problem. Why? Simply because V was chosen as a positive definite function containing, essentially, quadratic terms in e, K tilde and L tilde, and we have shown that V dot is less than or equal to zero. So V is a non-increasing function of time: as time progresses, V cannot exceed its initial value, so none of the terms inside it can grow unbounded. And since those terms are essentially norms, e, K tilde and L tilde cannot grow unbounded.

Notice that we used Gamma in the Lyapunov analysis, but Gamma does not appear anywhere in the update law. It cannot, because Gamma is unknown: we constructed it using the inverse of L star, which is unknown. Therefore Gamma can only appear in the analysis, never in the implementation. This is similar to the absolute value of b that we introduced in the Lyapunov function for the scalar case.

So that's it; we are done. We have a nice model reference adaptive controller, and this is what guarantees tracking. Of course, it does not guarantee any kind of parameter identification; that, again, is something that requires persistence of excitation. In fact, I strongly recommend that you write the closed-loop dynamical system very carefully with e, K tilde and L tilde as states, as d/dt of the tuple (e, K tilde, L tilde) equals some matrix acting on that tuple. I am writing it as a tuple very deliberately, because K tilde and L tilde are matrices while e is a vector, so you cannot literally stack them; you have to vectorize K tilde and L tilde first. But if you do write it in that form, you will get the appropriate persistence of excitation conditions on a particular signal which guarantee that the parameters also converge asymptotically. We have seen this theory already, so I recommend that you write it out in this form and see how it looks. In any case, we have perfect tracking, which is what we typically guarantee in adaptive control anyway.

Excellent. So this was model reference adaptive control. What we did today was complete the Lyapunov analysis for the model reference adaptive control problem, and in the process we learned a lot about the trace and its properties: how they can be used to construct Lyapunov functions for matrix states, and how they let us manipulate the matrices that appear, moving matrices from one location to another so that we can cancel terms. These trace properties are also very useful in a lot of other mathematical analysis.
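To make the whole scheme concrete, here is a minimal simulation sketch in Python. None of it is from the lecture: the plant, the gains, the reference signal and the forward-Euler integration are all made up for illustration, and it assumes the tracking error is defined as e = x_m - x, which is the sign convention under which the update laws stated above are stabilizing (with e = x - x_m they would carry a minus sign).

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical plant x' = A x + B u (A, B treated as unknown by the controller)
A = np.array([[0., 1.], [2., -1.]])
B = np.array([[0.], [1.]])

# Reference model x_m' = A_m x_m + B_m r, with A_m Hurwitz. The matching
# conditions hold by construction: A + B K* = A_m with K* = [-4, -2],
# and B L* = B_m with L* = 1 (so sgn(L*) = +1, assumed known).
Am = np.array([[0., 1.], [-2., -3.]])
Bm = B.copy()
sgn = 1.0

# P solves the Lyapunov equation Am^T P + P Am = -Q
Q = np.eye(2)
P = solve_continuous_lyapunov(Am.T, -Q)

dt, T = 1e-3, 30.0
x = np.array([[1.0], [0.0]])     # plant state
xm = np.zeros((2, 1))            # reference-model state
K = np.zeros((1, 2))             # K-hat, estimate of K*
L = np.zeros((1, 1))             # L-hat, estimate of L*

for k in range(int(T / dt)):
    t = k * dt
    r = np.array([[np.sin(t)]])          # reference input
    u = K @ x + L @ r                    # certainty-equivalence control
    e = xm - x                           # tracking error (x_m - x convention)
    # Update laws: note that Gamma appears nowhere here -- it is unknown
    K = K + dt * sgn * (Bm.T @ P @ e) @ x.T
    L = L + dt * sgn * (Bm.T @ P @ e) @ r.T
    # Forward-Euler integration of plant and reference model
    x = x + dt * (A @ x + B @ u)
    xm = xm + dt * (Am @ xm + Bm @ r)

print("final tracking error norm:", np.linalg.norm(xm - x))

Plotting the norm of x_m - x over time should show the tracking error decaying toward zero, while K and L settle to values that need not equal K* and L*, which is exactly the point made above about the lack of parameter convergence without persistence of excitation.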
And of course, we were able to prescribe update laws for the matrix unknowns K star and L star, and we were able to prove perfect tracking, which is what we typically seek to guarantee in model reference adaptive control. So that is the exposition of model reference adaptive control for linear systems. We will subsequently move into a new and different set of lectures, where we will discuss some other, more novel topics. Excellent. So this is where we stop. Thanks, and I hope to see you again soon.