Hello everyone. Welcome to yet another session of our NPTEL course on non-linear and adaptive control. I am Srikant Sukumar from Systems and Control, IIT Bombay. We are entering the eighth week of this course, so we are officially in the final quarter, the final innings or final few overs of this course. I hope that all of you who have been with me during the course have learned, or started to learn, algorithm design, which is the most critical part of being able to drive autonomous systems such as the spacecraft orbiting the earth that you see in the background. What we have been doing, and will of course continue from the end of last week, is model reference adaptive control. As we had mentioned, this is one of the key paradigms in the field of adaptive control, and here it is the model reference adaptive control of linear systems. Unlike what we had done until now, we use a stable system model and generate a reference out of that model, which is what we try to track using our system states. This is why it is called model reference adaptive control: there is a reference model instead of just a reference signal, which is what we were using until now. Now, of course, you have the usual assumptions of nice boundedness and Hurwitzness of the model matrix AM and so on, all the good properties. But we also have a number of matching conditions, which we discussed in some detail last time, and we understand that some of these assumptions are rather restrictive. But this is the nature of the game; this is what we can do. Those of you who can show that more or better can be done are of course free to develop this, show it to the community, and claim the credit.
So the first assumption is that the pair (A, B) is controllable, which is pretty straightforward; it essentially means that you can place the eigenvalues of A minus BK arbitrarily by a suitable choice of the gain K. The second assumption went beyond that: it said that A minus BK can actually match AM, that is, there exists such a K star. The third was along similar lines and said that there exists some L star, which is in fact also sign definite, such that B L star is equal to BM. So we wanted a symmetric, sign-definite matrix with this property. And finally, we needed to know a sign. Even in the scalar case, we needed to know the sign of B, which was the gain connected to the control. Here the similar assumption is that we need to know the sign of L star, defined as such. Great. So this is where we were in model reference adaptive control; to be honest, we had essentially completed describing the problem. What we are going to do today is start with the design of the adaptive controller. So let me note it: this is lecture number 8.1, the first lecture of the eighth week of this course. Great. As usual, we first do the known parameter design. And this is the control. Now you will ask why the control structure is minus K star X plus L star R. Since we are assuming everything is known, L star and K star, which satisfy these matching conditions, are also known, and we take the control to be of this form. Once we have this, we plug it back into our dynamics and see what happens; one of the questions you will have is why this structure for the control, and you will see very soon why. So once I substitute into A X plus B U, I get A minus B K star times X plus B L star R. And remember, from matching I have exactly these terms.
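To make the matching conditions concrete, here is a small numerical sketch. The plant, reference model, and gains below are all made up for illustration; only the structure follows the lecture.

```python
import numpy as np

# Made-up second-order example in companion form
A  = np.array([[0., 1.],
               [4., 5.]])        # unstable open-loop plant
B  = np.array([[0.],
               [1.]])
AM = np.array([[0., 1.],
               [-2., -3.]])      # reference model, Hurwitz (eigenvalues -1, -2)

# Matching condition: A - B K* = AM, i.e. B K* = A - AM
Kstar = np.array([[6., 8.]])
assert np.allclose(A - B @ Kstar, AM)

# Matching condition: B L* = BM, with L* symmetric sign definite
Lstar = np.array([[2.]])          # positive definite here, so sgn(L*) = +1
BM = B @ Lstar

# AM is indeed Hurwitz: all eigenvalues in the open left half plane
assert np.all(np.linalg.eigvals(AM).real < 0)
```

Note that in companion form the matching condition is easy to satisfy because B only excites the last row, which is exactly where A and AM differ.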
So the matching conditions that we have already assumed provide the existence of this K star and L star, and obviously, because we assume everything is known, K star and L star are also known. And A minus B K star is AM and B L star is BM; this is exactly the matching conditions. Now what is so cool? You notice that I now have a Hurwitz matrix: AM was assumed to be a stable, Hurwitz matrix, and by virtue of this feedback, a Hurwitz matrix is now connected to X. And this term is of course bringing in the reference R. If you remember what XM dot was, I will write it out for you again. If you try to match these two, you see that this BM R and this BM R are in fact the same. So just like in all the scalar cases, here too we define our error as the difference of X and XM, which is only natural, because I want to drive X, that is, the states of the system, to the trajectory that I get from the reference model. Once I do that, E dot is X dot minus XM dot, and if you just look at these, it is very easy to see what happens. The first thing is that the BM R terms cancel out in both equations. Then I can take AM common, and I get X minus XM, which is essentially AM times E. And you see again that we achieve very nice behaviour. Why? Because AM is a Hurwitz matrix. So you could also have worked backwards, in some sense. How would that be? If you remember, in the scalar case we chose a reference system to follow. For the double integrator, we said E1 dot is E2 and E2 dot is minus K1 E1 minus K2 E2, and we tried to match this. Now here, if I think about it, I am actually choosing this as my guide system.
I had to choose it smartly, of course, because it is asymptotically stable and it somehow matches the system. Now, if I start here and go backwards, what happens? E dot is AM E, which means that X dot minus XM dot is AM times X minus XM. Now, I have no control over XM dot, so I substitute it as it is: X dot is AM X minus AM XM plus AM XM plus BM R, where these last two terms are just XM dot. The minus AM XM and plus AM XM cancel out nicely, so what do I get? X dot is AM X plus BM R. This is the good guide system for the X dot equations. And here I will write it in terms of the variables K star and L star using the matching conditions. Now it should be easy to see, going backwards, that this is equal to A X plus, taking a B common, B times minus K star X plus L star R. Again, this should remind you of the scalar case, because even there a B came out common, and then we redefined the parameters by dividing by this B, in some sense. That is exactly what is happening here: I have taken the B outside, and my new parameters are K star and L star. Excellent. So this becomes my control: minus K star X plus L star R. If I start with this target system for E and just work backwards, I end up with exactly this control; from here to here, I had to use the matching conditions, and I can even mark that. So you see how valuable the matching conditions were. Great. So I have this nice construction for you, using the same idea of target systems and so on, just that I had to be very smart about how I chose the target system. It is simply motivated by the fact that there is already a Hurwitz matrix in the reference.
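Written out in display form, the backward construction just described is:

```latex
\begin{aligned}
\dot e = A_M e \;\Longrightarrow\;
\dot x &= A_M x - A_M x_M + \underbrace{A_M x_M + B_M r}_{\dot x_M}
        = A_M x + B_M r \\
       &= (A - BK^\ast)\,x + BL^\ast r
        = A x + B\bigl(-K^\ast x + L^\ast r\bigr),
\end{aligned}
```

where the second line substitutes the matching conditions \(A_M = A - BK^\ast\) and \(B_M = BL^\ast\). Comparing with \(\dot x = Ax + Bu\) identifies the known-parameter control \(u = -K^\ast x + L^\ast r\).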
And so I use the same Hurwitz, same stable matrix; I do not try to pull out a different matrix, which would make no sense in this case. So, as we said, we have moved from the unknowns A and B to K star and L star. This is very similar to the scalar case, where we moved from A and B to theta one star and theta two star, because we also took the B common there. It is, one might say, the multi-dimensional matrix extension of what we did in the scalar case. Great. Now, we did not really talk about the stability analysis, but it is pretty easy. I will choose my V as E transpose P E, where P comes from the following fact: AM Hurwitz implies that for every Q equal to Q transpose positive definite, there exists P equal to P transpose positive definite such that P AM plus AM transpose P equals minus Q. This is the well-known Lyapunov equation; all of you are supposed to know it from linear systems theory, and if you do not, I again urge you to revise it. So once I have this kind of condition, I choose my V as E transpose P E with the P that I get from here. If you take a derivative, you get E transpose P E dot plus E dot transpose P E, which is equal to E transpose times P AM plus AM transpose P times E, which is equal to minus E transpose Q E, which is negative definite. So this is how the Lyapunov analysis goes. I will mark this as the Lyapunov function for the known case, because we do need it: in the unknown case we simply extend this Lyapunov function by adding terms corresponding to the unknown parameters. Excellent. So now we are ready to move to the unknown parameters.
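The Lyapunov equation step can be reproduced with SciPy; a small sketch with a made-up Hurwitz AM. Note that SciPy's `solve_continuous_lyapunov(a, q)` solves a X + X aᵀ = q, so we pass AMᵀ and −Q to obtain P AM + AMᵀ P = −Q.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

AM = np.array([[0., 1.],
               [-2., -3.]])       # Hurwitz example matrix (made up)
Q  = np.eye(2)                    # any Q = Q^T > 0

# Solve P AM + AM^T P = -Q
P = solve_continuous_lyapunov(AM.T, -Q)

assert np.allclose(P, P.T)                       # P is symmetric
assert np.all(np.linalg.eigvalsh(P) > 0)         # and positive definite
assert np.allclose(P @ AM + AM.T @ P, -Q)        # Lyapunov equation holds

# Along e_dot = AM e, V = e^T P e has V_dot = -e^T Q e < 0 for e != 0
e = np.array([1., -2.])
Vdot = -e @ Q @ e
assert Vdot < 0
```

So for any choice of Q = Qᵀ > 0 the solver returns the corresponding P = Pᵀ > 0, exactly as the Lyapunov theorem for Hurwitz matrices guarantees.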
Okay, so we are ready to move to the unknown parameters. What do we do? We apply the certainty equivalence principle. The control was minus K star X plus L star R, so I replace the unknowns with estimates of K star and L star, which gives minus K hat X plus L hat R. Now, this is where things get messy but also very important: what are the dimensions? These are no longer scalars; that is the important thing to remember. K hat is m cross n, and L hat is m cross m, because R is assumed to be of the same dimension as the control. We have not said this explicitly, but it has to be the case. So R of t belongs to Rm, and XM belongs to Rn. XM belongs to Rn because otherwise I cannot create the error X minus XM, and R has to belong to Rm because one of the matching conditions is that B and BM are connected by a sign-definite, hence non-singular, matrix L star. If B and BM were of different dimensions, you could not connect them by a non-singular matrix, because a non-square matrix is necessarily singular. So B and BM have the same dimension, and therefore R has the same dimension as the control, and XM the same dimension as the state X. Great. The important point for us is that the unknown parameters are now matrices, so we are getting into a more complicated domain. We knew this was going to be the case, because the original unknowns were also A and B. And what were the dimensions of A and B? A was an n by n matrix, and B an n by m matrix. So now it is slightly different; the number of terms is different, but that does not matter to us.
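A quick shape check of the certainty equivalence control; the dimensions n and m below are made up purely for illustration.

```python
import numpy as np

n, m = 3, 2                     # state and input dimensions (made up)
Khat = np.zeros((m, n))         # estimate of K*: maps x in R^n to R^m
Lhat = np.zeros((m, m))         # estimate of L*: maps r in R^m to R^m
x, r = np.ones(n), np.ones(m)

u = -Khat @ x + Lhat @ r        # certainty equivalence control
assert u.shape == (m,)          # the control is m-dimensional, as required
```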
The point is that the unknowns were matrices and the redesigned unknowns are also matrices. The number of parameters we are identifying may be less or more; we are not so worried about that. The point is that the unknowns are now matrices, no longer vectors or scalars. So let me actually write that down: the unknowns are now matrices. Excellent. So now what do we do? We plug this control into the system dynamics; this is again similar to what we have been doing, nothing new. When we plug it in, I also write the matching conditions smartly: I can write A as AM plus B K star using the matching condition, and for now that is the only substitution I make. Then in the input term I substitute the control. I also know that BM is equal to B L star, so I can introduce a zero term, BM R minus B L star R, which as you can see is actually equal to zero. So I am not introducing anything new; I am just adding and subtracting terms. Why do we do this? Because you get these identical-looking terms that you want to compare; that is about it. This B K star X and B K hat X can be combined to get B K tilde X, where K tilde is K star minus K hat. Similarly, this B L hat R and this B L star R can be combined to give you B L tilde R. Let me make sure the sign is correct: I have B L hat R minus B L star R, so that is a negative B L tilde R, with L tilde equal to L star minus L hat, and then there is already a BM R. This is exactly what we get. Now, what do we do? We want to compute the error dynamics, so E dot is X dot minus XM dot, and this is the X dot.
And you know what XM dot is; it is just this. So what do we have? We have a cancellation of this BM R term with this BM R term; those two terms cancel, and that is it. These two are combined together to give me AM E, and I am left with B K tilde X minus B L tilde R. So this is exactly what we had in the known case, plus two terms in the unknowns. Again, this is very natural, very much like the scalar case; let me actually pull that out. Here you go: whatever you had in the known case, plus B, that is the control gain, multiplying terms in the unknown parameters. Exactly similar is what we get here too: what you would have in the known case, plus B multiplying terms in the unknowns. The terms look slightly different, but the structure is identical. Now, remember that we are dealing with unknowns which are matrices, so the method of analysis and the design of the parameter update law are also slightly different. Keeping that in mind, we define a few new matrices. One is this matrix gamma, which is signum of L star times L star inverse. Remember that signum of L star depends on the sign definiteness: signum of L star is positive if L star is positive definite, and negative if L star is negative definite. This implies, first of all, that gamma is symmetric, because L star is assumed to be symmetric, and further that it is positive definite, because I multiplied L star inverse by the signum. The sign definiteness of L star and L star inverse are the same.
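The error dynamics just derived, E dot equal to AM E plus B K tilde X minus B L tilde R, can be verified numerically. All matrices below are made-up examples consistent with the matching conditions; the estimates K hat and L hat are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up example satisfying the matching conditions: A = AM + B K*, BM = B L*
AM = np.array([[0., 1.], [-2., -3.]])
B  = np.array([[0.], [1.]])
Kstar, Lstar = np.array([[6., 8.]]), np.array([[2.]])
A, BM = AM + B @ Kstar, B @ Lstar

# Arbitrary estimates and signals
Khat, Lhat = rng.standard_normal((1, 2)), rng.standard_normal((1, 1))
x, xm, r = rng.standard_normal(2), rng.standard_normal(2), rng.standard_normal(1)

Ktil, Ltil = Kstar - Khat, Lstar - Lhat
xdot  = A @ x + B @ (-Khat @ x + Lhat @ r)    # plant with certainty equivalence u
xmdot = AM @ xm + BM @ r                      # reference model

lhs = xdot - xmdot                            # e_dot computed directly
rhs = AM @ (x - xm) + B @ Ktil @ x - B @ Ltil @ r
assert np.allclose(lhs, rhs)                  # e_dot = AM e + B K~ x - B L~ r
```

Since the identity holds for arbitrary estimates and signals, it confirms the add-and-subtract manipulation above, not just a lucky numerical coincidence.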
If L star is positive definite, L star inverse is also positive definite; similarly, if L star is negative definite, L star inverse is also negative definite. This is a pretty simple and standard fact. So what we have essentially done is construct, out of L star and its inverse, a positive definite matrix. And be careful here: this matrix gamma is unknown. Why? Because we do not know L star; that is what we are trying to find. Gamma is unknown. Now, how do we use this? By the matching condition, B is equal to BM L star inverse, and this L star inverse can be written as signum of L star times gamma. This is something similar to the scalar identity B equals signum of B times absolute value of B; we are basically using the same notion for matrices. So B equals BM L star inverse, with L star inverse equal to signum of L star times gamma. Signum of L star is a scalar, so I can pull it out to the left, but gamma remains where it is, because we are working with matrices. Remember, we have to be very careful about the order; I cannot move things back and forth. If you look at how we have been doing the analysis, we have never changed the order of matrix products; that is really what you need to be careful about when working with matrices. Now we substitute this relationship, B equals signum of L star times BM gamma, into the dynamics. We just replace B by this quantity, and we will see why.
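The gamma construction can be checked on a made-up example where L star is negative definite, so that signum of L star is minus one and the construction is exercised non-trivially.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up example: L* symmetric negative definite, so sgn(L*) = -1
Lstar = -np.array([[2., 1.], [1., 2.]])
sgn = -1.0
Gamma = sgn * np.linalg.inv(Lstar)       # Gamma = sgn(L*) L*^{-1}

assert np.allclose(Gamma, Gamma.T)               # symmetric
assert np.all(np.linalg.eigvalsh(Gamma) > 0)     # positive definite, even though L* < 0

# Matching: BM = B L*  =>  B = BM L*^{-1} = sgn(L*) BM Gamma
B  = rng.standard_normal((2, 2))
BM = B @ Lstar
assert np.allclose(B, sgn * BM @ Gamma)
```

The scalar pulls out to the left while gamma stays on the right, mirroring the care about ordering mentioned above.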
Okay. So, like I said earlier, because AM is a Hurwitz matrix, it satisfies the Lyapunov equation: given a Q positive definite, there exists a P positive definite such that the equation is satisfied. Now we choose the Lyapunov function. Remember, we already had the choice E transpose P E, which is positive definite. Now we add to it terms in the unknowns. In the scalar case, we would have just picked things like theta one tilde squared by two gamma one, theta two tilde squared by two gamma two, and so on. In this case, that is not possible, because we are not dealing with scalars. One possibility is to convert K tilde into vectors and do the same; but an equivalent, nicer, and more elegant way is to use the trace of matrices. The trace of M transpose M, which equals the trace of M M transpose, is in fact the squared Frobenius norm of M. So it is a norm, and norms always form good candidates for Lyapunov functions; remember, we take norm X squared as a Lyapunov candidate. And what is the trace? The trace of M is simply the sum over i from one to n of M subscript ii, the sum of the diagonal elements, and trace enjoys a lot of nice little properties which we will use. Now, we have not used exactly the Frobenius norm, but a weighted version of it: the added terms are the trace of K tilde transpose gamma K tilde plus the trace of L tilde transpose gamma L tilde. This is why we defined this gamma for a particular important reason: it appears here as the weight. Remember that gamma is positive definite, and in fact I will argue the whole function is positive definite.
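The trace facts used above can be checked quickly; the matrices below are made up, with dimensions matching a K tilde of size m by n and a gamma of size m by m.

```python
import numpy as np

rng = np.random.default_rng(2)

M = rng.standard_normal((2, 3))
# tr(M^T M) is the squared Frobenius norm: a norm, hence zero iff M = 0
assert np.isclose(np.trace(M.T @ M), np.linalg.norm(M, 'fro')**2)
assert np.isclose(np.trace(M.T @ M), np.trace(M @ M.T))   # cyclic property

# Weighted version used in the Lyapunov candidate: tr(K~^T Gamma K~)
Gamma = np.array([[2., 1.], [1., 2.]])    # symmetric positive definite (made up)
Ktil  = rng.standard_normal((2, 3))       # m x n parameter error
V_K = np.trace(Ktil.T @ Gamma @ Ktil)
assert V_K > 0                            # strictly positive for nonzero K~

Z = np.zeros((2, 3))
assert np.isclose(np.trace(Z.T @ Gamma @ Z), 0.0)   # and zero only at K~ = 0
```

Writing the trace as a sum of column-wise quadratic forms k_jᵀ Gamma k_j shows why positive definiteness of gamma makes each added term a weighted norm.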
Because if I weight it by a positive definite matrix like gamma, it is still a norm. And the trace of course distributes over sums, so this is the trace of this plus the trace of this, and each of these is in fact a norm; a weighted norm, but still a norm, and therefore positive definite. Any norm is positive definite: it is zero only at the zero value of its argument and non-zero elsewhere. So it is easy to show that V of E, K tilde, L tilde is equal to zero if and only if E, K tilde and L tilde are the zero matrices, and otherwise V of E, K tilde, L tilde is strictly positive. Therefore it is a positive definite function; these are the conditions for a function to be positive definite. All right. The rest of the analysis is a little long, so we will continue it in our next session. So just to summarize, what did we do today? We continued our discussion of model reference adaptive control and started the control design. We looked at the known case first, to identify the similarities with the scalar case, and there was a rather nice and interesting construction of a target system, which actually gave us the structure of the control. We also identified a Lyapunov candidate function using the linear-systems Lyapunov equation. Then we started looking at the unknown case, where we simply propose a certainty equivalence controller, and finally we looked at the structure of the Lyapunov function for this system, which is rather different and unusual: it uses trace functions, which are the common choice of Lyapunov functions when matrices are involved, because the trace gives the Frobenius norm. So this is where we stop. I hope you followed what we did today; if not, please watch the video carefully, because we will continue this next time. Thank you.