Hello everyone, welcome to yet another session of our NPTEL course on non-linear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are pretty much at the end of week nine of our lectures, and we have hopefully done a good job of designing and analyzing algorithms that can drive autonomous systems, such as the SpaceX satellite orbiting the Earth that you see in our background. What we did last time was essentially the tuning function method for designing adaptive backstepping laws. We saw that having an ACLF (adaptive control Lyapunov function) for a system of the form 2.1 immediately implied that we could construct an ACLF for a system of the form 2.2, which is essentially the same system with an added integrator layer. We had a particular way of doing that: we took the ACLF of system 2.1 and added to it a backstepping error term. That is how we constructed the new control law, and of course we also obtained an adaptation law. Now, in order to do all of this we need an ACLF for the original system in the first place, and coming up with an ACLF for system 2.1 again goes back to the same idea as the adaptive integrator backstepping type methods. So what I want to do now is a slight modification of this as an example. If you remember, we considered this particular example for adaptive integrator backstepping and also for extended matching, but now I want to at least try the ACLF-based backstepping method on this problem. Remember that until now, for the ACLF method and the ACLF backstepping theorem, the control was always assumed to be a real number, and the added state psi was also a real number because it has the same dimension as the control. But here our control, and the state x2 that plays that role, live in R^3.
So we are still going to try to use the tuning function ACLF method and hope that it works. Let's give it a shot. This is the beginning of lecture 9.6, and we are getting a little adventurous here: we developed the theory for scalar control, a single-input system, but we are actually going to attempt a multiple-input, vector control problem. In this case P is unknown and omega is a known constant. Now the first question is: what is an ACLF for this system? So step one is to find an ACLF. How do we do that? What exactly was an ACLF? An ACLF essentially guaranteed that the system was adaptively asymptotically stabilizable. What I will do is take motivation from how we solved the unmatched case using the integrator backstepping method, because there we had a Lyapunov candidate function, this V1 function, and that is what we will try to use. There we had an x2-desired, which is the alpha, if you like, in the tuning function setting, and with that you can show some nice properties. So what we propose, assuming to begin with that everything is known, that P is known, is VA(x1, P) equal to one half norm x1 squared. Is this correct? That is the question. We have to be careful in figuring out what the appropriate ACLF is going to be. One term is certainly one half norm x1 squared; that is one piece. Now, will it have any dependence on the unknown parameter is what I am wondering. Recall what we had done to begin with.
Earlier, in order to prove adaptive asymptotic stabilization, we added to this VA a theta tilde term. So the question now is whether this VA is in fact an ACLF or not. Let me think about this a little carefully. Yes, I believe this is fine: this VA is an ACLF. Why do I say this? Take VA dot, which is del VA/del x1 times (f(x1) P plus the control), plus a gamma times (del VA/del P) transpose term, because that is the additional piece coming from the modified system in the ACLF definition. That additional term is zero here, because this VA does not depend on P at all. So we are left with x1 transpose times (f(x1) P + u), where x2 plays the role of the control u. If I choose u equal to minus f(x1) P minus x1, that is, if I call this choice alpha, then I get VA dot equal to minus x1 transpose x1, which is negative definite for all P. So done: this VA is in fact an ACLF for the first subsystem, where, remember, I treated x2, which enters the x1 dynamics, as the control, because that is how you find an ACLF for the first subsystem. Now the question is how to find the ACLF for the second, augmented system. You know the method: V1(x1, x2, P) is simply VA(x1, P) plus the backstepping error term. In this vector case I cannot just square a scalar, so I take a norm squared. The backstepping error is of course z equal to x2 minus alpha, which is x2 plus f(x1) P plus x1.
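As a quick sanity check, here is a minimal numerical sketch of this first step. I am assuming, purely for illustration, a hypothetical regressor f(x1) = skew(x1), so that f(x1) P = x1 cross P; the particular f is my assumption, not the one from the lecture slides. With the virtual control alpha = -f(x1) P - x1, VA dot collapses to minus norm x1 squared for any P:

```python
import numpy as np

def f(x1):
    # Hypothetical regressor chosen only for illustration: f(x1) = skew(x1),
    # so that f(x1) @ P = x1 x P (cross product). The actual f from the
    # lecture slides may differ.
    return np.array([[0.0, -x1[2], x1[1]],
                     [x1[2], 0.0, -x1[0]],
                     [-x1[1], x1[0], 0.0]])

def alpha(x1, P):
    # Virtual control for the x1-subsystem (P assumed known for now):
    # alpha = -f(x1) P - x1.
    return -f(x1) @ P - x1

def Va_dot(x1, P):
    # V_A = 0.5 ||x1||^2, so V_A-dot = x1^T x1_dot = x1^T (f(x1) P + x2),
    # evaluated with x2 playing the role of the control, x2 = alpha.
    return x1 @ (f(x1) @ P + alpha(x1, P))

# The f(x1) P terms cancel, so V_A-dot = -||x1||^2 for every P.
rng = np.random.default_rng(0)
for _ in range(5):
    x1, P = rng.standard_normal(3), rng.standard_normal(3)
    assert np.isclose(Va_dot(x1, P), -(x1 @ x1))
```

The identity holds for any smooth f(x1), since the f(x1) P terms cancel exactly regardless of what f is.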
So this is just the standard backstepping error term. Once I have that, the ACLF candidate for the new system is V1 = VA + one half norm z squared. Corresponding to this, how do I find the control? Just take the derivative, as we did before. V1 dot is VA dot, which is x1 transpose x1 dot, that is x1 transpose (f(x1) P + x2), plus z transpose z dot. Here z dot is x2 dot plus the derivative of (f(x1) P + x1), where x2 dot is omega cross x2 plus u. This is the part that is more complicated, because f(x1) is a matrix, so the Jacobian of f(x1) P has to be written carefully; I will come back to that. For now I will just write z dot as omega cross x2 + u + del/del x1 (f(x1) P + x1) times x1 dot, where x1 dot is again f(x1) P + x2. Now, the first piece contains x2, which I can write as z plus alpha. So x1 transpose (f(x1) P + z + alpha) becomes x1 transpose (f(x1) P + z - f(x1) P - x1); the f(x1) P terms cancel out, and I am left with minus norm x1 squared plus x1 transpose z. Moving that x1 transpose z inside the bracket multiplying z transpose, I get minus norm x1 squared plus z transpose (x1 + omega cross x2 + u + del/del x1 (f(x1) P + x1) times (f(x1) P + x2)). So now my control, u defined as alpha1(x1, x2, P), will be minus x1 minus omega cross x2 minus del/del x1 (f(x1) P + x1) times (f(x1) P + x2),
and minus a nice negative term, minus z. This gives me V1 dot equal to minus norm x1 squared minus norm z squared, which is negative definite, and we have obtained our ACLF: V1 is an ACLF for the entire x1, x2 dynamics. In the process, our feedback law is this alpha1. Now there is a little bit of a complication here, again because we have vector control. This f(x1), if you remember, is a matrix, which is why I have to think carefully before taking the Jacobian: I have to take the partial of a matrix-vector product with respect to x1. What you do is write f(x1) P column by column. P is in R^3, so it is not so bad: f(x1) P = P1 f1(x1) + P2 f2(x1) + P3 f3(x1), where f1(x1), f2(x1), f3(x1) are the three columns of f(x1) and P1, P2, P3 are the three parameters. Taking the partial of this, del/del x1 (f(x1) P) = P1 del f1/del x1 + P2 del f2/del x1 + P3 del f3/del x1, and that is what gets substituted inside. So my alpha1 comes out to be minus x1 minus z minus omega cross x2 minus (I + P1 del f1/del x1 + P2 del f2/del x1 + P3 del f3/del x1) times (f(x1) P + x2), where the identity matrix I is the partial of x1 with respect to x1. So this is what you have for your feedback.
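To make the Jacobian bookkeeping concrete, here is a hedged numerical sketch of this alpha1, again assuming for illustration the hypothetical regressor f(x1) = skew(x1) (so f(x1) P = x1 cross P) and treating P as known for now. The column-wise Jacobian sum P1 del f1/del x1 + P2 del f2/del x1 + P3 del f3/del x1 is assembled by central differences, and the check confirms that with this alpha1 we indeed get V1 dot = -norm x1 squared - norm z squared:

```python
import numpy as np

def f(x1):
    # Hypothetical regressor for illustration only: f(x1) = skew(x1),
    # so f(x1) @ P = x1 x P; columns f1, f2, f3 multiply P1, P2, P3.
    return np.array([[0.0, -x1[2], x1[1]],
                     [x1[2], 0.0, -x1[0]],
                     [-x1[1], x1[0], 0.0]])

def jac_fP(x1, P, eps=1e-6):
    # d/dx1 (f(x1) P) = P1 df1/dx1 + P2 df2/dx1 + P3 df3/dx1,
    # assembled by central differences: column k is (df/dx1_k) @ P.
    J = np.zeros((3, 3))
    for k in range(3):
        e = np.zeros(3); e[k] = eps
        J[:, k] = ((f(x1 + e) - f(x1 - e)) / (2 * eps)) @ P
    return J

def alpha1(x1, x2, P, omega):
    # Backstepping feedback (P still assumed known):
    # alpha1 = -x1 - z - omega x x2 - (I + sum_i Pi dfi/dx1)(f(x1) P + x2).
    z = x2 + f(x1) @ P + x1                  # backstepping error z = x2 - alpha
    M = np.eye(3) + jac_fP(x1, P)            # Jacobian of f(x1) P + x1
    return -x1 - z - np.cross(omega, x2) - M @ (f(x1) @ P + x2)

def V1_dot(x1, x2, P, omega):
    # Closed-loop derivative of V1 = 0.5||x1||^2 + 0.5||z||^2.
    x1_dot = f(x1) @ P + x2
    x2_dot = np.cross(omega, x2) + alpha1(x1, x2, P, omega)
    z = x2 + f(x1) @ P + x1
    z_dot = x2_dot + (np.eye(3) + jac_fP(x1, P)) @ x1_dot
    return x1 @ x1_dot + z @ z_dot

# Check: V1-dot = -||x1||^2 - ||z||^2.
x1 = np.array([0.3, -1.2, 0.7]); x2 = np.array([1.0, 0.4, -0.6])
P = np.array([0.5, -0.3, 2.0]); omega = np.array([0.0, 0.0, 1.0])
z = x2 + f(x1) @ P + x1
assert np.isclose(V1_dot(x1, x2, P, omega), -(x1 @ x1) - (z @ z))
```

For a slide-specific f you would only replace the body of `f`; the rest of the construction is unchanged.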
Now again one has to be a little careful here and check whether even this works out. Why am I wondering? Because this expression now looks quadratic in the unknown: there is an unknown P inside the Jacobian bracket and another unknown P inside (f(x1) P + x2), so the product gives a term quadratic in P. At first glance this looks like it will put us in a soup, because we always required linear parameterization, and with this kind of control law, when I replace P by P hat for the unknown case, there is a quadratic in P hat. So this setup seems more complicated and might not work. But let me go back and look at the construction. If you look at how alpha1 gets built from alpha, you can see clearly that alpha is allowed to be a function of both x and theta, and so in the alpha1 expression you can have a dependence on theta in several places, and two of these can combine to give a quadratic. And in fact this does not matter: the linear parameterization requirement is on the plant, not on the feedback, so it is of no concern that alpha1 contains terms quadratic in theta. What happens when we design the adaptive law is that we simply replace theta by theta hat, and we still get a nice negative definite term for each fixed theta hat. So this is absolutely okay; I will not say this is a complication. This is the alpha1, for whatever it is worth.
This is in fact the alpha1, and we have our V1 dot as desired, so V1 is an ACLF. Now that we have an ACLF, the expression for the tuning function tau is the standard one, and I just have to compute it. My feedback, of course, is alpha1(x1, x2, P hat): I replace P by P hat everywhere in that whole expression, adding hats to all the P's, and this is also the expression that feeds into tau. So what is my tau? My VA was simply one half norm x1 squared, so del VA/del x1 is just x1 transpose. To that we add minus z transpose times del alpha/del x1, where alpha is the first, virtual feedback. Alpha was minus f(x1) P minus x1, so its partial with respect to x1 is minus (I + P1 del f1/del x1 + P2 del f2/del x1 + P3 del f3/del x1), and with the minus sign in front this contributes plus z transpose (I + P1 del f1/del x1 + P2 del f2/del x1 + P3 del f3/del x1). This whole row vector then multiplies whatever multiplies the unknown, which in our case is f(x1), and we take the transpose of the result. Let me check that we are consistent in the dimensions. x1 transpose is a 1 by 3 vector and z is a 3 by 1 vector, so the z in that term should indeed be a z transpose; the Jacobian bracket is 3 by 3, so z transpose times it is 1 by 3 times 3 by 3.
So this is fine: that gives 1 by 3, f(x1) is 3 by 3, so the product is 1 by 3, and its transpose is 3 by 1. This implies tau belongs to R^3, which is consistent; the only correction was that the z should be a z transpose. So my update law will be P hat dot equal to the Gamma matrix times tau, where Gamma is just a symmetric positive definite matrix. Any such Gamma is okay in this case, because if you remember, Gamma actually entered through the gamma del VA/del P term, and here del VA/del P is 0, so any Gamma would be completely fine. And of course these expressions should not contain the true P: I have to replace P by P hat throughout, because P is unknown. So this is essentially what you have: this expression for tau with all the P's replaced by the hat variables, which is exactly right, because the tuning function is evaluated at (x, theta hat); you replace the thetas by the theta hats. Now, the interesting thing is to compare this control law with, say, the extended matching case. In the extended matching case the control law looks rather simple, and our control law here looks rather complicated.
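Putting the pieces together, here is a hedged sketch of the tuning function and update law, once more under my illustrative assumption f(x1) = skew(x1). One detail worth making explicit: z depends on P hat, so z dot picks up an f(x1) P hat dot term; in the tuning function construction that P hat dot is replaced by the known quantity Gamma tau, which contributes an extra -f(x1) Gamma tau term to the control, and this is exactly why no P hat dot survives in the feedback. The check verifies that, along the closed loop, the derivative of V = 1/2 norm x1 squared + 1/2 norm z squared + 1/2 P tilde transpose Gamma inverse P tilde equals -norm x1 squared - norm z squared:

```python
import numpy as np

def f(x1):
    # Hypothetical regressor for illustration only: f(x1) = skew(x1).
    return np.array([[0.0, -x1[2], x1[1]],
                     [x1[2], 0.0, -x1[0]],
                     [-x1[1], x1[0], 0.0]])

def M_hat(x1, P_hat, eps=1e-6):
    # M_hat = I + P_hat_1 df1/dx1 + P_hat_2 df2/dx1 + P_hat_3 df3/dx1,
    # the Jacobian of f(x1) P_hat + x1, via central differences.
    M = np.eye(3)
    for k in range(3):
        e = np.zeros(3); e[k] = eps
        M[:, k] += ((f(x1 + e) - f(x1 - e)) / (2 * eps)) @ P_hat
    return M

def tau(x1, x2, P_hat):
    # Tuning function: tau = f(x1)^T (x1 + M_hat^T z), all in hat quantities.
    z = x2 + f(x1) @ P_hat + x1
    return f(x1).T @ (x1 + M_hat(x1, P_hat).T @ z)

def control(x1, x2, P_hat, omega, Gamma):
    # Certainty-equivalence alpha1, plus -f(x1) Gamma tau standing in for
    # the f(x1) P_hat_dot term that z_dot generates (P_hat_dot = Gamma tau).
    z = x2 + f(x1) @ P_hat + x1
    u_ce = -x1 - z - np.cross(omega, x2) - M_hat(x1, P_hat) @ (f(x1) @ P_hat + x2)
    return u_ce - f(x1) @ (Gamma @ tau(x1, x2, P_hat))

def lyap_rate(x1, x2, P, P_hat, omega, Gamma):
    # d/dt of V = 0.5||x1||^2 + 0.5||z||^2 + 0.5 Ptilde^T Gamma^-1 Ptilde
    # along the closed loop, plus the target rate -||x1||^2 - ||z||^2.
    u = control(x1, x2, P_hat, omega, Gamma)
    P_hat_dot = Gamma @ tau(x1, x2, P_hat)           # update law
    x1_dot = f(x1) @ P + x2                          # true, unknown P
    x2_dot = np.cross(omega, x2) + u
    z = x2 + f(x1) @ P_hat + x1
    z_dot = x2_dot + M_hat(x1, P_hat) @ x1_dot + f(x1) @ P_hat_dot
    V_dot = x1 @ x1_dot + z @ z_dot - (P - P_hat) @ np.linalg.solve(Gamma, P_hat_dot)
    return V_dot, -(x1 @ x1) - (z @ z)

rng = np.random.default_rng(1)
x1, x2 = rng.standard_normal(3), rng.standard_normal(3)
P, P_hat = rng.standard_normal(3), rng.standard_normal(3)
omega = np.array([0.1, -0.2, 0.3])
Gamma = np.diag([1.0, 2.0, 0.5])
V_dot, target = lyap_rate(x1, x2, P, P_hat, omega, Gamma)
assert np.isclose(V_dot, target, atol=1e-6)
```

This negative definite rate in (x1, z) is what delivers the adaptive asymptotic stability claimed by the tuning function theorem.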
In fact it has quadratic terms in P hat, a rather complicated controller; and the adaptation law in the extended matching case also looked relatively simple, whereas our adaptation law is Gamma times tau, with tau containing the hat terms, slightly more complicated of course. In the extended matching case the adaptation law did not contain the P hats on the right-hand side; here we do have P hats on the right as well. But the cool thing is that our control law does not contain any P hat dot terms, whereas the extended matching control law does contain P hat dots, and here there is only one update law: one parameter estimate per parameter. Alright, so this is the idea. Yes, the outcome looks rather complicated, and in fact I was also briefly baffled when I saw the quadratic terms, but that is not a problem: we obtain a feedback law and an update law which are devoid of P hat dot terms and, of course, devoid of over-parameterization. So I hope this example helped you understand how to apply the tuning function method. Please do not get worried, the way we briefly got worried when we started seeing quadratics; everything is fine because of how the derivations go. You will see P hats appearing in the update law and quadratics in P hat appearing in the feedback law, but this is not a problem: we have already proved adaptive asymptotic stability with the tuning functions. I really hope you got the idea, and note that we did not run into any serious issues because of the vector control, as opposed to the scalar control that we considered in the theory. Great.
So this is where we end our week 9. I really hope that you have learnt a decent bit about adaptive control, tuning functions, integrator backstepping and so on. We will continue with more interesting material next week. Thank you.