Hello, and welcome to another session of our NPTEL course on nonlinear and adaptive control. I am Shrikan Sukumar from Systems and Control, IIT Bombay. We are nearing the end of the sixth week. We have already looked at adaptive control design for a first order scalar system, and we are well into the discussion of adaptive control design for second order scalar systems. The rather cool thing is that a lot of mechanical systems, including the SpaceX spacecraft that you see in the background orbiting the earth, fall into the category of systems that can be modeled by some kind of nonlinear second order system. So what we are looking at in the sixth week of this course will be very useful for designing algorithms and developing controllers to autonomously drive systems such as the one you see in the background. This is where we were last time: for the second order system we hit a detectability obstacle, because we started with a non-strict Lyapunov function for the known-system case. What that meant was that we could not even prove that the position error in fact converges to zero. One obvious solution is to construct a strict Lyapunov function, which may not always be easy; however, that is definitely one viable choice.
However, a simpler solution is the Ortega construction, proposed by Romeo Ortega in the 90s. He proposed a function which is not even a Lyapunov function; it is a Lyapunov-like function. We are starting to look at it in the context of this simple spring mass damper system; not the error system, but the spring mass damper system itself, and there is no parameter error here either. The aim is to drive x1, x2 to zero as t goes to infinity, and the choice he proposed was basically V = (1/2)(x2 + alpha x1)^2. We know that this is only positive semidefinite, because V(k, -alpha k) = 0 for all values of k. So along a particular straight line this function is zero, arbitrarily close to the origin as well, and therefore it is not a positive definite function as per our definition, and hence not a Lyapunov candidate.
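To see this positive semidefiniteness concretely, here is a minimal numeric sketch; the value alpha = 0.5 is an arbitrary illustrative choice, not one from the lecture:

```python
# The Ortega-style function V(x1, x2) = 0.5*(x2 + alpha*x1)**2 is only
# positive SEMI-definite: it vanishes on the entire line x2 = -alpha*x1,
# which passes arbitrarily close to the origin, so V cannot be a
# Lyapunov candidate. (alpha = 0.5 is an arbitrary illustrative choice.)

def V(x1, x2, alpha=0.5):
    return 0.5 * (x2 + alpha * x1) ** 2

alpha = 0.5
# V is exactly zero at every point (k, -alpha*k), however large or small k is
for k in (-10.0, -0.01, 0.0, 0.3, 10.0):
    assert V(k, -alpha * k, alpha) == 0.0

# off that line, V is strictly positive
assert V(1.0, 1.0, alpha) > 0.0
assert V(-0.2, 0.4, alpha) > 0.0
```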
Great, so how do we use this? This is lecture 6.6, and this is where we are. We take the derivative of this V, which is (x2 + alpha x1)(x2 dot + alpha x1 dot), and simply substitute the derivatives from our dynamics: x1 dot is of course x2, and x2 dot is -k1 x1 - k2 x2, which is the standard spring mass damper dynamics. What do we do after that? We club the x2 terms and pull out the factor -(k2 - alpha). Once -(k2 - alpha) is pulled out, the scaling on x2 is one, and the scaling on x1 becomes k1/(k2 - alpha), so V dot = -(k2 - alpha)(x2 + alpha x1)(x2 + (k1/(k2 - alpha)) x1). Now I make a choice of k2 such that it is greater than alpha, and I am allowed to, because it is a spring mass damper that I am trying to stabilize, so I can choose k2 to be anything I want. I also choose alpha, which is mine to pick as well, so that it is exactly equal to the quantity k1/(k2 - alpha); this becomes a quadratic equation in alpha, so I get a viable solution. With that choice the whole bracketed quantity becomes (x2 + alpha x1)^2, and that is what we write: V dot = -(k2 - alpha)(x2 + alpha x1)^2. On top of that, I have already put in the requirement that k2 > alpha, so (k2 - alpha) is strictly positive, and therefore V dot is negative semidefinite. The other cool thing to see is that (x2 + alpha x1)^2 is exactly twice the V function we chose, so I substitute (x2 + alpha x1)^2 = 2V and get V dot = -2(k2 - alpha)V. Since V is a scalar valued quantity, I can see that this is a nice exponential decay in V, and there is a
nice exponential decay; in fact I can even solve this to say that V(t) = V(0) e^(-2(k2 - alpha)t), and I can express it in terms of (x2 + alpha x1) as well. Two things happen: V is half the square of (x2 + alpha x1), and V(0) is half the square of its initial value, so the halves cancel, and then I take a square root, so the factor of two in the exponent goes away. That is the exponential decay of V, and therefore the exponential decay of (x2 + alpha x1): |x2 + alpha x1|(t) = |x2 + alpha x1|(0) e^(-(k2 - alpha)t), and of course alpha is also a positive quantity. Great, so now we do a little bit of signal-chasing type analysis. The steps are slightly different here, but they still work out fine. The first thing is that V is lower bounded, because V >= 0, and it is non-increasing, because V dot <= 0. Even though V is not a Lyapunov candidate, I still have both of these properties, and therefore V infinity, which is what we use to denote the limit of V(t), exists and is finite. Further, because V is (1/2)(x2 + alpha x1)^2, we know that (x2 + alpha x1) is a bounded quantity. We could also show that (x2 + alpha x1) is in L2 and that its derivative (x2 dot + alpha x1 dot) is bounded, but I would say these steps are not required here, because we have already proved that (x2 + alpha x1) goes to zero as t goes to infinity; this can already be claimed from the exponential decay of V. You can see the formula for (x2 + alpha x1) at time t: if I take the limit as t goes to infinity on both sides, the right hand side goes to zero, so (x2 + alpha x1) goes to zero. So what do we know? We know two things. First, (x2 + alpha x1) is a bounded signal, so let me call it some p(t), defined
as that, so p(t) = x2(t) + alpha x1(t), and second, I know that the limit as t goes to infinity of p(t) is zero. I know these two things. I know further that x2 = x1 dot, so I can rewrite p as x1 dot + alpha x1. What do we do now? We apply the Laplace transform on both sides and do a little rearranging of terms. The Laplace transform gives P(s) on one side, and on the other side s X1(s) - x1(0) plus alpha X1(s). Combining these to compute X1(s), I get X1(s) = x1(0)/(s + alpha) + P(s)/(s + alpha); this is simply rearranging and manipulating terms. Now I apply the final value theorem. What does it say? The final value theorem says that the limit as t goes to infinity of x(t) is equal to the limit as s goes to zero of s X(s). So that is what we do: I multiply by s on both sides and take the limit as s goes to zero. From the left hand side I get the limit as s goes to zero of s X1(s), which is exactly the limit of x1(t), and on the right hand side I have the limit as s goes to zero of s x1(0)/(s + alpha) + s P(s)/(s + alpha). Now, what do I know about this? I know that p goes to zero as t goes to infinity, so again by the final value theorem I immediately have that the limit as s goes to zero of s P(s) is zero. Notice that s P(s) is exactly what sits in the numerator of the second term, so when I take the limit as s goes to zero I can distribute it over numerator and denominator: the denominator goes to alpha and the numerator goes to zero, so that term is zero. The first term is also zero, because there is an s in the numerator against a constant, and the denominator again becomes alpha. So basically the limit is zero. So what
we have shown is that x1 in fact goes to zero as t goes to infinity; that is what the final value theorem gives me. Further, we already know that p goes to zero as t goes to infinity, and p is x2 + alpha x1. If I take the limit as t goes to infinity on both sides of this, I know that p is going to zero and alpha x1 is going to zero, therefore x2 has to go to zero as t goes to infinity; there are no two ways about that. So what have we been able to show? We have been able to show that both x1 and x2 go to zero. Now, what about boundedness? It is not mentioned here, but it is also important for the trajectories, or states, to remain bounded. We can handle this separately: you can choose another Lyapunov function, say V bar = (1/2)(k1 x1^2 + x2^2), and we get V bar dot = -k2 x2^2 just by substituting the dynamics of the spring mass damper from the previous example. This is less than or equal to zero, which immediately implies that the origin is uniformly stable and x1, x2 remain bounded. So stability can be obtained, if you want, from this earlier Lyapunov candidate, which actually is a Lyapunov function, and I do not have to worry about stability; that was already given to me. But this particular Lyapunov-like construction actually gave me convergence: I can claim that x1, x2 go to zero as t goes to infinity, so I even got my nice asymptotic result. Great. So now what we want to do is use the same idea for the adaptive control. We already had this sort of a control, we already had the original dynamics, and of course we had the error dynamics, which was exactly like this. Now, instead of choosing (1/2)(k1 e1^2 + e2^2), we choose this as a
Lyapunov function, or rather I will not call it a Lyapunov function but a Lyapunov-like function, because this is not a positive definite quantity. Now we again take the time derivative just like before, so I get (e2 + alpha e1)(e2 dot + alpha e1 dot), plus the theta tilde term, which just remains as it is all through, because we are yet to choose theta hat dot. We choose theta hat dot every time by taking the Lyapunov candidate, or Lyapunov-like function, computing its derivative, and trying to choose theta hat dot so that V dot is at least negative semidefinite; that is really the attitude. So again we substitute for the derivatives: e1 dot is just e2, and e2 dot is this entire thing; wait a second, that is not correct, e2 dot in fact has a plus theta tilde f term, that is what e2 dot is. Then, if you notice, I still get a factor of -(k2 - alpha) here: I take -(k2 - alpha) common and I get (e2 + alpha e1) as before, times (e2 + (k1/(k2 - alpha)) e1); it is the same calculation as before, and that is what becomes of all these terms, everything except the theta tilde term. Therefore, if we choose k2 greater than alpha, and alpha equal to k1/(k2 - alpha), these terms become -(k2 - alpha)(e2 + alpha e1)^2. And for the theta tilde terms, we club the two together: I take theta tilde common, and inside I get (e2 + alpha e1) f minus theta hat dot over sigma. So now notice that we have to choose a different update law. The update law here is
now different: earlier, if you notice, with the previous Lyapunov candidate, theta hat dot only contained e2, but now it contains (e2 + alpha e1). That is the change, and of course sigma here is some positive adaptation gain, which shows up in the update law theta hat dot = sigma (e2 + alpha e1) f. Once we choose that, the theta tilde term essentially vanishes, and we are left with V dot = -(k2 - alpha)(e2 + alpha e1)^2, which is again negative semidefinite. Remember, we did not do any new magic: V dot is again the same as in the non-adaptive case, even though the V we started from is different. So this is still the same negative semidefinite V dot. But what do we know now? Here the steps are slightly different. Earlier I had said that the L2 step and the bounded-derivative step were not required, but now we will require those steps. Why? Because the right hand side is no longer a multiple of V: V also contains the theta tilde squared term, so I cannot directly use the expression for V. Hence we will not have V dot = -gamma V as in the known parameter case, so we use signal chasing arguments. This is where we use the earlier arguments, and to be honest I am going to write those steps here, A, B and C, because earlier we said we did not need them. Step A: V >= 0 and V dot <= 0 imply that V infinity exists and is finite. Step B: (e2 + alpha e1) and theta tilde are both bounded, because both sit inside V and V is non-increasing. Step C: (e2 + alpha e1) is in L2; that can be obtained by integrating V dot from zero to infinity, and I can do this because V infinity exists and is finite, and
we just showed that. And finally, we can show that the derivative of (e2 + alpha e1) is also bounded. How? Just compute the derivative, e2 dot + alpha e1 dot; it is exactly the quantity inside the bracket from before. Let us see: with this choice of alpha, e2 dot + alpha e1 dot = -(k2 - alpha)(e2 + alpha e1) + theta tilde f. So now, I know that (e2 + alpha e1) is bounded and theta tilde is bounded, and if I assume boundedness of f, I also have that the derivative is bounded. Therefore, simply by the corollary to Barbalat's lemma, I can claim that (e2 + alpha e1) goes to zero as t goes to infinity. I also know that (e2 + alpha e1) is bounded, so I have the exact same situation as before, and I can carry out the same steps. Suppose I define p(t) = e2(t) + alpha e1(t). I know that p(t) belongs to L infinity, and I know that the limit as t goes to infinity of p(t) is zero. Excellent. I also know that p(t) can be written as e1 dot(t) + alpha e1(t). Then I can use the same final value theorem arguments: taking the Laplace transform gives P(s) = s E1(s) - e1(0) + alpha E1(s), so I get E1(s) = e1(0)/(s + alpha) + P(s)/(s + alpha). If I take the limit as s goes to zero of s times both sides, I know that both terms on the right are going to zero, so basically the limit is zero, which implies that e1 goes to zero as t goes to infinity. And because p is also going to zero, and p
is e2 + alpha e1, I also have that e2 goes to zero as t goes to infinity, and that is exactly what we want for the adaptive case. So notice that there was a change in how I chose the V function, which is now a Lyapunov-like function, and there was of course a change in how we update theta hat, because we changed the V. As soon as you change the V, remember that the update law was always obtained by computing V dot and trying to make it negative semidefinite, so it is obvious that if I change the choice of V, the choice of the update law changes as well. So it is not that I am getting convergence of the tracking errors to zero for free, because this is just analysis; after all, choosing a V, taking its derivative and substituting the dynamics is analysis. One might ask what I actually changed in the controller: the controller remained exactly the same, but the update for theta hat changed, and with this change in the update, using the same sort of steps that I did for the known case with this Ortega construction, I could actually show that both e1 and e2 now go to zero as t goes to infinity. By the way, this sort of proof, that (e2 + alpha e1) going to zero as t goes to infinity implies the errors go to zero, can also be done in the time domain; it is not impossible. This is a nice exponentially decaying system with a bounded, vanishing perturbation, so that sort of analysis also works, and I would strongly encourage you to look at how to prove the same thing without the final value theorem and without going to the s domain, simply by directly integrating this system, because this is just a bounded, vanishing
perturbation on an exponentially stable system. All right, excellent. So this brings us to the end of this week's lectures, and we have done rather interesting things. We have seen our first set of adaptive control problems. We saw how to do the first order case: start with the known case, use certain decoys to design a control for the unknown case, and then arrive at the update law using an appropriate Lyapunov candidate function; we also showed how to choose such a candidate. But as soon as we moved to the second order system, we saw that it is not very difficult to hit a detectability obstacle if you end up choosing a non-strict Lyapunov function for the original system. So it is not that complicated, or that difficult, to run into a detectability obstacle. Then we also saw a means of alleviating the detectability obstacle by choosing what we call the Ortega construction, which is essentially a Lyapunov-like function that still helps you with the signal chasing analysis for these kinds of integrator-type systems. You essentially have an integrator-type system here; if it were not an integrator-type system, this sort of construction may not work, and you might need to use a strict Lyapunov construction. But with this type of Ortega construction, we showed that for the second order scalar system we are in fact able to make a nice adaptive control design and show that both tracking errors go to zero. As always, we claim nothing about the parameter error: we know for sure that it is bounded, but we do not claim anything about whether or not it converges to the true value. This of course brings in notions of persistence of excitation. We looked at that for the first order scalar case; we did not look at it for the second order scalar case, but it is similar, you can do a very similar thing for the second order case as well, and probably use the persistence results that we have discussed before to claim that, under rich
enough trajectories, you will be able to identify the parameters. But how to find rich enough trajectories, and how many frequencies you need in the trajectory, these are all still heuristics; it is more like trial and error. And, as we have already discussed, it may not always be possible in a real application to create these arbitrarily oscillating trajectories, because you may not want your robot to follow such trajectories. Excellent. So I hope it was a very interesting week six, an interesting exploration into the start of the design process, and henceforth we will see more and more of the design. I am very hopeful that you will be able to pick up from here and start applying these ideas to the small problems that all of you are probably already trying to solve in your own fields of work. Thanks a lot, and I will see you again soon.
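As a numerical coda to the known-parameter argument from this week, here is a small sketch that simulates the spring mass damper and checks the exponential decay of x2 + alpha x1 at the rate (k2 - alpha); the gains k1 = 0.75, k2 = 2 and the initial condition are arbitrary illustrative choices, not values from the lecture:

```python
import math

# Spring mass damper: x1_dot = x2, x2_dot = -k1*x1 - k2*x2.
# Choosing alpha as the smaller root of alpha**2 - k2*alpha + k1 = 0
# (so that alpha = k1/(k2 - alpha) and alpha < k2) makes
# s = x2 + alpha*x1 satisfy s_dot = -(k2 - alpha)*s along trajectories.
k1, k2 = 0.75, 2.0                      # arbitrary gains with k2**2 > 4*k1
alpha = (k2 - math.sqrt(k2**2 - 4 * k1)) / 2
assert abs(alpha - k1 / (k2 - alpha)) < 1e-12   # alpha solves the quadratic

x1, x2 = 1.0, -0.2                      # arbitrary initial condition
s0 = x2 + alpha * x1
dt, T = 1e-4, 3.0
for _ in range(int(T / dt)):            # forward Euler integration
    x1, x2 = x1 + dt * x2, x2 + dt * (-k1 * x1 - k2 * x2)

# s(T) should match the closed-form decay s0 * exp(-(k2 - alpha)*T)
s_T = x2 + alpha * x1
assert abs(s_T - s0 * math.exp(-(k2 - alpha) * T)) < 1e-3
```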
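The adaptive design can also be checked numerically with a short sketch of the error dynamics e1_dot = e2, e2_dot = -k1 e1 - k2 e2 + theta_tilde f, closed with the Ortega-style update law theta_hat_dot = sigma (e2 + alpha e1) f discussed above. The regressor f(t) = cos t, the gain sigma = 2, the gains k1, k2 and the initial conditions are all assumptions made for illustration:

```python
import math

# Error dynamics from the lecture, with theta_tilde_dot = -theta_hat_dot
# (the true parameter is constant) and the Lyapunov-LIKE function
# V = 0.5*(e2 + alpha*e1)**2 + theta_tilde**2/(2*sigma), whose derivative
# along trajectories is -(k2 - alpha)*(e2 + alpha*e1)**2 <= 0.
k1, k2 = 0.75, 2.0
alpha = (k2 - math.sqrt(k2**2 - 4 * k1)) / 2  # root of alpha**2 - k2*alpha + k1 = 0
sigma = 2.0                                    # adaptation gain (assumed)
f = math.cos                                   # a bounded regressor (assumed)

def V(e1, e2, tt):
    return 0.5 * (e2 + alpha * e1) ** 2 + tt ** 2 / (2 * sigma)

e1, e2, tt = 1.0, 0.0, 1.0    # tt plays the role of theta_tilde
dt, T, t = 1e-3, 60.0, 0.0
v0 = V(e1, e2, tt)
v_max = v0
for _ in range(int(T / dt)):  # forward Euler integration
    s = e2 + alpha * e1
    de1 = e2
    de2 = -k1 * e1 - k2 * e2 + tt * f(t)
    dtt = -sigma * s * f(t)   # update law: theta_hat_dot = sigma*s*f
    e1, e2, tt = e1 + dt * de1, e2 + dt * de2, tt + dt * dtt
    t += dt
    v_max = max(v_max, V(e1, e2, tt))

# V never grows (up to integration error), theta_tilde stays bounded,
# and the tracking errors have converged near zero by t = 60
assert v_max <= v0 * 1.01
assert abs(tt) <= math.sqrt(2 * sigma * v0) * 1.01
assert abs(e1) < 0.05 and abs(e2) < 0.05
```

Note that with this particular f(t) = cos t the parameter error happens to be persistently excited as well, but as in the lecture, the assertions only check boundedness of theta_tilde, not its convergence.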