So, welcome back to nonlinear control. We have been looking at feedback linearization for the past couple of weeks; it may have seemed longer to you, but it has been about that. We did a little bit of the proofs. I did not prove the key result; I left it for you to read, or maybe I will do it at the end if there is time, or some of us can discuss it. But we looked at the key application, and that is the Frobenius theorem, which essentially says that involutivity and complete integrability of a distribution are equivalent. That is what we saw in the Frobenius theorem, and we were looking at how to use it.

For the fully feedback linearizable case, it gives us a very nice set of partial differential equations from which you can actually identify the control. We looked at this specifically for the DC motor, whose dynamics (slightly different from what is on the slide, with an additional term here) are of the form x_dot = f(x) + g(x)u. We have been looking at single-input systems, just to make things easier for us, and essentially we were required to check only two conditions.

First, the vector fields ad_f^0 g, ad_f^1 g, ad_f^2 g, and so on up to ad_f^(n-1) g must be linearly independent; in this case n = 3, so the last one is ad_f^2 g. I believe we were able to verify this: we computed g and ad_f g, and then ad_f^2 g was the complicated one. I am not sure we actually finished verifying it; we said we would do some of this offline, or even numerically for that matter.

Second, we wanted to check the involutivity of the distribution spanned by ad_f^0 g, ad_f^1 g, up to ad_f^(n-2) g. In this case ad_f^(n-2) g is just ad_f g, so we had the distribution spanned by g and ad_f g, and all we had to do was check whether the Lie bracket [g, ad_f g] lies in the distribution itself. We verified that [g, ad_f g] turns out to be 0, and since 0 obviously belongs to any vector space, the bracket is trivially in the distribution. Therefore we were able to verify the involutivity condition.
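If you want to try these checks yourself (or do them numerically, as we said), here is a minimal SymPy sketch. The drift f and input field g below are placeholders I made up purely for illustration; they are not the DC motor model from the lecture. Note that SymPy's rank is the generic rank, so the conditions may still fail on special sets (here, x3 = 0).

```python
# Minimal sketch: checking the two feedback-linearizability conditions
# symbolically for a generic 3-state, single-input system
# x_dot = f(x) + g(x) u.  f and g are placeholder vector fields.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])

f = sp.Matrix([x2, -x1 + x3**2, -x3])   # placeholder drift vector field
g = sp.Matrix([0, 0, 1])                # placeholder input vector field

def lie_bracket(a, b, x):
    """Lie bracket [a, b] = (db/dx) a - (da/dx) b."""
    return b.jacobian(x) * a - a.jacobian(x) * b

ad1 = lie_bracket(f, g, x)        # ad_f g
ad2 = lie_bracket(f, ad1, x)      # ad_f^2 g

# Condition 1: {g, ad_f g, ad_f^2 g} linearly independent (generic rank 3).
D = sp.Matrix.hstack(g, ad1, ad2)
print('rank of [g, ad_f g, ad_f^2 g]:', D.rank())

# Condition 2: involutivity of span{g, ad_f g} -- check that [g, ad_f g]
# stays in the span, i.e. appending it does not increase the rank.
br = lie_bracket(g, ad1, x)
D2 = sp.Matrix.hstack(g, ad1)
print('[g, ad_f g] =', br.T)
print('involutive:', sp.Matrix.hstack(D2, br).rank() == D2.rank())
```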
Once we had these two conditions, we know the system is fully feedback linearizable, and then all we have to do is use this equality, which basically says the following. Let beta be the output we are trying to find, that is, the output with respect to which the system is fully feedback linearizable. Then d(beta) annihilates the distribution: the product of d(beta) with the vector fields of the distribution is exactly 0. This is just an evaluation of partial derivatives, so you get a bunch of partial differential equations. From the first one you get that the partial of beta with respect to x1 is 0, so beta does not depend on x1, only on x2 and x3; and once you have that, you can use the second equality to conclude that beta comes out to be something like this.

If you remember, we had looked at the DC motor example earlier and guessed some outputs. How did we do that? We were only getting partial feedback linearization: we fixed an output first (x3 it was, not x2) and got relative degree 2, not relative degree 3, so the system was not fully feedback linearizable with that output, and so on, and we kept trying different things. There we had fixed the output in advance. Here, with the knowledge of the Frobenius theorem and the Lie brackets, we are going backwards: we identify what the output y should be so that the system can be fully feedback linearizable. In this case it turns out that this is in fact that output, even if it is unintuitive and may not make any physical sense to us. That is what it is.

There is also a small spacecraft rigid-body example that I have worked out here. I am not going to cover it; I want you to take a look at it on your own. I have asked for the output with respect to which you get full-state feedback linearization, and it turns out (I have actually solved it) that you can take y equal to rho itself, the kinematics parameters; these can be the MRPs, the modified Rodrigues parameters, or quaternions, and I believe this is written in terms of the modified Rodrigues parameters. These are the rigid-body equations you have in the current homework as well. With this output you get the feedback linearized system, under certain conditions of course. I have done the computations, and I will leave you to look at this on your own.
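Continuing the same placeholder system from the sketch above, here is how this backwards procedure can be checked symbolically. The candidate output beta = x1 is specific to my made-up f and g, not to the DC motor: it annihilates the distribution and has full relative degree 3.

```python
# Once the two conditions hold, the output beta must satisfy
# d(beta)·g = 0 and d(beta)·ad_f g = 0.  Verify the candidate beta = x1
# and its relative degree for the placeholder system above.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([x2, -x1 + x3**2, -x3])
g = sp.Matrix([0, 0, 1])

def lie_bracket(a, b, x):
    return b.jacobian(x) * a - a.jacobian(x) * b

def lie_derivative(h, v, x):
    """Lie derivative of scalar h along vector field v: (dh/dx)·v."""
    return (sp.Matrix([h]).jacobian(x) * v)[0]

beta = x1                                  # candidate output
ad1 = lie_bracket(f, g, x)

# Annihilation conditions: both should print 0.
print(sp.simplify(lie_derivative(beta, g, x)))      # d(beta)·g
print(sp.simplify(lie_derivative(beta, ad1, x)))    # d(beta)·ad_f g

# Relative degree: L_g beta = 0, L_g L_f beta = 0, L_g L_f^2 beta != 0.
Lf1 = lie_derivative(beta, f, x)
Lf2 = lie_derivative(Lf1, f, x)
print(lie_derivative(Lf1, g, x))                    # 0
print(lie_derivative(Lf2, g, x))                    # 2*x3, nonzero for x3 != 0
```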
So that brings us to the end of what we want to do with feedback linearization. As the TAs have announced, you have a tutorial, where you will do a little bit more. I have instructed them to bring some interesting problems where you actually compute these Lie brackets, because that is the challenging part. Guessing an output and taking derivatives is easy: you take an output, keep differentiating until the control shows up, that gives you the relative degree, and then you guess the rest of the states. That is still okay. This approach is a little more complicated, but it has more general applicability. So I have asked them to take up some interesting examples; you are also free to bring your own examples and discuss them in the tutorial. Alright, great.

Now what we want to do is pull in a little bit of what I taught in adaptive control, because we are essentially more or less at the end of the standard design methods. There are no more generic standard design methods; everything else is very specific to particular systems. What have you learned until now? Lyapunov redesign: take a Lyapunov function, or a control Lyapunov function, and identify a control by taking its derivative along the system trajectory. That was the first one. Then we went to backstepping, which is about constructing these control Lyapunov functions sequentially. Then we went to passivity-based ideas: if the system has some passivity built in, it has a certain structure, you can come up with a storage function and from it build nice Lyapunov functions, and you also have passive interconnections and things like that. Then we did feedback linearization, which is not based on the Lyapunov method at all; it is a property of the system itself, and it gives you a nonlinear state transformation that makes your system appear linear. That is really the idea. There are no other generic methods; from here on it is more about what kind of problems you are trying to solve.

Adaptive control is one such scenario, and it occurs very commonly in nonlinear control. What is the scenario? You have unknown parameters in the system; these could be mass, inertia and things like that. More recently, as some of you might be aware, neural networks and deep learning have become very, very popular thanks to very good computational facilities, and all a neural network is doing is exactly this: identifying parameters. In typical adaptive control, the way we teach it, we are trying to learn some constant parameters of the system. With neural nets and deep learning you are trying to identify functions, not points or parameters. But there is a very nice classical result which says that any function can be linearly parameterized in terms of standard basis functions; these could be radial basis functions, activation functions and things like that. That is what a neural network does: it writes a function as a linear combination of some standard basis functions, and then all you have to identify is, again, some constant parameters. So you are back to an adaptive control type problem.
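To make the linear parameterization idea concrete, here is a small sketch of my own (not from the lecture): an "unknown" scalar function is written as a linear combination of fixed Gaussian radial basis functions, so the only unknowns left are the constant weights, which is exactly the adaptive-control setting.

```python
# Linear parameterization with radial basis functions: the unknown function
# becomes Phi(x) w, with fixed basis functions Phi and constant weights w.
import numpy as np

# "Unknown" function we pretend not to know, sampled at training points.
f = lambda x: np.sin(3 * x) + 0.5 * x
xs = np.linspace(-2, 2, 200)

# Fixed Gaussian RBFs: phi_i(x) = exp(-(x - c_i)^2 / (2 s^2)).
centers = np.linspace(-2, 2, 15)
s = 0.35
Phi = np.exp(-(xs[:, None] - centers[None, :])**2 / (2 * s**2))

# Here the constant weights are found offline by least squares; an adaptive
# law would instead estimate them online from a tracking error.
w, *_ = np.linalg.lstsq(Phi, f(xs), rcond=None)

print('max approximation error:', np.max(np.abs(Phi @ w - f(xs))))
```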
So you can even use the adaptive control framework in learning, and that connection is actually fairly well understood. Anyway, the applications of adaptive control were significant even before we were doing learning, whenever basic parameters of the system are unknown. Masses and inertias, for instance, are not easy to quantify, especially in big systems like spacecraft and aircraft; or you are losing fuel; or there is some damage to, say, your propellers; or there is a sensing error. All of these factor in as unknown parameters. We again deal with constants here, but this still has a lot of utility. These are scenarios where you cannot adequately model the system: for a 1000 kg or 5000 kg spacecraft you cannot really do rotational testing and all that to get the inertia values, so whatever you have is a guess, and it is better to use something like adaptive control.

Before we do any adaptive control, we need to look at some key results that we use very commonly in it. Of course we use the stability theorems you already know, but we also use a few additional results. These are very powerful, so we need to state them and look at how they are used.

The first is Lemma 1.1, which says the following: if you have a function f that is bounded below and non-increasing (so there is a lower bound, and the function can stay constant or go down, constant, down, but can never go up), then f has a finite limit as t goes to infinity. That is, lim as t goes to infinity of f(t) exists and is finite. This is a rather key result that we constantly invoke in what we call signal-chasing analysis, which we will also look at. There is an exercise here: the lemma says there is a finite limit; the question is what that finite limit might be. I will leave that to you.

The second lemma gives you a way to establish uniform continuity of a function. If you do not know what uniform continuity is, please go read it up. Continuity is pretty simple, you already know it, and there is an epsilon-delta definition for it; similarly for uniform continuity. Roughly, uniform continuity means that the continuity does not depend on the point at which you evaluate it. Typically when you say a function is continuous, you say continuous at a point; in uniform continuity there is no "at a point", it holds wherever you are. If that is still not clear, you should look at the definition of uniform continuity.
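As a quick concrete instance of Lemma 1.1 (my own choice of function, just to see the lemma in action): f(t) = 1 + e^(-t) is non-increasing and bounded below by 1, and indeed it settles to a finite limit, here its infimum, 1.

```python
# Lemma 1.1 in action: a non-increasing function bounded below converges.
import numpy as np

t = np.linspace(0, 20, 5)
f = 1 + np.exp(-t)            # non-increasing, bounded below by 1
for ti, fi in zip(t, f):
    print(f't = {ti:5.1f}   f(t) = {fi:.6f}')   # values decrease toward 1
```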
What I am giving you here is just a sufficiency condition to verify uniform continuity. What is the condition? If the derivative f-dot is in L-infinity, then f is uniformly continuous. And if you remember, I told you that L-infinity is identical to boundedness: a function being in L-infinity means exactly that it is bounded. So if the derivative of your function is bounded, then f is uniformly continuous. This is an easy sufficiency check; otherwise you have to work with the definition, which is typically hard.

Some simple examples. Since the derivative of sin(t) is bounded, I know that sin(t) is uniformly continuous. On the other hand, take sin(t^2). Is it uniformly continuous? What is the derivative of sin(t^2)? It is 2t cos(t^2), which is not bounded, so the test fails. Now, since this is only a sufficiency condition, failing the boundedness test does not by itself tell you anything; but in fact sin(t^2) is not uniformly continuous, because here the continuity genuinely depends on the point t.

When I say the continuity depends on the point, it does not mean the function becomes discontinuous at some point. This comes from the epsilon-delta definition. Continuity at a point says: given an epsilon, there exists a delta such that if the argument is within delta of the point, then the function value is within epsilon. That delta can depend on the point t (epsilon is given; it does not depend on anything). In uniform continuity, the delta does not depend on t. Anyway, go back and look at the definition of uniform continuity; this is just a test. sin(t) satisfies the test; sin(t^2) does not.

There is also a simple example here: take a signal x(t) whose pointwise 2-norm equals 1 for all t. This is just talking about boundedness. It says that the infinity norm of the signal is 1: the infinity norm of a signal is just the supremum over time of a pointwise vector norm (here we take the 2-norm), so the infinity norm of x is sup over t of the 2-norm of x(t), which is 1. Again, this is something we have already covered, just how to compute the infinity norm. It is not invoking the lemma or anything; it is simply saying that x is a bounded signal and is therefore in L-infinity; any bounded signal is in L-infinity. And of course I have also given you this exercise: define uniform continuity, and give the epsilon-delta definition, not some arbitrary definition, just so that you actually read it.
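If you want to see directly, beyond the failed test, why sin(t^2) is not uniformly continuous, here is a small numerical check of the standard argument (my own illustration): you can find pairs of points arbitrarily close together whose function values stay exactly 1 apart, so no single delta can work for, say, epsilon = 1/2.

```python
# Why sin(t^2) fails uniform continuity: the points t_k = sqrt(2*pi*k) and
# s_k = sqrt(2*pi*k + pi/2) get arbitrarily close as k grows, yet
# |sin(s_k^2) - sin(t_k^2)| = 1 at every k.
import numpy as np

for k in [1, 10, 100, 1000]:
    t = np.sqrt(2 * np.pi * k)
    s = np.sqrt(2 * np.pi * k + np.pi / 2)
    gap = s - t                               # shrinks toward 0
    jump = abs(np.sin(s**2) - np.sin(t**2))   # stays equal to 1
    print(f'k={k:5d}  |s-t|={gap:.5f}  |f(s)-f(t)|={jump:.5f}')
```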
Now, unfortunately, everywhere in these notes the name is wrong: it is Barbalat's lemma, not "Barbarat's lemma". I do not know why we made this blunder; there is no rat involved here. Alright, so, Barbalat's lemma. The reason we went through those earlier results is that we wanted to build up to Barbalat's lemma. In some ways it is an analogue, or even an extension, of the Barbashin-Krasovskii-LaSalle theorem. If you remember, LaSalle's invariance principle talks about convergence to a compact set, while the Barbashin-Krasovskii-LaSalle theorem talks about convergence to the origin when V-dot is only negative semi-definite. But everything we did with LaSalle invariance required the system to be time-invariant; we were always dealing with autonomous systems. Barbalat's lemma states an equivalent kind of result, but not necessarily for autonomous systems: you do not have to have an autonomous system. Again, though, it only generalizes the Barbashin-Krasovskii-LaSalle theorem, not LaSalle invariance. LaSalle invariance is completely different and way more general, because it talks about convergence to a compact set; Barbalat's lemma does nothing like that.

So what is Barbalat's lemma saying? It is a convergence result, like I said. Suppose you have a function of time f (it can be scalar- or vector-valued, it does not matter; I have written it as mapping into R) such that f is integrable. What does integrable mean? It means that the integral of f from 0 to infinity exists and is finite. Further, suppose f is uniformly continuous. Then the limit of f(t) as t goes to infinity is 0. As you can see, this is a convergence result. Later on you will see how we use it for states, because as stated it is just about a function; but remember that once you solve the differential equations, the state is also a function of time (and of the initial conditions, but still a function of time once you fix those). So it is a nice convergence result: if the function is integrable and uniformly continuous, then it goes to 0 as t goes to infinity. There is also a nice note which says that for vector-valued functions, the integrability condition has to be satisfied component-wise; that is, each component's integral must have a finite limit. That is it. There is also a simpler version, a corollary. What is the corollary?
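Both hypotheses matter. As a sanity check (my own construction, not from the notes), compare f1(t) = 1/(1 + t^2), which is integrable and uniformly continuous and so must vanish, with an integrable function f2 made of ever-narrower unit-height spikes at the integers: f2 is not uniformly continuous, and it keeps coming back up to 1, so the conclusion fails without that hypothesis.

```python
# Numerical look at the two hypotheses in Barbalat's lemma.
import numpy as np

f1 = lambda t: 1.0 / (1.0 + t**2)

def f2(t):
    # Triangular spike of height 1 and half-width 2^-k centred at each
    # integer k >= 1; total area sum_k 2^-k is finite, so f2 is integrable,
    # but the spikes get arbitrarily steep: f2 is not uniformly continuous.
    k = np.round(t)
    width = 2.0 ** (-k)
    return np.where((k >= 1) & (np.abs(t - k) < width),
                    1.0 - np.abs(t - k) / width, 0.0)

t = np.linspace(0, 30, 300001)
dt = t[1] - t[0]
for f, name in [(f1, 'f1'), (f2, 'f2')]:
    integral = np.sum(f(t)) * dt          # rough estimate; both are finite
    tail = f(np.arange(20, 30)).max()     # f1 tail -> 0, f2 keeps hitting 1
    print(f'{name}: integral ~ {integral:.4f}, sup of tail ~ {tail:.4f}')
```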
The corollary is this: if f is in L-infinity and in Lp for some finite p (not infinity, of course), and furthermore f-dot is in L-infinity, then the limit of f(t) as t goes to infinity is 0. Note why this is a corollary of the previous result. First, f-dot being in L-infinity already implies that the function is uniformly continuous; that gives me the second hypothesis of Barbalat's lemma right away. The other condition looks like an integrability condition. Why? The function being in L-infinity means it is bounded; leave that aside. If the function is in Lp, what do you have? Exactly: the p-signal norm is finite. What is the p-signal norm? It is the integral from 0 to infinity of |f(t)|^p dt, raised to the power 1/p; that is how we defined it. Now, saying this is finite is as good as saying the integral itself is finite: the power 1/p does not matter, because if one goes to infinity, so does the other, and vice versa. So when I say the function is in Lp, I know that the integral from 0 to infinity of |f(t)|^p dt is less than infinity, and this already looks very similar to the integrability hypothesis of Barbalat's lemma; there is no power involved there, of course, but it is a very similar-looking condition. So this is in fact a corollary, and that is what I am asking you to prove: use these conditions to get back to the original hypotheses of Barbalat's lemma and conclude that f goes to 0 as t goes to infinity. So this is Barbalat's lemma; the next step is to see how we can use it. This material is significantly simpler than all the feedback linearization material, so you will follow it rather easily.
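To see the corollary's hypotheses on a concrete signal (again my own example, not from the notes), take f(t) = e^(-t/2) sin(t): it is bounded, square-integrable (so p = 2 works), and its derivative is bounded, and consistently with the corollary it decays to 0.

```python
# Checking the corollary's three hypotheses for f(t) = exp(-t/2) sin(t).
import numpy as np

t = np.linspace(0, 40, 400001)
dt = t[1] - t[0]
f = np.exp(-t / 2) * np.sin(t)
fdot = np.exp(-t / 2) * (np.cos(t) - 0.5 * np.sin(t))

print('||f||_inf    ~', np.abs(f).max())            # finite: f in L-infinity
print('||f||_2      ~', np.sqrt(np.sum(f**2) * dt)) # finite: f in L2 (p = 2)
print("||f'||_inf   ~", np.abs(fdot).max())         # finite: unif. continuous
print('|f| near t=40:', np.abs(f[-1000:]).max())    # conclusion: f -> 0
```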