So, let me take an example; you have seen enough theory, and the theory itself is relatively simple. This is something I have constructed, but it is not far from reality: it is a fairly standard structure that you will typically see in rotating systems. One term would usually appear in the other equation, but I have switched it, because otherwise it becomes a slightly harder example; we can look at the harder one later. The aim is stabilization, which means I want x and omega to remain bounded (in L-infinity) and to go to 0. We can also do tracking, but let us build up to that in steps later. So, how do we start? Let us first identify the states. There is no psi here, by the way; we have the x state and the omega state. Notice that I deliberately made the second stage not look like a pure integrator — that was the purpose of this example. The second stage does not have to be a pure integrator, and as you will see, that is not a big deal. Now, what do we assume for this system? We start with the first subsystem — that is the whole idea of backstepping. For the first subsystem, we assume the existence of a stabilizing feedback k0(x) and the existence of a CLF. So, step one: guess v0(x) and k0(x) for the first subsystem. What is v0(x)? What is k0(x)?
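Since the board is not visible in the transcript, here is a hedged reconstruction of the example system implied by the derivation that follows (treating the spoken "omega cross x" as the scalar product omega times x, since the example is worked in scalars):

```latex
\dot{x} = \omega, \qquad \dot{\omega} = \omega x + u
```

with states x and omega and control u; the aim is to drive both states to zero while keeping them bounded.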
Give me some choices, anything that comes to your mind — you have seen enough Lyapunov candidates and Lyapunov functions by now. Always start with the first subsystem; forget the second. Yes: v0(x) = x^2 — actually, I will take v0(x) = (1/2)x^2 — and I want to test whether this is a CLF. How do we test it? The usual trick: compute v0 dot, which comes out as x times omega. In this case omega is notionally the control, because we do not have the second subsystem in mind yet. So what do we need? Wherever the derivative along the control vector field vanishes, the derivative along the drift vector field has to be negative for all nonzero x. What is the derivative along the control vector field here? Lg v0 = x, and Lg v0 = 0 means x = 0, so there is no other possibility: the condition on Lf v0 is trivially true and there is nothing further to check. Such cases are a little odd to deal with, but they are fine, because essentially you are saying x is 0 exactly where the control vector field term vanishes, and that is the equilibrium anyway — you do not need to go anywhere else. So we are done: this is a valid CLF. What about k0(x)? How do I choose it? What is a good controller for the simple linear system x dot = omega?
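The CLF test above is easy to check by hand, but as a sanity check, here is a minimal symbolic sketch (assuming the first subsystem is x_dot = omega, with omega as the notional control, as the derivation suggests):

```python
# Sketch: verify the CLF test for v0 = x**2 / 2 on the first subsystem,
# assumed to be x_dot = f + g*omega with drift f = 0 and g = 1.
import sympy as sp

x, omega = sp.symbols('x omega', real=True)

v0 = x**2 / 2
f = 0          # drift vector field of the first subsystem
g = 1          # control vector field

Lf_v0 = sp.diff(v0, x) * f     # derivative along the drift: 0
Lg_v0 = sp.diff(v0, x) * g     # derivative along the control direction: x

print(Lg_v0)                   # x -> vanishes only at x = 0, the equilibrium
print(sp.diff(v0, x) * omega)  # v0_dot = x*omega when omega acts as control
```

Since Lg v0 = x vanishes only at the equilibrium, the "Lf v0 < 0 wherever Lg v0 = 0" condition holds trivially, exactly as argued in the lecture.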
You can guess, or you can use the Lyapunov function and do Lyapunov reshaping. You already have v0 dot = x omega, so what should I choose omega as? Minus x — or, to make it a little more general, minus K x; it will work for any positive K. This gives v0 dot = -K x^2, which is negative definite — this is the W(x) we were talking about. Great, step one accomplished. Step two: go to the next subsystem, construct v(x, omega), and find the control. So, what is v(x, omega)? It is v0(x) plus half the squared norm of the error between omega and k0(x), which is omega + K x. That is, v(x, omega) = (1/2) x^2 + (1/2)(omega + K x)^2. It is a scalar, so the norm is just a square. I am not going to plug into the general formulae we derived — the purpose of showing you those was the procedure, so we do not have to go back and remember and implement the formulae; this is easy. Now I need to find the control. Take v dot, as we have been doing: v dot = v0 dot + (omega + K x)(omega dot + K x dot) = x omega + (omega + K x)(omega x + u + K omega), where I have substituted the dynamics for omega dot and x dot. Now I use the same trick as before: in the first term, I write omega in terms of my new variable, omega = (omega + K x) - K x. Then x omega = -K x^2 + x(omega + K x), so v dot = -K x^2 + x(omega + K x) + (omega + K x)(omega x + u + K omega). We did this in the previous steps too. And notice that the nice term already shows up: -K x^2 is the W(x), just as it appeared in the general case.
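The regrouping step above is pure algebra, so it can be checked symbolically. A minimal sketch, again assuming the scalar dynamics x_dot = omega, omega_dot = omega*x + u (my reading of the board):

```python
# Sketch: check that v_dot, computed along the assumed dynamics, matches
# the regrouped form -K*x**2 + x*(omega + K*x) + (omega + K*x)*(...).
import sympy as sp

x, omega, u, K = sp.symbols('x omega u K', real=True)

x_dot = omega
omega_dot = omega*x + u            # assumed second-stage dynamics

v = x**2/2 + (omega + K*x)**2/2    # composite Lyapunov candidate
v_dot = sp.diff(v, x)*x_dot + sp.diff(v, omega)*omega_dot

# Regrouped form from the lecture, after writing omega = (omega + K*x) - K*x:
regrouped = -K*x**2 + x*(omega + K*x) + (omega + K*x)*(omega*x + u + K*omega)

print(sp.simplify(v_dot - regrouped))   # 0 -> the two forms agree
```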
This W(x) shows up immediately; all I did was write that term in terms of the new variables I have created, x and omega + K x — I want everything expressed in those variables. Now the cross term can be combined with the last bracket — no need to take a transpose here, it is too easy — so v dot = -K x^2 + (omega + K x)(u + omega x + K omega + x). I already know this is a CLF; if you want, you can work it out: the term multiplying the control is Lg-bar v, and Lg-bar v = 0 means omega = -K x, so the entire bracket goes away and what remains is negative. All good. Now, what would the control be? Using the Lyapunov reshaping idea, to make v dot negative definite, just as in the general case: u = -omega x - K omega - x - (omega + K x). The first three terms cancel everything in the bracket, and the last adds the stabilizing term. Simple. You can run as many simulations as you like; this controller will always work, and work beautifully — even if you add a disturbance. We have not done the robustness analysis yet, and maybe I will do it a little later once you have seen more of the design, but any Lyapunov design or Lyapunov reshaping, or any of the methods we use, gives you robustness for free. Somebody might tell you that this is a very theoretical design and ask what will happen if there is noise or disturbance; the answer is that it will still perform well.
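To back up the "run as many simulations as you like" claim, here is a minimal closed-loop sketch. The dynamics and the names are my assumptions (scalar system x_dot = omega, omega_dot = omega*x + u, forward-Euler integration):

```python
# Sketch: simulate the backstepping controller
#   u = -omega*x - K*omega - x - (omega + K*x)
# on the assumed scalar system; both states should converge to zero.
def simulate(x0, w0, K=2.0, dt=1e-3, T=10.0):
    x, w = x0, w0
    for _ in range(int(T / dt)):
        u = -w*x - K*w - x - (w + K*x)   # cancellation + stabilizing terms
        x += dt * w                      # x_dot = omega
        w += dt * (w*x + u)              # omega_dot = omega*x + u
    return x, w

xf, wf = simulate(x0=2.0, w0=-1.0)
print(abs(xf) < 1e-3 and abs(wf) < 1e-3)   # prints True: states converge
```

Note that with this u, the omega*x term cancels exactly and the closed loop becomes linear, which is why the convergence is so clean.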
It will give you bounded, well-behaved performance, because robustness is free in Lyapunov analysis, and we will try to understand why. To be precise: this holds for any CLF-based design, not in general. If you prove convergence and stability using LaSalle-invariance-type methods, which allow semidefinite functions, you do not necessarily get guaranteed robustness. But wherever you use a CLF-based design — and what we are doing here is a CLF-based design — you get robustness for free. We will give you assignments in this direction, where you add noise and disturbance to your simulations, and you will see that the performance remains close to ideal; the disturbance is not going to destabilize your system. That is one of the features. In most cases it is also possible to compute how much the disturbance will impact your performance — again, if I find some time we will cover how — and it is possible to control the effect of the disturbance using the control gains. For example, what is the control gain here? Let me mark it. Unlike a typical PID controller, you do not have very obvious PID-type gains; here the gains are a little more entangled, and in fact I would call this, rather than that, the control gain. So how should you read this sort of controller? Let us separate it into feedforward terms and feedback terms. The logic is straightforward: the feedforward terms, the -omega x and -K omega pieces, come about purely due to the dynamics of the system — they are just the dynamics terms.
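The "robustness is free" claim is easy to probe numerically. A sketch under the same assumed dynamics as before, with a bounded disturbance d(t) added to the omega equation:

```python
# Sketch: same closed loop as before, now with a bounded sinusoidal
# disturbance entering the omega equation (omega_dot = omega*x + u + d).
# The states stay bounded and settle near zero rather than diverging.
import math

def simulate(x0, w0, K=2.0, dt=1e-3, T=20.0, dist=0.0):
    x, w = x0, w0
    for k in range(int(T / dt)):
        u = -w*x - K*w - x - (w + K*x)
        d = dist * math.sin(5.0 * k * dt)   # bounded disturbance, |d| <= dist
        x += dt * w
        w += dt * (w*x + u + d)
    return x, w

x_clean = simulate(2.0, -1.0, dist=0.0)
x_dist  = simulate(2.0, -1.0, dist=0.5)
print(x_clean)   # essentially (0, 0)
print(x_dist)    # small residual oscillation, bounded by the disturbance size
```

The disturbed trajectory does not converge exactly to zero, but it remains in a small neighborhood whose size shrinks as the gain K grows — the "control the effect of disturbance using control gains" point from the lecture.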
You are cancelling out the effect of the dynamics; that is why these are called feedforward terms. You predict what the effect of the dynamics will be and cancel it as best you can — obviously you can never do it perfectly, which is exactly why it is called feedforward. The feedback terms — the -x and the -(omega + K x) — are the stabilizing terms, both of them, not just one. It is a little subtle: since there is already a K multiplying x, the -K x part does not strictly need to appear in the control law at all; you can absorb it into the gain, as you can see. So let us not worry about that piece; the rest is the feedback. And you can see a lot of parallels here. This is a nonlinear system — a small nonlinearity, but still a nonlinear system — yet the control you get is actually a PD control: there is a proportional term in x and a derivative term in omega. So if I want to label it, it is PD control, and for most aeromechanical systems PD control works very well. Usually the control for a nonlinear aeromechanical system will come out as PD plus feedforward, and that is exactly what this is. Applied controls folks, or control engineers, will typically use this terminology.
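The decomposition into feedforward and PD feedback can be made explicit. A sketch (the grouping and the names here are mine, for illustration; collecting the -x and -K x terms gives the proportional gain 1 + K):

```python
# Sketch: split u = -w*x - K*w - x - (w + K*x) into feedforward + PD parts.
def control(x, w, K=2.0):
    feedforward  = -w*x - K*w       # cancels the predicted dynamics terms
    proportional = -(1.0 + K)*x     # P term: the -x and -K*x pieces collected
    derivative   = -w               # D term acting on the rate omega
    return feedforward + proportional + derivative

# Same number as the combined backstepping law:
x, w, K = 1.3, -0.7, 2.0
print(abs(control(x, w) - (-w*x - K*w - x - (w + K*x))) < 1e-12)
```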
They will talk about PD, feedforward, feedback, and so on — but the feedback terms are essentially proportional plus derivative, because you feed back x and omega. A lot of people also wonder why plain PID works so much of the time. It is a fair question: why does it work, and why do we do all this nonlinear design if PID works so well? Typically, PID works well when you do not know the feedforward terms very well. By the way, PID really works in the linear domain; it does not work well in the nonlinear domain, and if your excursions are too big it will let you down. So what you do in practice is keep the system in a fairly linear zone, where the feedforward terms have relatively small contributions, and then PID works well. The integrator is what creates the internal model: it removes the steady-state error. The I term is essentially a replacement for the feedforward terms — when you do not know them well, you employ an I term. That is the principle behind why PID works reasonably well. The integrator term is a 1/s, and 1/s is an integrator; this system has an integrator built in, just like most aeromechanical systems. So the I term is an internal model — the very standard idea that if the controller contains an internal model of the system, you can kill the steady-state error. You can prove this; if you have done a good frequency-domain course, there is a proof of it. I do not remember it offhand, but it is not difficult.
So, what does it mean for your controller to have an internal model of the system? It means you know the feedforward terms. If you do not know the feedforward terms very well and you are working in a roughly linear domain, then 1/s is a good internal model — and that is an integrator. Put it in your controller and it works well: it cancels the feedforward effects of the dynamics. The proportional term then does the reaching, and the D term gives you damping — it is what stops your system, in some sense, from having wild oscillations. So that is the idea. This is a way of designing CLFs so that you can come up with controllers, and for aeromechanical systems you will typically find the result has this PD-plus-feedforward structure.
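The internal-model point can be illustrated with a toy experiment. A sketch, on a linearised plant with an unknown constant bias d standing in for the unknown feedforward terms (all gains and names here are my choices, not from the lecture):

```python
# Sketch: on x_dot = w, w_dot = u + d with unknown constant d, PD control
# leaves a steady-state offset (x -> d/kp); adding an I term (a 1/s internal
# model of the constant bias) drives the offset to zero.
def run(kp, kd, ki, d=1.0, dt=1e-3, T=40.0):
    x, w, integ = 1.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        u = -kp*x - kd*w - ki*integ
        integ += dt * x              # the integrator: internal model of d
        x += dt * w
        w += dt * (u + d)
    return x

print(run(kp=4.0, kd=3.0, ki=0.0))   # settles near d/kp = 0.25, not zero
print(run(kp=4.0, kd=3.0, ki=2.0))   # settles near zero: offset removed
```

This is exactly the sense in which the I term substitutes for feedforward knowledge: it reconstructs the unknown bias instead of cancelling it from a model.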