So, the last thing we were looking at was the cascade connection; that was the final topic in passivity, and you will of course have your tutorial, so hopefully that will bring a bit more clarity. What we saw is that if a passive system drives a stable system, the passivity property is retained, and once you have passivity plus zero-state observability, you can construct a stabilizing feedback. We have already seen this. Along with backstepping, this is a very strong method for designing controllers, so it becomes another rather powerful tool in your arsenal. We also saw examples: the robot example, which was the case of feedback passivation, and the rotational dynamics. I tried to explain a little what these MRPs (modified Rodrigues parameters) are, but as I said, we are not so concerned with how the kinematics are derived; this is not a dynamics course, so we are not trying to explain the equations, just to work with them. As you can see, the kinematics subsystem was being driven by omega, so omega can be thought of as a control; omega is of course the angular velocity, and the angular-velocity dynamics together with the output formed the other part. You were supposed to prove that this is in fact a passive system with this input and a suitably modified output. Feedback passivity and passivity are more or less identical here, because once you apply a feedback you get a new control with which the system is passive.
So, you had this passive system with output omega, and this output drives the kinematics, and I asked you to figure out how you can select a controller. In fact, the control structure is obvious for this cascade case: the structure is given right there in the derivation, and that is exactly what we had. All I asked you to do in the exercise was to choose a particular W function. This W is essentially the Lyapunov candidate for the kinematics subsystem, and the motivation for this particular choice comes from the MRP formulation. So do not worry about how I got it; I did not get it magically. The people who came up with the modified Rodrigues parameters and started using them for control design arrived at this W. All I asked you to do was use this W, take the partial derivative, and you get a control. The exercise is posted on Moodle: this problem and the robot problem. These are as real as they get: a robotic manipulator and a rigid-body attitude control problem. You have been given numerical data, the inertia, the link lengths, and so on, and you are asked to complete the exercises, formulate the control, and implement it in code. You are expected to run simulations, and all of you should have different-looking plots, not the same plots; try different initial conditions. I want the plots to look different.
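As a rough illustration of what that attitude simulation can look like in code, here is a minimal sketch. The inertia values, the gains k and c, and the initial conditions are placeholders, not the assignment data, and the law u = -k*rho - c*omega is the standard passivity-motivated choice for the MRP system (the -c*omega term injects the damping that makes the omega-subsystem strictly passive); your exercise's W may lead to a slightly different expression.

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def mrp_kinematics(rho, w):
    # rho_dot = 1/4 * [ (1 - rho.rho) I + 2 [rho x] + 2 rho rho^T ] w
    s = sum(r*r for r in rho)
    rxw = cross(rho, w)
    rdw = sum(ri*wi for ri, wi in zip(rho, w))
    return [0.25*((1 - s)*w[i] + 2*rxw[i] + 2*rho[i]*rdw) for i in range(3)]

def simulate(rho, w, J=(2.0, 3.0, 4.0), k=4.0, c=4.0, dt=1e-3, steps=30000):
    # J is taken diagonal here purely for simplicity of the sketch.
    for _ in range(steps):
        u = [-k*rho[i] - c*w[i] for i in range(3)]      # passivity-based law
        Jw = [J[i]*w[i] for i in range(3)]
        wxJw = cross(w, Jw)
        wdot = [(-wxJw[i] + u[i]) / J[i] for i in range(3)]  # J w' = -w x Jw + u
        rdot = mrp_kinematics(rho, w)
        rho = [rho[i] + dt*rdot[i] for i in range(3)]
        w = [w[i] + dt*wdot[i] for i in range(3)]
    return rho, w
```

Run it from a few different initial conditions; both rho and omega should decay to zero, which is exactly the kind of plot variety the assignment asks for.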
And that should give you enough exposure. In fact, because we have not done any exercise on backstepping, I have also asked you to do a bit of backstepping design, at least for the robotic system. So this gives you exposure to the two key methods of control design we have learned. And really, if nothing else, Antonio's lecture should have impressed upon you that almost every system you can think of is passive; everything is just an energy input-output relation. The only things you need to be careful about are which output to choose, and whether you have to do some feedback passivation, that is, add some term to the control to make the system input-output passive. Passivity almost comes for free, because it does not even rely on dissipation: with no dissipation you still get the "less than or equal to" inequality, and with dissipation you get the strict inequality, hence strict passivity rather than just passivity. But in all the cases we have been working on, and which Khalil's Nonlinear Systems textbook mostly focuses on, it is just passivity. He is not looking for strict passivity, because the strictness can always be introduced by feedback: add a negative definite term and you get your strictness, no problem. So strictness is not a limiting factor. If you can do this assignment well, simulate, and get good plots, I would think you will have learned enough in terms of nonlinear control design. You may have some hiccups here and there, but you will be able to pick up a system and design a controller, because these are two very far-reaching tools.
In fact, backstepping and passivity are very closely related. The idea of backstepping, if you remember, was to construct CLFs: as you keep adding systems, you keep getting newer CLFs. You can use those CLFs as storage functions for passivity, so backstepping also helps you construct storage functions; you might use such a storage function to establish passivity and then do a passivity-based design. So these things can be combined in a very nice way. In general, the control you design with passivity ideas will be simpler than the backstepping one; you will see that in your assignment. By the way, I was reading some reinforcement learning material, a good article that connects reinforcement learning to adaptive control; you should also try to read it. Reinforcement learning is, in some sense, nonlinear control. Most of the learning you will see in courses is offline, because it is really data-intensive. But real-time learning is where you want to go, because if you do offline learning with some data set and then your real conditions do not match that data set, you have a problem. For example, suppose all your data was collected with a manipulator or a quadrotor in the daytime, and you did exceptional learning on all your vision data. Then if you fly the quadrotor even at dusk, not even at night, just when the light is a little different, your learning will fail. So there has to be some online learning as well, and reinforcement learning is feedback-based learning, so it is sort of connected. Anyway, okay, great.
So what we want to do now is go to the last key compulsory topic in this course, which is feedback linearization. Feedback linearization is also a control design method, one of the oldest in fact, and it does not rely on any Lyapunov theory at all. It relies on vector fields, flows, and things like that, so the theory is a bit more involved, and I will spend a little more time on it. But it is a very powerful tool in the sense that once you know how to use it, you can simplify your control design to a large extent. So what is the motivation? Let us look at the pendulum; I keep bringing up the pendulum because it is such a nice example for a lot of situations. This is the dynamics of the pendulum I have drawn here, and in state-space notation it looks like this: a very standard nonlinear system, actually a simple nonlinearity, but still a nonlinear system. Now, if I choose this feedback, what it does is cancel the nonlinearity. You are used to doing this, by the way; we have been doing it inadvertently in everything so far. We cancel the nonlinearity, and what do I get? A double integrator, which is why I kept telling you that most mechanical systems look like, or are, double integrators. For this system it seems rather simple: it becomes a double integrator, a linear system. So what did you do? You used a feedback transformation: you used a feedback to construct a new control input. We did this in passivity, and we did it in backstepping too. But here the purpose of the feedback transformation is to make the system linear.
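To make that concrete, here is a minimal simulation sketch of the pendulum under this cancelling feedback. The parameter values and the PD gains k1, k2 for the new input v are illustrative assumptions, not numbers from the lecture.

```python
import math

# Illustrative pendulum parameters (assumed, not from the lecture)
m, L, grav = 1.0, 1.0, 9.81

def feedback_linearizing_u(x1, v):
    # u = m L^2 * ( (g/L) sin(x1) + v ): cancels the nonlinearity,
    # leaving the double integrator x1'' = v
    return m * L**2 * ((grav / L) * math.sin(x1) + v)

def simulate(x1, x2, k1=4.0, k2=4.0, dt=1e-3, steps=10000):
    for _ in range(steps):
        v = -k1 * x1 - k2 * x2                  # linear law for the double integrator
        u = feedback_linearizing_u(x1, v)
        # True nonlinear plant: x1' = x2, x2' = -(g/L) sin x1 + u/(m L^2)
        x1dot = x2
        x2dot = -(grav / L) * math.sin(x1) + u / (m * L**2)
        x1, x2 = x1 + dt * x1dot, x2 + dt * x2dot
    return x1, x2
```

Because u cancels the sine term exactly, the closed loop is x1'' = -k1 x1 - k2 x2, a stable linear system, and the state converges to the origin from the initial conditions you try.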
I will not use the word "linearize" on its own, because that has a different connotation for you: you think Jacobians, and we are not doing any Jacobian here. We applied a feedback that cancels the nonlinearity, and therefore the outcome is a linear system. What is good about that? You can use linear control principles to drive the system. I can even use frequency-domain methods, specify overshoots, and all the nice things people know from linear control. So you have a much larger, well-understood playground to play in, which is the motivation to look at feedback linearization to begin with. Now, go back and look at a slightly more complicated system, say the robot dynamics. Looking at it, it is not obvious how it is feedback linearizable, but it is not that bad, because you already know that the mass matrix, the inertia matrix, is invertible. So I can apply the inverse everywhere and write q1 dot = q2, and q2 dot = the entire right-hand side, with the inverse. Now I use my control to cancel all of that out: I apply a control containing C(q, q dot) q dot + D q dot + g(q), which cancels everything, and then a new control enters as M times v. Then M inverse times M is the identity, and I just have q double dot = v: again a double integrator. Very simple. In these cases it is simple because things evolve in nice Cartesian coordinates. But there are more complicated examples too; one is something we have seen, the attitude system. This is also feedback linearizable, although it is not evident how.
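The robot case above is the classical computed-torque idea, u = C(q, q')q' + G(q) + M(q)v. A rough sketch follows; the two-link arm terms M, C, G here are hypothetical stand-ins (unit masses and lengths, no friction term D), chosen only so the cancellation structure is visible, not taken from the assignment.

```python
import math

# Hypothetical 2-link planar arm terms; only the structure matters:
# M(q) q'' + C(q, q') q' + G(q) = u
def M(q):
    c2 = math.cos(q[1])
    return [[3 + 2*c2, 1 + c2], [1 + c2, 1.0]]

def C_times_qd(q, qd):
    s2 = math.sin(q[1])
    return [-s2*qd[1]*(2*qd[0] + qd[1]), s2*qd[0]*qd[0]]

def G(q):
    return [9.81*(2*math.cos(q[0]) + math.cos(q[0] + q[1])),
            9.81*math.cos(q[0] + q[1])]

def mat_vec(A, x):
    return [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]

def solve2(A, b):
    # 2x2 linear solve (M(q) above is always invertible: det = 2 - cos^2 >= 1)
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(A[1][1]*b[0] - A[0][1]*b[1])/det,
            (-A[1][0]*b[0] + A[0][0]*b[1])/det]

def computed_torque(q, qd, v):
    # u = C(q,q') q' + G(q) + M(q) v  cancels everything: q'' = v
    cq, gq, Mv = C_times_qd(q, qd), G(q), mat_vec(M(q), v)
    return [cq[i] + gq[i] + Mv[i] for i in range(2)]

def simulate(q, qd, dt=1e-3, steps=8000):
    for _ in range(steps):
        v = [-4.0*q[i] - 4.0*qd[i] for i in range(2)]   # PD law for double integrator
        u = computed_torque(q, qd, v)
        # Plant: q'' = M(q)^{-1} (u - C(q,q') q' - G(q))
        cq, gq = C_times_qd(q, qd), G(q)
        qdd = solve2(M(q), [u[i] - cq[i] - gq[i] for i in range(2)])
        q = [q[i] + dt*qd[i] for i in range(2)]
        qd = [qd[i] + dt*qdd[i] for i in range(2)]
    return q, qd
```

Since the controller uses the same M, C, G as the plant, the cancellation is exact and each joint behaves as q'' = v; model mismatch, which this sketch ignores, is exactly what breaks that in practice.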
Because, you see, here the nonlinearity is in the kinematics. Whatever nonlinearity is in the dynamics is easy to handle: I can always cancel it using the control, and that part becomes linear, no problem. The problem is that the kinematics is nonlinear, so how do you make the entire system look linear? The fact is that this system is also feedback linearizable; the question is in which new states it is linear. So feedback linearization is combined with a state transformation: you typically have both a state transformation and a feedback transformation, not just the application of a feedback. Remember that: although the topic is called feedback linearization, we are also choosing an appropriate state transformation, and these transformations are nonlinear, not simple linear changes of coordinates. Anyway, let us go forward. (On the question from the other lecture:) I think we discussed this a little. It is a dynamical-systems notion, not a control-theory notion, but because this is a mechanical system I can say the following. In the kinematics, rho represents angles; it is actually a representation of orientation, so it is a sort of angular position variable. Anything that relates the angular velocity to the derivative of the angular position is called kinematics. Same with robots: robot kinematics is the relationship between the joint velocities and the end-effector velocity, x dot, y dot, z dot versus q1 dot, q2 dot, q3 dot; you have the Jacobian equation and all that.
If you have taken a robotics course, you have seen that relationship between velocities, and this is exactly the same thing: a relationship between the body angular velocity and the rate of change of the orientation variable. The variable is complicated and nonlinear, because remember that a rotation is represented as a matrix and this is just a parameterization of it, but it is still an orientation variable. So anything that relates orientation rates to body rates is kinematics, because it relates position derivatives; anything that gives you the velocity derivatives, here the angular-velocity derivative, is the dynamics of the system, and that is typically where the forces and moments appear. It is just terminology, nothing more. Great. Now, so that was the motivation: we can actually get rid of the nonlinearities of some systems. More often than not we can only get rid of the nonlinearity partially, in some states and not in all of them, but we want to explore when, how, and how much we can do: that is the whole idea. We start, as usual, with a nice control-affine system, which we are used to: the system is affine in the control. The states are in R^n, f is an n-vector, and g is an n-vector; we already know f is called the drift vector field and g the control vector field. Here I have written it with only one control, a scalar, that is, a single-input system. You can of course generalize it, but that makes the presentation very complicated, so I restrict to the single-input case, and everything is smooth; I have assumed all the nice properties. Notice also that this theory is very painful to present if time appears explicitly.
Time appearing explicitly makes the whole thing very messy, so invariably almost all the feedback linearization theory you will see is for time-invariant, autonomous systems. Actually, a lot of the theory we do is like that; that is why methods like backstepping are powerful: they work regardless of whether the system is time-varying. Excellent. So we are focusing on autonomous, control-affine, single-input systems with n states. Now, let us not worry about this item on the slide; I was just trying to work out what these are, but I am not going to do it now. We already know this notation, but I will repeat it. What is the Lie derivative? Suppose you have a function h which is a map from R^n to R. We are doing everything in Cartesian coordinates; the same things can be done on manifolds, but we are not doing any of that. Everything is smooth, by the way; smoothness is a given. Given such an h and a vector field f, the Lie derivative of h with respect to f, written L_f h, is just the partial of h with respect to x times f: L_f h(x) = (del h / del x) f(x). We already saw this notation when we were talking about Sontag's universal formula; it is pretty standard. And remember, h is a scalar-valued function; this notation, as defined here, works only for scalar-valued functions. (The notation is still used when h is not scalar-valued, but then it means something else.) So L_f h is the Lie derivative of a scalar-valued function with respect to a vector field; f is a vector field.
It is the drift vector field, to be specific; that is how we are using the notation. L_f h(x) = (del h / del x) f(x); that is the formula. And you have already seen, even though we have not talked about controllability, that these Lie derivatives form key terms in the universal controller design, so they are obviously very important quantities. That is L_f h. Then we have what is called the Lie bracket. It is a bilinear bracket operation on vector fields. Again, we are not going into too much detail; you can look at more details if you want. I have taken bits of these notes from Ravi's notes, and there is more material there beyond what I show. He has taken it from Alberto Isidori's book, which is the absolute Bible when it comes to feedback linearization; for this topic it is the final word, a pretty classical book by now. So what is the Lie bracket? Whereas before we had a scalar-valued function and a vector field, here we take two vector fields, and the bracket is simply the operation [f, g](x) = Dg(x) f(x) - Df(x) g(x), where Df and Dg are the Jacobians, just the Jacobians. Why have I used a capital D instead of writing del/del x? I am making a distinction, because earlier h was scalar-valued. What is the dimension of del h / del x when h is scalar-valued and x is an n-vector? Anybody? It is 1 by n; I always look at it as a row vector.
It is your call how you want to look at it: if you see del h / del x as a row vector, then the multiplication (del h / del x) f(x) makes sense directly; otherwise there is a little bit of a problem, or you can read it as an inner product between two vectors. Either way, same deal. So [f, g] is Dg f - Df g, and I use D rather than del/del x because g is a vector-valued function, a vector field, so Dg is an n-by-n matrix. You can see that if you take the Lie bracket of two vector fields, you get another vector field. What is a vector field? It just assigns a vector to every point as a function of the state; that is why it is called a vector field. I think we discussed this: take this space with all these points, and at every point you get a vector. It actually tells you in what directions the system can move. We are not talking about controllability, but it is well understood that if you have the drift vector field f and the control vector field g, then you can also make the system move in the direction of the Lie bracket [f, g]. So you get a new direction to move in. It is almost like saying I have a car that can only do two things: turn right by 90 degrees or go straight. But just by those two operations, I can also realize the Lie bracket motion, which may be some other direction, maybe 35 degrees in between.
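Here is a small numerical sketch of both operations. Finite differences stand in for the symbolic derivatives purely for illustration, and the pendulum-style f, g, h at the bottom (with unit constants) are example choices, not the lecture's exact system.

```python
import math

EPS = 1e-6

def jacobian(F, x):
    # Central-difference Jacobian of a vector-valued F at x
    n = len(x)
    m = len(F(x))
    J = [[0.0]*n for _ in range(m)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += EPS
        xm[j] -= EPS
        Fp, Fm = F(xp), F(xm)
        for i in range(m):
            J[i][j] = (Fp[i] - Fm[i]) / (2*EPS)
    return J

def lie_derivative(h, f, x):
    # L_f h(x) = (dh/dx) f(x): row vector times vector field
    grad = jacobian(lambda z: [h(z)], x)[0]
    return sum(gj*fj for gj, fj in zip(grad, f(x)))

def lie_bracket(f, g, x):
    # [f, g](x) = Dg(x) f(x) - Df(x) g(x): a new vector field
    Df, Dg = jacobian(f, x), jacobian(g, x)
    fx, gx = f(x), g(x)
    n = len(x)
    return [sum(Dg[i][j]*fx[j] - Df[i][j]*gx[j] for j in range(n))
            for i in range(n)]

f = lambda x: [x[1], -math.sin(x[0])]   # drift (pendulum-like, unit constants)
g = lambda x: [0.0, 1.0]                # control vector field
h = lambda x: x[0]                      # scalar output
x0 = [0.3, 0.5]
```

For these choices, L_f h(x0) = x2 = 0.5, and since Dg = 0 the bracket reduces to -Df g, giving [f, g] = (-1, 0) at every point: a constant new direction generated from f and g.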
So it is actually telling you that you can move in directions beyond the drift and control vector field directions. If you look at the expression, intuitively it seems you cannot: u is just a scalar multiplying g, so it looks like you only move in the span of f and g, in the f direction and the g direction. But it can be shown that you can also move in the [f, g] direction, the Lie bracket direction. Again, this is not something we are studying right now; let us not worry about it. But that is why brackets are important: they give you new directions to move in, and that is what determines controllability of a system. If you can move in as many directions as you have dimensions, say all three directions in three dimensions, at every point, you are done: you have controllability. So: the Lie derivative is critical whenever we look at potential functions and their derivatives along a dynamical system, and the Lie bracket matters because it describes new directions in which your control system can move. These two objects form the basis of all of feedback linearization. Like I said, I take these notes with permission. Here is the system again; he has specified some open set and so on, but let us not worry about that. Now take some output of the system; you can see I have repeatedly used the same notation for the functions, and I denote the output as h(x). What is the derivative of the output? The total derivative y dot is the partial of h with respect to x times the dynamics x dot, which is f(x) + g(x)u.
So if I actually write it out: y dot = (del h / del x)(f(x) + g(x)u), and the first term is L_f h, the way I have defined it, while the second is L_g h times u. So the first derivative is y dot = L_f h + (L_g h) u. You need to get used to this notation; it is pretty simple. Now let us see what happens if I take the second derivative. It will again be the partial of the previous expression with respect to x, times f(x) + g(x)u, and (when L_g h is identically zero, as in the example below) that is the same as saying y double dot = L_f^2 h + (L_g L_f h) u. You can work it out; it is just a partial with respect to x followed by a multiplication, so it is straightforward. If I keep doing this on and on, I arrive at one of the assumptions we typically make: the relative degree of the system. The relative degree r is defined exactly by this: L_g L_f^k h(x) = 0 for all k from 0 to r - 2, and L_g L_f^(r-1) h(x) is non-zero. It is nothing complicated: it just tells you how many times you have to differentiate the output before the input appears in the equation. That is why I have taken this pendulum example. Suppose I take my output y = h(x) = x1, the first state. What is L_f h? First I take the partial with respect to x, which is the row vector (1, 0); then I multiply it by f. What is f? The drift vector field, this part.
The part without the control is f; the part with the control is g. So I multiply (1, 0) by f = (x2, -(g/L) sin x1), and the product is just x2. Is that clear? This has to make sense to you; it is very simple, and if it does not make sense you will have trouble later. Now, when I took y and differentiated, I did not compute L_g h, so let us compute it too. Same thing: (1, 0) multiplied by g, and g, the control vector field, is (0, 1/(mL^2)). The product is 0. So what does this mean? It means that y dot = x1 dot = x2, the same x2 that appears there. All these Lie derivative expressions are just a way to express the successive total derivatives of the output, and the control does not appear in the first derivative because L_g h = 0. Now take the next derivative, the second derivative: y double dot = x2 dot = -(g/L) sin x1 + (1/(mL^2)) u. So the control appears in the second derivative of the output: the relative degree is 2. The same thing shows up in the Lie derivatives. Take L_f^2 h: you treat L_f h as the new "h" and apply L_f again. The partial of L_f h = x2 with respect to x is (0, 1), not (1, 0), and multiplying by the same f again I get the second component, which is exactly this term. Then what about L_g L_f h(x)? Again take L_f h; its partial with respect to x is (0, 1) again.
Multiply now by g, the control vector field, and it gives me 1/(mL^2), which is exactly the coefficient of u in the second derivative. Again, all of this, L_f h, L_g L_f h, is notation to express the total derivatives. Although right now it might seem to make your life more complicated, it is actually supposed to make your life easy when you are doing calculations: when you take recursive derivatives, you cannot keep writing x1 dot, x1 double dot. I just did a special case; how would you write the general case? You need this notation; that is the whole idea. What was the condition for relative degree r? That L_g L_f^k h = 0 for k from 0 to r - 2, and that is the case for our pendulum example. We have relative degree 2, so r - 2 = 0, and what have we shown? That L_g h = 0, which is why the control is absent from the first derivative. We also need L_g L_f^(r-1) h, with r - 1 = 1 in our case, to be non-zero, and we saw that: L_g L_f h = 1/(mL^2), which is non-zero. And as soon as L_g L_f h is non-zero, the control shows up: the second derivative contains (L_g L_f h) u, and because that term is non-zero, the control appears in the second derivative. These L_g L_f, L_g L_f^2 terms will keep appearing now. This is where the notation gets a little heavier, but the idea is simply to express several rounds of differentiation, which you cannot express in any nicer way, because the dynamics comes in at every derivative.
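As a sanity check, the relative-degree condition can be tested numerically: keep forming L_f^k h and stop at the first k where L_g L_f^k h is non-zero. This is only a sketch: finite differences stand in for the symbolic Lie derivatives, and the pendulum is normalized to m = L = 1 with unit gravity, an assumption made purely for simplicity.

```python
import math

EPS = 1e-5

def grad(h, x):
    # Central-difference gradient of a scalar-valued h at x
    out = []
    for j in range(len(x)):
        xp, xm = list(x), list(x)
        xp[j] += EPS
        xm[j] -= EPS
        out.append((h(xp) - h(xm)) / (2*EPS))
    return out

def lie(h, f):
    # Returns the function x -> L_f h(x) = (dh/dx) f(x)
    return lambda x: sum(gj*fj for gj, fj in zip(grad(h, x), f(x)))

def relative_degree(h, f, g, x, rmax=5, tol=1e-3):
    # Smallest r such that L_g L_f^(r-1) h(x) != 0;
    # L_g L_f^k h(x) = 0 must hold for k = 0, ..., r-2
    Lfk_h = h
    for r in range(1, rmax + 1):
        if abs(lie(Lfk_h, g)(x)) > tol:
            return r
        Lfk_h = lie(Lfk_h, f)
    return None

# Normalized pendulum (m = L = 1, unit gravity):
f = lambda x: [x[1], -math.sin(x[0])]   # drift vector field
g = lambda x: [0.0, 1.0]                # control vector field
h = lambda x: x[0]                      # output y = x1
```

For this output it reports L_g h = 0 at the first step and a non-zero L_g L_f h at the second, hence relative degree 2, matching the calculation done by hand above.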