So we've already started a little bit of the design, and we'll continue with the design ideas. You will also have a tutorial session over the weekend, where you will cover backstepping and more examples with control Lyapunov functions; I'm sure there is still a little bit of uncertainty about how to do these things. Coming back to what we were doing last time: backstepping. The idea is that it's a very nice way of designing CLFs stage-wise. So suppose you start with a first-order system, and you know that you have a CLF and a stabilizing controller for it. Then if you add an integrator to that system, you can extend the existing control Lyapunov function to a control Lyapunov function for the new system, okay? And that's what we did. We realized that if you could make the state exactly equal to the desired feedback, everything works nicely. But since we cannot do that, we do the next best thing, which is to try to drive the error variable to zero. I also emphasized, and please keep this in mind, that this is different from the tracking problem. In tracking, you usually have a function of time, some trajectory you have designed, which is a function of time, not a function of the state. So this is not the trajectory tracking problem; if on top of this you want to solve a trajectory tracking problem, there will be some additional terms. Anyway, we have not gone to the tracking problem yet. So we essentially proved that this new function is a nice CLF. In fact, it's quadratic, and quadratic Lyapunov functions are the sort we are very comfortable with, right?
So we essentially added a quadratic term in this error, okay? Now, we also looked at an example, an actual nonlinear system, which doesn't look like a pure integrator. The important thing to remember is that even though it doesn't look like a pure integrator, we were still able to use backstepping. So adding exactly a pure integrator is not sacrosanct; the structure can be slightly different. The key requirement is that the control be able to cancel those extra nonlinear terms out, okay? That's the idea. So what did we do? One of the big concerns was how you even start with a Lyapunov function, or a CLF, for the first stage. You take the simple case: we took half x squared, and we saw that if you take k0 = -kx, you are good to go, a stabilizing control. Then we used this k0 and the second state (omega, in this case) to construct a new variable, the error between the state and the desired value of the state. And that gave us the control Lyapunov function. Once we had that, we just kept taking derivatives until the control showed up. And once the control shows up, we already know that this V is a CLF; we don't have to verify it again for every particular case. At this stage you could use the universal controller, which, like I said, is a more complicated formula, which is why you might want to do something different. So we do a Lyapunov-reshaping kind of thing: you choose u so that the whole expression becomes negative definite, okay?
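Just to make the stage-wise construction concrete, here is a small SymPy sketch (not from the lecture; the toy system x-dot = sin(x) + xi, xi-dot = u is my own choice for illustration). The first-stage virtual control k0 cancels the drift and adds the stabilizing term; the extended CLF is V0 plus half the squared backstepping error, and choosing u to cancel the cross terms makes V-dot negative definite:

```python
import sympy as sp

x, xi = sp.symbols('x xi', real=True)

# Hypothetical first stage: x_dot = a(x) + xi, with a(x) = sin(x)
a = sp.sin(x)
k0 = -x - a                       # virtual control stabilizing stage 1
z = xi - k0                       # backstepping error variable
V = sp.Rational(1, 2)*x**2 + sp.Rational(1, 2)*z**2   # extended CLF

x_dot = a + xi
# Choose u to cancel the cross term and the k0-derivative term,
# then add the "good" stabilizing term -z:
u = -x - z + sp.diff(k0, x)*x_dot
xi_dot = u

V_dot = sp.simplify(sp.diff(V, x)*x_dot + sp.diff(V, xi)*xi_dot)
print(V_dot)   # should equal -x**2 - z**2, i.e. negative definite
```

The same pattern works for any smooth drift a(x); only the cancellation terms in u change.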
We took a few guesses, cancelled those terms, and then introduced the good stabilizing term, right? We also saw the physical meaning of this, how control practitioners would describe these terms in the controller: it's actually a PD controller with a feedforward term, a PD-plus-feedforward controller, very standard. Even in the SysCon department, when you do experiments with the 2-DOF setup and so on, the typical controller you experiment with is a feedforward or a PD-plus-I controller. And you know that, in the linear case, the purpose of the integral term is to do the job of the feedforward terms, okay? So we discussed this. Now, I want to go back a little, this is from the control Lyapunov function lectures, by the way, and revisit this example: finding a CLF for a very simple double integrator system. A lot of mechanical systems have this structure, or can be reduced to it, so this is quite relevant. We tried to construct a control Lyapunov function initially, if you remember; I'm just reminding you what we did. The expression in blue was what we first wrote down and wanted to check as a CLF; we took a derivative and ended up with this much. Here the control-vector-field term, the b(x) term, was x2, and the drift term was x1 x2. So this is the Lf1 V and this is the Lf0 V, okay? So what do we want? We want that whenever the control term is zero, the drift term is negative for all nonzero states. But if the control term is zero, then x2 is zero, which means the drift term also turns out to be zero. So this was not a good CLF, right?
There was an issue, and then it almost seemed like I arbitrarily gave a CLF and said, this is a CLF, let's try it. Of course we verified it, but I didn't tell you any motivation for how I came up with it. We took the derivatives again; the b(x), that is, the control-vector-field term, came out to be this expression, and the drift-vector-field term came out to be this one. Now, if the control term was zero, it meant that x2 = -x1, or x1 = -x2, whichever way you want to write it. And then the drift term became -x2 squared, which is negative whenever you have nonzero states, okay? So this was a valid CLF. Of course, I also gave you another trick: whenever your Lyapunov function doesn't turn out to be a CLF, you can always try adding mixed terms. That also worked out; it was also a valid CLF, no problem. But I don't want to focus on that one; I want to focus on this one. Does it remind you of something now? How did I get this? Because I knew backstepping beforehand, so I had some additional information over you, which is why I knew this would work. So simply go back and take this example. I want to construct a CLF for this system. What do I do? As usual, I focus on the first equation. I will say that my x2-desired, what you have been calling k0(x), is equal to what? What is the desired x2? I would like -k x1. Let me just take -x1, that is, keep k equal to one, just to make my life easier. Technically, you can choose any k, but I say I want k to be one, okay?
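As a quick sanity check of the CLF claim just made (a sketch, not from the lecture), SymPy can verify it directly for the double integrator x1-dot = x2, x2-dot = u with V = (1/2)x1^2 + (1/2)(x1 + x2)^2: on the set where the control term LgV vanishes, the drift term LfV reduces to -x1^2, which is negative for nonzero states.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# Candidate CLF for the double integrator x1' = x2, x2' = u
V = sp.Rational(1, 2)*x1**2 + sp.Rational(1, 2)*(x1 + x2)**2

LfV = sp.diff(V, x1)*x2     # drift term (f0 = [x2, 0])
LgV = sp.diff(V, x2)        # control-vector-field term (f1 = [0, 1])

# LgV = 0 forces x2 = -x1; substitute into the drift term
LfV_on_set = sp.simplify(LfV.subs(x2, -x1))
print(LgV, LfV_on_set)      # x1 + x2, and -x1**2 on the set LgV = 0
```

Since LfV is strictly negative on {LgV = 0} away from the origin, V is a valid CLF, exactly as argued above.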
So then what is my error term, and what is V0? Just half x1 squared, right? Nothing too fancy, this is what we've been doing. Because I know that if I take V0-dot, it's x1 times x1-dot, which is x1 x2. And if I could substitute x2 = -x1, I would get -x1 squared, good to go, no problem. But of course, I cannot make x2 exactly equal to -x1. So I use the backstepping idea: I create an error. So what is my V for the entire system? Exactly. And because I've already proved it, I don't have to do any further work: this is a valid control Lyapunov function, okay? And what is this? This is exactly that earlier function, the one we just saw: half x1 squared plus half (x1 + x2) squared. So the motivation for writing that CLF was exactly backstepping, because I knew it would work for this particular system. It's actually rather powerful; you can play these kinds of games for a lot of systems. I'm not going to go into further examples right now; you will see a few more in the tutorial, hopefully, which is planned for the weekend. What I will do is go to the next design method, and whatever examples we find there, we'll try to solve a few; we'll also try to do the same with backstepping, so that you have a comparison point, because the ideas are connected. There is an integrator idea here also, okay? So the next module is passivity-based design: we want to use passivity for control design.
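To close the loop on this example, here is a short numerical sketch (my own, with k = 1 as in the lecture). Carrying the backstepping design through, with z = x2 + x1, the cancelling-plus-stabilizing choice works out to u = -2*x1 - 2*x2, and simulating the closed loop shows the state driven to the origin:

```python
import numpy as np

# Backstepping controller for x1' = x2, x2' = u with k = 1:
# z = x2 + x1, and choosing u = -x1 - x2 - z gives u = -2*x1 - 2*x2,
# which makes V = x1**2/2 + z**2/2 decrease as -x1**2 - z**2.
def u(x1, x2):
    return -2.0*x1 - 2.0*x2

x = np.array([1.0, -0.5])       # arbitrary initial condition
dt = 1e-3
for _ in range(20000):          # 20 s of forward-Euler integration
    x1, x2 = x
    x = x + dt*np.array([x2, u(x1, x2)])

print(np.linalg.norm(x))        # should be essentially zero
```

The closed loop is x1-dot = x2, x2-dot = -2 x1 - 2 x2, whose eigenvalues sit at -1 ± i, so the convergence seen numerically matches the Lyapunov argument.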
So a lot of these ideas came about because of aeromechanical systems, to be honest. Traditionally, especially in India, you'll find control engineers in electrical engineering, and maybe chemical engineering, process control and so on. More recently, in aerospace and mechanical engineering programs you find controls folks, but that was not the case that far back. Of course, aerospace always had guidance, navigation and control. But when I was doing my undergrad in mechanical engineering, control was almost negligible, honestly speaking; at least there were no researchers in the area, though there was of course the standard frequency-domain course. What I'm saying is that a lot of these methods, which are by now classical, came about from mechanical-system ideas; the motivation is aeromechanical systems. Later on, people checked whether the required conditions are also satisfied by electrical and biological systems, and then these methods were applied there too. So it's interesting that we aeromechanical folks somehow lost contact with controls for a while; but anyway, it's back, so we are fine, I think. All right, so until now, you've seen two methods of design. One is the CLF method. I call this a separate method because you've already seen that once you have a CLF, you can do Lyapunov reshaping, that is, take derivatives, you'll get a control term, and you choose the control so that V-dot becomes negative definite. This is a pretty good method in itself if you can already guess a CLF. Then you have the backstepping method, which is again a CLF idea, but extended to higher-order systems, okay? Again, I did not mention this, but you can imagine that it's not difficult, and the reference is the KKK book; you know which one, right?
That is the Kristic-Kanalokopolis-Kokotovic book on adaptive control, yeah, yeah, not easy, yeah? I practiced several years, okay? So the KKK book is a reference. You can go look at it, yeah? This method can easily be extended for, let's see, systems of this kind. I'm going to say FX1 plus GX2, X2 dot is F1, sorry, G1. F2 X plus G2 X3, X3 dot is F3 X1, X2, X3 plus. Or if you want to make it, you know, I'm sorry, if you want to make it simpler, I mean, this will also work, but yeah? It's actually, it'll work for several stages, right? I did this two-stage thing, but you can see that I can, I could have added a third stage and added another term to the error, yeah? There'll be a third term with X3 plus something, yeah? I can go on doing this forever, yeah? It'll look very, very complicated, of course. I'm not saying it's going to look simple, but in reality, again, in the typical aeromechanical system context, we are working with what? Atmos 6th order system, yeah? So it's not that far, that difficult, yeah? You can actually do this by hand. But these kind of systems are called, I mean, these are triangular form, yeah? Or I mean, I think there is also strict feedback form, yeah? These are called triangular form, or strict feedback form systems and so on. Why? Because you can see what's happening, right? These drifts are depending only on the previous states, right? And sort of the additional terms are depending on the next state, yeah? This is like this, this is controlled by this guy, this is controlled by this guy, this is controlled by this guy, and up and up. So backwards, you keep designing, yeah? In reality, it doesn't look like this is controlling, because these are states, but that's how we've done back-stepping, right? We've created a desired X2, then created an error, desired X3, create error, desired X4, create an error. And you can do this, yeah? 
So the KKK book actually has a proper structure for how the design will look, very messy looking, but it'll work, yeah? Okay, great. So, passivity-based control. We've already seen that if you have a chain of integrators, you can do backstepping: backstepping gives you a way of constructing a CLF for a chain of integrators. Very powerful; once you have a CLF, you can do so many things. Now we're going to look at slightly nicer systems, systems having some intrinsic good property. Until now we have not assumed anything but the strict-feedback structure, which is not such a stringent assumption; honestly, most systems you can realistically think of have this kind of structure, and if they don't, then life is really hard for you. Maybe it's not a linear strict-feedback form; typically what you will see is some nonlinear term pre-multiplying the next state, and that would be the complication, but otherwise you'll have some strict-feedback form, which is still doable, workable. But for passivity, we need a bit more of an assumption on the system's intrinsic properties, okay? So we are going to define passivity first. Consider an input-output dynamic system: now not just states and control, but also an output, because passivity requires there to be an output. So x-dot = f(x, u). Again, we are not assuming explicit dependence on time; things become way more complicated with that.
So this is just x-dot = f(x, u), and there's an output y = h(x), yeah? Notice that we are assuming that the output and input have the same dimension; this is also a requirement, otherwise it's difficult. There may be more generic versions, but this is the established one: input and output of the same dimension, typically less than the number of states. Typically your number of actuators will be fewer than the number of states; it would be unusual to have more, and those are over-actuated systems. The standard assumptions are that f is locally Lipschitz and h is continuous. Now, this system is called passive if there exists a C1 storage function V(x), positive semi-definite, such that V-dot, defined as always as the derivative of V along the dynamics, satisfies V-dot less than or equal to u-transpose y, the inner product of u and y, okay? It's a sort of weird-looking definition, and we will try to see what a physical interpretation for it may be. But what it says is that you have a storage function, which is a Lyapunov-like function; we are not saying V is positive definite, only semi-definite, so it's not a Lyapunov candidate, but it's Lyapunov-like, and a C1 function, of course. And if you take the derivative of V along the system trajectories, it is upper-bounded by the inner product of the input and the output. Notice that the input appears on both sides. So this is, as you can see, a very intrinsic system property; it has nothing to do with how you choose the control, or strict-feedback form, or anything like that, okay?
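A standard mechanical example makes the definition less weird-looking (a sketch, not from the lecture): a mass-spring-damper with force input u and velocity output y = v. Taking the stored mechanical energy as the storage function, V-dot equals u*y minus the damper dissipation, so the passivity inequality V-dot <= u*y holds with slack c*v^2:

```python
import sympy as sp

x, v, u = sp.symbols('x v u', real=True)
m, k, c = sp.symbols('m k c', positive=True)

# Mass-spring-damper: x' = v, m v' = -k x - c v + u, output y = v
V = k*x**2/2 + m*v**2/2          # stored energy as storage function
x_dot = v
v_dot = (-k*x - c*v + u)/m
y = v

V_dot = sp.diff(V, x)*x_dot + sp.diff(V, v)*v_dot
gap = sp.simplify(u*y - V_dot)   # slack in the passivity inequality
print(gap)                       # c*v**2 >= 0, so V_dot <= u*y
```

The slack c*v^2 is exactly the power dissipated in the damper, which is the physical interpretation hinted at above: the stored energy can grow no faster than the power u*y supplied through the port.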
But a lot of mechanical systems possess this property, which is the cool thing; we'll see that when we look at examples, so let's not worry about it now. Okay, great. Passivity by itself is not enough for us to get stabilizing controllers. We also need another property, called zero-state observability. This is very much like the observability you know from linear systems; the definition itself is just a generalization to nonlinear systems, not a big deal. What is it? The system is called zero-state observable if no solution of x-dot = f(x, 0) can stay in the set {h(x) = 0} other than the trivial solution, okay? For now, I'm just reading the definition. What did I say? You set the control to zero; forget the control, because typically in observability, even in linear systems, control plays no role. If the system x-dot = Ax, y = Cx is observable, then x-dot = Ax + Bu, y = Cx is also observable. I hope you know this: your observability matrix is built from C, CA, CA-squared, and so on; it doesn't have B anywhere, so B is irrelevant here. Same in the nonlinear case: you remove the control, because the control plays no role. So what is the point we're trying to make? How do you define observability for a linear system? Anybody? The definition, not the condition; the condition is the observability-matrix rank condition. [A student answers.] No, that is again one very special situation: if all your states are measured, then obviously the system is observable. Typically your observations, your measurements, are fewer than the number of states, right?
Pretty obvious, right? Take the simplest example you can, a drone: the states are position, velocity, angular position, angular velocity. What is your observation? You have the three angular velocities from a gyroscope, and maybe the three linear velocities; you typically don't have position measurements, or at least not good ones, okay? So the measurements are fewer than the number of states. Now, the way all these observability definitions are stated, the idea is that you can reconstruct the state from the observations. That's the whole question: can you reconstruct the state from the observations or not? You then try to formalize it in many different ways, but the basic idea is: reconstruct the state from the observations. And what do you mean by reconstructing the state? For most systems governed by ODEs, all you need is the initial condition; once I give you the initial condition, the entire state trajectory is reconstructed. Again, we are talking theory here; if there is noise, obviously there is filtering and so on, but that's a different matter. We're talking about the theory: if it works in theory, the practical case will also work, with some perturbations, some oscillations. The point is you just have to reconstruct the initial condition. Given a set of observations, can you reconstruct the initial condition? That's the question you ask, okay? Zero-state observability is very similar. Here you look at the set where h(x) = 0, all the states where the output is zero, and you say that no solution of x-dot = f(x, 0) will stay in this set, except the equilibrium, the zero trajectory, okay?
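Continuing the mass-spring-damper sketch from before (again my own illustration, with velocity output y = v), zero-state observability is easy to check by hand: if the unforced output stays at zero, then v = 0 and v-dot = 0, and the dynamics then force x = 0, so only the trivial solution remains in {h(x) = 0}:

```python
import sympy as sp

x, v = sp.symbols('x v', real=True)
m, k, c = sp.symbols('m k c', positive=True)

# Unforced dynamics: x' = v, m v' = -k x - c v, output y = v.
# Staying in the set y = 0 means v = 0 AND v' = 0 along the solution.
v_dot = (-k*x - c*v)/m
residual = v_dot.subs(v, 0)           # v' on the set v = 0: -k*x/m
sol = sp.solve(sp.Eq(residual, 0), x) # which x keeps the solution there?
print(sol)                            # [0] -- only the zero state
```

So the only solution that stays in the zero-output set is the trivial one, which is exactly the zero-state observability condition stated above.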