We are more or less at the end of the analysis part of non-linear systems that we wanted to cover. Again, I have not spoken about methods like Barbalat's lemma, which is a method comparable to LaSalle invariance, or rather to the Barbashin-Krasovskii-LaSalle family of results; we have of course not gone there. There are many more things out there, but this is what we wanted to do, okay. Now, anybody who is interested in doing more in-depth analysis of non-linear systems: there is a very nice NPTEL course run by Arun Mahindrakar and Ramkrishna Pasumarthy at IIT Madras. They have a nice NPTEL course on non-linear systems analysis, and they cover analysis in significantly more detail, with things like the Poincaré-Bendixson theorem and a lot more depth in terms of non-linear systems analysis, I would say. I have skipped some of those in the interest of moving towards the design part of things, okay. So in order to start control design for non-linear systems, which is what we want to do now and henceforth, the first thing we need to understand, which again seems a little bit theoretical but is actually quite useful and which we will start to use rather soon, is control Lyapunov functions, okay. Until now you have been seeing Lyapunov functions which are purely a function of the state, right. But I hope you understand that as control engineers we are not just looking at dynamics which are purely functions of the state; we are interested in manipulating the dynamics, right. I mean, if somebody who is probably not in the field asks me what I do, I just tell them I manipulate differential equations. That is really what we do: we have a control term, we manipulate differential equations and try to get a particular behaviour that we are interested in, okay. Now this differential equation could be a model for anything, yeah.
And there are so many things that folks in control are now working on, so the possible playground is huge, okay. So now we start moving into functions of both state and control, okay. And then we see what properties these functions have to satisfy so that they are good, you know, so that they help you design good controls, okay. You might think that we design controllers by just looking at a system and guessing something; no. At least folks who do non-linear control first design a control Lyapunov function, and then they use it and try to make its derivative negative definite using the control term that appears in that derivative, okay. That is the idea; that is how we typically approach control design problems, yeah. This is typically called Lyapunov redesign: we use the Lyapunov function to design a control, not the other way around. I am not designing a control first. Again, for linear systems it is too easy: the control is just minus Kx, you have to choose a K. But for non-linear systems such simple structures will not work, so there is no way you will be able to guess a controller very easily. So instead you actually guess a Lyapunov function, potentially the energy of the system or an energy-like term, and then you try to design a controller by trying to make V dot negative, okay, and control Lyapunov functions are what give you the basis for choosing such a V, okay. So we are usually looking at systems of this form; again, just to be very precise, because this was a sort of text-like material that we were writing, you see all the arguments being presented here, but I would write it as x dot equals f(x, u), yeah. So the only thing worth noting here is that the control is a pure function of the state.
We assume that the control is purely a function of the state and not of time explicitly; time does not appear explicitly here, okay. So this is important, and of course everything lives in its own domain: the state is in R^n, the control is in R^m, and the vector field is accordingly defined, okay. We of course assume existence of solutions and so on, okay. All right, one of the typical assumptions that we also make is that there exists a continuous feedback which guarantees that the origin is asymptotically stable, okay. We do not use it immediately, so let us not worry about it too much for now; we are only assuming the existence of a stabilizing feedback. Actually we do not use it much at all in our discussion. This is for something later, because in our discussion we will be talking about when such an alpha exists; that is the aim here, okay. So first things first: what is a control Lyapunov function? This function V is still a function of purely the state, by the way, okay. This is not a time-varying system; just like the LaSalle invariance discussion, we are still not working with a time-varying system. It is purely state dependent; time does not appear explicitly on the right-hand side, okay, great. So we assume the function V, which maps the state to a real number and is of class C-infinity, so we are already assuming it is a smooth function, is said to be a control Lyapunov function for this system if these assumptions are satisfied, okay. So remember, as always, V has no connection to the dynamics at all; V is just a function of x, yeah, it could be anything, sin x, cos x, sin squared, whatever, it just has to satisfy some assumptions, okay. So until now, just like in your Lyapunov theorem statements, V does not have any connection to the dynamics at all, right.
But we start by saying that it is positive definite, okay. If you see, the first condition is just the positive definiteness assumption, the locally positive definite function, LPDF, or what we have been calling a positive definite function; and in fact the first condition is what makes it a candidate Lyapunov function, okay. So even for a control Lyapunov function, the first requirement is that it is a candidate Lyapunov function, okay. The second requirement is the interesting one. It basically says that the infimum over all controls of V dot, and this quantity is actually V dot, by the way, the way we have defined it, the partial of V with respect to x times x dot, has to be strictly negative for all x in B_r with x not equal to 0, okay. This pretty much looks like the negative definiteness of V dot, right. So as such, no big difference; the difference is very subtle. What is this subtle difference? I am not saying that V dot is negative definite without the control or something like that. I am saying I should be able to choose a u, okay, such that V dot becomes negative for every x, okay. So remember, this is very interesting: I am not necessarily defining a smooth control or even a continuous control, I am just finding some points here, okay.
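In symbols, the two conditions read roughly as follows (a sketch; I am writing B_r for the ball around the origin on which everything is defined, as in the discussion):

```latex
% Condition 1: candidate Lyapunov function (positive definiteness)
V(0) = 0, \qquad V(x) > 0 \quad \forall\, x \in B_r \setminus \{0\},
% Condition 2: control Lyapunov function
\inf_{u \in \mathbb{R}^m} \; \frac{\partial V}{\partial x}(x)\, f(x, u) < 0 \quad \forall\, x \in B_r \setminus \{0\}.
```

Note that in the second condition x is held fixed and only u varies inside the infimum.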
Look at this: the only thing I am minimizing, or finding the infimum over, is the control, okay, and here x is fixed. I fix an x when I compute this quantity, some x in this bounded domain, all right, and I compute this entire expression, which is now a function only of the control, because notice the infimum is only over the control. And now I am saying that once I have fixed an x, I can always find a u, which is again some single vector in R^m, not a trajectory or anything like that, just a single vector in R^m, such that this becomes negative, yeah. So the important thing to remember is that for different values of x I might just get distinct points, and if I join them together, I am not necessarily getting anything continuous, right. Because typically, if you remember, I said that this course is about finding smooth controls, continuous controls and things like that, right; but this condition itself does not seem to guarantee that this will happen. The u that you compute from here might just be distinct points which, when joined together, do not give anything continuous; you might have jumps and so on, because you are just trying to find a u for which this quantity is negative, okay. So this is the subtle difference: you are allowed to play with the u in some sense so that you get a negative V dot, okay. Anyway, this is what we do in control, right, we play with the control so that you get stability, yeah; but the important thing to remember is that this does not guarantee continuity of the control, okay, alright.
So that is a control Lyapunov function: a function which is a candidate Lyapunov function and gives V dot negative if you take the infimum over u, alright. Great, an example, okay. This is the system, a completely concocted, cooked-up system, do not worry about it; it is enough for us to understand what is going on, yeah. So this is x dot equals x minus 2u cubed, okay; again I have written it as a function of x and all that, do not get too worried about the notation, because it is a text, so we have to write it very precisely, okay. We assume n and m are both 1, that is, the state space is one-dimensional and the control is one-dimensional, okay; again, too easy. Now I claim that V equal to x squared by 2 is a control Lyapunov function, yeah. Remember, this looks pretty much like the Lyapunov functions we have been choosing, right, which is not so surprising, because the first requirement is that it has to be a candidate Lyapunov function, right. So the V(x) that I choose for a control Lyapunov function has to be a candidate Lyapunov function; the structure cannot be too different. Control Lyapunov functions also have an energy-like structure, some quadratic terms or something that cannot become negative, and things like that, okay. So V(x) equal to x squared by 2 obviously satisfies this condition, yeah, no problem: it is 0 at 0 and strictly positive for nonzero values of the state x, okay. Now the second condition requires me to compute del V del x times f(x, u), which in this case is just x times x dot, right, and x dot comes from here, so I get x squared minus 2x u cubed, okay. Now obviously what do I have to do? I have to fix x and find a control which makes this negative, okay. For me it is easy, I mean, I have given some example. Suppose I fix x at 1, then this is 1 minus 2u cubed; I can take the control to be 1, which works because this becomes negative. Similarly, if I take x to be 2, I get 4 minus 4u cubed; I can take the control to be 2 and this becomes negative, right.
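These pointwise choices are easy to check numerically; here is a minimal sketch, where `Vdot` is just the expression x² − 2xu³ derived above:

```python
# For the system x_dot = x - 2*u**3 with V(x) = x**2 / 2,
# the derivative along trajectories is Vdot = x * x_dot = x**2 - 2*x*u**3.
def Vdot(x, u):
    return x**2 - 2 * x * u**3

# Fix an x, then pick a u that makes Vdot negative, as in the lecture.
print(Vdot(1, 1))   # 1 - 2 = -1, negative: u = 1 works at x = 1
print(Vdot(2, 2))   # 4 - 32 = -28, negative: u = 2 works at x = 2
print(Vdot(2, 1))   # 4 - 4 = 0, NOT negative: u = 1 fails at x = 2
```

The last line previews the point made next: a u that works at one x need not work at another.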
I cannot take u to be 1 there, because then this does not become negative; that is a problem, right. So I take u to be 2; I could have taken 1.5 also, yeah. So I can get distinct, discrete points like these, because I can choose anything greater than 1 here, anything greater than 2 there, anything greater than 3, and so on and so forth. So I can get all these discrete points, no problem, okay. Now I have a nicer solution also, because suppose I choose my control such that u cubed turns out to be actually equal to x, yeah. Then this is just x squared minus 2x times u cubed, which is x squared minus 2x squared, which is minus x squared, which I know is negative definite, right; we have already done this many times, yeah. And again, this is not a unique choice. I could have tried u cubed equal to x by 2 also; so that would have become, you know, minus x squared by 2, no, that would not have worked, because that gives x squared minus x squared, which is zero. But you see it is not a unique choice: I can take u cubed equal to x, u cubed equal to 3x by 2 and things like that, yeah. So it is not a unique choice, okay, I can have many such choices. But the point is that with this u cubed equal to x I have in fact given a function interpretation to u, right: until now I was constructing individual points given one x, right, and here I have constructed a signal u, okay. So already some improvement for me, right; I have constructed a signal u, okay, great. Now, a small problem, which I hope you notice: if I choose u cubed equal to x, the control is x to the power 1 by 3, which is very nice everywhere but at the origin, yeah, because the derivative is one third times x to the power minus 2 by 3, and at the origin the derivative blows up, right. So it is not smooth; it is continuous, and in fact it is smooth everywhere but at the origin, okay.
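A small sketch of this feedback and its derivative near the origin (the helper names here are my own; the sign handling for negative x is an extra detail, since the real cube root of a negative number is negative):

```python
# Feedback u(x) = x**(1/3) obtained from the choice u**3 = x,
# written so it also gives the real cube root for negative x.
def u(x):
    return abs(x) ** (1 / 3) * (1 if x >= 0 else -1)

def du_dx(x):
    # d/dx x**(1/3) = (1/3) * x**(-2/3); undefined at x = 0
    return (1 / 3) * abs(x) ** (-2 / 3)

def Vdot(x):
    # With u**3 = x: Vdot = x**2 - 2*x*u**3 = -x**2
    return x**2 - 2 * x * u(x) ** 3

print(Vdot(2.0))     # -4.0: negative definite along this feedback
print(du_dx(1e-6))   # ~3333: the slope blows up as x -> 0
```

So the closed loop is fine, but the feedback gain becomes arbitrarily steep near the equilibrium, which is exactly the weakness discussed next.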
And the origin is actually an important point for us, because that is where we want to go; it is not like it is outside the domain so that we do not care what happens there, yeah. This is an important point. So there is a certain weakness in this control: it will behave poorly when you get close to the origin, okay. It will start doing funny things, very fast changes and things like that, as you get closer to your equilibrium. Anyway, so obviously the researchers who were working on these control Lyapunov functions wanted to achieve something more, and they realized they could achieve something more only if they specialized to control-affine systems, okay. What is a control-affine system? Affine means linear plus a constant term, okay; control affine means the system is affine in the control, okay, alright. If it were affine in both state and control, then it would be a linear system, yeah; but because it is only affine in the control, it is still a non-linear system. So you typically write it as x dot equals F0 plus the summation over i of Ui Fi, okay. So this is a very nice form; if you do not like this overload of notation, you can write it out, okay. Why the index M? Because there are M controls. So each Ui is now scalar valued, yeah, and F0 and the Fi's are vectors of dimension N. I hope that is evident to you. Notice that this domain is a subset of Rn, it is a ball in Rn, okay. Is that fine? It is a ball in Rn, not a ball in the joint state-control space, because the control is also state dependent; only the states are evolving in the ball, okay. So notice that each of F0 and the Fi's is an N-dimensional vector, okay. And this is what most non-linear theory on controllability, observability, all of this, relies on: these vector fields, yeah. These are called vector fields. Why? Because once I give you any point in the state space, okay, what do these represent? They represent velocity directions, right?
F0 and the Fi's are velocity directions. If the control is 0, then you get a velocity direction from F0, right? So given any point in the state space, F0(x) actually tells you which way the state is getting pushed at that point; similarly, Fi(x) tells you which way the state is going to get pushed if the i-th component of the control is 1, okay. So this, I hope it is evident here, is what helps you talk about controllability and observability. Because if the F0 and Fi's are funny, for example if F0 and all the Fi's are such that the last three components are always 0, okay, it is n-dimensional, so the last three components are always 0 for F0 and all the Fi's, then it is obvious that I cannot make any movement in those last three directions, right? If you think of R3 with X, Y, Z, and all the F0's and Fi's have 0 in the Z row, that is the third row, then I cannot make any movement in the third direction. Whatever control you apply, how does it matter? You cannot move in the Z direction; wherever you started in the Z direction, you remain at that point in the Z direction. So I hope it is evident to you that controllability is defined very neatly and nicely using these vector fields, okay. It is a vector field because, given a point in the state space, it generates a different vector; so it is a field of vectors, okay. So F0 is called the drift vector field, because that is where the system drifts even if you do not have control, and the Fi's are called control vector fields; standard terminology, okay.
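The control-affine form and this zero-component observation can be sketched in a few lines; the vector fields below are made-up illustrations, not any particular physical system:

```python
import numpy as np

# Control-affine dynamics: x_dot = f0(x) + sum_i u_i * f_i(x)
def xdot(x, u, f0, fs):
    return f0(x) + sum(ui * fi(x) for ui, fi in zip(u, fs))

# A made-up 3-state, 2-input example where the drift field and both
# control vector fields have a zero third component.
f0 = lambda x: np.array([x[1], -x[0], 0.0])   # drift vector field
f1 = lambda x: np.array([1.0, 0.0, 0.0])      # control vector field 1
f2 = lambda x: np.array([0.0, 1.0, 0.0])      # control vector field 2

x = np.array([1.0, 2.0, 5.0])
for u in ([0.0, 0.0], [3.0, -1.0], [100.0, 7.0]):
    v = xdot(x, u, f0, [f1, f2])
    print(v[2])   # always 0.0: no control produces velocity in the z direction
```

Whatever u you try, the third component of x dot stays zero, so the third state can never leave its initial value: a concrete case of the loss of controllability described above.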