Welcome to another session of SC602, nonlinear control. What we were doing last time was giving you an intro to adaptive control, right. Of course I did not go much further. There is an entire section on backstepping with unmatched parametric uncertainty which I did not cover, because again this is not an adaptive control course. But you can see that in the examples we were considering, the control and the uncertainty appear in the same equation, right. This is called a matched uncertainty, and as you can imagine these are much easier to handle. All that we did was, instead of theta star we put a theta hat, right. I did not say it at the time, but this idea, that whatever control I get I just substitute the unknown with an estimate of the unknown, is called the certainty equivalence principle, ok. This is very commonly used, so whenever you hear somebody in adaptive theory talking about certainty equivalence, this is what they mean: you design the control assuming the parameter is known, and then in the adaptive control you replace the true value of the parameter by its estimate theta hat, ok. That is the notion of certainty equivalence: you take the certain controller and you create an equivalent controller. You never change the structure of the controller, and this obviously gives us an easy way of constructing the control, right. So the idea was pretty straightforward. We constructed the known-case control, in this particular case via backstepping, but it does not matter how you do it. You construct the known-case control, you replace the theta star in it by theta hat, and once you do that, you add to your Lyapunov function a term in the parameter error, right. That is what we did.
So you had this Lyapunov function for the system, or rather the control Lyapunov function. Remember that we need a control Lyapunov function or a strict Lyapunov function, right. You should not even start doing adaptive control without a CLF or a strict Lyapunov function; otherwise adaptive control is known to fail royally, ok. Not in small ways: it can really damage the system in the presence of disturbances. So make sure that for the known system you always start with a strict Lyapunov function, which is what we get from backstepping; that is why backstepping is so popular. Then we just added a term in the parameter error, ok. That was the idea. Then we did the analysis, and the purpose of the analysis was to get an update law. The really cool thing, like I said, is that you do not care where you start this update law: the initial condition can be anything, as far from the true value as you want, and you will still get exact tracking. The construction guarantees exact tracking, ok, and that is pretty amazing if you ask me, because you made the system agnostic to the unknown, however large it was. As you saw, with your typical signal-chasing arguments, which is what we did in the end, we could prove that e1 and e2 both go to 0, correct. On the parameter errors, we could only prove boundedness: the parameter errors are not guaranteed to go to 0 unless you have something called a persistence of excitation condition, right. We did not discuss what the persistence of excitation condition is; that is not part of the plan. But in adaptive control we do not particularly care about finding the parameter. Our aim is to track, and we did that successfully, so we do not care to find it, ok.
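To make this concrete, here is a minimal sketch, not the lecture's backstepping example: certainty equivalence plus a Lyapunov-derived update law for the illustrative scalar plant x' = theta_star * x + u with theta_star unknown. The known-case control u = -theta_star * x - k * x is replaced by u = -theta_hat * x - k * x, and adding (theta_hat - theta_star)^2 / (2 * gamma) to V = x^2 / 2 yields the update law theta_hat' = gamma * x^2, giving V' = -k * x^2. All names and gains here are my own choices.

```python
# Certainty-equivalence adaptive control for x' = theta_star*x + u, theta_star unknown.
# Known-case control: u = -theta_star*x - k*x. Certainty equivalence swaps in theta_hat.
# With V = x^2/2 + (theta_hat - theta_star)^2/(2*gamma), the update law
# theta_hat' = gamma*x^2 gives V' = -k*x^2, so x -> 0 from ANY theta_hat(0).

def simulate(theta_star=2.0, theta_hat0=0.0, x0=1.0, k=2.0, gamma=1.0,
             dt=1e-3, T=10.0):
    x, theta_hat = x0, theta_hat0
    traj = []
    for _ in range(int(T / dt)):
        u = -theta_hat * x - k * x          # certainty-equivalence control
        x += dt * (theta_star * x + u)      # plant (Euler step)
        theta_hat += dt * gamma * x * x     # Lyapunov-derived update law
        traj.append(x)
    return x, theta_hat, max(abs(v) for v in traj)

x_final, theta_final, x_peak = simulate()
print(x_final, theta_final, x_peak)
```

Note that the tracking error x goes to 0 while theta_hat settles somewhere away from theta_star = 2: tracking succeeds without parameter learning, exactly the point made above.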
But in system identification, and also in all of learning, all of deep learning, finding the parameter is the requirement, ok. So in those cases persistence of excitation is inherently assumed, although they may not use the same words; the idea is the same. Persistence of excitation is required for parameter learning, and that is the crux of it. You cannot learn without having this kind of persistence. Typically the learning algorithms are data driven, so nobody tries to verify these conditions; they just assume you did a good job and run the algorithm. But in reality, if you do want to learn in a true sense, you need persistence of excitation, ok, that is really the key point here. So that is all we will do in terms of adaptive control; we are not going to do anything more. What I want to do now is start a new topic, and that is constrained control: control under state constraints, ok. Until now we have been talking about a lot of scenarios, all of you have done nonlinear control design, and I hope you have learned some methods pretty well. But we have not seen a lot of realistic scenarios. We have definitely not seen disturbances, though I did say in passing that because we are doing Lyapunov-based design, it is naturally robust: if you add some disturbance, your system is not going to become unbounded. You are still going to have some kind of bounded performance, which is governed by the size of the disturbance. And in a lot of cases you can even increase the control gains to shrink the residual set: if you were supposed to go to the origin, instead of reaching the origin you will circle around it in some small set.
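A tiny numerical illustration of the persistence-of-excitation point, with signals and values of my own choosing: a gradient estimator theta_hat' = gamma * phi * (y - theta_hat * phi) for the static model y = theta_star * phi(t). With a decaying regressor phi(t) = e^(-t), which is not persistently exciting, the estimate stalls far from theta_star; with phi(t) = sin(t), which is persistently exciting, it converges.

```python
import math

# Gradient parameter estimator for y(t) = theta_star * phi(t):
#   theta_hat' = gamma*phi*(y - theta_hat*phi)  =>  error' = -gamma*phi^2*error.
# The error contracts only while phi^2 keeps injecting "energy":
# that is persistence of excitation.

def estimate(phi, theta_star=3.0, theta_hat0=0.0, gamma=1.0, dt=1e-3, T=10.0):
    theta_hat = theta_hat0
    for n in range(int(T / dt)):
        p = phi(n * dt)
        y = theta_star * p                          # measured output
        theta_hat += dt * gamma * p * (y - theta_hat * p)
    return theta_hat

err_pe = abs(estimate(math.sin) - 3.0)                   # phi = sin(t): PE
err_no_pe = abs(estimate(lambda t: math.exp(-t)) - 3.0)  # phi = e^{-t}: not PE
print(err_pe, err_no_pe)
```

Running longer only helps the persistently exciting case: the error under phi = e^(-t) freezes near 3 * e^(-1/2) because the integral of phi^2 is finite.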
And the size of that residual circle or set can be shrunk by choosing larger control gains, ok. This is really the advantage of Lyapunov-based design: you get robustness for free, ok. It is easy to prove, and I am not going to prove it here, but robustness is something we have essentially already handled; that is why I do not talk about it separately in this course, ok. So you can just go and design a Lyapunov controller for your system, and basic robustness to bounded noise and bounded disturbances is handled; you do not have to worry about it. But the other scenario is that of constraints on the control and constraints on the state, ok. I am not going to talk about control constraints per se, although they can also be handled, by the way, using the same methods I am going to talk about. We want to talk about state constraints. One of the issues with nonlinear control, if you have ever implemented any nonlinear controller (we have implemented these backstepping controllers on quadrotors and the like), is that usually you will get a lot of overshoot, ok. In linear systems, when you design a linear controller you can quantify the overshoot: you can say I want this much overshoot and accordingly choose the PID gains; using transfer function ideas you can actually compute the overshoot and tune the gains so that it is minimized. In the nonlinear case there is no such equivalent, ok. There is no bound on how much you can overshoot, and being a nonlinear system, it can actually throw you far out before coming back in. So large overshoot is a real possibility.
So we see a lot of overshoots, and then gain tuning is not very easy; that is another thing I get a lot of questions on, how do I tune gains and so on, ok. One of the key requirements in a lot of applications is that the state trajectory behaves well during the transient. Until now we have only been talking about asymptotic performance: only asymptotic guarantees, yeah. What does that mean? It means I am only saying that I will do this or that as t goes to infinity; something nice happens as time becomes large, ok. So the big question is: what about the transients? While I am converging to the good place, how bad is my trajectory? As you can see, none of the theory we have done until now really covers this. But you have to understand that because we are using Lyapunov-based design, we have always had something like V dot of x of t less than or equal to 0, at least. We had some kind of negative semidefinite V dot, which means V of x of t is less than or equal to V of x of t0, correct. And this gave us some kind of ellipsoid, right. What does this mean in terms of your real-world system? It means I got an ellipsoid in which my states will remain, ok. Now this ellipsoid could be of any size; it is governed by your initial conditions, right. Suppose your initial conditions were large, you started with a large offset; that is not uncommon. Suppose you are very far from your desired trajectory, so you started with a large offset.
And this is your ellipsoid, governed by that initial condition. This ellipsoid is, if you like this notation, V inverse of V of x at t0. It is weird notation, but all I am saying is that V of x at t0 is some constant value, because V maps to scalars, and if you take the preimage of that constant under V, this is the shape you might get. You could get some other shape also; I am not saying it has to be an ellipsoid, but it is some closed set, because it is the inverse image of some constant, ok. For example, if I take V as x1 squared by 2 plus x2 squared by 4, then the set where this equals some constant is some kind of ellipse equation, right, and similarly in more dimensions you get general versions of ellipsoids. Now the point is, if your initial conditions or initial errors were rather large (this can be in terms of x or in terms of error, depending on how the problem is posed), then this ellipsoid is large, right. And later on, all I am saying is that I am restricted inside this set and nothing more. My states can be anywhere in here: at any instant in time I am not going to exceed this ellipsoid. Say my desired was this, I want to move here; usually you take the origin as the desired, but do not worry about it, I am just giving you a representation. In error variables, the origin will typically always be your desired; that is how we have been working.
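To see that level-set picture numerically (a small check with values of my own choosing): for V(x) = x1^2/2 + x2^2/4, the level set V = c is the ellipse with semi-axes sqrt(2c) and sqrt(4c), so larger c means a larger ellipse.

```python
import math

def V(x1, x2):
    # The lecture's example: V(x) = x1^2/2 + x2^2/4
    return x1 * x1 / 2.0 + x2 * x2 / 4.0

c = 1.0
a, b = math.sqrt(2 * c), math.sqrt(4 * c)   # semi-axes of the level set V = c

# Every point of the parametrized ellipse (a*cos t, b*sin t) sits exactly on V = c:
on_level_set = all(
    abs(V(a * math.cos(t), b * math.sin(t)) - c) < 1e-12
    for t in [k * 2 * math.pi / 100 for k in range(100)]
)
print(on_level_set)
```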
But if you think of a reference trajectory, say x minus r equals the error, then r could be something like this blue curve, right. But I may stray really far from it: I could go out here trying to reach it, go out there trying to reach it, and eventually, slowly, reach it. I could potentially go all the way to the boundary, which means my states grow very large in the attempt to reach the set, ok. So yes, there are some bounds on the transients, but they are not bounds that you might like, yeah. Then there is another scenario, which is safety-critical applications. What is a safety-critical application? One simple example is obstacle avoidance, ok. Suppose I have some robot, a starting point and an end point. I want to go there, but I do not want to hit anything, ok. Now if I design a normal controller, I have no way of specifying that I have to avoid these sets; there is no such thing, no way to specify it. Typically your normal controller would just go straight through here, except it cannot, because there is an obstacle, so it will hit the wall. That is a problem. What do you want the controller to do? You want it to go here, figure out there is an obstacle, say ok fine, there is an obstacle, then do this, maybe go here, then do this, and so on. I am not talking optimality here; the optimal path might be something else entirely.
But I am saying that I want to at least avoid the obstacles: if I see an obstacle, I should turn. Of course you can do it the clumsy way, where you see an obstacle, generate a trajectory around it and follow that, right; that is one way. But the other way is to include it in the control design itself. So safety-critical applications are basically saying that you have some reach or avoid sets, or possibly that you do not want the states to grow too large in the transient itself while trying to reach the target: you want the states to behave nicely while they get there, ok. And that is a requirement for almost any control: in any trajectory tracking you probably do not want to stray too far from the desired, because you may go out of your operational range, sensors might be an issue, everything might be an issue. If you have too large velocities in a spacecraft, that is a problem; so you do not want to exceed the limits even in the transient stage, ok. This is what motivates the notion of barrier functions for control, ok. That is what we want to look at, and there is a little bit of theory we will cover. Let me try to motivate it. Initially, look at this dynamical system; it is just a dynamical system, there is no control yet, ok. As of now, no control; we want to talk about what these barrier functions are.
So the idea is to make sets invariant, or forward invariant; forward invariant means invariant forward in time, ok. Until now our aim has been to have stability and reach a point, reach an equilibrium, right. We did not try to make sets invariant, although if you remember, when we did the LaSalle theorem there were naturally some invariant sets, ok. So let us use that example. Suppose I take again some V of x of t, and suppose it has level sets of this kind: this is one level set of V, another, another and so on. These will be something like V of x equal to c1, V of x equal to c2, V of x equal to c3; I already showed you how to draw these. And because the inner sets are smaller, you would expect, by virtue of the fact that V is a Lyapunov function, that c3 is less than c2, which is less than c1, right; because each is inside the next, you expect that to be the case. Now what do we typically do? We start with a Lyapunov function, or a Lyapunov candidate maybe, V of x, and then we at least try to get something like V dot less than or equal to 0, because if we did not get this, we achieve nothing, right. So let us assume we got it, and therefore at this point V became a Lyapunov function itself, ok.
Now, like I showed you before, we already know that V of x of t less than or equal to V of x at t0 holds, because of this. So what does it mean? If I define my set omega c1 as the set of all x such that V of x is less than or equal to c1, what is this set? It is just the outer ellipse, ok. Under what condition is it invariant? What do I need for this set to be invariant using this Lyapunov function? I only need that V of x at t0 is less than or equal to c1: because I already have that V of x of t is less than or equal to V of x at t0, and V of x at t0 is less than or equal to c1, I am good, ok. So I got invariance: this set is forward invariant in time, because if I start inside the V of x equal to c1 set, I am going to stay inside it, ok. Now, this assumed my initial conditions are somewhere inside the outer ellipse. Suppose instead my initial conditions were inside the inner ellipse. In that case you can see that omega c3 is also invariant, if V of x at t0 is less than or equal to c3, correct. So if I started inside the smaller ellipse, then that becomes invariant, which means I never escape it. By this logic, depending on how the initial conditions are chosen, or in fact even by scaling V itself (you do not need to change the initial conditions, just scale V), you will be able to show that all these sets inside some larger ellipsoid are all invariant. Once you start inside one of them, you remain inside it; that is what invariance means: start inside a set, remain inside the set, ok.
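Here is a quick simulation of that invariance argument; the system and numbers are my own illustration. For x1' = -x1, x2' = -x2 with V = x1^2/2 + x2^2/4 we get V dot = -x1^2 - x2^2/2, which is nonpositive, so V(x(t)) <= V(x(t0)) along trajectories and every sublevel set omega_c = {x : V(x) <= c} containing the initial condition is forward invariant.

```python
def simulate(x1, x2, dt=1e-3, T=5.0):
    # x1' = -x1, x2' = -x2; with V = x1^2/2 + x2^2/4 we have V' = -x1^2 - x2^2/2 <= 0
    V = lambda a, b: a * a / 2 + b * b / 4
    values = [V(x1, x2)]
    for _ in range(int(T / dt)):
        x1 += dt * (-x1)
        x2 += dt * (-x2)
        values.append(V(x1, x2))
    return values

vals = simulate(1.0, 2.0)                          # V(x(t0)) = 0.5 + 1.0 = 1.5
monotone = all(b <= a for a, b in zip(vals, vals[1:]))
inside_omega_c = all(v <= vals[0] for v in vals)   # never leaves omega_{1.5}
print(monotone, inside_omega_c)
```

Because V never increases, the trajectory that starts on the level set V = 1.5 stays inside that sublevel set forever, which is exactly the invariance condition V(x(t0)) <= c.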
So whenever you start inside, you remain inside that set, ok, but the point is, this is not necessarily what we are looking for. It might be the case that this larger ellipsoid is our safety set: I want to remain only inside the larger ellipsoid, and I am allowed to operate anywhere inside that larger region. But because of this Lyapunov style of construction, what happens is that if I start inside a smaller set, I never escape it; I do not even utilize the larger space. I do not go out and come back toward the origin, whereas I am allowed to: I am allowed to use the larger state space, because I only want to keep the big set invariant. But because of the Lyapunov construction, with a positive definite function and a negative semidefinite V dot, if I start here, I remain inside a set governed by my initial condition, not by a predefined set that I as a user defined, right. How do you define your safety set or desired region of operation? You decide it, you predefine it; you do not base it on your initial condition. The initial condition can be anything, but you do not base your safety-critical set on where you started. So because of this kind of construction, you always stay inside the small set and you are wasting the ability to work in the rest of the region, which in some cases may not be what you desire: you might need more control effort to always stay inside, whereas you might be allowed to get out and then come back in, and that would be fine, because you are still remaining in the larger set; but this construction does not allow it, ok. So we want to look at constructions which make one set forward invariant, while everything inside it is not necessarily forward invariant, ok. That is what we are looking to do with barrier functions.
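As a preview of where this is going, here is a minimal one-dimensional sketch; it is my own toy example, not the formal theory we will cover. Take x' = u with safety set {|x| <= 1}, encoded by the barrier-style function h(x) = 1 - x^2. Enforcing h dot >= -alpha*h, that is -2*x*u >= -alpha*(1 - x^2), only clips the nominal control near the boundary, so the state is free to roam the whole safe set but never leaves it: exactly the behavior the Lyapunov sublevel-set construction could not give us.

```python
def safe_step(x, u_nom, alpha=1.0, dt=1e-3):
    # Safety set {|x| <= 1} via h(x) = 1 - x^2; enforce h' >= -alpha*h with h' = -2*x*u.
    # For x > 0 the constraint caps u from above, for x < 0 from below,
    # and at x = 0 it is inactive: a minimal closed-form safety filter.
    if x > 0:
        u = min(u_nom, alpha * (1 - x * x) / (2 * x))
    elif x < 0:
        u = max(u_nom, -alpha * (1 - x * x) / (2 * -x))
    else:
        u = u_nom
    return x + dt * u

x, xs = 0.0, []
for _ in range(10000):        # nominal control u_nom = 1 keeps pushing at the boundary
    x = safe_step(x, 1.0)
    xs.append(x)
print(max(xs))                # approaches the boundary 1 from below, never crosses it
```

Notice the filter does nothing in the interior (u = u_nom while the cap is slack), so unlike a shrinking Lyapunov sublevel set, the trajectory uses almost the entire safe set.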