We have seen Barbalat's lemma, the two versions of Barbalat's lemma if you like, one being a corollary of the other. Now we want to see how to use it, so we are going to look at a very simple example. As a nice aside: the discovery of Barbalat's lemma is what made the analysis of adaptive systems possible. For the longest time folks struggled to figure out how to prove convergence in adaptive control, because LaSalle invariance does not apply there. So this was in fact a path-breaking result, or rather the result was already there and finding its use was the path-breaking part, I guess. For the application we look at a simple spring-mass-damper system. You know the dynamics of this system, m x double dot + c x dot + k x = f, where f is the external force being applied, and I can write it in state-space form: x1 is the position, x2 is the velocity, pretty simple, and f is the force if you choose to apply it. Now, typically in nonlinear control and also in adaptive control we want the states to follow some desired trajectory. I do not think we did any trajectory-following examples as such, but it is not very difficult; you will see how we do trajectory following. Suppose I want this spring-mass-damper system to follow a trajectory, which is of course a position trajectory x1d and a velocity trajectory x2d. The velocity trajectory has to be the derivative of the position trajectory, because otherwise it is not a compatible trajectory: position and velocity have to be related by differentiation. In control this is typically called a matching condition.
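The state-space form just described can be sketched as follows; the numerical values of m, c, k are illustrative choices for the example, not values from the lecture.

```python
import numpy as np

# Spring-mass-damper m*x'' + c*x' + k*x = f in state-space form,
# with x1 = position and x2 = velocity.
# m, c, k below are illustrative values, not from the lecture.
m, c, k = 1.0, 0.5, 2.0

def smd_dynamics(x, f):
    """Returns [x1_dot, x2_dot] = [x2, (f - k*x1 - c*x2)/m]."""
    x1, x2 = x
    return np.array([x2, (f - k * x1 - c * x2) / m])
```

With zero force and the mass displaced, the model gives the expected restoring acceleration, e.g. `smd_dynamics([1.0, 0.0], 0.0)` returns `[0, -k/m]`.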
Basically it says that your trajectories have to satisfy some compatibility conditions. They cannot be arbitrary; for example, if your dynamical system is a second-order system and your trajectory is generated by a fourth-order system, it does not make sense, you cannot track it. So these are called matching conditions, and our trajectories satisfy this requirement. Now once you have this, we define error variables, because what we have learned so far is how to drive things to 0; all of stabilization is about going to 0. So we construct error variables, because we will drive the errors to 0. What is the error? There is a position error and a velocity error: the position error is e1 = x1 - x1d and the velocity error is e2 = x2 - x2d, done. And you can see that x2 - x2d is just x1 dot - x1d dot; this follows from the dynamics of the spring-mass-damper. Now I am going to write the dynamics of the error, because that is the system I am going to work with from now on. So I write e1 dot and e2 dot. What is e1 dot? It is x1 dot - x1d dot, but that is exactly e2. Makes sense? This happened because I had a matching condition; if I did not have the matching condition, this would not happen. Then I compute e2 dot = x2 dot - x2d dot. Now x2d dot is just a function of time, namely x1d double dot, and x2 dot comes from the dynamics; that is what I have substituted. So that is it: this is my error dynamics, and I want to drive the errors e1 and e2 to 0. How do I do it? I could go looking for a Lyapunov function and all that, but the idea is pretty straightforward. What do I want to do?
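The error variables and the matching condition can be sketched concretely; the sinusoidal desired trajectory x1d(t) = sin(t) is my choice for illustration (any twice-differentiable trajectory works), not from the lecture.

```python
import numpy as np

# Desired trajectory with the matching condition built in:
# x2d must be the derivative of x1d, and x2d_dot (= x1d double dot)
# is what the feedforward term will need later.
def desired(t):
    x1d = np.sin(t)
    x2d = np.cos(t)          # matching condition: x2d = d(x1d)/dt
    x2d_dot = -np.sin(t)     # x1d double dot
    return x1d, x2d, x2d_dot

def tracking_errors(x, t):
    """e1 = x1 - x1d (position error), e2 = x2 - x2d (velocity error)."""
    x1d, x2d, _ = desired(t)
    return x[0] - x1d, x[1] - x2d
```

A state sitting exactly on the desired trajectory gives zero errors, as it should.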
I try to get the error system to behave like a nice system. What is the nice system in this case? A damped oscillator: e1 dot = e2, e2 dot = -k1 e1 - k2 e2. I know this is a nice system and that it will do a good job, so I choose my control such that the right-hand side of the error dynamics looks like this. So what did I do? I chose the control to cancel the plant terms and the trajectory term, and then I introduced the nice stabilizing terms. Simple, that is exactly what I did. Of course, k1 and k2 are strictly positive; I do not know why one would say non-negative, they are actually strictly positive, because k1 and k2 are exactly the coefficients of the damped oscillator we want to follow. And you are not restricted to one particular choice of k1, k2 here; you could have chosen something else. How much control you want to apply is your call. And you can see the structure of this control; I think we discussed this at some point: a lot of control of mechanical systems looks like feedforward plus feedback. This is exactly that. The feedforward part cancels the dynamics and the effect of the trajectory, and then there is a feedback part, which is like a proportional-derivative control, PD control. If you do not have a feedforward term and you have no idea what your feedforward term should look like, then you have to have an integral term; that is the standard principle by which control folks work. Why does the integral term work? Because it reduces your steady-state error; it is like the internal model principle, you are introducing an internal model. But if you already know your feedforward term, which is this, you do not need the integral; this is enough.
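The feedforward-plus-PD control described above can be sketched like this; the parameter values m, c, k, k1, k2 are illustrative, and e1, e2, x2d_dot are the tracking errors and desired acceleration defined earlier.

```python
# Feedforward + PD control for the spring-mass-damper tracking problem.
# Illustrative parameters (not the lecture's values):
m, c, k = 1.0, 0.5, 2.0
k1, k2 = 4.0, 3.0          # strictly positive PD gains

def control(x, e1, e2, x2d_dot):
    feedforward = k * x[0] + c * x[1] + m * x2d_dot  # cancels plant + trajectory terms
    feedback = -m * (k1 * e1 + k2 * e2)              # PD feedback on the errors
    return feedforward + feedback
```

With this f, the plant equation x2 dot = (f - k x1 - c x2)/m collapses to x2 dot = x1d double dot - k1 e1 - k2 e2, so the error dynamics become exactly the damped oscillator e1 dot = e2, e2 dot = -k1 e1 - k2 e2.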
If you have modeling errors and you do not know your proper model, then you need an integral term. Otherwise, feedforward plus a PD is good enough, great. So this is the f, and of course I end up with the closed-loop error dynamics I wanted. Now what do I want to do? I want to prove stability. Of course, you will say: why put in any great effort? This is basically a linear time-invariant system; I can just compute the eigenvalues, and I know I will get nice negative-real-part eigenvalues, done. But suppose this were not a linear system but a nonlinear one, and you end up in this situation: you will need to come up with an energy functional, a Lyapunov function and all that. So let us do it, why not? Because more often than not we will end up with a nonlinear system. So what do I do? I take a very standard Lyapunov function. What is this? It is like the energy of the system: the first term is the potential energy, the second is the kinetic energy, and you know it is nonnegative. Then I start taking derivatives along the trajectory, just like we have been doing. What happens? The first term gives k1 e1 e1 dot, the second gives e2 e2 dot. Plug in e1 dot, which is e2, and plug in e2 dot, which is -k1 e1 - k2 e2. What do I get? Minus k2 e2 squared. This is only negative semi-definite, because it contains only one of the states; nothing can be definite unless it contains all the states. So from the Lyapunov theorem, what can I conclude at this stage? V is nice: positive definite, radially unbounded and everything, and V dot is negative semi-definite. What do I conclude from the Lyapunov theorem? Stability.
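The Lyapunov computation just described, written out (the weighting k1 on the e1 term is chosen precisely so that the cross terms cancel):

```latex
V(e_1, e_2) = \tfrac{1}{2} k_1 e_1^2 + \tfrac{1}{2} e_2^2, \qquad
\dot V = k_1 e_1 \dot e_1 + e_2 \dot e_2
       = k_1 e_1 e_2 + e_2\bigl(-k_1 e_1 - k_2 e_2\bigr)
       = -k_2 e_2^2 \;\le\; 0 .
```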
Stability, only stability, or uniform stability if you want, although uniformity is irrelevant here because there is no time dependence: uniform stability in the sense of Lyapunov. But that is unsatisfying, because I know that this system is asymptotically stable, even exponentially stable. So I want to be able to prove more. You know that you could do this with LaSalle invariance in this case, because the closed-loop system is now an autonomous system. But we won't; we will use Barbalat's lemma. How do we use it? We do what is called signal-chasing analysis, so remember this term. The steps are very standard; it is almost like memorizing steps 1, 2, 3, it always works like that. So, anyway, this is the claim. We have already proved stability, so we are only left to prove convergence; asymptotic stability is just stability plus convergence, so we only need to prove this much. How do we do it? Step 1: we know that V is lower bounded, because it is greater than or equal to 0, and it is non-increasing, because V dot is less than or equal to 0. What does this mean? By the first lemma, V(infinity) exists and is finite: any signal that is lower bounded and non-increasing has a finite limit. Notice that until this point I was looking at V as a function of the states, but here I transition to writing V as a function of time: I implicitly assume that I have solved the system and plugged in the solutions, so V becomes a function of time. But remember, and this is a big caveat when using Barbalat's lemma: this is not a uniform result, because you fixed an initial condition. The result is not valid for arbitrary initial conditions at once; it is for the particular initial condition you chose.
But then you can choose another initial condition and repeat the same analysis. So that is one point of contention when folks use Barbalat's lemma, but it is not a big deal for us right now. So anyway: V is lower bounded and non-increasing, so by the first lemma we saw today, V(infinity) exists and is finite. Great. Step 2: both e1 and e2 are bounded. How? V is quadratic in e1 and e2, and V is non-increasing, so V(t) is less than or equal to V(0); therefore V itself is bounded. Now V is (1/2) k1 e1 squared plus (1/2) e2 squared, a sum of squares, so the terms cannot cancel each other: if either e1 or e2 were unbounded, V would be unbounded. No choice. Therefore both e1 and e2 are bounded, and boundedness is exactly the same as being in L-infinity; we already said boundedness and L-infinity are the same thing. Alright, great. Step 3: e2 belongs to L2. How do I get that? Whatever appears in the V dot, I integrate: I integrate both sides of the V dot equation from 0 to infinity. What do I get? The left-hand side is integrable: it is just the integral of dV/dt times dt, so the dt's cancel and it is just the integral of dV, which is V(infinity) minus V(0). And I already proved in step 1 that V(infinity) is finite, so the left-hand side is simply V(infinity) minus V(0). Clear? Simple step. The right-hand side I leave as it is: minus k2 times the integral of e2 squared. And what does that integral look like? It is the square of the L2 signal norm of e2, straight from the definition.
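Step 3 written out: integrating the V-dot equation from 0 to infinity and using the finite limit from step 1,

```latex
\int_0^\infty \dot V \, dt \;=\; V(\infty) - V(0)
  \;=\; -k_2 \int_0^\infty e_2^2(t)\, dt
  \;=\; -k_2 \, \|e_2\|_2^2
\quad\Longrightarrow\quad
\|e_2\|_2^2 \;=\; \frac{V(0) - V(\infty)}{k_2} \;<\; \infty .
```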
So I get this equality. From here it is obvious that the L2 norm of e2 is bounded; in fact I can solve for it: the squared norm is V(0) minus V(infinity), divided by k2. Therefore e2 is in L2: a signal is in L2 precisely when its L2 norm is finite, and it is. Great. Step 4: e2 dot is also bounded. What is e2 dot? It is -k1 e1 - k2 e2. I already proved e1 and e2 are bounded, and k1 and k2 are constants, so obviously e2 dot is bounded. Now I can use Barbalat's lemma, the corollary, on the signal e2: e2 is in L-infinity and in L2, and e2 dot is in L-infinity, so by the corollary to Barbalat's lemma, e2 goes to 0. So the first set of steps proves that whatever appears in the V dot goes to 0. We have done that, alright. Great. Now we want to prove that e1 goes to 0. How do we do that? We start by proving that the derivative of e2 goes to 0. So let us do that; next steps. To summarize: until here, we proved that whatever appears in V dot goes to 0. Now, in order to prove that the rest of the states go to 0, I will start by proving that the derivatives of these quantities go to 0. So I want to prove e2 dot goes to 0. How? I apply the original Barbalat's lemma. I start by claiming that e2 dot is integrable. What is the integral of e2 dot from 0 to infinity? It is e2(infinity) minus e2(0). But I just proved that e2(infinity) is 0. Again, this is poor notation, do not worry about it: there is no such thing as e2(infinity), it is really the limit of e2(t) as t goes to infinity; I am just using an abuse of notation. But I have proved that this limit is 0, so the integral is minus e2(0), which is a finite quantity.
Obviously you started with a finite value of the initial state; you could not have started with an infinite value, that would not make sense. Therefore e2 dot is integrable, and we have satisfied the first requirement of Barbalat's lemma. What was the second requirement? That the signal be uniformly continuous. So: integrability and uniform continuity. How do I prove e2 dot is uniformly continuous? Take its derivative and show it is bounded. So I differentiate e2 dot = -k1 e1 - k2 e2 to get e2 double dot = -k1 e1 dot - k2 e2 dot. I again plug in, and since e1 and e2 are bounded and everything else is constant, e2 double dot is also bounded. So e2 double dot bounded means e2 dot is uniformly continuous, which means, by Barbalat's lemma, e2 dot goes to 0. Now we are pretty much done. Look at e2 dot = -k1 e1 - k2 e2. I have proved that e2 dot on the left goes to 0 as t goes to infinity, and that e2 on the right goes to 0 as t goes to infinity. So if I take limits on both sides, the only way the equality can hold in the limit is if e1 also goes to 0 as t goes to infinity: this term is already going to 0, and this one is already going to 0, so I am left with the requirement that e1 has to go to 0. That is what you see on the next page: e1 goes to 0 as t goes to infinity because nothing else is possible. If you look at it, the logic is a little similar to LaSalle invariance: with LaSalle invariance you would first look at the set where V dot is 0, which is the set of all states (e1, e2) with e2 = 0, and then look at the largest invariant set inside e2 = 0. There you would argue that e2 dot must be 0, and if e2 dot is 0, then e1 has to be 0.
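The whole closed loop can also be checked numerically. This is a minimal simulation sketch using the illustrative parameters and the sinusoidal trajectory assumed earlier, with simple forward-Euler integration, so the result is only approximate.

```python
import numpy as np

# Illustrative plant parameters and PD gains (not the lecture's values).
m, c, k = 1.0, 0.5, 2.0
k1, k2 = 4.0, 3.0

def step(x, t, dt):
    # Desired trajectory x1d(t) = sin(t), with matching condition built in.
    x1d, x2d, x2d_dot = np.sin(t), np.cos(t), -np.sin(t)
    e1, e2 = x[0] - x1d, x[1] - x2d
    # Feedforward (cancels plant + trajectory terms) plus PD feedback.
    f = k * x[0] + c * x[1] + m * x2d_dot - m * (k1 * e1 + k2 * e2)
    x1_dot = x[1]
    x2_dot = (f - k * x[0] - c * x[1]) / m   # = x2d_dot - k1*e1 - k2*e2
    return x + dt * np.array([x1_dot, x2_dot])

# Start well off the trajectory and integrate for 20 seconds.
x, t, dt = np.array([1.0, 0.0]), 0.0, 1e-3
for _ in range(20000):
    x = step(x, t, dt)
    t += dt
e1_final = x[0] - np.sin(t)
e2_final = x[1] - np.cos(t)
```

After the transient, both tracking errors are driven to (numerically) zero, which is exactly the e1, e2 → 0 conclusion the signal-chasing argument proves.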
So it is similar logic, actually, but the way we do it is slightly different: here we use notions of integrability, being in an Lp space, and uniform continuity. And this is actually easier to carry out. The steps look longer, it seems like we took more time, but this is easier to apply than LaSalle invariance. Many people get confused with LaSalle invariance, but not with this; you just have to follow these 8 or 9 steps exactly. The first set of steps proves that whatever appears in the V dot goes to 0. After that, you prove that the derivative of whatever appeared in V dot goes to 0, and once you have that, you end up proving that the remaining states converge as well. Anyway, like I said, you could have used LaSalle invariance here, or rather, not LaSalle invariance itself but the Barbashin-Krasovskii-LaSalle theorem. But Barbalat's lemma can be used in a wider context. For example, this setting: suppose the gains are now functions of time. Maybe for some reason you want different performance in different regimes, go faster initially and slower later, or something like that. Then how do you prove convergence? You cannot use eigenvalues anymore; it is no longer a linear time-invariant system, it is a time-varying system, so eigenvalues do not work, and the simple ideas will not work. So the question is: can you still use Barbalat's lemma to prove convergence? Actually, you can; you just have to make some additional assumptions.
So, of course, I have given a nice hint: the first obvious assumption is that the two gains have to be strictly positive for all time; otherwise you cannot even construct a proper Lyapunov function. But the point is, for these sorts of systems LaSalle invariance also will not work, because I am not assuming anything like periodicity: the gains are not constant and not necessarily periodic. So LaSalle invariance, and Barbashin-Krasovskii-LaSalle as well, do not apply. But Barbalat's lemma has no issue; it makes no distinction between this and the time-invariant case. If you can still prove that the signal is in L-infinity and L2 and its derivative is in L-infinity, you still get the same result. So this is an exercise I would like you folks to try, to see how you can use Barbalat's lemma. So this is how you use Barbalat's lemma: standard steps. The basis is these three lemmas: the lower-boundedness lemma, that a signal which is lower bounded and non-increasing has a limit as t goes to infinity; the one that says f dot bounded implies f is uniformly continuous; and then Barbalat's lemma itself. These three results are what I used to do this signal-chasing analysis.
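One way to see where the extra assumptions enter (this is only a sketch of the exercise, not a full solution): try the same kind of Lyapunov function with the time-varying gain as the weight,

```latex
V(t, e_1, e_2) = \tfrac{1}{2}\, k_1(t)\, e_1^2 + \tfrac{1}{2}\, e_2^2
\quad\Longrightarrow\quad
\dot V = \tfrac{1}{2}\, \dot k_1(t)\, e_1^2 + k_1(t)\, e_1 e_2
       + e_2\bigl(-k_1(t)\, e_1 - k_2(t)\, e_2\bigr)
       = \tfrac{1}{2}\, \dot k_1(t)\, e_1^2 - k_2(t)\, e_2^2 .
```

So besides strict positivity, you need something extra to keep V dot negative semi-definite and to make the signal-chasing steps go through; for example, assumptions like k1 non-increasing (or its derivative suitably bounded) and k2 bounded away from 0 would do the job in this sketch.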