So, like I said, we are essentially augmenting the state with a new variable, and that new variable evolves according to the parameter update law. So your complete dynamics will look like this: e1 dot = psi2 − k1 e1, and psi2 dot = theta tilde · f(x,t) − k2 psi2. And I will write the parameter dynamics the other way around: theta tilde dot = −theta hat dot = −gamma · psi2 · f(x,t). This is the entire closed-loop system. The last equation is of course the parameter update, with theta hat(0) arbitrary. For the states, everything is given to you — you are given the initial conditions — but for the parameter update the initial condition is arbitrary. You do not have to start close to the true value; there is no such requirement. Now, let us see what happens to the tracking objective. What was V? V was (1/2)e1² + (1/2)psi2² + (1/(2 gamma)) theta tilde², and V dot was −(k1 − 1/2)e1² − (k2 − 1/2)psi2². The first is positive definite, the second is only negative semi-definite. So you immediately have uniform stability in the sense of Lyapunov — that is one. Now we want to do signal chasing: because V dot is only negative semi-definite, we are exactly in the domain of Barbalat's lemma. The steps are exactly the same as before. What is the first step? The first set of steps is to prove that everything that shows up in V dot goes to 0.
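To see the closed-loop behavior concretely, here is a minimal simulation sketch (not from the lecture). The regressor f(x,t) = sin(t), the gains k1 = k2 = gamma = 2, the initial conditions, and the forward-Euler step are all illustrative assumptions; the dynamics are exactly the closed-loop equations above.

```python
# Minimal sketch (assumed regressor, gains, initial conditions):
# forward-Euler simulation of the closed-loop adaptive system
#   e1'          = -k1*e1 + psi2
#   psi2'        = -k2*psi2 + theta_tilde*f
#   theta_tilde' = -gamma*psi2*f
import math

k1, k2, gamma = 2.0, 2.0, 2.0
dt, T = 1e-3, 60.0

# theta_hat(0) is arbitrary, so theta_tilde(0) is arbitrary too
e1, psi2, theta_tilde = 1.0, 0.5, 1.0
t = 0.0
while t < T:
    f = math.sin(t)                       # assumed regressor f(x,t)
    de1 = -k1 * e1 + psi2
    dpsi2 = -k2 * psi2 + theta_tilde * f
    dtheta = -gamma * psi2 * f            # the (nonlinear) parameter update
    e1 += dt * de1
    psi2 += dt * dpsi2
    theta_tilde += dt * dtheta
    t += dt

print(abs(e1), abs(psi2))                 # both decay toward 0: tracking
```

Note that theta hat never needs to start near the true parameter; the tracking errors e1 and psi2 still converge, which is the point of the analysis that follows.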
And you can even do it in a smarter way, by the way: some people just try to prove that V dot itself goes to 0, because if V dot goes to 0, then each of these terms goes to 0. That is another way of doing it. But I would say you stick to the steps I gave you; do not try to come up with your own steps. The first step in proving that everything in V dot goes to 0 was to say that V is lower bounded and non-increasing. What does this imply? It implies that V(infinity), which is the limit of V(t) as t goes to infinity, exists and is finite. That was the first step. What was the second step? You do not remember? Okay, fine. We look at boundedness of all signals in V. It is obvious that V(t) ≤ V(0), because V is non-increasing, which means that none of these terms can become unbounded if they started bounded. And they did start bounded — otherwise I would doubt the sanity of the setup. This implies e1, psi2, and theta tilde are all bounded signals. Now, because I am going towards Barbalat's lemma — I want to prove that everything inside V dot goes to 0 — I want to show that these signals are in L2 or L-infinity, that their derivatives are in Lp or L-infinity, and so on. So now I want to prove that these signals are in L2, but that is pretty straightforward: just integrate both sides. Integral from 0 to infinity of V dot(t) dt = integral from 0 to infinity of [−(k1 − 1/2)e1²(t) − (k2 − 1/2)psi2²(t)] dt. Now, what is my left-hand side? Right: V(infinity) − V(0).
So I am going to write it as V(0) − V(infinity), because I am going to flip the signs on the right-hand side: V(0) − V(infinity) = (k1 − 1/2) · integral from 0 to infinity of e1²(t) dt + (k2 − 1/2) · integral from 0 to infinity of psi2²(t) dt. Now, it should be obvious to you from just this equality that each of these integrals is individually finite. Why? First: can anything on the right-hand side be negative? No — each integrand is a square, so the integrand itself is non-negative, and the integral, which is just a limit of a sum, is therefore non-negative as well. Two non-negative quantities joined by a plus sign cannot cancel each other. So if their sum equals V(0) − V(infinity), which is finite, then each of them individually must be finite; in fact, each of them is at most V(0) − V(infinity). And if you notice, this is enough to claim that e1 is in L2, and likewise that psi2 is in L2. Now I can use Barbalat's lemma — oh no, I am not done yet, sorry. What about e1 dot and psi2 dot? I am claiming they are bounded. What is e1 dot? e1 dot is e2. What is psi2 dot? psi2 dot is theta tilde · f(x,t) − k2 psi2. I already know that psi2 is bounded, and I already know that theta tilde is bounded.
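The argument on the board can be written compactly as follows (using the expression for V dot from above):

```latex
% Integrate \dot V over [0,\infty) and flip signs:
V(0) - V(\infty)
  = \Bigl(k_1 - \tfrac12\Bigr)\int_0^\infty e_1^2(t)\,dt
  + \Bigl(k_2 - \tfrac12\Bigr)\int_0^\infty \psi_2^2(t)\,dt \;<\; \infty.
% Both integrands are nonnegative, so neither term can cancel the other;
% each is therefore bounded by the left-hand side (assuming k_1, k_2 > 1/2):
\int_0^\infty e_1^2(t)\,dt \;\le\; \frac{V(0)-V(\infty)}{k_1 - \tfrac12},
\qquad
\int_0^\infty \psi_2^2(t)\,dt \;\le\; \frac{V(0)-V(\infty)}{k_2 - \tfrac12},
% i.e. e_1 \in L_2 and \psi_2 \in L_2.
```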
Yeah, this we proved: theta tilde is also bounded because it appears in V. So the only remaining requirement is that f stay bounded, and this requires me to make an assumption. Notice that just from e1 and psi2 being bounded, x will also be bounded. It may not be obvious to you, but e1 and psi2 are basically errors with respect to the reference trajectory, and the reference trajectory is typically a bounded trajectory — you never give an unbounded reference. You are computing the error between your states and your reference states, which are bounded. And if the error itself is bounded, then you are only a bounded distance away from the reference trajectory; if the reference is bounded and you are a bounded distance from it, then the states themselves also have to be bounded. In symbols: error bounded means x1 − r is in L-infinity, and r is already in L-infinity, therefore x1 is in L-infinity. So you also have that x itself is bounded. Now, for psi2 dot to be bounded, what do you need? You need that f(x,t) be bounded — not always, but whenever the states are bounded. So the assumption is typically written as: assume f(x,t) is bounded for bounded x and all t. If you make this assumption, you immediately have that e1 dot and psi2 dot = −k2 psi2 + theta tilde · f(x,t) are bounded. And once you have this, you can use the corollary to Barbalat's lemma to claim what? That e1 and psi2 go to 0. I actually need no further steps — when I showed you the application of Barbalat's lemma earlier I used further steps, but right now I do not need them.
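For reference, the corollary being invoked here is the standard L2 corollary of Barbalat's lemma; one common statement is:

```latex
% Corollary (of Barbalat's lemma): a square-integrable signal with a
% bounded derivative must converge to zero.
\phi \in L_2, \quad \dot\phi \in L_\infty
\;\Longrightarrow\;
\lim_{t\to\infty}\phi(t) = 0.
```

It is applied twice, once with phi = e1 and once with phi = psi2, using the L2 bounds and the boundedness of the derivatives established above.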
Because e1 and psi2 going to 0 implies that e1 and e2 go to 0. So I have achieved tracking. Rather amazing, although it looks like I only did some simple manipulations here and there — please go back and read it so you can follow. So what did I do, in essence? My controller, which was what we call a static controller, became a dynamic controller. What is a dynamic controller? One where the control depends on a value that comes from a dynamical system: my control depends on theta hat, and theta hat comes from its own dynamics. Moving from a static controller to a dynamic controller is almost like saying I added an integrator to my controller — not a linear integrator, but a nonlinear integrator. By adding a nonlinear integrator to my controller, I made my system agnostic to the unknown parameters. I do not know the parameter — I actually do not know the dynamics well at all — yet I exactly track the trajectory. This is not an approximation: I exactly track the desired trajectory, in the absence of disturbances and all that, of course. All of you must at some point have seen or heard of robust control — H-infinity and that kind of control. What is the idea in robust control? There — and it is typically applicable only to linear systems — the idea is that you design the control gains in such a way that they can tolerate some error in the parameters. But the error it can tolerate is rather limited: not infinite, not even very significant. Beyond that, you will get only bounded performance. In fact, even within the tolerated error, you will only get bounded performance.
You are only guaranteed that your system will not blow up; you get a nice bound around the desired trajectory. But here, what are you doing? And I am not saying robust control is a bad method — I am just saying it is a different method. In that method, the advantage is that you are not changing the control structure at all: the structure remains the same in robust control, some −Kx type of state feedback. Here it is no longer pure state feedback; you have dynamic feedback, the theta hat dot dynamics. So we have changed the structure of the control, but what have we achieved? Precise tracking. So in adaptive control, you can achieve precise tracking even if you do not know the system, and that is pretty amazing if you think about it. Now, I told you these steps are enough — I have already achieved tracking. But let us see what the rest of the steps give me. If I did the rest of the steps, I would essentially be able to prove that e1 dot and psi2 dot go to 0 — that is what we have been doing: first prove that everything in V dot goes to 0, then prove that the derivatives of those quantities go to 0. But e1 dot going to 0 just means e2 goes to 0, which we have already proved — nothing special there. What does psi2 dot going to 0 give me? It gives me that −k2 psi2 + theta tilde · f(x,t) goes to 0. But I already know that psi2 goes to 0. So what do I have? That theta tilde · f(x,t) goes to 0. Unfortunately, I have not proven anything about parameter convergence.
No evidence of parameter convergence — or, if you folks like the word "learned": I did not learn squat. I did not learn the parameter. Now, that may not seem nice to you, but that is sort of the power of this method: it did not require you to learn the parameter, and I still got pretty nice tracking control. If you give me a robot, or an airplane, or a quadrotor, I am doing my tracking; I do not care to learn the parameters, I do not care to learn the inertia. That is not my job as an engineer. I wanted to go to the waypoints, fly some particular formation, do the control task — I do not care whether it learns the parameters. But if you do care about the learning part, then there are results connected to what is called persistence of excitation. These ideas are required in deep learning too, by the way — they do not come up obviously, up front, but you will not learn well unless your data set is rich enough, and how you make "rich enough", which is a very vague phrase, precise is via persistence of excitation. The idea comes from system identification; it has nothing to do with adaptive control or learning per se. Basically, it says that eventually you are going to solve some linear system of equations, and that linear system must have a solution — if it does not, you cannot recover the parameters. So that is what it comes down to. You can write this e1, psi2, theta tilde dynamics in a linear system structure, and this structure leads to persistence-of-excitation type results. So basically, this is what it will look like.
d/dt [e1, psi2, theta tilde]^T = A(t) [e1, psi2, theta tilde]^T, where A(t) has rows [−k1, 1, 0], [0, −k2, f], and [0, −gamma·f, 0]. This is the structure you will have, and it is amenable to some nice, pretty classical results on persistence of excitation: you can actually show that you also achieve parameter convergence under persistence of excitation. We do not cover that here, so I am not going to go into much detail. But like I said, doing further steps of Barbalat's lemma is useless in this particular case, because you cannot prove parameter convergence; all you can prove is that the product of theta tilde and f goes to 0. Now, if f passes through 0 regularly — the function itself keeps hitting zeros — then this means nothing: theta tilde need not go to 0. But if f is such that it never goes to 0, that it is always non-zero, then yes, theta tilde goes to 0. So you are asking something from f. And if you notice, there is a nice structure here: the entries f and −gamma·f — the last column and the last row — are transposes of each other up to the factor −gamma. It is this last column and last row that must be persistently exciting. If it so happens that f never hits 0, you automatically have persistence of excitation — that sounds nice, but it is a very bad assumption. That is why I was very careful when I made the boundedness assumption on f: I did not make a blanket boundedness assumption on f — that would only allow functions like sin x. I said f is bounded whenever the states are bounded.
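The role of persistence of excitation can be illustrated with a small sketch (not from the lecture): simulate the (psi2, theta tilde) subsystem once with a persistently exciting regressor f(t) = sin(t), and once with a regressor that decays to zero, f(t) = exp(−t). The gains, initial conditions, and Euler step are assumptions for illustration.

```python
# Sketch: parameter convergence depends on the excitation of the regressor f.
#   psi2'        = -k2*psi2 + theta_tilde*f(t)
#   theta_tilde' = -gamma*psi2*f(t)
import math

def run(f, T=60.0, dt=1e-3, k2=2.0, gamma=2.0):
    """Integrate the subsystem and return the final parameter error."""
    psi2, theta_tilde, t = 0.0, 1.0, 0.0
    while t < T:
        ft = f(t)
        dpsi2 = -k2 * psi2 + theta_tilde * ft
        dtheta = -gamma * psi2 * ft
        psi2 += dt * dpsi2
        theta_tilde += dt * dtheta
        t += dt
    return theta_tilde

pe = run(math.sin)                       # persistently exciting regressor
non_pe = run(lambda t: math.exp(-t))     # vanishing regressor: no excitation

print(abs(pe), abs(non_pe))              # |pe| is tiny; |non_pe| stays large
```

In both runs the tracking error psi2 dies out, but only the persistently exciting regressor drives theta tilde to 0; with the decaying regressor, the parameter error simply freezes at some nonzero value, which is exactly the "theta tilde · f goes to 0 tells you nothing" situation above.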
So polynomials in x are allowed — x, x squared — and that is why I was very careful: assume f(x,t) is bounded if the states are bounded; that allows polynomials. If you just say f is bounded, then I am only allowing sinusoids and that kind of trigonometric function, which is pretty sad — from the space of all analytic functions I would have gone down to sines and cosines. I would be significantly strengthening my requirements and shrinking the set of functions I can work with. So that is the idea. And the assumption that the function never passes through 0 — even sin x does not satisfy that. So you can see it is not that easy. On the other hand, sin x is persistently exciting, so I can tell you the parameters will converge: if f(x,t) = sin x, the parameters will converge, because it is persistently exciting — delta-persistently exciting, in this case. All right. So that is basically adaptive control for you in a nutshell. There are of course many, many more cases and so on — I have taught an entire semester on this, and will probably do so next semester too. But that is essentially adaptive control in a nutshell. We will do some more new, modern control in the subsequent lectures. Any questions? No? Okay, we will stop here. Thank you.