Hello all. Welcome to yet another week in our NPTEL course on nonlinear and adaptive control. I hope you have been enjoying the journey with me through the first week, and we hope to make this week even more exciting as we delve deeper into what goes on behind adaptive control. We of course start this week again by looking at this very nice representative image, which is that of a rover on Mars. And the hope, as always, is that we can develop algorithms that will drive systems such as these for autonomous motion. So this is Srikanth Sukumar from Systems and Control, IIT Bombay. So without delaying any further, let me go into our lectures. In fact, before I move forward into the lectures for this week, a quick recap of what we saw in the last week. We spent our time last week developing a lot of the structure that is required to study adaptive control, or nonlinear control for that matter. What is this structure? It is things such as norms, vector spaces, inner products, induced norms, supremums, notions of convergence, and so on and so forth. That is what we studied in the last week. This week, we go into a little more detail of the mathematics that is prevalent in adaptive control, and the title of this week's lectures is basically Barbalat's Lemma. This is a rather important tool that we will learn this week, and by the end of the week we will also see how this powerful tool can be used to analyze convergence in adaptive control. So without delaying any further, let me go into what these important lemmas are. We have a few preliminary results that are required before we can even talk about things like Barbalat's Lemma. That is really the idea here.
So what are these preliminary lemmas? The first one is Lemma 1.1, which concerns a scalar-valued function; by a scalar-valued function, we mean a function which takes a real number and outputs another real number. That is the only property we have specified for this function, and usually for us the real number that is input is time, more often than not. Now, suppose we have some additional properties on this function f: that f is bounded from below and f is non-increasing. What does it mean for a function to be bounded from below? It means that f of t is greater than or equal to some constant, let me denote it by f under bar, for all time greater than 0. So for all time greater than 0, f of t is lower bounded by an f under bar; there has to exist such an f under bar. All right. And further, f is non-increasing. Notice that we have not said anything about the differentiability of f. Whenever we talk about non-increasing, the first thing that pops into your mind is to take the derivative and see if it is negative or something like that. But please do not lean into that temptation, because we are not saying anything about the differentiability of f, or even the continuity of f. The notion of non-increasing can still be defined without any requirement of continuity. So suppose I have a plot of this function that looks like this. Since it is non-increasing, it can never increase: it can either stay constant or dip. So if you look at this, this is the vertical axis.
So the function I have actually shown you is not even a continuous function. Why not? Because if you look at this piece and this piece right here, there is a distinct jump; therefore this is not a continuous function. However, it is still a non-increasing function, because its value either stays constant or goes down; it never actually increases. So this f of t is a non-increasing function. Now, if these two conditions are satisfied, that is, if it has a lower bound, say represented by something like this, this is the lower bound f under bar, and further it is non-increasing like this, then f of t has a finite limit as t goes to infinity. From this picture itself, it should give you some indication of what this finite limit is going to be. However, this is left as an exercise. This result is found in the book on adaptive control by Ioannou and Sun, so I encourage you to look at the proof there and actually see what this finite limit is. What do you think this finite limit is going to be? One thing is for sure: we have already seen the notion of a supremum, and I encourage you to look at the notion of an infimum, which is simply the opposite notion. If a function has a lower bound, that is, it is bounded below, it means that the function values definitely have an infimum. You can sort of see that the function is going towards the lower bound. But a lower bound may not necessarily be the infimum; we have already seen examples of this. For example, if I take a set of the form open interval 0 comma 1, the infimum of this set is 0, just like the supremum is 1. We already saw an example of this. But if I look at lower bounds, then minus 1, minus 2, et cetera, are all lower bounds.
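As a quick numerical aside, here is a small sketch of Lemma 1.1 in action (this example is my own, not from the lecture): a non-increasing function bounded below by 1, with a downward jump to emphasize that continuity is not assumed, whose values settle down to a finite limit, which in this example coincides with its infimum.

```python
import math

def f(t):
    # Non-increasing, bounded below by 1, with a downward jump at t = 2:
    # Lemma 1.1 does not require continuity, so the jump is allowed.
    jump = 0.5 if t < 2 else 0.0
    return 1.0 + math.exp(-t) + jump   # decays toward the lower bound 1

ts = [0.01 * k for k in range(10000)]
vals = [f(t) for t in ts]
assert all(a >= b for a, b in zip(vals, vals[1:]))   # never increases on the grid
assert all(v >= 1.0 for v in vals)                   # bounded below by 1
print(f(50.0))   # essentially the limit, which here equals the infimum 1
```

The checks only sample a grid, of course, but they illustrate the two hypotheses and the resulting finite limit.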
These are all lower bounds. All right. Therefore, a lower bound is not necessarily the infimum. It is an important distinction: just as the supremum is not merely an upper bound but the least upper bound, the infimum is not merely a lower bound but the greatest lower bound. So I want you to work out what this finite limit is going to be, and this picture is a pretty good indicator of what is going to happen. But the important thing to remember for us is this: if we have a scalar function which is lower bounded and non-increasing, then it has a finite limit as t goes to infinity. All right. Great. The second lemma, Lemma 1.2, is that if a scalar-valued function, again of the same kind, is such that its derivative is in L infinity, that is, the derivative is bounded. Remember, membership in L infinity essentially implies boundedness; we saw this at the end of last week, that a signal being in L infinity implies boundedness of the signal. All right. Moving forward, I am going to label this as week two, lecture one; sorry, I didn't do that earlier. So if we have a scalar-valued function such that its derivative is bounded (L infinity and bounded are identical notions here), then the function f is uniformly continuous. Very, very important, and we will regularly use this notion of uniform continuity. We have not spoken about it in this course; however, it is expected that you will know what continuity and uniform continuity are. In fact, it is given as an exercise that you need to define uniform continuity. Uniform continuity is a rather special version of continuity itself. Continuity means, very vaguely speaking, that there are no gaps.
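Since the definition is left as an exercise, here is one standard way to write it, together with a one-line mean value theorem argument for Lemma 1.2 (this sketch additionally names a bound M on the derivative, which is what f dot in L infinity gives us):

```latex
% Uniform continuity: a single \delta works at every point of the domain.
\forall \varepsilon > 0 \;\; \exists \delta > 0 : \quad
|t_1 - t_2| < \delta \implies |f(t_1) - f(t_2)| < \varepsilon .
% Sketch for Lemma 1.2: if |\dot f(t)| \le M for all t, the mean value theorem
% gives, for some \xi between t_1 and t_2,
|f(t_1) - f(t_2)| = |\dot f(\xi)| \, |t_1 - t_2| \le M \, |t_1 - t_2| ,
% so \delta = \varepsilon / M works independently of where t_1 and t_2 lie.
```

The key point is that delta depends only on epsilon and M, not on the location of the points; that is exactly what distinguishes uniform continuity from plain continuity.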
Of course, you have very specific epsilon-delta definitions, which say that continuity implies that for a small change in the argument, the function value does not have very dramatic changes. That is what it means for a function to be continuous. Uniform continuity is a further specialization of this notion: it means that the continuity is not affected by time, that is, by the argument itself. I encourage you to look up this definition, write it up, and understand it, because we will be regularly using it. So we are saying that if a scalar-valued function is such that its derivative is bounded, then the function is uniformly continuous. All right. Great. Of course, it should be evident to you that if the derivative is bounded, then f is at least a C1 function, that is, once continuously differentiable, because otherwise I cannot even speak of the notion of f dot. So because I have used f dot, f is C1. This is a bigger assumption than what we had in Lemma 1.1: there, I did not even have to assume continuity, but in Lemma 1.2 I am assuming that the function is differentiable. All right. Great. Now let us look at an example; we saw this example in the last week as well, but I repeat it. If I have a vector-valued function x of t defined by sine t and cosine t, then the two-norm is simply equal to one. I am not actually showing the computation; we did this in the last lecture of the previous week, so I am not going into too much detail. So the vector norm is one. Therefore the L infinity norm, which is simply the supremum of the vector norm, is just the sup over t of the norm of x of t.
That's just one, because the supremum over all time of a constant is the same constant; the time argument doesn't really show up here, because the norm is just one for all t. Of course, any vector norm could have been used, but we choose the two-norm simply because it is easy to compute and gives a very nice, simple result. Okay. Great. Then we move on to the key lemma, the very, very key lemma, which is Barbalat's lemma. Before we move on to it, I hope you have committed to memory these two lemmas, Lemmas 1.1 and 1.2. That is: whenever I have a scalar-valued function which is non-increasing and lower bounded, it is bound to have a finite limit as t goes to infinity; and whenever a scalar function has a bounded derivative, it is uniformly continuous. Notice that the second one is a sufficiency condition: it does not say that if the function is uniformly continuous, then the derivative is bounded, nothing like that. It just says that if the derivative is bounded, then the function is uniformly continuous. All right. So let's see some basic examples. If I take a function f of t as sine t, then f dot of t is cosine t, and this is of course a bounded function. What does it imply? It implies that f of t is uniformly continuous. On the other hand, if you look at something like f of t equals t squared, then f dot of t is 2t, and this is not bounded, because it goes to infinity as t goes to infinity. For some window of time it is bounded, of course, but not for all time. So this implies that f is possibly not uniformly continuous. Possibly. I use the word possibly. Why?
Because the converse is not part of the claim. If f dot is bounded, then uniform continuity is guaranteed; but if f dot is not bounded, it does not mean that f is not uniformly continuous. It could still be. In this particular case, it is in fact possible to verify that t squared is not uniformly continuous, and it is not very difficult to do so. However, that cannot be claimed in general: the lemma alone only lets us say that f of t equals t squared is possibly not uniformly continuous; it cannot settle the question either way. Okay. Great. So once you have these two lemmas committed, where the first one gives you something on convergence of certain special functions and the second one tells you something about uniform continuity of certain functions, we can move on to Barbalat's lemma. Barbalat's lemma is a very simple-looking but quite amazing result, because the advent of this result is what made analysis in adaptive control possible. Before this result came up, folks knew about Lyapunov analysis; we have not yet studied Lyapunov analysis, but folks had known about it since the late 1800s, since the advent of Lyapunov's work. But as soon as adaptive control designs came into play, and you will see this later on in some examples, it was not possible to analyze convergence of adaptive systems using just Lyapunov methods. And so everybody was stuck: an algorithm was found, it seemed to be working fine in examples, however there was no way to prove convergence. And you should understand what these words mean by now. So Barbalat's lemma is what enabled adaptive control as a field to move ahead; adaptive control evolved only because of the advent of Barbalat's lemma, I might say. So what is this Barbalat's lemma? Very celebrated, but very simple in its statement.
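Before the statement, let me quickly check the two examples just discussed numerically (a sketch of my own, not from the lecture). The idea: fix a small gap delta and watch how much f can change over that gap. For sine the change stays below delta no matter how large t is; for t squared it grows with t, which is exactly the failure of uniform continuity.

```python
import math

delta = 1e-3   # a fixed, small change in the argument

for t in (1.0, 100.0, 10000.0):
    # For sine, the change over the gap is at most delta (since |cos| <= 1),
    # no matter how large t is: this is uniform continuity at work.
    gap_sin = abs(math.sin(t + delta) - math.sin(t))
    # For t^2, the change is 2*t*delta + delta**2, which grows with t:
    # no single delta works at every t, so t^2 is not uniformly continuous.
    gap_sq = abs((t + delta) ** 2 - t ** 2)
    print(f"t = {t:>7}: sin gap {gap_sin:.2e}, t^2 gap {gap_sq:.2e}")
```

Note that this only illustrates the failure; the actual proof that t squared is not uniformly continuous is the epsilon-delta argument suggested by the second comment.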
Barbalat's lemma, like I said, helps with convergence analysis, so you can imagine it is a convergence result. And we are going to very carefully look at what it says. So Barbalat's lemma in integral form, as it says here, which is what we call the original Barbalat's lemma, says the following. Suppose I have a function f which takes a scalar input and outputs a vector or a scalar; anything is possible, n can be one, two, three, anything. The scalar input is typically time again; we are always talking about time as an argument here, because we are talking about convergence in time. As time goes to infinity, good things happen and signals go to zero; that is really what we want to prove again and again in this course, or for that matter in most nonlinear systems courses. So we consider a function which takes a scalar input, time, and gives out a vector, such that the signal is integrable. What does it mean for a signal to be integrable? It means that the limit as T goes to infinity of the integral from zero to T of f of sigma d sigma exists and is finite. So there is a definition inside the lemma: a function is integrable if the limit as T goes to infinity of the integral from zero to T of f of sigma d sigma exists and is finite. As you can see, integrability is an interesting property. If you think about it, it is somehow connected to your L1 norm. But I am not taking any norm here; there is no norm, it is just f of sigma d sigma, so I am integrating component-wise. Still, it looks like an L1 condition. When do I say that a signal is L1?
I say x is in L1 if the integral from zero to infinity of the norm of x of t, dt, is less than infinity; that is the notation for: it exists and is finite. So the two conditions look very similar. They are not the same, because of the norm, but they are similar. All right. So the first condition on this function, which takes time as an argument and gives out a vector, is that it is integrable. And the second condition is that f is uniformly continuous. Remember, we already talked about uniform continuity, and it is already showing up in Barbalat's lemma. So: if a function is integrable and uniformly continuous, then it converges to zero as t goes to infinity. A rather powerful result. In order to indicate to you how strong this result is, I will try to construct an example which has only one of these properties and not the other one, and then we will see what happens to the convergence of the function. So I have to make this picture very carefully. I draw the axes like this: the vertical axis is f of t, the horizontal axis is time, and this is zero. Now I make markings: one, two, three, four, five, six, and so on. And then I carefully construct this very neat-looking function. I want to make it integrable; that is the idea. I am trying to make it into an integrable function. How am I doing this? At the argument one, let me see which way I am going; am I making it taller? Let me give this a shot.
So at time one, the height is one, and the width is, say, also one, like this. Similarly, at an integer h, the height is h; so this is denoting height h. For the width, I first said one by h, but let me instead take it as one by h cubed, to make a very specific case. I know this picture is not very representative, so let me try to make it representative: this width is one by h cubed. All right. Now, if I try to integrate this function from zero to infinity, it is really the sum of the areas of these triangles. So the integral from zero to infinity of f of t dt is equal to the summation over h from one to infinity of half times base times height, that is, half times one by h cubed times h. So what do I get here? I get the summation from h equal to one to infinity of one over twice h squared. And this, as a lot of you would know, is a finite sum. Excellent. So what have I just shown? I have shown that this function f of t is actually integrable. But what can I say about its limit as t goes to infinity? Anyone? What is going to be the limit as t goes to infinity? It's amazing: the function has no limit as t goes to infinity. Why? Because at the integer times you are sitting at the peak of one of these very, very tall and very, very thin triangles, and the peak heights h grow without bound, so the function keeps attaining arbitrarily large values. There is no limit, in fact. And why does Barbalat's lemma not apply? Because this function is not uniformly continuous; it is not difficult to actually verify that it is not uniformly continuous.
And so both these assumptions of Barbalat's lemma are actually very tight. It seems like it is only a sufficient condition, a one-way result, but these are pretty tight requirements: if one of them does not hold, then in general you no longer have convergence as t goes to infinity. Okay. Great. So what have we seen today? We essentially talked about a couple of important lemmas on convergence and uniform continuity, and then we looked at one of the most critical lemmas in adaptive control, that is, Barbalat's lemma. We will look at a corollary of this next time and then see how this lemma can be put to use. All right. We'll stop here. Thank you.