Welcome to another session of the NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. You can again see the representative background image that has been ubiquitous in our course. It is an amalgamation of sensors, robotics, actuation and algorithms, the sort of algorithms we seek to design that will drive autonomous rovers such as these on Mars, the Moon and so on. So, without further ado, we go into our lecture.

Last time we looked at an introduction to adaptive control. We saw what the building blocks mean: there is a state-space model block, there is a controller block, and in addition there is an adaptive control block, whose purpose is to estimate the unknown parameters in the system.

As we move forward, one thing we need to remember is that a lot of what we do in adaptive control, and in nonlinear control in general, is asymptotic analysis. That is, we try to see what happens to different signals and functions as time goes to infinity. This is one of the key aspects of what we are going to do as we move further along in this course. Now, there are a few pitfalls on this journey. The first concerns the convergence of a function versus the convergence of its derivative, and that is what we want to address first in today's lecture.

Suppose I consider a function and I know that it converges to a constant as t goes to infinity. This does not automatically imply that the derivative of the function goes to zero. Intuitively we tend to assume it should: if a function converges to a constant, then the derivative goes to zero; if a function goes to zero, the derivative goes to zero. All of this somehow seems to make sense in our minds, but it is not true, and this is the first pitfall. So we quote a counterexample; of course, many counterexamples are possible, and we quote one of them. Take the function sin(t squared) divided by t, and consider its limit as t goes to infinity. Look carefully at what happens. The numerator keeps oscillating between minus one and one, no matter how much time has elapsed; plug in a very large number and sin of t squared still oscillates between minus one and one, so the numerator by itself never reaches a limit. The denominator, however, goes to infinity as t becomes very large. So the denominator is becoming very large while the numerator stays between minus one and one, and it does not matter what happens to the numerator.
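Written out, this is the standard squeeze argument; the block below records exactly what was just said, nothing more:

```latex
f(t) = \frac{\sin(t^2)}{t}, \qquad
|f(t)| = \frac{|\sin(t^2)|}{t} \le \frac{1}{t} \to 0
\quad \text{as } t \to \infty,
\qquad \text{so} \quad \lim_{t \to \infty} f(t) = 0.
```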
Taking the limit as t goes to infinity, we get zero. So our function converges to a constant, which is zero. This is one of the things we need to remember. Now let's look at the derivative of this function. It is a very standard computation using the product rule and the chain rule: the first piece is the derivative corresponding to the 1/t factor, and the second piece is the derivative corresponding to the numerator. It is very easy to compute, and I really encourage you to do the computation yourself rather than believe me.

Now look at what happens to each term as we take the limit on both sides. The first term, minus sin(t squared) over t squared, again looks very much like the original function: sin(t squared) oscillates between plus and minus one, but the denominator goes to infinity, in fact faster than 1/t. So this term definitely goes to 0, no problem. What happens to the second term, though? Note that the second term has nothing in the denominator; it is only a trigonometric function, cos(t squared), scaled by two. That is not so great anymore, because cos(t squared) just keeps oscillating and never reaches a limit. And this should bother us, to be honest, because it means that f dot of t has no limit as t goes to infinity. So the seemingly counterintuitive thing has come true: although the function itself converges to a constant, 0 in this case, the derivative does not converge to anything at all. This should be baffling.

The important thing to note is that we did not take a badly behaved function. It is not non-smooth, it is not discontinuous or anything like that, except at t equal to 0; the function is in fact C-infinity, that is, infinitely many times continuously differentiable, everywhere except at t equal to 0. For those who have never seen the notation C-infinity, I strongly encourage you to look it up and familiarize yourself with it, because it will show up regularly in our course. So we did not choose a poorly behaved function to prove our thesis that a function converging to a constant does not mean its derivative converges to 0. We picked a relatively nice function, and still the derivative continues to oscillate between minus 2 and 2 and does not go to 0.
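For reference, the derivative computed with the product and chain rules is as follows; the two terms are exactly the ones just discussed:

```latex
\dot{f}(t)
= \frac{d}{dt}\left[\frac{\sin(t^2)}{t}\right]
= -\frac{\sin(t^2)}{t^2} + 2\cos(t^2).
```

The first term goes to 0 as t goes to infinity, while 2 cos(t squared) keeps oscillating between minus 2 and 2, so f dot has no limit.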
Interestingly, the converse does not hold either. What is the converse? If the derivative goes to 0, can we say that the function itself goes to a constant? This is again an important question. We again claim that, no, this is not true, and we present a counterexample. (Please ignore the crossed-out expression on the slide; I have deliberately cancelled it because it is erroneous.) Look at this example: take the function f(t) = log(t), the logarithm of t. Then the derivative is 1/t; we all know the derivative of the logarithmic function is simply 1/t. And we know very well that the limit of f dot of t as t goes to infinity is 0, because the denominator goes to infinity. On the other hand, if I take the limit of f(t) itself, bad things happen: as t goes to infinity, the logarithm also goes to infinity, perhaps slower than t, but it still goes to infinity. So the limit of f(t) is in fact infinity. That is bad; it is not even a bounded function, and yet its derivative goes to 0.

Again, note that this function is nicely behaved everywhere except at the origin. Remember that in asymptotic analysis we are only interested in the behavior at very large time, not in small-time behavior, so the function not being well defined at the origin should not matter so much; we can always tweak the function so that it is well behaved at the origin and everything still goes through. How would I do that? Very simple: instead of log(t), take log(t + 1). Now the derivative is 1/(t + 1), and I have exactly the same result: 1/(t + 1) goes to 0 as t goes to infinity, while log(t + 1) blows up. This version is very well behaved at the origin, and since time is always considered non-negative, starting at 0, I have no problem: it is a perfectly valid, nice function on the domain of interest. So I have a function f(t) that is nicely behaved at the origin and everywhere beyond t = 0, and it still does not satisfy the intuitive idea that if the derivative converges to 0 then the function converges to a constant.

So I really want all of you to drive this notion out of your minds: that a derivative converging to 0 means the function goes to a constant, and vice versa, that a function converging to a constant means the derivative goes to 0. Where does this lacuna arise? It arises because we are not looking at the function itself; we are looking at the limit of the function. If the function is a constant, then obviously the derivative is 0, and vice versa; that is obvious. But we are saying only that the function converges to a constant, and in all of this discussion it is the limit that somehow messes things up.
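The tweaked counterexample, written compactly:

```latex
f(t) = \ln(t + 1), \qquad
\dot{f}(t) = \frac{1}{t + 1} \to 0
\quad \text{as } t \to \infty,
\qquad \text{yet} \quad \lim_{t \to \infty} f(t) = \infty.
```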
So this is very critical; please keep it in mind. I strongly encourage all of you to come up with more counterexamples. I know you can, so give it a shot, taking hints from what I have done. The more counterexamples you construct, the more you will convince yourself, and you will also develop the habit of constructing counterexamples. In applied mathematics, and in mathematics generally, coming up with counterexamples is one of the most challenging things. A lot of mathematicians actually write papers consisting of counterexamples to results that somebody else seems to have proven, just to show that the result is probably not correct. So you can make a business out of coming up with counterexamples. It is an interesting exercise, and I really recommend that all of you try it.

Now that we have got rid of some myths, some temptations in asymptotic analysis, we come to the next important set of notions, notions that are really critical to what we want to study: norms. There are two things we are interested in. First, the fact that we do asymptotic analysis means that limits and convergence become really important for us. Second, we are always dealing with vectors and signals, because the states, the controls, the outputs, everything is a vector. So we are always working in some kind of vector space; at least for the purposes of this course these are vector spaces. There may be more complicated structures in other courses, but in this course we are dealing with vector spaces, and hence all the objects we deal with are vectors.

Since we have vectors, and matrices which operate on these vectors, it is important for us to have a measure of the size of these vectors: we eventually want a lot of these vectors to go to zero, or we want one vector to dominate another, and we want a sense of how such notions can be created. In the real numbers it is very easy to say that three is greater than two. But if I give you vectors, say (3, 1) and (-1, 5), can you tell me which one is greater and which one is less than the other? There is no obvious way of doing this. Norms give you one such way, because a norm reduces any vector to a scalar. I am not saying this is the only way, but it is one particular way, and it in fact gives us the ability to do a lot of mathematics with vector spaces.

So, like I said, well, I did not say it yet, but a norm is essentially a measure of length in a vector space; keep this in mind. Of course, we have a more formal definition. In our case a norm is a function from Rn to R: we are keeping things a little simple and working only with real vector spaces. The notation is the two vertical brackets on either side of the vector; get used to it if you have not seen it before, though I would expect most of you have seen some version of it. A function is a valid norm if it satisfies four properties. The first is that it is non-negative. The second is that it is 0 only if the vector itself is the 0 vector. The third is the scalar multiplication property: the norm of alpha x equals the modulus of alpha times the norm of x, for any scalar alpha. The final and probably most critical property is the triangle inequality: the norm of x plus y is less than or equal to the norm of x plus the norm of y.
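Collecting the four properties in symbols, for all x, y in Rn and all scalars alpha:

```latex
\|x\| \ge 0, \qquad
\|x\| = 0 \iff x = 0, \qquad
\|\alpha x\| = |\alpha|\,\|x\|, \qquad
\|x + y\| \le \|x\| + \|y\|.
```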
These are the standard properties for norms. Any function you can come up with that maps from Rn, your vector space, to the real numbers and has these properties is a valid norm.

Now, what are examples of valid norms? The commonly used ones are, first, the infinity norm, which is the maximum absolute value among the elements of the vector, and then the p-norm: take the absolute value of each component to the power p, sum them up, and take the p-th root. These are rather important norms, so let us compute them for an example. Take the vector x in R4 whose four elements are 3, 2, 7, 5. What is the infinity norm? It is the largest component in absolute value; it should be obvious that 7 is the largest, so the infinity norm is 7. What about some useful p-norms? The two norm is the Euclidean distance, the distance measure all of you are used to. How do you compute it? Take the absolute value of every element (in the example I have chosen everything is positive, so this changes nothing), raise to the p-th power, squares in this case, sum them up, and take the square root. I have not shown the answer, but it should be straightforward for you to compute. Finally, the one norm is very similar: take the absolute values, add them up, and take the first root, which is to do nothing, and you get 3 + 2 + 7 + 5 = 17. So these are some key vector norms, examples that will be very useful as we go forward.
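If you want to verify these numbers, here is a minimal sketch in Python with NumPy; the vector and the three norms are exactly the ones from the example above.

```python
import numpy as np

x = np.array([3, 2, 7, 5])

# Infinity norm: largest component in absolute value -> 7
print(np.linalg.norm(x, ord=np.inf))   # 7.0

# Two norm (Euclidean): sqrt(3^2 + 2^2 + 7^2 + 5^2) = sqrt(87)
print(np.linalg.norm(x, ord=2))        # approximately 9.327

# One norm: sum of absolute values -> 3 + 2 + 7 + 5 = 17
print(np.linalg.norm(x, ord=1))        # 17.0
```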
We also have the notion of a matrix-induced norm. Like I said, we have vectors, and we have matrices that operate on these vectors, so we also need to define a norm for matrices. The way we do this is to use the vector norm itself to generate the matrix norm. This is very normal in mathematics: whenever you want to solve a new problem, you try to get motivated by a previous or simpler problem that has already been solved. Here we already have the notion of a vector norm, so a typical mathematician would think: why not use the vector norm itself to generate a matrix norm? Smart. So what is it? The matrix-induced norm, and that is why it is called the induced norm, because it is induced by a vector norm, simply measures the maximum magnification of any vector by the matrix. That is why you see this definition: it is the supremum, which is a generalization of the maximum, taken over all nonzero x in Rn, that is, over all possible vectors, of the norm of Ax divided by the norm of x. It is measuring the magnification, due to the matrix A, over all possible vectors in the vector space.

It is also important to note that A need not be a square matrix; you can define an induced norm for any matrix. The interesting thing is that the numerator then involves a vector of one size and the denominator a vector of a different size, and it is only because of the norm that you can compare the two. Again, you should become conversant with the power of the norm: R2 and R4, for example, are two different vector spaces, but just because you have the norm operation, you are able to compare across them. If you have some idea of eigenvalues and eigenvectors, you can make some guesses as to what the maximum magnification will be; I will leave you to try to figure out this maximum magnification of a matrix. The important thing to note is that depending on which vector norm I choose, say a p-norm, and of course I have to use the same type of norm in the numerator and the denominator, I get the corresponding p-induced matrix norm. It is as simple as that.
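Here is a small numerical illustration of induced norms, again a sketch assuming NumPy; the particular matrix A is just an example I have made up, not one from the lecture. NumPy's matrix norms with ord equal to 1, 2 and infinity are precisely the norms induced by the corresponding vector norms.

```python
import numpy as np

# A non-square matrix mapping vectors in R^3 to vectors in R^2
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

# Induced 2-norm: the largest singular value of A
print(np.linalg.norm(A, ord=2))
print(np.linalg.svd(A, compute_uv=False)[0])   # same value

# Induced 1-norm: maximum absolute column sum -> max(1, 3, 3) = 3
print(np.linalg.norm(A, ord=1))

# Induced infinity-norm: maximum absolute row sum -> max(3, 4) = 4
print(np.linalg.norm(A, ord=np.inf))

# Monte Carlo check: the magnification ||Ax|| / ||x|| never
# exceeds the induced 2-norm, for any nonzero x
rng = np.random.default_rng(0)
ratios = [np.linalg.norm(A @ v) / np.linalg.norm(v)
          for v in rng.standard_normal((1000, 3))]
print(max(ratios) <= np.linalg.norm(A, ord=2) + 1e-12)   # True
```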
Now, before discussing the supremum and so on, we want to understand the structure that a norm gives to a vector space. This is something very critical. I am of course assuming that all of you have seen vector spaces or some variant of them; if you have not, this is something you really should see in a linear systems course, a state-space linear systems course, because whenever we talk about linear systems we are saying that the system evolves on a vector space. The idea is that of spaces, subspaces, planes and hyperplanes: a vector space is essentially a generalization of these, a space where the superposition principle is satisfied. I am being a little vague about it, but I expect all of you know what a vector space is, because otherwise you cannot follow what a normed linear space is. Whenever I say linear space, note that linear space and vector space are identical; linear space, linear vector space and vector space are used almost interchangeably.

So what is a normed linear space? A normed linear space is simply a linear vector space with an associated norm. That's it. If you have a vector space and a norm on it, then the two together, denoted as the pair (X, ||.||), form a normed linear space. The good thing is that most of the spaces we work with, Rk, Rp and so on, are all normed linear spaces: Rn with the infinity norm, Rn with the one norm, Rn with the two norm, all of these are normed linear spaces. Just the fact that you are able to define a norm on the space makes it a normed linear space. Remember this; it is a rather nice notion, and we will continue to use it on a regular basis.

So, to summarize what we looked at today. First, some pitfalls in asymptotic analysis that we want to avoid. Next, the fact that we deal with vectors, because states, controls and outputs are all vectors, and we want to look at what happens to the sizes of these vectors as time evolves; to make sense of these notions, and to compare two vectors if we want to, we defined vector norms. Matrix norms are also critical, because matrices eventually operate on these vectors and we want to assess how this operation affects a vector; we therefore took the liberty of defining a matrix norm using a vector norm, and this is called the induced norm. The matrix-induced norm has some nice properties, which we will look at next time. We also saw that a norm on a vector space gives us a rather nice structure. As of now we are simply making definitions, but a normed linear space is a rather serious object: it carries a serious amount of structure. First of all, you have a vector space, which is linear in the sense that you are working with hyperplanes and the superposition principle holds. On top of that, you have a notion of distance on this vector space, which becomes really useful in all the mathematical analysis we are keen on exploring. Even the matrix-induced norm we have developed makes the space of matrices a normed linear space. We do not discuss it at this stage, but as an additional tidbit, the space of n-by-n matrices together with the induced norm is also a normed linear space. With that, we will stop.