Hello everyone. Welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. We are again in front of our very nice background image of this rover on Mars, which is essentially an autonomously operating device, and we hope that we will soon be able to analyze and design algorithms that drive systems such as these rovers. What we were looking at last time was the notion of function classes. We had begun our discussion of Lyapunov's direct method, and leading up to it we first required the notion of function classes, of which we defined three. Beyond that, we started speaking about notions of definiteness, and we defined the first such notion: positive definiteness of a function. So let us look at where we go from there today. This is lecture 4.1; we are now into our fourth-week lectures. Recall how we defined positive definiteness: it requires a couple of things. First, you have a scalar-valued continuous function V(t, x) which takes two arguments, the time and the state. Then we need V(t, 0) = 0 for all times t. Further, we required V to dominate a class K function in some local region, that is, in some ball around the origin, and for all time. We made a very nice illustrative image to indicate that although V dominates a class K function, V itself need not be a strictly increasing function; in fact, it can simply cross over this class K function beyond a certain radius, or beyond a certain bound R on the states. The first thing we want to do today is connect what we learned about definiteness of matrices to definiteness of functions. Because we are using the same terminology, one wonders whether these two are indeed connected or not. All right.
First, we saw that positive definiteness of symmetric matrices has three equivalent conditions. The first is the quadratic-form condition: x^T A x has to be positive for all nonzero states x. Then you have the eigenvalue condition: all eigenvalues of A have to be strictly positive. Notice that because A is a symmetric matrix, all its eigenvalues are in fact real; in fact, every symmetric matrix is diagonalizable. And further, all leading principal minors of A have to have a positive determinant. So let us look at a function construction based on this. Using this quadratic form, suppose I define a function V(t, x) = t x^T A x. If you notice, all I did was take the same quadratic form and multiply it by t. One thing is obvious: this is continuous, and it takes both state and time and maps them to a real number, since x^T A x is a real number and A is an n-by-n matrix. The question that we want to ask ourselves is: if A is a positive definite matrix, is the function V also positive definite? The quick answer is yes; I don't want to keep any suspense, and it should be obvious to a lot of you. So the first thing I want to show is that V is greater than or equal to a suitable time-independent function, but let me be careful with the factor t here. In example two earlier I used t, but it would actually be better to take something like e^t.
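As a quick aside, the three equivalent matrix conditions above can be checked mechanically. Here is a minimal, hedged Python sketch of the leading-principal-minors test (Sylvester's criterion); the example matrices, function names, and the small cofactor-expansion determinant are my own constructions for illustration, intended only for the small matrices appearing in this lecture.

```python
# Sketch: checking positive definiteness of a symmetric matrix via
# Sylvester's criterion (all leading principal minors strictly positive).
# det() is a naive cofactor expansion -- fine for 2x2/3x3 lecture
# examples, not meant for large matrices.

def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def is_positive_definite(A):
    """Sylvester's criterion for a symmetric matrix A: every leading
    principal minor must have strictly positive determinant."""
    n = len(A)
    return all(det([row[:k] for row in A[:k]]) > 0.0 for k in range(1, n + 1))

A = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 1 and 3: positive definite
B = [[1.0, 2.0], [2.0, 1.0]]   # eigenvalues -1 and 3: not
print(is_positive_definite(A))  # True
print(is_positive_definite(B))  # False
```

For the symmetric matrices in this lecture, this test agrees with the eigenvalue condition, which is exactly the equivalence stated above.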
Because if I take the factor t, the product is not necessarily greater than or equal to a class K function: at t = 0, the whole expression vanishes, and that is a problem. Or else, of course, I can do something simpler in this case: I will define V(t, x) = (t + 1) x^T A x. Then V(t, x) is greater than or equal to x^T A x for all t ≥ 0. All right, so I have this to be true, and I already have a time-independent function on the right-hand side. The only question I want to ask is whether this x^T A x dominates a class K function or not. Now, one thing I know for sure is that for a symmetric matrix A, I can write the quadratic form as some y^T Λ y, where Λ is the diagonal matrix of eigenvalues of A. This should be evident to us: it is the standard spectral (eigenvalue) decomposition of a symmetric matrix. So what do I know about x^T A x? Writing A = M^T Λ M, where M is the eigenvector matrix of A, we get x^T A x = x^T M^T Λ M x = (M x)^T Λ (M x). And of course M^{-1} = M^T in this symmetric-matrix case: the inverse actually exists and is in fact equal to the transpose. So you have (M x)^T Λ (M x).
So what do I know? Let me call y = M x; then this is equal to y^T Λ y, which is simply the summation over i of λ_i y_i², where the small λ_i are the eigenvalues of A. Now, because A is positive definite, I know each λ_i is strictly positive. So this is essentially a sum of quadratics, just like what you have in a quadratic form, though not in the original states but in some transformed states. Notice this transformation is very nice because M is an invertible matrix, and through this transformation I get the summation of λ_i y_i². And I claim that x^T A x in fact dominates a class K function in the original variables also. I am not going to do the rest of the math here, because I have an easier test for positive definiteness, which I will come to soon. But my claim is that x^T A x belongs, in this sense, to class K: I can already see that it is a sum of positively weighted quadratics in y, and because the transformation is invertible, it is equivalently bounded by a sum of quadratics in x. And you know that a sum of quadratics such as x_1² + x_2² gives a class K function of the norm. So this argument is not complete, and I know that you are not yet convinced that this is a positive definite function. But you see that if I am given a positive definite matrix, I have positive eigenvalues.
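The identity just derived, x^T A x = Σ_i λ_i y_i² with y = M x, can be checked numerically on a concrete case. Below is a hedged Python sketch using a hand-worked 2x2 example: the matrix A, its eigenvalues 1 and 3, and the orthonormal eigenvector rows of M are my own choices, computed by hand, not something from the lecture slides.

```python
import math

# Numerical check of the lecture identity: for symmetric A = M^T L M
# with orthogonal M (rows = unit eigenvectors), the quadratic form
# x^T A x equals sum_i lambda_i * y_i^2 in coordinates y = M x.

A = [[2.0, 1.0], [1.0, 2.0]]
lams = [1.0, 3.0]                        # eigenvalues of A (by hand)
s = 1.0 / math.sqrt(2.0)
M = [[s, -s], [s, s]]                    # rows are unit eigenvectors

def quad_form(A, x):
    """x^T A x for a 2x2 matrix."""
    return sum(A[i][j] * x[i] * x[j] for i in range(2) for j in range(2))

x = [0.7, -1.3]                          # an arbitrary test state
y = [M[0][0] * x[0] + M[0][1] * x[1],    # y = M x
     M[1][0] * x[0] + M[1][1] * x[1]]

lhs = quad_form(A, x)
rhs = lams[0] * y[0] ** 2 + lams[1] * y[1] ** 2
print(abs(lhs - rhs) < 1e-12)  # True: the two expressions agree
```

Since every λ_i here is strictly positive, the right-hand side is visibly a sum of nonnegative terms, which is the heart of the positivity argument that follows.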
And I get a quadratic in a sort of modified state, which is y = M x. So let us remember this much; this claim we may come back to later, and we will establish the connection sooner than you think. The point is that there is a clear connection. So that is where we come to the easier conditions. One of the issues with the condition we have, namely that V(t, x) has to dominate a class K function, is that it is very difficult to verify: you have to actually find a class K function which V dominates. The other issue is that if you are unable to find such a class K function, that is not enough to claim that V is not positive definite. It could also be our own incompetence that we could not find one. So just because we could not find a class K function does not automatically imply that V is not positive definite. All we can say is that if we do find a class K function, V is positive definite; but if we do not find one, we cannot say for sure that it is not a positive definite function. So we want easier conditions where we can decide with certainty. We have two different conditions. The first one applies when the candidate, which we now denote W, depends only on the state and not explicitly on time. We have used a different sort of symbol here, W, but it does not matter; call it V or W, call it Z, your call. So this function W depends only on the state: its domain is just B_r and it maps to the real numbers, with x being mapped to W(x). And it must satisfy two conditions. The first is that it is 0 at 0, which is the same as before. This is easy to verify, so we are not really modifying this condition.
Again, no time appears here because, of course, there is no time argument; we want W(0) to be exactly 0. And next, we want W(x) to be strictly positive for all nonzero values of the state. If both conditions hold, then the function is said to be positive definite. So let me give the second definition before I go back to our matrix example. Now, if we do have a function which depends on both state and time, this may be unavoidable for certain dynamical systems, especially dynamical systems where the vector field also explicitly depends on time. As we have seen in our stability definitions, it might become impossible to avoid having a time argument explicitly in the Lyapunov candidate construction, this V function. In those cases, what do we require? As usual, the first condition remains because it is easy to verify. And the second condition requires that V(t, x) dominate a positive definite W. Simple: we have reused the previous definition right here. Because V is a function of the state and also explicitly of time, and we have no direct test for dealing with that, we say that V(t, x) just has to dominate a positive definite function. Just a positive definite one; we are not saying a class K function. And this is much easier, because I can verify positive definiteness of W with the easy test above. Once I have that easy test satisfied and V dominates this positive definite function, then V(t, x) is also said to be positive definite. This domination, of course, has to happen for all t in R and for all x in B_r minus the origin. So these are the two conditions. As for notation, V > 0 means V is positive definite.
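To make the time-independent test concrete, here is a hedged Python sketch that samples a grid of points in a ball B_r and checks the two conditions W(0) = 0 and W(x) > 0 for x ≠ 0. A finite grid can of course never *prove* positive definiteness, it can only fail to refute it; the particular W and the grid parameters below are my own choices for illustration.

```python
import itertools
import math

# Sanity check (not a proof) of the two positive-definiteness
# conditions from the lecture for a time-independent candidate W:
#   (1) W(0) = 0, and (2) W(x) > 0 for all x != 0 in the ball B_r.

def W(x):
    """Example candidate: quadratic form x^T A x with A = [[2,1],[1,2]],
    which is positive definite (eigenvalues 1 and 3)."""
    A = ((2.0, 1.0), (1.0, 2.0))
    return sum(A[i][j] * x[i] * x[j] for i in range(2) for j in range(2))

def looks_positive_definite(W, r=1.0, steps=21):
    """Grid test of W on B_r: returns False if either condition fails
    at a sampled point, True if no violation is found."""
    if W((0.0, 0.0)) != 0.0:             # condition (1): W(0) = 0
        return False
    grid = [-r + 2.0 * r * k / (steps - 1) for k in range(steps)]
    for x in itertools.product(grid, grid):
        if x == (0.0, 0.0) or math.hypot(*x) > r:
            continue                     # skip origin and points outside B_r
        if W(x) <= 0.0:                  # condition (2) violated
            return False
    return True

print(looks_positive_definite(W))  # True for this W
```

An indefinite candidate such as W(x) = x_1 x_2 would fail this grid test immediately, e.g. at x = (−0.5, 0.5), which matches the intuition behind the definition.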
But if we want to talk about negative definiteness, we just say that −V needs to be positive definite, and in the negative definite case we use the notation V < 0. So whenever I take a function and say it is less than 0 or greater than 0, I mean it is negative or positive definite. Now let us look at our matrix example again and complete the discussion. What was the function? We took V(t, x) = (t + 1) x^T A x. And the first thing I know is that this is greater than or equal to W(x) = x^T A x. Now, earlier I claimed that this dominates a class K function; I will not pursue that anymore, because we do not really need it, and I was not going to be able to prove it directly anyway. What I am going to claim instead is that W is in fact positive definite. We have already seen that W(x) = x^T A x equals the summation over i of λ_i y_i², where y = M x and A = M^T Λ M. This is the eigenvalue decomposition, and y = M x is the eigenvector transformation. And of course, we know that M is invertible, and its inverse is exactly equal to its transpose, M^{-1} = M^T. So I have this sort of expression here; let me make a nice big box around it, as I always end up doing. Now, what do I need to show for W(x) to be positive definite? I need to show that W(0) = 0; that is obvious.
Because if I plug x = 0 in here, it is definitely zero; no problem. The second point is the more difficult one: I want to show that W(x) is strictly positive for all x ≠ 0. I am not using any B_r here, because there is no B_r; in fact, in this case I am proving it on all of R^n: if I remove the origin from R^n, then everywhere except the origin, W(x) has to be strictly positive. And it is not difficult to see that it is. Because of this transformation with M invertible, x ≠ 0 is equivalent to y ≠ 0; I hope this is obvious to you, simply because M is an invertible matrix: a nonzero x is equivalent to a nonzero y, and vice versa. And whenever I write 0, I am very casually writing it, but I hope all of you are clear with the notation: I mean the zero vector, that is, the vector every element of which is zero. So when I say x ≠ 0, I mean x is not the zero vector: at least one element of x is nonzero, while other elements are still allowed to be zero, of course. This is not a componentwise statement; this is exactly how it is written in standard mathematics: a nonzero vector means there is at least one element which is nonzero. Excellent. So once I understand that x ≠ 0 and y ≠ 0 are equivalent, I can simply say: x ≠ 0 ⇔ y ≠ 0, which implies that the summation over i of λ_i y_i² is nonzero, since the λ_i are strictly positive.
Because A is assumed positive definite, all the λ_i, for i = 1 to n if you want to be more precise, are strictly positive. Therefore, if y is not the zero vector, there is some element which is nonzero, and the sum cannot be zero, because each term contributes nonnegatively; there are no subtractions here, only summations. Once this argument is clear, note that if x ≠ 0, then this quadratic form is not just nonzero; I should make it more clear that it is in fact strictly greater than zero, simply because the quantity can never be less than zero: it is always greater than or equal to zero. Therefore, I immediately conclude that W(x) is strictly positive if x ≠ 0, and I am done: W is a positive definite function. And V(t, x) dominates this W(x), because the factor (t + 1) is greater than or equal to one for t ≥ 0. Therefore, because V(t, x) dominates W(x), and W(x) has been proven to be a positive definite function, V(t, x) is positive definite. So what have we concluded? As a remark: positive definite matrices lead to positive definite function constructions. We probably want to see at least one more example right now. So let us look at another example. Let us take V(t, x) = (t + 1) ‖x‖² / (1 + ‖x‖²). This is again greater than or equal to ‖x‖² / (1 + ‖x‖²) for all t ≥ 0, which is our W(x).
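Before proving positivity in words, the algebra of this example can be checked directly. The sketch below verifies, at a few sample values of ‖x‖² of my own choosing, the identity ‖x‖²/(1 + ‖x‖²) = 1 − 1/(1 + ‖x‖²) used shortly, together with W(0) = 0 and strict positivity away from zero.

```python
# Check of the algebraic identity behind this example:
#   ||x||^2 / (1 + ||x||^2) == 1 - 1/(1 + ||x||^2),
# plus the two positive-definiteness conditions, at sample points.
# The sample values of ||x||^2 below are arbitrary choices.

def W(n2):
    """W written as a function of n2 = ||x||^2 >= 0."""
    return n2 / (1.0 + n2)

for n2 in [0.25, 1.0, 9.0, 1e6]:         # sample nonzero values of ||x||^2
    assert abs(W(n2) - (1.0 - 1.0 / (1.0 + n2))) < 1e-12  # identity holds
    assert W(n2) > 0.0                   # strictly positive for x != 0

assert W(0.0) == 0.0                     # W(0) = 0
print("checks passed")
```

Note also that W(n2) < 1 for every n2, which is exactly the picture from the lecture: a positive definite function need not be unbounded or strictly increasing in any particular direction, it only has to stay positive away from the origin.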
This W is, of course, again a positive definite function. Why? First, W(0) = 0; this is the first point. Next, if x ≠ 0, which is equivalent to ‖x‖ ≠ 0, you can see that W(x) is strictly positive, and this is very easy to conclude. In fact, it is even easier if you notice that W(x) can be written as 1 − 1/(1 + ‖x‖²), though we do not even need that form: even in the original form it is obvious, because if ‖x‖ is nonzero, then the numerator ‖x‖² is strictly positive and the denominator 1 + ‖x‖² is strictly positive, so this is a fraction of two strictly positive quantities, which is strictly positive. So I am done: this implies W is positive definite, and hence V > 0 in our notation. Great. So what have we talked about today? We continued our discussion on definiteness, and we connected the notion of definiteness of matrices to that of functions. We saw that positive definite matrices yield positive definite functions, and this is a very standard way of constructing what we will see are called Lyapunov functions; these V's are essentially Lyapunov functions. And then, of course, we saw these alternate tests for positive definiteness, which are easier to actually verify, and we used them to construct some examples of positive definite functions. We will continue along this line a little bit more; this is again leading up to the Lyapunov theorems. All right. Excellent. So that is all for this session. Thank you, and we will meet again.