Hello folks, welcome to yet another session of our NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. As always, we begin with our inspirational image, to motivate us to develop algorithms that will allow us to drive systems such as these autonomously on Mars, the Moon, and other such explorations. Without delaying any further, let us move on to the lecture material.

Last time, we first completed the proof of the norm requirements for the 2-norm, which is the Euclidean norm. In the process, we also saw a short proof of the Cauchy-Schwarz inequality that is rather specific to this Euclidean sort of space; today we will look at a somewhat more general proof. Then we looked at the notions of convergence and Cauchy sequences: what convergence is, what a Cauchy sequence is, and examples showing that the two are not identical concepts. There are small differences between them, and it is in fact possible to construct a Cauchy sequence that is not convergent, by working in a rather unusual vector space.

We also spoke about complete normed linear spaces, or Banach spaces. The good news was that all the spaces that are in consideration, or that are going to be in consideration in the future in this course, are Banach spaces. This is comforting, because we do not have to verify every time whether the spaces we are working with are Banach spaces or not. We will not deal with such very special pathological cases in this course, and in most applications you would not encounter such cases either.

Then we looked at the notion of an inner product space, which carries slightly more structure than a normed linear space. We saw the definition of an inner product, and we also saw that if you are given an inner product space X, there is a corresponding norm, defined for x in X as ||x|| = <x, x>^(1/2), the square root of the inner product of x with itself. This is where we stopped last time, so I will start here and call this lecture five.

Just as there is the notion of a complete normed linear space, there is of course the equivalent notion of a complete inner product space. When do we say that such a space is a Hilbert space? We call the space a Hilbert space if, first of all, we have an inner product space, that is, X with the inner product operation, which is complete in the corresponding induced norm. Not in any arbitrary norm: maybe the inner product space admits a different norm as well, but we are not concerned with that. If, in the associated norm ||x|| = <x, x>^(1/2), the vector space is complete, that is, all Cauchy sequences converge, then the space is called a Hilbert space. Again, this is not difficult; let us go back to an example.
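If you want to see this induced norm in action numerically, here is a minimal Python sketch (assuming NumPy is available; the vector dimension, sample count, and tolerances are arbitrary choices of mine). It checks the norm axioms for the dot-product-induced norm on random samples; an illustration, of course, not a proof:

```python
import numpy as np

# The dot product on R^n induces a norm: ||x|| = <x, x>^(1/2).
# Sanity-check the norm axioms on random vectors (illustration only).
rng = np.random.default_rng(1)

def ip_norm(x):
    # inner product of x with itself, then the square root
    return np.dot(x, x) ** 0.5

for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    c = rng.standard_normal()
    assert ip_norm(x) >= 0                                    # non-negativity
    assert np.isclose(ip_norm(c * x), abs(c) * ip_norm(x))    # homogeneity
    assert ip_norm(x + y) <= ip_norm(x) + ip_norm(y) + 1e-12  # triangle inequality
print("norm axioms hold on all sampled vectors")
```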
Take our usual R^n as the example, with the inner product defined by <x, y> = x^T y for x, y in R^n, that is, the usual scalar dot product that we know. If this is the inner product space we are considering, then the associated norm is simply ||x|| = (x^T x)^(1/2), and this is nothing but the 2-norm of the vector as we have defined it until now. We already know that R^n with the 2-norm is a complete normed linear space, and hence this inner product space satisfies all the requirements for being a Hilbert space.

So, again, the good news: most of the time we work with Euclidean spaces, that is, R^n, R^p, R^k and so on, and in all these cases the inner product we consider is the dot product. R^n with the dot product is a Hilbert space, and therefore most of the spaces we are going to cover in this course are in fact Hilbert spaces.

Moving forward: we did introduce the induced matrix norm earlier, but not completely, so we are going to try to do a better job of defining and explaining what the terms mean. We defined the induced matrix norm as somehow representing the maximum magnification that a matrix provides to any vector in the vector space:

||A|| = sup over x in X, x not equal to 0, of ||A x|| / ||x||.

I want to write this definition a little more generally: it is not enough for X to be just a vector space, it must be a normed linear space, because without the norm we cannot even write these quantities. Although I have written it this way, for the purposes of whatever we are going to do it is sufficient to take X to be some Euclidean space R^n, and that is what we are going to assume. One thing to remember is that we must exclude the zero vector, because otherwise we have a division by zero and the expression no longer makes sense. The other thing is that I am introducing the notation "sup", a supremum. What is the sup? The supremum is simply the least upper bound; its purpose is to generalize the notion of a maximum. So what exactly is the supremum?
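To get a feel for the sup in this definition, here is a rough Python sketch (NumPy assumed; the matrix, its size, and the sample count are arbitrary illustrative choices of mine). It estimates the induced 2-norm by brute-force sampling of directions and compares with the exact value, foreshadowing why we will want eigenvalue formulas instead:

```python
import numpy as np

# Induced 2-norm: sup over nonzero x of ||A x|| / ||x||.
# Crude Monte Carlo estimate: sample many directions, keep the
# largest magnification.  The exact value comes from eigenvalues.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

best = 0.0
for _ in range(100_000):
    x = rng.standard_normal(3)   # nonzero with probability 1
    best = max(best, np.linalg.norm(A @ x) / np.linalg.norm(x))

print("sampled estimate:", best)
print("exact 2-norm    :", np.linalg.norm(A, 2))  # largest singular value
```

The sampled estimate approaches the true supremum from below but, in general, never quite reaches it; this is exactly the painstaking "sweep over all x" process we will shortly want to avoid.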
The supremum of a set S is the smallest value y such that x <= y for all x in S. Now, suppose I removed the qualifier "smallest" from this definition; then it is nothing but the definition of an upper bound of the set, which should be obvious to you. It is the qualifier that makes it the supremum: it is the least upper bound, not an arbitrary upper bound.

For example, take the open interval (0, 1). How many upper bounds does it have? y = 1 is an upper bound, y = 2 is an upper bound, y = 3 is an upper bound; anything of that sort works. However, if you are looking for the least upper bound, the only correct answer is 1. Nothing else works, and you should think carefully about why.

Try a small thought experiment. Suppose I claim that sup(0, 1) = 1 - epsilon for some epsilon > 0. The question is: is this even possible? Because (0, 1) is an open set, any point arbitrarily close to the boundary is also in the set. So it should be obvious to you that 1 - epsilon + delta belongs to (0, 1) for some delta > 0, as long as 1 - epsilon + delta < 1. And this is always possible to arrange: I can always choose a delta > 0 such that 1 - epsilon + delta < 1, because that condition simply requires delta < epsilon. As long as I choose a delta less than epsilon, I am fine. Now here is the problem: 1 - epsilon + delta > 1 - epsilon. We were claiming that 1 - epsilon is the supremum (and the supremum is unique), but we have just exhibited an element of the set that is larger than this claimed supremum. How can something be the supremum when it is not even an upper bound? Arguing like this, you can easily conclude that sup(0, 1) is exactly equal to 1.

So I hope this illustrates what a supremum is. Note that it is not essential that the supremum of a set lie in the set itself. When it does, we usually replace the term supremum by the term maximum; we no longer say supremum, we say maximum.
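If you like, you can watch this argument numerically. A tiny Python sketch (the choice epsilon = 0.1 is arbitrary) of the delta < epsilon construction:

```python
# Claim to refute: sup(0, 1) = 1 - eps for some eps > 0.
# Construction: pick any 0 < delta < eps; then 1 - eps + delta
# still lies in (0, 1) but exceeds the claimed supremum.
eps = 0.1
delta = eps / 2
candidate = 1 - eps + delta
print(0 < candidate < 1)    # True: candidate is in the open set (0, 1)
print(candidate > 1 - eps)  # True: so 1 - eps is not even an upper bound
```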
Another example, to illustrate the idea of a supremum for continuous functions: take f(x) = 1 - e^(-x), where x is a non-negative real number. Now, how do we compute its supremum? We have been defining the supremum of sets only, but it is not very difficult to generalize to functions: sup f is simply the supremum of the image of f. That is, I take the set of all points of the form f(x), which is called the image of f, and take the supremum of that set.

The important thing to see is that the image of f here, let me call it E, is exactly the interval [0, 1): closed at 0 and open at 1. What does that mean? The set contains 0, because plugging in x = 0 gives f(0) = 0. And f(x) gets arbitrarily close to 1 but never reaches 1, because the function value becomes 1 only when e^(-x) = 0, which requires x to go to infinity, and infinity is not contained in the real numbers. This is something all of you should know, or should remember. Since infinity is not part of the reals, 1 is not part of the set E.

So what, then, is the sup? The sup of the image of f is equal to 1, just like the previous example. There the interval was open on both ends; here it is open on one end and closed on the other, but it does not matter: we are looking at the supremum, so only the upper end matters. The function has supremum 1, and this 1 is not in the set E. This is important to remember.

There are two common situations in which the sup becomes a max. The most general condition is when the supremum is contained in the set itself, that is, sup E belongs to E. The other is when the set has a finite number of elements; in that case sup E automatically belongs to E. So the general rule is: when sup E belongs to E, and only then, you write the supremum as a maximum; otherwise the notation sup is used.

I really hope you now understand the meaning of the supremum, because in our context we need to realize that finding the supremum in the induced matrix norm definition is not easy. It is not at all obvious how to find it. If you asked me this as an ad hoc question, the brute-force way would be to sweep over all sorts of vectors x, compute the ratio each time, and keep track of the maximum, which is a rather painstaking process. However, with some knowledge of eigenvalues and some smart tricks, you can get simple formulas in certain cases, and we will discuss what those cases are very soon.

We are also interested in a few important matrix properties.
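A quick numerical look at this function, as a Python sketch (the sample points are arbitrary choices of mine), confirms that the values creep toward 1 without ever reaching it:

```python
import numpy as np

# f(x) = 1 - e^(-x) on x >= 0: the image is [0, 1), so sup f = 1
# but the value 1 is never attained at any finite x.  (In float64
# the printed value eventually rounds to 1.0 for large x, but
# mathematically f(x) < 1 always.)
f = lambda x: 1.0 - np.exp(-x)
for x in [0.0, 1.0, 5.0, 10.0, 20.0, 30.0]:
    print(f"f({x:5.1f}) = {f(x):.16f}")
```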
Most of the time, especially in Lyapunov analysis, we deal with symmetric square matrices, and so we are interested in the properties of these matrices. What are these properties?

The first is that all eigenvalues of a symmetric matrix are real. I hope you know this; most of these facts you should already have seen.

Next, a symmetric square matrix A, say n by n, is called positive definite if and only if the corresponding quadratic form is strictly positive for all non-zero vectors: if I take any non-zero alpha in R^n and compute alpha^T A alpha, it comes out positive, and this holds for every possible such alpha. Again, this is a condition that is not easy to verify directly, because I would have to sweep over all possible values of alpha, which seems ridiculous. So there are simpler, equivalent conditions. The first is that all eigenvalues of A are strictly positive; this is where the fact that the eigenvalues are real plays a role, because if the eigenvalues were not real, I could not talk about their positivity. The second is that there exists a non-singular Q such that A can be decomposed as A = Q Q^T. The third is that every leading principal minor of A is positive (Sylvester's criterion). These are three equivalent conditions for the original definition.

Finally, there is a very nice inequality that we use quite often in our results and derivations: for any symmetric square matrix A, the quadratic form is bounded below and above as

lambda_min(A) alpha^T alpha <= alpha^T A alpha <= lambda_max(A) alpha^T alpha,

where lambda_min(A) and lambda_max(A) are the smallest and the largest eigenvalues of A. I hope these properties are very clear to you; they are critical, and we will regularly invoke them, especially this last inequality.

All right. Now we want to look at some of the induced norms that are simpler to compute. Notice that to compute any particular induced norm, say the induced p-norm, we just use the vector p-norm throughout the definition; that is it. The first is the infinity norm, which is simply the maximum absolute row sum: I take the absolute row sums, and whichever is the maximum value, that is the infinity norm of the matrix. The second is the 1-norm, which is the maximum absolute column sum: I take the absolute column sums, and whichever is the largest, that is the 1-norm. The third, the induced 2-norm, is the largest singular value, that is, the square root of the largest eigenvalue of A^T A.

So let us look at a quick example of a matrix and its norms. Note that in this case the matrix does not have to be a square matrix; I will take a rectangular 3 by 2 matrix, and let me put in some different signs as well:

A = [  2  -3 ]
    [ -1   1 ]
    [  0   0 ]
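Here is a small Python sketch tying these properties together (NumPy assumed; the matrix [[4, 1], [1, 3]] is just an example I chose to be positive definite). It checks the eigenvalue condition, exhibits the Q Q^T factorization via a Cholesky factor, inspects the leading principal minors, and tests the Rayleigh bounds on random vectors:

```python
import numpy as np

# Equivalent positive-definiteness checks for a symmetric matrix,
# plus the bounds  lam_min a'a <= a' A a <= lam_max a'a.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])           # symmetric, positive definite

eigs = np.linalg.eigvalsh(A)         # real eigenvalues, ascending order
print("eigenvalues:", eigs, "-> all positive:", np.all(eigs > 0))

L = np.linalg.cholesky(A)            # A = L L^T with L nonsingular
print("A = Q Q^T reproduced:", np.allclose(L @ L.T, A))

# Sylvester's criterion: leading principal minors of a 2x2 matrix
print("leading minors:", A[0, 0], np.linalg.det(A))

rng = np.random.default_rng(2)
for _ in range(1000):
    a = rng.standard_normal(2)
    q = a @ A @ a                    # quadratic form alpha^T A alpha
    assert eigs[0] * (a @ a) - 1e-12 <= q <= eigs[-1] * (a @ a) + 1e-12
print("Rayleigh bounds hold on all samples")
```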
So, what is the infinity norm? Let me compute the row sums and column sums. The absolute row sums are |2| + |-3| = 5, |-1| + |1| = 2, and |0| + |0| = 0. The absolute column sums are |2| + |-1| + |0| = 3 and |-3| + |1| + |0| = 4. As per our formula, the infinity norm of A is the largest absolute row sum, which is 5. What is the 1-norm? It is the largest absolute column sum, which is 4.

Now, what is the 2-norm? Here I have to do some work: I have to compute A^T A. Since A^T is 2 by 3 and A is 3 by 2, the product is a 2 by 2 matrix:

        [ 2  -1   0 ] [  2  -3 ]   [ 4+1   -6-1 ]   [  5  -7 ]
A^T A = [-3   1   0 ] [ -1   1 ] = [ -6-1   9+1 ] = [ -7  10 ]
                      [  0   0 ]

In the interest of time, all that remains is to compute the largest eigenvalue of this matrix. Its characteristic equation is (lambda - 5)(lambda - 10) - 49 = lambda^2 - 15 lambda + 1 = 0, so lambda_max = (15 + sqrt(221))/2, roughly 14.93, and the induced 2-norm is the square root of this, roughly 3.86.

All right. So, what did we talk about today? We spoke about the notion of Hilbert spaces. Then we discussed in a little bit of detail the definition of the induced matrix norm: what the supremum means, how to compute it, and when a supremum becomes a maximum. Then we looked at the matrix properties of symmetric square matrices that interest us. And finally, we saw how to compute the induced norm for the special cases of the 1-norm, the 2-norm and the infinity norm, which is mostly what we will end up using. We have not yet seen the more general proof of the Cauchy-Schwarz inequality, but we do plan to look at it next time. All right, folks. Thank you. That will be all for today.
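You can verify this whole worked example in a few lines of Python (NumPy assumed; the matrix is the one from the lecture):

```python
import numpy as np

# The 3 x 2 example matrix from the lecture.
A = np.array([[ 2.0, -3.0],
              [-1.0,  1.0],
              [ 0.0,  0.0]])

print(np.linalg.norm(A, np.inf))   # max absolute row sum    -> 5.0
print(np.linalg.norm(A, 1))        # max absolute column sum -> 4.0

M = A.T @ A                        # [[5, -7], [-7, 10]]
lam_max = np.linalg.eigvalsh(M)[-1]
print(np.sqrt(lam_max))            # induced 2-norm, approx. 3.8644
print(np.linalg.norm(A, 2))        # the same value from NumPy directly
```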