Hello everyone, welcome to another session on nonlinear control. Last time we covered some preliminary material. We began with a few myths and temptations in nonlinear control, the first of which was essentially that convergence of a function does not imply convergence of its derivative, and convergence of the derivative does not imply convergence of the function.

We then moved on to some preliminary material on vector and matrix norms. We started with vector norms: for a fixed vector, you can define the infinity norm or any p-norm. Using these vector norms, we graduated to matrix induced norms, which are defined through a supremum over the vector norms. Fortunately, there are simple closed-form expressions for the induced 1, 2, and infinity norms; otherwise, computing the supremum directly would be rather difficult. We also stated a Cauchy–Schwarz-type inequality for the induced norms, and we looked at some properties of symmetric matrices.

We then discussed some more abstract content: the notion of a normed linear space. This is the idea of a vector space, or linear space, equipped with a norm, essentially a notion of length. We also sketched proofs that the particular norms we defined actually satisfy the norm properties, primarily the triangle inequality, since the remaining properties are relatively easy to verify. Then we looked at the notions of convergence and Cauchy sequences, and this led us to the notion of a complete normed linear space. So there is a vector space; then there is the epithet "normed linear space" or "normed vector space"; and then there is the idea of a complete normed linear space, which is what is called a Banach space. In such spaces, the notions of convergence and Cauchy convergence coincide. Again, we saw examples: Rn, the sort of vector space we deal with for most of this course, is of course a Banach space.

We also saw the more advanced notion of an inner product space. A normed linear space gives us a notion of length for general vector spaces; beyond that, one is also interested in an operation between vectors, in how one vector acts on another, and that operation is the inner product. A vector space endowed with an inner product (it need not carry a norm beforehand) is called an inner product space. The inner product has a few defining properties: symmetry, distributivity over addition, compatibility with scalar multiplication, and the fact that the inner product of a vector with itself is non-negative, and zero only if the vector itself is zero. Then, just as for normed linear spaces, we looked at completeness.
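Before moving on, a quick side note on the induced-norm formulas from this recap: they are easy to check numerically. Here is a minimal NumPy sketch; the matrix A is an arbitrary example chosen here for illustration, not from the lecture.

```python
import numpy as np

# An arbitrary example matrix (chosen for illustration).
A = np.array([[1.0, -2.0],
              [3.0,  0.5]])

# Closed-form expressions for the induced norms:
norm_1 = np.abs(A).sum(axis=0).max()    # induced 1-norm: max absolute column sum
norm_inf = np.abs(A).sum(axis=1).max()  # induced inf-norm: max absolute row sum
norm_2 = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())  # induced 2-norm: largest singular value

# Crude check against the definition ||A|| = sup_{x != 0} ||Ax|| / ||x||,
# sampling random directions; the sampled ratio approaches norm_2 from below.
rng = np.random.default_rng(0)
ratio = max(np.linalg.norm(A @ x) / np.linalg.norm(x)
            for x in rng.standard_normal((5000, 2)))

print(norm_1, norm_inf, norm_2, ratio)
```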
We are also interested in completeness of an inner product space, and that is the idea of a Hilbert space. As you can see, the inner product takes two vectors; if I feed in the same vector twice, as in ⟨x, x⟩, I get a candidate norm, and it can be shown that this is in fact a norm. The idea is that if the vector space is complete with respect to this particular norm generated from the inner product, then what we have is called a Hilbert space. Again, as always, Rn is an obvious example.

Once we had this, we wanted to get to signal norms, which we went over in not so much detail. So the first thing we want to do today is look at signal norms a little more carefully. What is a signal norm? Until now we have been looking at vector norms and matrix norms. A signal norm is also a norm defined on a vector space; the only difference is that we are now talking about a vector signal, which is a map from time to Rn. That is what most of the states we will look at subsequently are: once you solve a nonlinear differential equation, what you get is a function of time, and typically a vector function of time, because you will have more than one state.

So we define these signal norms. The p signal norm is defined using a vector norm. Please note that the vector norm carries the time argument, because it has to: vector norms apply to fixed vectors, and until I fix a time t, x(t) is not a fixed vector. To evaluate a vector norm, I need to specify the time. So whenever I write the vector norm of a signal, of a time-varying quantity, the time argument appears inside; it is like looking at the value of the signal at one particular instant. Note also that the vector norm used here is arbitrary; we did not say it is the 1-norm, 2-norm, or infinity norm, which is why there is no subscript. We take this vector norm, raise it to the power p, integrate from 0 to infinity over time, and then take the 1/p power: that is the p signal norm. Similarly, we have the infinity norm, defined slightly differently using a supremum: it is the supremum over all t ≥ 0 of the vector norm of x(t).

As I said, the underlying vector norm is arbitrary, but for any single problem or control question you should always use the same vector norm for all the vectors you have; otherwise you will end up with ridiculous results. It is important to be consistent: use the same vector norm everywhere. The choice of which vector norm to use for the entire problem, however, is completely free. So the choice of vector norm does not matter, but do not switch; be consistent throughout the problem.
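For reference, the two signal norms just described can be written out as follows, with ‖x(t)‖ denoting whichever fixed vector norm on Rn you have chosen:

```latex
% p signal norm and infinity signal norm of x : [0, \infty) \to \mathbb{R}^n,
% where \|x(t)\| is any fixed vector norm on \mathbb{R}^n.
\[
  \|x\|_p = \left( \int_0^\infty \|x(t)\|^p \, dt \right)^{1/p},
  \qquad 1 \le p < \infty,
\]
\[
  \|x\|_\infty = \sup_{t \ge 0} \|x(t)\|.
\]
```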
Now, one of the important definitions here is this: if a particular signal norm is finite, for a given p from 1 to infinity, then we say that x belongs to the space Lp (capital script Lp). This is a very large class of functions, the Lp class. These are very important classes; they appear everywhere in analysis, for example in Fourier series. They are essentially advanced integrability-type conditions: as you can see, each of these norms is defined by integrating some power of the vector norm. For p = 1 this looks like the classic integrability condition, and for p = 2 and beyond they are just higher-power versions of the same integrability condition. So please remember that these define very large and very useful classes of functions.

One of the things we realize immediately is that saying x belongs to L∞ is just saying that x is a bounded signal. Why? Because the infinity norm is defined by the supremum over all time, so this is easy to prove. How do we go about it? If x(t) is bounded for all time, there exists some constant M such that the vector norm of x(t) is at most M for all t. (This M may vary depending on which vector norm you chose, but such an M exists, and we do not have to worry, because we are being consistent and using the same vector norm throughout.) Now, if the vector norm of x(t) is at most M for every t, then the supremum is also at most M, because the supremum is nothing but the least upper bound: M is an upper bound over all time, so the least upper bound is also at most M. Hence the supremum norm, the infinity norm, is finite, and x belongs to L∞.

Looking at the other side of the argument: if I say that the infinity norm equals M, then the supremum equals M, which means that for all time the vector norm of x(t) is at most M, because the supremum, being the least upper bound, is in particular an upper bound for the signal. This bound holds for all time, which means the signal is bounded. So it is a very easy proof in both directions.

Like I said, the Lp spaces appear in quite a few places in mathematics. Typically, membership in Lp can be seen as a regularity condition, and it appears in several convergence-type results that you will see. Small ℓp is the discrete counterpart: if you have not a continuous function of time, as we have been using, but a discrete one, where you only have the function value at step 1, step 2, step 3, and so on, then you use summations instead of integrals, and you get the small ℓp spaces; the same notions apply there as well.
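In symbols, the boundedness equivalence and the discrete counterpart just mentioned read as follows (a restatement of the argument above, not new material):

```latex
% x belongs to L_infinity if and only if it is a bounded signal:
\[
  x \in \mathcal{L}_\infty
  \iff
  \exists\, M \ge 0 \ \text{such that} \ \|x(t)\| \le M \quad \forall\, t \ge 0.
\]
% The discrete counterpart replaces the integral with a sum:
\[
  \|x\|_{\ell_p} = \left( \sum_{k=0}^{\infty} \|x_k\|^p \right)^{1/p},
  \qquad
  \|x\|_{\ell_\infty} = \sup_{k \ge 0} \|x_k\|.
\]
```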
Now, as far as the notation goes, let us be careful. The vector norm, like I said, is a time-frozen quantity: it can only be evaluated once the vector is fixed, so it will always carry the time argument. The signal norm, on the other hand, involves either a supremum over all time or an integral over all time, which means the time argument goes away, it vanishes from the quantity. Therefore, on the left-hand side there can be no time argument; it would be ridiculous to write the p signal norm of x(t). Signal norms are always written without a time argument, just the signal and perhaps a subscript.

Now, one of the things we know about vector norms is the notion of norm equivalence. For vector norms we have a very nice result which says that any two vector norms are comparable up to constants. What does that mean? If I take the q-norm, I can always find constants α and β such that it is bounded on both sides by the p-norm. And you can see I can always flip this argument: ‖x‖p ≥ (1/β)‖x‖q and ‖x‖p ≤ (1/α)‖x‖q, so the p-norm can likewise be bounded on both sides by the q-norm. This norm equivalence is very standard and holds for vector norms.

However, no such equivalence is possible for signal norms. In short, this means that if I take any signal, that is, any vector function of time, there is no guarantee that belonging to L1 implies belonging to L2, or that belonging to L∞ implies belonging to L1 or L2. There is no such guarantee; these are, in general, completely distinct classes of functions. And where does the problem come from? From the fact that you are integrating over all time, or taking the supremum over all time. Let us see some examples.

The first very standard example is the vector function x(t) = (cos t, sin t). What is its infinity norm? The choice of vector norm is mine, and I choose the 2-norm, because the 2-norm is very easy to compute in this case. The infinity signal norm is just the supremum of the 2-norm over all time, and the 2-norm of x(t) is just 1 for every t, so the supremum equals 1. This means the supremum is bounded, the infinity norm is finite, and therefore x belongs to L∞ as per our definition: if a function has a finite Lp norm, it belongs to the Lp space.

Now let us evaluate the 1-norm of x. How will it look? In this case, instead of taking the supremum, I integrate from 0 to infinity. The integrand is still 1, because nothing has changed: I have chosen the 2-norm and I am choosing to be consistent, so the vector norm still evaluates to 1. But if I integrate 1 from 0 to infinity, I get infinity. So the 1-norm is not finite, and x does not belong to L1. Therefore, there is no way to propose any kind of norm equivalence between these signal norms.
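To summarize the equivalence statement and this example in symbols:

```latex
% Vector norm equivalence: for any p, q there exist constants
% 0 < \alpha \le \beta such that
\[
  \alpha \,\|x\|_p \;\le\; \|x\|_q \;\le\; \beta \,\|x\|_p
  \qquad \text{for all } x \in \mathbb{R}^n.
\]
% Counterexample for signal norms: x(t) = (\cos t, \sin t) gives
\[
  \|x(t)\|_2 = \sqrt{\cos^2 t + \sin^2 t} = 1 \ \ \forall t
  \quad\Rightarrow\quad
  \|x\|_\infty = 1 < \infty,
  \qquad
  \|x\|_1 = \int_0^\infty 1 \, dt = \infty.
\]
```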
The point is that one quantity is finite and the other is infinite, so there can be no equivalence: no constants can relate a finite quantity to an infinite one. So it is pretty clear that signal norms are somewhat more involved notions, for which this sort of norm equivalence does not hold. The only thing vector norm equivalence tells us in such examples is this: had I chosen some other vector norm instead of the 2-norm, say the 3-norm or the 5-norm, nothing essential would have changed. The finite constant here might have changed a little, that is all. It does not mean the divergent integral would have become finite; it would still have been infinity.

Let us look at some other examples. We just saw a function that is bounded, that is, in L∞, but not in L1. What about the other cases? What about a function that is in L2 but not in L1? Here is one such example: f(x) = 1/x for x ≥ 1, and 0 otherwise. Let us evaluate. This is a scalar function, so there is no choice of vector norm or anything like that; the norm is just the absolute value. What is the 2-norm? It is (∫₀^∞ |f(x)|² dx)^(1/2), which reduces to (∫₁^∞ 1/x² dx)^(1/2). The antiderivative of 1/x² is −1/x, and evaluating it between 1 and infinity gives 1, so the 2-norm is just 1. What about the 1-norm? This is where the problem lies: the integral becomes ∫₁^∞ 1/x dx = log x evaluated from 1 to infinity, which is infinite. So f is not in L1. I hope that is evident.

Similarly, there is a converse example: a function f that is in L1 but not in L2. Take f(x) = 1/√x for 0 < x ≤ 1, and 0 otherwise. Again, it is not difficult to evaluate. For the 1-norm you integrate from 0 to infinity, which reduces to ∫₀¹ 1/√x dx. The interesting thing is that the function blows up at 0, not at 1, but we can still do the integral: it evaluates to 2√x from 0 to 1, which is 2. So the 1-norm is finite. If I now compute the 2-norm, I get (∫₀¹ 1/x dx)^(1/2), and this lands you in trouble: the antiderivative is log x, which is undefined at x = 0, it goes to minus infinity, so the integral diverges. So f is in L1 but not in L2.
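If you want to sanity-check these two integrals numerically, a small SciPy sketch does the job (assuming NumPy and SciPy are available; the names f and g are just labels for the two example functions):

```python
import numpy as np
from scipy.integrate import quad

# f(x) = 1/x on [1, inf): in L2 but not in L1.
I2, _ = quad(lambda x: 1.0 / x**2, 1, np.inf)  # integral of |f|^2 converges
print("||f||_2 =", np.sqrt(I2))                # -> 1.0
# quad(lambda x: 1.0 / x, 1, np.inf) diverges, so ||f||_1 = infinity.

# g(x) = 1/sqrt(x) on (0, 1]: in L1 but not in L2.
I1, _ = quad(lambda x: 1.0 / np.sqrt(x), 0, 1)  # integrable singularity at 0
print("||g||_1 =", I1)                          # -> 2.0
# quad(lambda x: 1.0 / x, 0, 1) diverges, so ||g||_2 = infinity.
```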
So these are some nice examples of functions that live in one space but not another: in L1 but not L2, in L2 but not L1, in L∞ but not in any other Lp, and so on. You can create many such counterexamples, all indicating that norm equivalence does not hold in general for signal norms. And this is to be expected, since we are now talking about much more general norms.

Next, since we have looked at so much on norms and normed linear spaces, including the fact that norms satisfy the triangle inequality, let us look at the other key property. Alongside the triangle inequality, and as you also saw for the matrix norms, there is a Cauchy–Schwarz type of inequality that norms satisfy. We stated the Cauchy–Schwarz inequality without proof for the matrix case, but I want to work out a simple proof of the Cauchy–Schwarz inequality in the general case. The setting is two vectors u and v belonging to an inner product space (not merely a normed linear space, since we need the inner product), and the claim is the general Cauchy–Schwarz inequality; the matrix version you have already seen.

Here is a nifty little proof. Any vector u can be written as two components: a component in the direction of some vector v, and something orthogonal to it. How do you get the component along v? You take the inner product and divide by ‖v‖²; essentially, the inner product is acting as a projection. So u = (⟨u, v⟩/‖v‖²) v + w, where w is some vector orthogonal to v. You can always break any vector into these two components: a component in the direction of an arbitrary vector v, and something orthogonal to it. We are not going to define w more explicitly, because we do not need it.

Now, take the inner product of u with itself: I substitute this decomposition for both copies of u and expand using the properties of the inner product. From the first components I get (⟨u, v⟩/‖v‖²)² times ⟨v, v⟩; then there is a mixed term, 2(⟨u, v⟩/‖v‖²) times ⟨v, w⟩; and finally ⟨w, w⟩ as the last term. We are not too concerned about what the last term looks like, but we know the mixed term is zero, because v and w are orthogonal. Note that I could have expanded w explicitly as well, writing it as u minus its projection onto v, but v and w would still be orthogonal, so it would boil down to the same expression.
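Putting the steps of this projection argument in symbols (a restatement of what was just described):

```latex
% Decompose u along v and orthogonal to v, then expand <u, u>:
\[
  u = \frac{\langle u, v \rangle}{\|v\|^2}\, v + w,
  \qquad \langle v, w \rangle = 0,
\]
\[
  \|u\|^2
  = \frac{\langle u, v \rangle^2}{\|v\|^4}\,\langle v, v \rangle
    + \frac{2\,\langle u, v \rangle}{\|v\|^2}\,\langle v, w \rangle
    + \langle w, w \rangle
  = \frac{\langle u, v \rangle^2}{\|v\|^2} + \|w\|^2 .
\]
```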
So that is why we are not expanding w further, though we could have if we wanted. All right, great. The key fact is that any vector can be written as a component along a vector v plus something orthogonal to it, with orthogonality defined by the inner product. It is defined by the inner product, that is all; it is not necessarily 90 degrees in the geometric sense, since you may choose some unusual inner product for which it is not.

Now, once we have that, we know the last term is a non-negative quantity: it is ‖w‖². Similarly, ⟨v, v⟩ is ‖v‖², and this ‖v‖² cancels against the ‖v‖⁴ in the denominator, leaving ⟨u, v⟩²/‖v‖². Since ‖w‖² ≥ 0, what do I know? I know that ‖u‖² is at least the first term, ⟨u, v⟩²/‖v‖², and rearranging immediately gives the Cauchy–Schwarz inequality, exactly as we wanted.

In fact, in Rn you can do something even simpler. Take the triangle inequality for the norm induced by the inner product, ‖u + v‖ ≤ ‖u‖ + ‖v‖, square both sides, and expand. On the left-hand side you have ‖u + v‖², which in an inner product space is ⟨u, u⟩ + ⟨v, v⟩ + 2⟨u, v⟩; on the right-hand side you have ‖u‖² + ‖v‖² + 2‖u‖‖v‖. The ⟨u, u⟩ cancels with ‖u‖², the ⟨v, v⟩ cancels with ‖v‖², the factor of 2 cancels, and you are left with ⟨u, v⟩ ≤ ‖u‖‖v‖. So that is how you would prove the Cauchy–Schwarz inequality in general. Great.
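For completeness, here is that Rn shortcut written out (again, just a restatement of the steps above):

```latex
% Square the triangle inequality and expand both sides:
\[
  \|u+v\|^2 = \|u\|^2 + 2\langle u, v\rangle + \|v\|^2
  \;\le\;
  (\|u\| + \|v\|)^2 = \|u\|^2 + 2\,\|u\|\,\|v\| + \|v\|^2,
\]
% cancel the common terms and divide by 2:
\[
  \langle u, v \rangle \;\le\; \|u\|\,\|v\| .
\]
```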