Hello all. Welcome again to our NPTEL course on non-linear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. As always, we are motivated by this very exciting background image: we want to eventually be able to design algorithms that drive systems for autonomous motion such as this. So without delaying any further, we continue with our lectures.

Last time we looked at a couple of myths and temptations which plague typical asymptotic analysis. After that we moved on to vector and matrix norms, basically giving us the ability to deal with objects that evolve on normed linear spaces. We defined norms on vector spaces, and therefore gave them the structure of a normed linear space, and we looked at a few examples of these norms. We also spoke about the matrix induced norm, a norm for matrices obtained by using an underlying vector norm. Finally, we saw what this normed linear space structure is, and, rather comfortingly, that whatever spaces we are usually going to deal with in this course are in fact normed linear spaces.

So now we continue; we are on lecture number 3. We want to look in a little more detail at the norms we have constructed until now, so that we get a better feel for them. One of the key things in this direction is to actually prove that the norms we have defined are indeed valid norms as per the definition. Recall that the definition contained four aspects: one is non-negativity. Then we had the notion of the norm being 0 only when the vector itself is 0.
Then we had the scalar multiplication property, and then finally the key triangle inequality property. We want to prove that the infinity norm, and also a particular p-norm, are valid norms, just to see how such proofs go.

First, the infinity norm, which is essentially the largest component, in absolute value, of any vector given to us: for x in R^n, the infinity norm ||x||_∞ is computed by taking the largest of the values |x_i| over i = 1, ..., n. Since the vector is finite dimensional, this is very easy. Remember we talked about the notions of supremums and maximums last time: if the set is finite, computing a maximum is easy, we just have to compare. We also saw a couple of examples in this direction.

Now the proof. First, the fact that the infinity norm is greater than or equal to 0 is obvious. Why? Because we take every component, compute its absolute value, and this quantity is always non-negative. Since we are taking a maximum over non-negative numbers, the output is also non-negative. So the first requirement is very easily satisfied.

The second requirement is that the norm has to be 0 if and only if the vector itself is 0. This should be obvious if we expand a little. If the infinity norm is 0, then the max over i of |x_i| is 0, and remember that each |x_i| is non-negative.
So if the maximum of |x_i| over all i is 0, the only way this is possible is that every x_i is 0. On the other hand, if every x_i is 0, then each |x_i| is 0, and hence the max of |x_i|, which is the infinity norm, is 0. So we have easily proven that the infinity norm being 0 is equivalent to each component being 0, that is, the vector itself being 0.

The third property, scalar multiplication, is rather obvious. There is nothing to do, because, again, we have an absolute value: the scalar alpha comes out of |alpha x_i| and then out of the max, so ||alpha x||_∞ = |alpha| ||x||_∞. It is very straightforward, so I am not really elaborating on it.

Now we come to the critical triangle inequality property, and, as always, whichever property is critical is also the more challenging one to prove. To establish the triangle inequality I need to show that ||x + y||_∞ ≤ ||x||_∞ + ||y||_∞; that is the goal. Writing the definition, ||x + y||_∞ = max over i from 1 to n of |x_i + y_i|. Since we are maximizing over a finite set, the maximum is attained: ||x + y||_∞ is in fact exactly equal to |x_k + y_k| for some k, because after all we are taking a max, so it has to hold for some i = k. Now, by the triangle inequality property of the absolute value,
which the absolute value is very well known to satisfy (I am not going to prove it; it is easily verifiable), I get that |x_k + y_k| ≤ |x_k| + |y_k|, again for that same k in 1 to n. Note that all the while, the quantity we started with stays on the left hand side: we started with an equality, and now we have an inequality. This is how a lot of inequality proving goes, so get used to it: we start with a quantity and, keeping it on the left hand side, we chain equalities and inequalities, usually in one direction. If you are proving a "less than or equal to", every step is "≤" or "="; if a "greater than or equal to", every step is "≥" or "=". It never switches between "less than" and "greater than", because then you cannot actually prove anything. This is a very standard method of proving things.

So we have |x_k| + |y_k| for some k in 1 to n, and it should be obvious that this is less than or equal to max over i of |x_i| plus max over i of |y_i|: any particular |x_k|, for an arbitrary k in 1 to n, has to be no larger than the maximum over all possible such k. It is now evident that the first quantity is ||x||_∞ and the second is ||y||_∞. So, chaining the inequalities, we have proven that ||x + y||_∞ ≤ ||x||_∞ + ||y||_∞, and that is the end of the proof. The infinity norm as defined satisfies all the norm properties and is therefore a valid norm. Now we want to do something very similar for our two norm.
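Before moving on, the infinity-norm proof above can be spot-checked numerically. The following is an editor's Python sketch, not part of the lecture: it computes the infinity norm directly from its definition and checks the four norm properties on random vectors.

```python
import random

def inf_norm(x):
    """Infinity norm of a vector: the largest component in absolute value."""
    return max(abs(xi) for xi in x)

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 8)
    x = [random.uniform(-10.0, 10.0) for _ in range(n)]
    y = [random.uniform(-10.0, 10.0) for _ in range(n)]
    a = random.uniform(-5.0, 5.0)

    # Property 1: non-negativity.
    assert inf_norm(x) >= 0.0
    # Property 3: scalar multiplication, ||a x|| = |a| ||x||.
    assert abs(inf_norm([a * xi for xi in x]) - abs(a) * inf_norm(x)) < 1e-9
    # Property 4: triangle inequality, ||x + y|| <= ||x|| + ||y||.
    s = [xi + yi for xi, yi in zip(x, y)]
    assert inf_norm(s) <= inf_norm(x) + inf_norm(y) + 1e-12

# Property 2: definiteness, the norm is 0 exactly for the zero vector.
assert inf_norm([0.0, 0.0, 0.0]) == 0.0
assert inf_norm([0.0, 1e-9, 0.0]) > 0.0
```

Of course, random testing is no substitute for the proof; it only illustrates the properties on concrete vectors.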
So let us see if we can in fact do the same. How is the two norm defined? Of course, similar proofs can be done for all p-norms, but we are focusing on the infinity norm and the two norm because these are particularly important: the two norm, like I mentioned last time, is the Euclidean distance, the way we usually know how to measure distance, and the infinity norm is also a rather important norm. You can do the same exercise for all the other norms; I am not going to do it for all of them. The two norm is defined as ||x||_2 = the square root of the summation over i from 1 to n of |x_i|^2.

In passing, the first three properties are obviously satisfied, but I will still write a little here. Non-negativity should be evident: we take squares of absolute values, which are non-negative, sum them, still non-negative, and take a square root, again non-negative. So this is fine.

Next, the two norm of x is 0 if and only if x itself is the zero vector. This is again not too difficult to verify. If we write out the square of the two norm, we get ||x||_2^2 = x_1^2 + x_2^2 + ... + x_n^2. Suppose this is 0; the only way this is possible is that each component is 0. It is very easy to argue: we are only adding things here, nothing ever gets subtracted, so if any component is non-zero it definitely adds something positive and the sum will not be 0.
So the only way that ||x||_2^2 is 0 is if every x_i is 0, which means the vector x is 0. Conversely, if the vector x is the zero vector, then the norm is obviously 0. So those are the first two properties. Then you have the scalar multiplication property: for any scalar alpha, ||alpha x||_2 expands to the square root of the summation of alpha^2 x_i^2, which is simply |alpha| times the square root of the summation of x_i^2, that is, |alpha| ||x||_2. So we have proved this property; it is very straightforward, which is why I said these properties are obviously satisfied, but there you go, we have in fact proved them.

Now the triangle inequality, as always our key, or difficult, property. Look at the square of the two norm of x + y; again I have taken the quantity I want on the left, and it is going to stay there. It can be written as the summation over i from 1 to n of (x_i + y_i)^2. Let us carefully evaluate the quantity inside the summation.
The quantity inside the summation evaluates to (x_i + y_i)^2 = x_i^2 + y_i^2 + 2 x_i y_i. Be careful here: it is tempting to say that the cross term only brings the sum down, but 2 x_i y_i cannot be guaranteed to be sign definite; it can be positive or negative. So we cannot simply drop it, and the honest statement is that ||x + y||_2^2 equals the summation of x_i^2, plus the summation of y_i^2, plus the additional cross term 2 times the summation of x_i y_i. That cross term is the one I would like to work on, and what I would like to claim is that it satisfies 2 Σ_i x_i y_i ≤ 2 ||x||_2 ||y||_2.

Let us see why this claim is exactly what we need. If it is true, then the entire right hand side becomes less than or equal to ||x||_2^2 + ||y||_2^2 + 2 ||x||_2 ||y||_2, since the summation of x_i^2 is ||x||_2^2 and the summation of y_i^2 is ||y||_2^2. But this is exactly (||x||_2 + ||y||_2)^2, and then my proof is done: on the left hand side I had ||x + y||_2^2, and here I have (||x||_2 + ||y||_2)^2, so removing the square on both sides gives the triangle inequality.

So what am I left to prove? Just the claim. Writing it out, the left hand side is 2 Σ_i x_i y_i and the right hand side is 2 √(Σ_i x_i^2) √(Σ_i y_i^2); note the square roots on the right hand side. Does this look easy or obvious? Not quite. This is in fact the Cauchy-Schwarz inequality, and there is a smart way to invoke it directly, but I do not want to do that; I want to prove it for this particular case rather than employ a general formula. So you can see that even in this simple proof we run into an interesting term: the 2 x_i y_i cross term and the 2 ||x||_2 ||y||_2 term look rather similar, but when expanded they are very different-looking: the left hand side is a plain sum of products, while the right hand side involves square roots of sums of squares. How I would think of approaching this is to take a square on both sides: then the right becomes the product (Σ_i x_i^2)(Σ_i y_i^2), the left becomes the square of the sum of products, (Σ_i x_i y_i)^2, and I would then have to correlate these two expressions and show that one is smaller than the other. I will cover this piece of the proof next time, so let us not worry about it now; the point is that you will eventually see that the two norm is also a valid norm. We have already verified the first three properties, which were rather easy; the triangle inequality is of course a little more complicated, so we will complete that proof next time.

After that we want to look at notions of convergence; that is the plan for next time. Once we understand how norms work, we want to talk about convergence, because in order to define convergence you need the notion of a norm: convergence basically means going close to something, and like we discussed, the idea of going close to a point cannot be easily defined for vectors if you do not have a norm. Once we have a norm and a normed linear space, it is easy to define convergence.

So what did we see today? We did rather basic proofs that the norms we have defined actually satisfy the norm properties. We proved this completely for the infinity norm; for the two norm we are still to see the triangle inequality property. Of course, this is all leading up to defining convergence, Cauchy sequences, inner product spaces, and so on. Convergence is a rather key property for us, and that is what we will look at next time. Thank you.
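As a numerical companion to today's discussion (an editor's Python sketch under the definitions above, not part of the lecture itself), the outstanding claim, the Cauchy-Schwarz inequality, and the resulting triangle inequality for the two norm can at least be spot-checked on random vectors while we wait for the proof next time.

```python
import math
import random

def two_norm(x):
    """Euclidean (two) norm: square root of the sum of squared components."""
    return math.sqrt(sum(xi * xi for xi in x))

def dot(x, y):
    """Sum of componentwise products, the cross term from the lecture."""
    return sum(xi * yi for xi, yi in zip(x, y))

random.seed(1)
for _ in range(1000):
    n = random.randint(1, 6)
    x = [random.uniform(-10.0, 10.0) for _ in range(n)]
    y = [random.uniform(-10.0, 10.0) for _ in range(n)]

    # The claim to be proved next time (Cauchy-Schwarz):
    #   sum_i x_i y_i <= ||x||_2 * ||y||_2
    assert dot(x, y) <= two_norm(x) * two_norm(y) + 1e-9
    # ...which, as derived above, yields the triangle inequality.
    s = [xi + yi for xi, yi in zip(x, y)]
    assert two_norm(s) <= two_norm(x) + two_norm(y) + 1e-9
```

Again, such checks only illustrate the inequalities on concrete vectors; the actual proof is the subject of the next lecture.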