Welcome to another session of the NPTEL course on nonlinear and adaptive control. I am Srikanth Sukumar from Systems and Control, IIT Bombay. As always, we are looking at a very nice representative background of a rover on Mars. We hope that the algorithms we learn to design in this course will eventually help us drive autonomous systems such as rovers on planets like Mars or the moon. So, without delaying any further, let us get into the lecture. Where we left off last time is that we had looked at the notion of normed linear spaces. We had defined different kinds of norms, and we had also defined the structure that is a normed linear space. The idea we pursued last time was to prove that some of the norms we defined are in fact valid norms. What are the conditions? The conditions for a function from the vector space to the reals to be a valid norm are that it is nonnegative, that it is 0 only when the vector itself is 0, and that it satisfies the scalar multiplication property and the triangle inequality. As we saw, the first three properties are usually not hard to show; the triangle inequality is the more critical and difficult one to prove. We did verify all the conditions for the infinity norm as we defined it, so the infinity norm was in fact proven to be a valid norm on Rn. We then wanted to mimic the same for the 2-norm. We started it and proved the first three properties: the 2-norm is nonnegative, it is 0 only if the vector itself is 0, and it satisfies the scalar multiplication property. However, there was a slight error in how we were trying to prove the triangle inequality, and that is what we want to complete first today.
We want to complete the proof that the 2-norm, that is, the Euclidean distance norm as we defined it, satisfies the triangle inequality property of norms. So, let us look at this. We started with the 2-norm of x plus y squared, and this is nothing but the summation over i from 1 to n of the absolute value of xi plus yi, squared. If I expand this, I get the summation over i equal to 1 to n of xi squared plus yi squared plus twice xi yi. Notice that because I have squared it, taking the square of the absolute value of xi plus yi is the same as taking the square of xi plus yi itself, so I can drop the absolute value here, and these are still all equalities. Now it should be obvious that I can also plug the absolute values back in: this quantity is just the square of the 2-norm of x, that quantity is the square of the 2-norm of y, and I am left with twice the summation over i equal to 1 to n of xi yi. This time we do not do what we were trying to do last time; we will do it in a rather simple way using the scalar dot product, which all of you know from Euclidean space. This expression right here is nothing but the dot product of x with y. And what do we know about the scalar dot product? We know a couple of things. One is that it acts like a projection of one vector onto the other if I divide by the length of one of them. The other is that it is a scalar value; that is why it is called the scalar dot product. When I take the dot product of two vectors, I get a scalar.
And that should be obvious: this too is a scalar quantity. We also know that the scalar dot product is evaluated by the formula norm of x times norm of y times cosine theta. What is theta? Theta is simply the angle between the vectors x and y. So, x dot y is simply norm x times norm y times cosine theta. Now, what do I know about the cosine? The cosine lies between minus 1 and 1 for any value of theta; therefore, cosine theta is always less than or equal to 1. Exploiting that fact, I can immediately create this inequality: x dot y is less than or equal to norm x times norm y. So, we have very effortlessly proved that the summation of xi yi is less than or equal to norm x times norm y, and this is exactly what we wanted, because I can plug it back in here in place of this term. If you notice, it was all equalities until this point, but then it becomes an inequality because of this term. And what do I have here? These two terms remain the same, but here I replace twice norm x times norm y, and it is easy to see that the whole thing is nothing but the square of norm of x plus norm of y. Now, if I cancel the squares on both sides, I have exactly the triangle inequality I wanted to prove. So, this essentially completes the proof of the triangle inequality for the 2-norm. Like I said, we are not going to see proofs for the 1-norm, the 3-norm, the 5-norm, or any other p-norm, but typically those proofs follow in a similar way. One very critical thing to note is this piece of the proof that you saw here; I am going to highlight it because it is a rather critical piece of the proof.
So, this piece of the proof that you see here is in fact a proof of the Cauchy-Schwarz inequality in Euclidean space. If you notice, the left-hand side is actually how we write the inner product; we will look at this notation a little bit later. x dot y is the inner product of x and y. So, we have essentially proven that the inner product of two vectors is less than or equal to norm x times norm y, which is exactly the Cauchy-Schwarz inequality. We have proven a particular case of the Cauchy-Schwarz inequality right here, so please remember this. By the way, the Cauchy-Schwarz inequality is a rather key inequality which holds in every inner product space, with the norm induced by the inner product. You can also do a general proof, but for now we have a pretty good specific proof and we are quite happy with it. All right, let us move on. We have now proved that the norms we chose as vector norms are in fact valid vector norms. Great. Let us look at the next set of ideas that we are keen on. We already spoke about the notion of convergence very loosely. We kept saying the limit of a function, or we said that the function converges to a constant but the derivative does not converge to 0, or that the derivative converges to 0 but the function does not converge to a constant. So, we have used this word convergence a few times already in the preceding lectures, but we have been using it loosely. In the mathematics courses you would have attended in the past, whenever you spoke of convergence, it was always associated with the notion of limits. The two are of course very closely connected, so it is natural that whenever we talk about convergence, the notion of limits also shows up.
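As a quick numerical sanity check (not a substitute for the proof above), here is a short Python sketch that tests both the Cauchy-Schwarz inequality and the triangle inequality of the 2-norm on randomly chosen vectors; the helper names norm2 and dot are my own, not from the lecture.

```python
# Numerical sanity check of Cauchy-Schwarz and the triangle inequality
# for the 2-norm on R^n, using the standard dot product.
# Illustrative sketch only; the vectors are arbitrary random choices.
import math
import random

def norm2(v):
    """Euclidean (2-)norm: square root of the sum of squares."""
    return math.sqrt(sum(c * c for c in v))

def dot(u, v):
    """Standard scalar dot product on R^n."""
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(5)]
    y = [random.uniform(-10, 10) for _ in range(5)]
    # Cauchy-Schwarz: x . y <= ||x|| * ||y||
    assert dot(x, y) <= norm2(x) * norm2(y) + 1e-9
    # Triangle inequality: ||x + y|| <= ||x|| + ||y||
    s = [a + b for a, b in zip(x, y)]
    assert norm2(s) <= norm2(x) + norm2(y) + 1e-9
print("Cauchy-Schwarz and triangle inequality hold on all samples")
```

Of course, checking finitely many random vectors proves nothing; the point is only to make the two inequalities concrete.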
However, let us look at a more formal way of defining convergence. Once we have a normed linear space, all these notions can be very easily defined; without a norm they cannot, because the entire idea of convergence is for terms to get close to a particular point, and there is no way to define closeness without the notion of a norm. So, it was very important for us to have a normed linear space. Suppose I have a sequence; this is the notation for a sequence, and I hope you folks have seen it before. If not, it is very simple notation: each term is indexed by an i, and i goes from 1 to infinity. A sequence inherently has infinitely many terms; there is no such thing as a finite sequence. As soon as someone says sequence, infinitely many terms should come to your mind. So, the terms are all indexed: you have terms x1, x2, x3, x4, and so on. A sequence xi in a normed linear space is said to converge to a point x0 in this space if for every positive epsilon there exists a positive integer n, not just any integer but a positive integer, such that the norm of xi minus x0 is less than the epsilon that was given to us, for all i greater than or equal to this integer n. Let us look at this again so things are very clear in our minds. What am I saying? If I am talking about convergence, I have to qualify it with the point to which we are converging, otherwise it does not make sense; so I have qualified it with the point x0. The sequence xi in this normed linear space is said to converge to the point x0 if, given any epsilon the user hands me, I should be able to find a positive integer n such that all my terms beyond the nth term are within epsilon of x0.
So, this is an illustrative picture, this thing on the left here, and the 1, 2, 3 is the term number; I am basically writing i here, so i equal to 1, 2, 3, 4, and so on. You note that as i increases, my terms are getting closer and closer to x0. And this is exactly what is meant by this definition. It is not difficult to look at some examples, so let us actually look at one. Suppose I have xi equal to 1 over i. Then xi converges to 0; this is the notation. You can immediately see how it is connected to limits. Why can I say that it is going to 0? Because if I am given any positive epsilon and I look at xi minus 0, since in this case my x0 is 0, I want this to be less than epsilon; that is the requirement. I want to find an n such that beyond the nth term, every term is within epsilon of the point of convergence x0. So, what should I choose my n as? It is very easy: all I have to do is take i greater than the ceiling function of 1 over epsilon. What is the ceiling function? It is the function which gives me the smallest integer at least as large as a number. So, if I give you 0.7, the ceiling is 1; if I give you 0.5, it is still 1. If I give you 5.9, the ceiling is 6; if I give you 5.2, it is also 6. So, if I take i larger than the ceiling of 1 over epsilon, then I know xi is less than epsilon, and I am done. As simple as that. Take the example xi equal to 1 over i; I know just from my limit ideas that this is going to go to 0.
So, the point of convergence x0 is in fact 0 in this case. How do I prove it, or how do I find the corresponding n? Because in order to prove that a sequence converges, I need to actually be able to give an n. Suppose I start with any epsilon; notice that epsilon is very much arbitrary. I want to satisfy this inequality for i greater than or equal to n. So, this quantity here I choose as my n, using the ceiling function. Basically, I want anything larger than 1 over epsilon, but 1 over epsilon may not be an integer, so I just choose the integer above it; that is, I take the ceiling function. If I choose any i greater than or equal to this n, then I know that xi is less than epsilon, and I am done, because this quantity and the absolute value are the same since all the terms are positive in this case; the absolute value function does not really play any role here. Great. So, this is how you look at convergence. Of course, this is a very simple example; there is a little more to do when the example is slightly more complicated, but not significantly more, to be honest. Okay, what is a Cauchy sequence? We have seen convergence; a Cauchy sequence is a slightly different notion. It says that a sequence xi in a normed linear space is said to be Cauchy if the terms eventually get arbitrarily close to each other; that is essentially what is quantified by these epsilons and n's. What does it mean? That if I am given any positive epsilon, I can again find an n. Notice that in both these cases the n depends on epsilon. So, a sequence is said to be Cauchy if for every positive epsilon there exists a positive integer; let me again do this.
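The epsilon-n argument above can be sketched in a few lines of Python: given epsilon, take n one more than the ceiling of 1 over epsilon, and check that the terms beyond n are within epsilon of 0. The function name N_for is my own label for this choice, and checking a finite window of terms is only an illustration, not the proof.

```python
# Sketch of the epsilon-N argument for x_i = 1/i converging to 0:
# given epsilon, choose N just above ceil(1/epsilon); then every term
# with i >= N satisfies |1/i - 0| < epsilon.
import math

def N_for(epsilon):
    """N from the argument above: one past ceil(1/epsilon) keeps it strict."""
    return math.ceil(1.0 / epsilon) + 1

for epsilon in [0.5, 0.1, 0.003]:
    N = N_for(epsilon)
    # spot-check a window of terms beyond N (we cannot check all of them)
    assert all(abs(1.0 / i - 0.0) < epsilon for i in range(N, N + 10000))
    print(f"epsilon={epsilon}: N={N} works for the checked window")
```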
There exists a positive integer n such that any two terms xi and xj are within epsilon of each other for all i and j greater than or equal to n. So, as your index becomes larger and larger, the terms crowd together; that is essentially what this is, and you can see it in this picture. In fact, the sequence that we just proposed, xi equal to 1 over i, is also a Cauchy sequence. I am not going to prove this; I will say this: find n given epsilon. Please treat this as an exercise and do it. If I give you an epsilon, try to find the n corresponding to it. You want the terms beyond n to be close to each other, so the n will turn out to look slightly different from what you have here, but the fact is that this is also a Cauchy sequence. An important thing to remember, and I am going to write this in red: convergence implies Cauchy, but Cauchy does not imply convergence. Because the sequence in our example is convergent, which is why the example I gave you works, I know that it is Cauchy. In fact, from the definition of convergence you can prove that every convergent sequence is a Cauchy sequence; I am not showing that proof, but you can do it. For the specific case, I encourage you to find n given an epsilon to prove that this sequence is in fact a Cauchy sequence. The other way around is not true: if a sequence is Cauchy, it is not necessarily convergent. I can always construct funny examples, and I have constructed one such funny example for you folks here. My normed linear space is the open interval (0, 1). I really hope you know what an open set is: the open interval (0, 1) is everything between 0 and 1 except for the end points; the end points 0 and 1 are not part of the set X.
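If you want to check your answer to the exercise, here is one possible choice of n (a sketch; do try the exercise yourself first). The observation behind it, which is not spelled out in the lecture, is that for i, j at least N, the difference |1/i - 1/j| is at most 1/N, so any N just above the ceiling of 1 over epsilon works. The name cauchy_N is my own.

```python
# One possible answer to the exercise: for x_i = 1/i, given epsilon take
# N just above ceil(1/epsilon).  For i, j >= N, |1/i - 1/j| <= 1/N < epsilon.
import math
import itertools

def cauchy_N(epsilon):
    """A valid N for the Cauchy definition applied to x_i = 1/i."""
    return math.ceil(1.0 / epsilon) + 1

epsilon = 0.01
N = cauchy_N(epsilon)
# spot-check pairs of indices beyond N
for i, j in itertools.product(range(N, N + 200), repeat=2):
    assert abs(1.0 / i - 1.0 / j) < epsilon
print(f"all checked pairs beyond N={N} are within epsilon={epsilon}")
```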
I have essentially constructed this somewhat ridiculous set, if you like, to prove my point that Cauchy does not imply convergence. If I take my sequence as xn equal to 1 minus 1 over n, what happens as n goes to infinity? You can see very easily that xn tends to 1; that is where you would converge to. But the thing is that 1 is not part of X. This might seem a funny, weird, trivial sort of example, but it is not so trivial: the sequence seems to be tending to a point, but the point is not part of the set. And if you look at the definition of convergence, this should remind you of the fine points in any definition: the point you converge to has to be part of the set X. This is rather critical. I have constructed a somewhat artificial example, if you like, but there can be many more realistic examples where a Cauchy sequence does not converge. One good thing is that in most of the spaces we concern ourselves with, Cauchy sequences do in fact converge. Such spaces are called Banach spaces, or complete normed linear spaces. Most spaces we consider, like Rn, Rm, Rp and so on, are all Banach spaces; that is, all Cauchy sequences converge to some point in the space. This is rather useful. A normed space that does not have this property can be a very troublesome space, because we are always interested in doing convergence analysis and seeing where different signals converge to. If you do not have the completeness property and your sequence or your signal seems to be tending to a point, but the point is not part of the set, then you will land in trouble as far as convergence analysis goes.
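The counterexample above can be made concrete in a few lines: the terms 1 minus 1 over n all live inside the open interval (0, 1) and bunch up ever closer together, yet the point they approach, 1, fails the membership test. A small sketch (the helper in_X is my own name for the membership check; note the sequence starts at n equal to 2 so that the first term 0 does not fall outside the set):

```python
# x_n = 1 - 1/n is Cauchy inside X = (0, 1), but its would-be limit 1
# is not an element of X.  Illustrative sketch only.
def in_X(p):
    """Membership test for the open interval X = (0, 1)."""
    return 0.0 < p < 1.0

terms = [1.0 - 1.0 / n for n in range(2, 10001)]
assert all(in_X(t) for t in terms)        # every term lies inside X
assert abs(terms[-1] - terms[-2]) < 1e-6  # late terms crowd together
assert not in_X(1.0)                      # ...but the limit point is outside X
print("Cauchy in (0, 1), yet the limit 1 is not in the space")
```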
Because if you have defined a norm only on this set X, then outside the set X you do not know if the norm even exists. Again, do not go just by this example; this is a cooked-up example, but there can be more realistic scenarios where you do not have completeness. As an aside, I must clarify that every space we consider in this course is going to be a complete normed linear space, which is what we also call a Banach space. Like we said, examples are Rn with the infinity norm, or the 2-norm, or in fact any norm; these are all Banach spaces. So, you have seen quite a few notions now, quite a bit of structure. Once we have the structure of a norm on a vector space, we see that we can talk about convergence, Cauchy sequences, completeness, and so on, which is rather nice. Another structure, which carries a little more structure than a norm alone, is the inner product. We just saw it on the previous slide, where we used this notation when we talked about the dot product. The dot product is the simplest inner product that we all know; in Rn, the dot product is the standard inner product, and it is also denoted like this. So, what is an inner product space? An inner product space is again a linear space, a special normed linear space in fact, equipped with an inner product operation. What is it? It is a function which takes two elements of the vector space and maps them to the field. By field, I mean where each component of the vectors belongs; for example, when I have two vectors in Rn, every component is a real number, so the field is the field of reals. And then you have a few properties of these inner products.
So, what are these? Just as we had properties for the norm function, the inner product function also has certain properties. The first is that it is symmetric; again, we are talking about spaces over the reals, otherwise there would be conjugates and so on. So, the inner product of x with y is the same as the inner product of y with x; the order does not matter. Then it has a distributivity property: the inner product of x with y plus z is the same as the inner product of x with y plus the inner product of x with z. The third is the scalar multiplication property: the scalar multiplier alpha just comes out of the inner product. And the final property is that the inner product of x with itself is non-negative, and 0 if and only if x equals 0. This last statement should already start to remind you of a norm, because this is one of the norm properties: you took one element x, and this function is non-negative and 0 only when the vector itself is 0. So, what is an example? I already said it, so the mystery is gone: the scalar dot product on Rn is a valid inner product over the field of reals. And like I also hinted, the space X with this inner product is also a normed linear space: given an inner product space, you can always construct a normed linear space by defining the norm of x as the square root of the inner product of x with itself. So, given an inner product space, you always have a normed linear space. I will stop here. In summary, what we looked at today: we completed the proof of the fact that the 2-norm, the way we defined it, is a valid norm.
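The four axioms above, and the norm induced by the inner product, can be checked numerically for the dot product on Rn. A sketch with arbitrarily chosen vectors (the helper name ip is my own):

```python
# Checking the four inner-product axioms for the dot product on R^n,
# and that the induced norm sqrt(<x, x>) recovers the 2-norm.
import math
import random

def ip(u, v):
    """Standard inner product (dot product) on R^n."""
    return sum(a * b for a, b in zip(u, v))

random.seed(1)
x = [random.uniform(-5, 5) for _ in range(4)]
y = [random.uniform(-5, 5) for _ in range(4)]
z = [random.uniform(-5, 5) for _ in range(4)]
alpha, tol = 2.5, 1e-9

assert abs(ip(x, y) - ip(y, x)) < tol                         # symmetry
assert abs(ip(x, [b + c for b, c in zip(y, z)])
           - (ip(x, y) + ip(x, z))) < tol                     # distributivity
assert abs(ip([alpha * a for a in x], y)
           - alpha * ip(x, y)) < tol                          # scalar comes out
assert ip(x, x) >= 0 and ip([0.0] * 4, [0.0] * 4) == 0        # non-negativity
# induced norm equals the 2-norm
assert abs(math.sqrt(ip(x, x)) - math.sqrt(sum(a * a for a in x))) < tol
print("dot product satisfies the inner-product axioms; sqrt(<x,x>) is the 2-norm")
```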
Then, once we had the structure of a normed linear space, we were able to talk about convergence, and we did. Then we looked at what a Cauchy sequence is and how the two notions are not always equivalent; when every Cauchy sequence does converge, we have what is called a Banach space. And then we defined an additional structure, the notion of the inner product. We also saw that the standard dot product we have been talking about is in fact an inner product, and that an inner product space naturally leads to a normed linear space just by virtue of this construction. All right, that is where we will stop today. Thank you.