So, Banach spaces, Hilbert spaces: we have already done the setup on what kind of space we want. We also need notions of matrix norms, like I said. We work with what is called the induced matrix norm. It is induced by the corresponding vector norm, and hence it is called the induced matrix norm. So, what is the norm of a matrix A? By the way, this matrix does not need to be square or invertible or anything like that; you can compute the norm of any matrix. The p-norm of a matrix is defined using the supremum over x not equal to 0 of the p-norm of Ax divided by the p-norm of x, that is, ||A||_p = sup over x not 0 of ||Ax||_p / ||x||_p. That is the notation for the matrix norm, and that is exactly how you define it. Obviously, you have to imagine that x has to be compatible with A; I cannot take an arbitrary x from an arbitrary vector space. If A is a 1 x 2 matrix, I cannot take a three-dimensional vector. So, x has to be something for which the multiplication Ax actually makes sense. But remember that the size of the vector in the numerator and the size of the vector in the denominator do not have to be the same; you can compute the p-norm of a vector of any size. So, you compute the p-norm of the numerator and the p-norm of the denominator, and since x is not 0, this is a valid operation, and you look at the ratio. When you take the supremum of this ratio, and the supremum is a generalization of the maximum, you are essentially measuring the maximum magnification that the matrix gives to a vector.
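As a sanity check, the induced norm can be estimated numerically by sampling the ratio over many random vectors and comparing against NumPy's built-in induced norms; the matrix A below is just a made-up example:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0], [3.0, 4.0]])   # example matrix; need not be square

# Crude estimate of sup_{x != 0} ||Ax||_p / ||x||_p by random sampling.
def induced_norm_estimate(A, p, samples=100_000):
    X = rng.standard_normal((samples, A.shape[1]))
    num = np.linalg.norm(X @ A.T, ord=p, axis=1)   # ||A x||_p for each sample
    den = np.linalg.norm(X, ord=p, axis=1)         # ||x||_p for each sample
    return np.max(num / den)

# NumPy computes the induced norms directly for p = 1, 2, inf.
for p in (1, 2, np.inf):
    est = induced_norm_estimate(A, p)
    exact = np.linalg.norm(A, ord=p)
    print(f"p={p}: sampled ratio {est:.4f} <= exact norm {exact:.4f}")
```

Every sampled ratio is at most the true induced norm, so the estimate approaches the exact value from below as the number of samples grows.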
That is how much the vector gets enlarged, lengthened in some sense, by a particular matrix. That is really what you intuitively get out of a matrix norm. Makes sense, because vector norms also measure the size or length of a vector; similarly, here the induced matrix norm measures the size of the matrix, but through how it acts on a particular vector space. If you change the vector space, you might get a different answer; that is why it is called induced, remember. So, what is the supremum? The supremum is defined as the least upper bound. This is the best one-line definition of supremum you can have. Of course, you can imagine there is always an upper bound, but you are looking for the least upper bound; that is the supremum. The more formal definition is that the supremum of a set S, taken in the closure of R, which is basically R together with plus and minus infinity, is the smallest value y such that x is less than or equal to y for all x in S. Exactly the least upper bound written in mathematical terms; it is not just an upper bound. If I take the open interval (0, 1) as my set S, then you know that 1, 2, 3, 4, 5, all of them are upper bounds; everything above is an upper bound. But the supremum is the least upper bound. Just to do a thought experiment: suppose I say the supremum is actually 1 minus epsilon for some positive epsilon. But then I know that 1 minus epsilon plus delta can also belong to (0, 1); I just have to choose my delta appropriately. As long as delta is less than epsilon, 1 minus epsilon plus delta also belongs to the open interval (0, 1). But then 1 minus epsilon plus delta is greater than 1 minus epsilon, so 1 minus epsilon cannot be the supremum. It is a simple idea: 1 minus epsilon is not the supremum for any positive epsilon.
So, the only possibility is that epsilon is exactly 0, and 1 is the supremum, the least upper bound. Again, for most of you it may be very intuitive in this example that 1 is the supremum, and you may wonder why I made this funny-looking proof, but in more complicated cases you have to use these kinds of proofs. Anyway, in this case it is very obvious that for the open interval (0, 1), 1 has to be the supremum, because everything below 1 is part of the set; that is the idea of an open interval. And 1 is not included in the set, which is a very good point. So, if I thought of the open interval (0, 1) as my set, then the supremum is not in the set. This is the difference between a supremum and a maximum: a maximum is always part of the set. Therefore, there are many sets for which you cannot define a maximum, and hence you have to talk about a supremum. That is the whole reason for talking about a supremum: the maximum may not exist in the set. Sorry, can you say that again? Absolutely, that is also possible. So, there are a couple of reasons why you talk about the supremum. One is that the maximum may not be part of the set. See, whenever I define a set to work in, you almost have to think that my world ends there. Although you know that 2 is there, 3 is there, minus 1 is there, for all intents and purposes the interval (0, 1) is my world. That is the end, because all my analysis, everything I do, is inside that set. Anything that does not lie in that set is a problem for me in some sense; I do not understand it. So, the supremum is a way of giving this extension. But again, all of this works because there is a superset in some sense: there is something beyond (0, 1), so everything works.
So, of course, like you are saying, there are sets where there is no maximum. If I think about the set (0, infinity), then you will say that the supremum is infinity. So, infinity is allowed: when we talk about the supremum, infinity is allowed, even though when we talk about real numbers, infinity is not allowed. So, if you give me the set (0, infinity), I will still say there is a supremum, but it is infinity, which is sort of useless; mostly you cannot do anything with it, but we still say the supremum is infinity. All right, let us look at another example. In this case, it is a set created from the image of a function. I look at the function f(x) = 1 minus e to the minus x, where x ranges over the non-negative real numbers, and I look at the set E, which is the image of f. I hope you understand what the image of f is: it is just all the values that f takes. Since f is a nice continuous, smooth, increasing function starting at f(0) = 0 and approaching 1, the image of f is exactly the interval [0, 1). So, what is the supremum? 1, but it is not contained in E. So, basically, when the supremum is contained in the set, you can just replace the sup notation with the max notation. Also, folks who have done real analysis, and hopefully SC 63, know the connection with closedness: a closed bounded set always contains its supremum and infimum, whereas a set like E here, which misses its supremum, is not closed. So, we talk about a few matrix properties also. We use a lot of these symmetric square matrices to design Lyapunov functions and so on and so forth, so we like to know some of their properties.
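This example is easy to check numerically: values of f climb toward 1 but never reach it, so the supremum of the image is 1 and there is no maximum.

```python
import numpy as np

# f(x) = 1 - exp(-x) on x >= 0: the image is [0, 1), supremum 1, no maximum.
def f(x):
    return 1.0 - np.exp(-x)

xs = np.linspace(0.0, 30.0, 1000)
vals = f(xs)

print(f(0.0), f(1.0), f(10.0))   # values increase toward 1
assert vals.min() == 0.0          # the infimum 0 is attained, at x = 0
assert np.all(vals < 1.0)         # but the supremum 1 is never attained
```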
First, all eigenvalues of a symmetric matrix are real; most of you should know this. A symmetric matrix A is said to be positive definite if and only if any one of these equivalent conditions is satisfied: alpha transpose A alpha is strictly positive for all nonzero alpha; all eigenvalues of A are strictly positive; there exists a non-singular matrix Q such that A = Q transpose Q; every principal minor of A is positive. So, a symmetric matrix is positive definite if any of these happens. We never talk about definiteness of non-symmetric matrices. One can extend the definition, but remember that the eigenvalues of non-symmetric matrices are possibly complex, so it does not give you nice results. So, whenever we say positive definite matrices, we are invariably talking about symmetric matrices. The other thing for symmetric matrices is this inequality, which we use extensively. This inequality has a name also, but I forget it. Basically, it says that if you take a quadratic form alpha transpose A alpha using this matrix, then it is lower bounded by lambda_min times alpha transpose alpha and upper bounded by lambda_max times alpha transpose alpha. A very simple but very important inequality; we keep using it regularly in our Lyapunov analysis, so please remember it. The other important thing to remember is that the supremum expression for the induced norm is virtually impossible to compute by hand. If I ask you to compute the supremum of the norm of Ax over the norm of x, you will actually have to write some code and do some kind of search or optimization to find the answer. Very painful. So, there are actually simpler, well-known formulae for particular induced matrix norms: the infinity norm is the maximum absolute row sum, the one norm is the maximum absolute column sum, and the two norm is the largest singular value.
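These equivalent characterizations, and the eigenvalue bounds on the quadratic form (they are the classical Rayleigh-quotient bounds), can be verified numerically on a small made-up symmetric matrix; the Cholesky factorization supplies the non-singular Q:

```python
import numpy as np

rng = np.random.default_rng(1)

# A small symmetric matrix, chosen only for illustration.
A = np.array([[4.0, 1.0], [1.0, 3.0]])

# Equivalent positive-definiteness checks:
eigs = np.linalg.eigvalsh(A)                 # real eigenvalues, ascending
assert np.all(eigs > 0)                      # all eigenvalues strictly positive
assert A[0, 0] > 0 and np.linalg.det(A) > 0  # leading principal minors positive
Q = np.linalg.cholesky(A)                    # non-singular Q with A = Q Q^T
assert np.allclose(Q @ Q.T, A)

# Rayleigh-quotient bounds: lmin * a'a <= a' A a <= lmax * a'a.
lmin, lmax = eigs[0], eigs[-1]
for _ in range(1000):
    a = rng.standard_normal(2)
    q = a @ A @ a
    assert lmin * (a @ a) - 1e-9 <= q <= lmax * (a @ a) + 1e-9
print("positive definite; quadratic form bounded by", lmin, "and", lmax)
```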
So, the two norm is the square root of lambda_max of A transpose A. These three are the ones that get used more often than not. And then you have the Cauchy-Schwarz inequality. I had said to find the general proof, but actually there is a proof here, so we will look at that later, very quickly. The Cauchy-Schwarz inequality is a general inequality for all norms, and obviously it is valid also for induced matrix norms, because an induced matrix norm is itself a valid norm. By the way, as soon as I made a norm definition for matrices, I hope you understand that matrices also form a vector space; they form a normed linear space. This is evident because superposition works: if you take any two matrices of the same dimensions and take a linear combination, you get a matrix of the same dimensions. So, matrices also form a vector space, not across different dimensions, but as long as your matrix dimension is fixed, you are fine. So, of course, there are simple examples, and I do not know if I should go through these, but you have the row sum, the column sum and so on. The one norm, infinity norm and two norm are rather easy to compute; you just have to apply these formulae. All I have done is compute the absolute row sums and absolute column sums, taken the maximum absolute row sum, that is the infinity norm, taken the maximum absolute column sum, that is the one norm, and for the two norm you have to compute A transpose A and its largest eigenvalue, which is a little bit more work. All right, great. Any questions? Yes, all right. So, the nice thing is, and I hope you could have guessed from these formulae, that though we use the supremum in the definition, it turns out to have some kind of a maximum expression eventually.
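The three formulae can be applied by hand and cross-checked against NumPy; the matrix here is an arbitrary example:

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0]])

inf_norm = np.max(np.sum(np.abs(A), axis=1))             # max absolute row sum
one_norm = np.max(np.sum(np.abs(A), axis=0))             # max absolute column sum
two_norm = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))  # largest singular value

# Cross-check against NumPy's induced norms.
assert np.isclose(inf_norm, np.linalg.norm(A, np.inf))
assert np.isclose(one_norm, np.linalg.norm(A, 1))
assert np.isclose(two_norm, np.linalg.norm(A, 2))
print(inf_norm, one_norm, two_norm)   # rows give 7.0, columns give 6.0
```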
So, when you are talking about real-valued matrices, these nice things happen. The definitions are meant for general vector spaces, and there the supremum is by definition a supremum; I cannot say it will be the maximum for general vector spaces. Sorry, I apologize; actually, I think these are separate points. For these norm formulae you do not need any symmetry; only the lambda_min, lambda_max inequality is for symmetric matrices, and this formula is for the induced norm. So, for the real-valued case, yes, the supremum anyway becomes a maximum. In general, no, I cannot make that the definition; it is the supremum, it will be what it will be; if it is not in the set, it is not in the set. The supremum always exists, and it can be infinity; infinity is an option, so that is not an issue as such. All right, great. Now that we have done vector norms and matrix norms, we have to go to signal norms. I promise you, this is the final norm; no more. See, we are progressing pretty linearly. We have systems, we have states. R^n is there to talk about the size of states, distances of states from other states and things like that. States are vectors, so we looked at vector norms. But if you think of a linear system, or if you think of a Lyapunov function, there are matrices involved, so we also needed to talk about matrix norms. Finally, when we solve for these states and create trajectories, they are functions of time, therefore they are signals. So, we have to talk about signal norms. The vector norms give you some kind of pointwise behavior: once I freeze time, for example I say I want to look at the behavior at 5 seconds, then my state is a vector, and I can look at all these vector norms.
But if I want to look at the behavior of a signal over a period of time, then I need signal norms, and that is what these signal norms do, as you will see. So, suppose I am given a vector signal x: it is a function of time, taking non-negative real inputs and giving vector-valued outputs. Then the p signal norm ||x||_p is the integral over 0 to infinity, because time goes from 0 to infinity, of the vector norm of x(t) raised to the power p, dt, and then you take the p-th root of this integral. The vector norm inside is an arbitrary vector norm, not necessarily the p vector norm, even though you are computing the p signal norm; do not get confused. Similarly, the infinity signal norm again has a supremum: it is the supremum over all time of this vector norm. Like I said, the vector norm is arbitrary; for example, if you are computing the two signal norm, the vector norm you use does not have to be the two norm, it can be the one norm. The only thing you have to remember is that for a single problem you are working out, you stick to the same vector norm; do not switch between 1, 2, 3 and so on, otherwise you are not going to get consistent results, all your results will be wrong. But you are free to choose any vector norm; you are not restricted by this p or by this infinity. In fact, matching them would be wrong: if for the infinity signal norm you used the infinity vector norm, and for the p signal norm you used the p vector norm, you would end up using different vector norms in the same problem, and that is not okay.
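As an illustration of these definitions, here is a hypothetical scalar signal x(t) = e^(-t), whose p signal norms have the closed form (1/p)^(1/p); the integral is computed numerically with SciPy:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical example signal: x(t) = e^{-t} for t >= 0.
def x(t):
    return np.exp(-t)

def signal_p_norm(x, p):
    # ||x||_p = ( integral_0^infinity |x(t)|^p dt )^(1/p)
    val, _ = quad(lambda t: abs(x(t)) ** p, 0.0, np.inf)
    return val ** (1.0 / p)

for p in (1, 2, 3):
    print(p, signal_p_norm(x, p))   # matches the closed form (1/p)^(1/p)

# The infinity signal norm is sup_t |x(t)| = x(0) = 1 for this signal.
```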
You have to use the same vector norm for the entire problem; you can stick to any one vector norm. Is that clear? Okay, all right. So, like I said, in all the above definitions the inner norm signifies any vector norm, and the choice does not matter; however, never switch the norm in between, be consistent. Whenever the p signal norm is finite, we say that the signal belongs to the Lp space. So, signal norms actually define a rather big space of functions, a rather big class of functions. This is called the Lp space, and it has significance not just in control and so on; it has significance in a much larger area of mathematics. For example, whenever you talk about any kind of function approximation using series, you need Lp assumptions; Fourier series, for example. When does a Fourier series converge? It only converges when you are in some particular Lp space. So, convergence of series, when do Taylor series converge, when do Fourier series converge: these sorts of tools require you to have some kind of an Lp space. What I am saying is that this Lp space has much, much wider application than just in controls; we are using it for some very small purpose, I would say. The other thing to remember is that being in L infinity, that is, the infinity signal norm being finite, is exactly the same as saying that x is itself a bounded signal. The proof is pretty straightforward; let me look at one side of it. The infinity signal norm is the supremum over time of the vector norm. If a signal is bounded, it means there exists some positive number m such that the vector norm is smaller than m for all time. Notice, this is the vector norm notation: whenever I put the time argument, I am computing a vector norm, because I froze time.
Whenever there is no time argument, notice, because I integrated or took a supremum, the time argument has vanished. Therefore, on the left-hand side you never see a time argument. Whenever I am computing a signal norm, there is no time argument; notation-wise this is important to remember. So, no time argument on the left, but in all the vector norms on the right you see the time argument, because without freezing time I do not have a vector, I have a signal. Once I put some particular time in, I have a vector, and then I can compute a vector norm. So, this will be our notation: whenever I compute a vector norm, the time argument will appear, and if I am computing a signal norm, there will be no time argument. All right. So, when I say a signal is bounded, it means that in norm it is less than or equal to some upper bound m for all time, which means the supremum over time is also at most m. Very straightforward, and this means x belongs to L infinity. Similarly, if the infinity signal norm is some constant m, it means the supremum is m, which means that for all time the vector norm has to be less than or equal to m; therefore, the signal is bounded. So, both ways you can prove that L infinity is identical to having a bounded signal: L infinity signals are bounded signals, and anything in the L infinity space is a bounded signal. The interesting thing to remember is that if you take any two vector norms, there is a property called norm equivalence. What is norm equivalence? It means that your p norm and q norm are always relatable by constants alpha and beta.
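The notational convention, time argument for vector norms and none for signal norms, can be mirrored in code; the decaying signal below is a made-up example whose L-infinity signal norm is attained at t = 0:

```python
import numpy as np

# Hypothetical bounded signal: x(t) = e^{-t} * [1, -2]^T, sampled on a grid.
t = np.linspace(0.0, 50.0, 5001)
X = np.exp(-t)[:, None] * np.array([1.0, -2.0])

# Vector norms: time is frozen, so there is one value per time instant.
pointwise = np.linalg.norm(X, ord=np.inf, axis=1)   # ||x(t)||_inf = 2 e^{-t}

# Signal norm: the supremum removes the time argument, one number overall.
linf_norm = pointwise.max()
print(linf_norm)                        # sup_t 2 e^{-t} = 2, attained at t = 0
assert np.all(pointwise <= linf_norm)   # m = 2 bounds the signal for all t
```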
For any value of x(t), no matter how big or how small x(t) becomes, the two vector norms, the p and q norms, are relatable by constants: alpha times the p norm of x(t) is less than or equal to the q norm of x(t), which is less than or equal to beta times the p norm of x(t). And vice versa: if you want the p norm in the middle, you can still find constants; they will be 1 over beta and 1 over alpha. So, the idea is that vector norms can always be related to each other; they have what is called norm equivalence. On the other hand, signal norms do not have norm equivalence. If you have a signal which is in the L1 space, it may not be in the L2 space; if it is in the L infinity space, it may not be in the L2 space. Just because one norm is behaving nicely does not mean the others are. A very simple example is the signal x(t) with components cos t and sin t. I have decided to use the two vector norm, because I know it is easy here. The two vector norm of this signal is just 1: it is the square root of cos squared t plus sin squared t, which is 1. So, if I take the infinity signal norm, it is just the supremum over time of this, the supremum of the square root of cos squared t plus sin squared t, which is just 1. On the other hand, if I take the one signal norm, still using the two vector norm inside, no problem, it is the integral from 0 to infinity of 1, which is infinity. So, the infinity signal norm is just 1, which indicates that it is a bounded signal, and it obviously is bounded, but the one signal norm is actually infinity. And no, boundedness is only connected to the infinity norm; the one norm and two norm do not have any connotations with boundedness, they are not connected to boundedness at all. You can think of them as different function spaces with different properties.
The infinity signal norm is finite, which means the signal is bounded; however, the signal is not in the L1 space at all. And this is the conundrum: it has a nice L infinity norm, but it is not in the L1 space. Similarly for L2 and so on: you can compute the L2 norm, any Lp norm for that matter. This signal is definitely in L infinity, but it is not in any Lp for finite p, because you will always be integrating 1 from 0 to infinity, and that gives you infinity. So, norm equivalence does not work for signal norms.
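Both facts can be checked numerically: vector norm equivalence on random vectors, and the failure of signal norm equivalence for x(t) = (cos t, sin t), truncating the divergent L1 integral at a finite horizon:

```python
import numpy as np

rng = np.random.default_rng(2)

# Vector norms are equivalent: ||v||_2 <= ||v||_1 <= sqrt(n) ||v||_2 on R^n.
n = 5
for _ in range(1000):
    v = rng.standard_normal(n)
    n1, n2 = np.linalg.norm(v, 1), np.linalg.norm(v, 2)
    assert n2 <= n1 + 1e-12 and n1 <= np.sqrt(n) * n2 + 1e-9

# Signal norms are not: x(t) = (cos t, sin t) has ||x(t)||_2 = 1 for all t,
# so its L-infinity signal norm is 1, but its L1 signal norm diverges.
T = np.linspace(0.0, 100.0, 10_001)
vals = np.hypot(np.cos(T), np.sin(T))      # pointwise two norm, identically 1
print(vals.max())                          # L-infinity norm on this grid: ~1
partial = np.sum(vals[:-1] * np.diff(T))   # truncated integral of 1 over [0, 100]
print(partial)                             # ~100; grows without bound with T
```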