Because any sequence is really like a vector in an infinite dimensional space, and the infinity of that dimension is the infinity of the integers. You can think of each sample of the sequence as a coordinate of that so-called vector. So a sequence is like a vector in a countably infinite dimensional space whose orthonormal basis is essentially the set of all unit impulse sequences: put a unit impulse at every integer location, and each of these gives you a separate perpendicular vector. This is a generalization from what we have in the 5-dimensional space here, where you put the ones respectively at the locations 0, 1, 2, 3 and 4. But you can keep generalizing: put an impulse at each integer location, and each of them gives you a new perpendicular vector, and that is how you get an orthonormal basis for the space of sequences. And of course we know very easily how to express any sequence in terms of this basis; we have already done that when we derived the principle of convolution. Any sequence, any vector x[n], can be expressed as

x[n] = Σ_{k = −∞}^{+∞} x[k] δ[n − k].

You know, signal processing is very interesting in the way poetry is: the same line can have many meanings, and the same equation can assume many meanings in signal processing and communication, and one must learn to appreciate that. This very equation had a different meaning when we talked about convolution. There, you were trying to express all sequences in terms of the unit impulse; there is a relationship between that meaning and this one, but the emphasis was different.
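As a small sketch of the expansion above, restricted to a finite window of the integer axis (the function name `unit_impulse` and the sample values are illustrative, not from the lecture):

```python
import numpy as np

def unit_impulse(n, k):
    """delta[n - k]: 1.0 where n == k, else 0.0."""
    return (n == k).astype(float)

n = np.arange(-5, 6)  # a finite slice of the integer locations
x = np.array([0, 0, 1, 3, -2, 5, 0, 4, 0, 0, 0], dtype=float)

# Rebuild x as a weighted sum of shifted unit impulses:
# x[n] = sum_k x[k] * delta[n - k]
reconstructed = sum(x[i] * unit_impulse(n, n[i]) for i in range(len(n)))

print(np.array_equal(reconstructed, x))  # the expansion recovers x exactly
```

Each shifted impulse picks out one coordinate, which is exactly the "each sample is a coordinate" picture.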
Now here what you are trying to do is to show that you have an orthonormal basis (in fact we will show in a moment that it is orthonormal) and that this orthonormal basis allows you to expand any vector in terms of it. So it is the same idea: one equation can have several meanings, several different interpretations or several nuances of interpretation, and one must learn to appreciate that as we go along. Coming back to this, it is very clear that we can express x[n] in terms of this orthonormal basis, but what we now need to see is why we are calling it orthonormal. So we need to talk about a dot product. The dot product between two sequences, as we generalize from finite dimensional spaces, would essentially be a sum over all the coordinates. Each location k is a coordinate now, so we take

x1 · x2 = Σ_k x1[k] x2[k],

and for the moment let us assume real sequences x1, x2, with real coordinates. This is essentially a generalization of what we did in finite dimensional spaces: we took the product of corresponding coordinates and added up over all the coordinates. But now we need to ask whether this definition is acceptable when the sequences are complex. For that we need to ask what more we want from a dot product. Before we do that, since we are generalizing the idea of a dot product, we would like to use a slightly more general notation. The dot product is also more generally called the inner product, and it is denoted ⟨x1, x2⟩, with angle brackets. You see, if you take the dot product of x with x, what do you get? In conventional vector algebra we get the magnitude squared of the vector, and here we would like to use the word norm. So instead of saying magnitude, for this generalized class of vectors we call it the norm.
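With this real-sequence dot product we can check the orthonormality claim directly: two shifted unit impulses have inner product 0, and each has inner product 1 with itself. A minimal sketch (finite window, illustrative names):

```python
import numpy as np

def inner_real(x1, x2):
    """<x1, x2> = sum_k x1[k] * x2[k] for real sequences (finite support)."""
    return float(np.sum(x1 * x2))

n = np.arange(-4, 5)
d0 = (n == 0).astype(float)   # delta[n]
d2 = (n == 2).astype(float)   # delta[n - 2]

print(inner_real(d0, d0))  # 1.0 : unit norm
print(inner_real(d0, d2))  # 0.0 : perpendicular (disjoint supports)
```

The products x1[k] x2[k] vanish at every k for two impulses at different locations, which is why the basis is orthogonal; the single surviving term 1·1 gives the unit norm.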
In fact we call it the 2-norm, written ‖x‖₂, so do not worry too much about this lower 2; the upper 2 is essentially a square. This is what is called the L2 norm. The subscript is a technicality, let us not worry too much about it, but let us write it down for completeness; you could have other numbers there as well. However, in our discussion in this course we shall not keep writing this subscript 2; we shall understand that it is the 2-norm. Now, one thing: this norm is the generalization of the idea of magnitude, so instead of using the word magnitude for this generalized class of vectors we call it the norm, the same idea. But what do we want of a norm? We definitely want that a norm, or a norm squared, be non-negative. In fact we want more: only if the vector is 0 would we allow the norm to be 0. If there is a non-zero vector, we definitely do not expect it to have zero magnitude. So we want ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 if and only if the vector x is itself 0, which means x[n] = 0 for all n. Is that satisfied? If we take the inner product that we just wrote a few minutes ago, ⟨x1, x2⟩ = Σ_k x1[k] x2[k], summing over all the integers, and use it for complex x1, x2 as well, then what are we going to land up with? We are going to land up in the troublesome situation that the inner product of x with x is simply

⟨x, x⟩ = Σ_{k = −∞}^{+∞} x²[k],

but x²[k], when x[k] is complex, can be complex, so the sum Σ_k x²[k] need not be non-negative; in fact it can be complex in general, and we definitely do not want that. We want it to be a magnitude squared, so it can only be non-negative, and it should be 0 only if x itself is 0.
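The trouble is easy to see numerically. Taking the unconjugated sum of squares of a short complex sequence (values chosen for illustration):

```python
import numpy as np

x = np.array([1j, 1 + 1j])   # a complex "vector" with two coordinates

# Naive definition: sum_k x[k]^2, with no complex conjugate.
naive = np.sum(x * x)
# (1j)^2 = -1 and (1+1j)^2 = 2j, so the "norm squared" comes out complex:
print(naive)  # (-1+2j)
```

A complex number cannot serve as a magnitude squared, which is exactly the failure the lecture points out.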
Now, how do we correct the situation? It is very clear that we want each of the terms in the summation to contribute a non-negative quantity, and that can be done only if there is a modulus in each of the terms, and that modulus can appear only if you bring in a complex conjugate. Therefore we redefine the dot product in general, for complex x1, x2, as

⟨x1, x2⟩ = Σ_{k = −∞}^{+∞} x1[k] x2*[k].

No doubt this makes no difference for real x1, x2 at all; the definition for real sequences stays as it is. Is that clear? Yes. Once we have done this, we have taken care of the problem. Now we have

⟨x, x⟩ = Σ_{k = −∞}^{+∞} x[k] x*[k] = Σ_k |x[k]|² ≥ 0,

and it is 0 if, in fact if and only if, x[k] = 0 for all k: unless each of those terms is 0, the sum cannot be 0, because each term is non-negative. Therefore we have achieved what we want out of the dot product: we have been able to redefine it in such a way that it serves for calculating the magnitude squared of a vector.