Welcome to the ninth session in the second module on core signals and systems. You will recall that in the last few sessions we have been extolling the virtues of a sinusoidal input to a linear shift invariant system, particularly a stable linear shift invariant system. We also looked at the two components, so to speak, of a sinusoidal input, namely the two oppositely rotating complex numbers or phasors. Now all this is very well if it were fairly common that one would have sinusoidal inputs to linear shift invariant systems; otherwise this would just seem like an exercise in a very specific domain of application. What we are now going to develop over the next few sessions is a set of ideas which will convince us that the sinusoidal input is in fact fairly generic. In other words, there is a wide class of inputs which can be expressed as combinations of sinusoids, and therefore one can analyze the response to this large class of inputs by treating each input as such a combination. And since the system is linear, one can look at the output to each sinusoid individually and linearly combine the outputs. To do that, we need to move to a rather different way of looking at signals, and that is what I am going to do in this session. I am going to talk about the relation between signals and vectors. Signals, whether of a continuous independent variable or a discrete independent variable, and vectors, if we think about them, are both essentially combinations of coordinates. So it is actually not too difficult to see that there is a relationship between signals and vectors. Let us take a two dimensional vector; let me draw one on the plane of this page. I can decompose this vector into its two perpendicular components, assuming that the perpendicular axes are as shown.
That means you have unit vectors, shown in green here, along these two perpendicular axes. Now, how would you get the components of this vector, which I have shown in black? Let me call it V; I have written V here to denote this particular vector. How would I get the components of V along these two directions? I would essentially get them by using what we call a dot product. In fact, let me explain that separately. How do I take a dot product? Let me expand the view and then explain. Suppose I have this vector V and I wish to take the dot product of V with another vector u, which I am showing here. u may or may not be a unit vector; in other words, we may or may not associate a magnitude of 1 with u. In any case, the dot product of V and u can be thought of as the magnitude of V multiplied by the magnitude of u multiplied by the cosine of the angle between them; let me denote that angle by theta. And of course, if u is a unit vector, that is, if the magnitude of u is 1, then the dot product is simply the magnitude of V multiplied by cos theta, the cosine of the angle between them. So essentially what a dot product gives you is the length of the projection of V in the direction of u, scaled by the magnitude of u. If u is a unit vector, then it gives you the component itself; if u is not a unit vector, then you need to divide by the magnitude of u to get that component. In other words, another way to think about it is: create a unit vector in the direction in which you want to project V, and then take the dot product of V with that unit vector. Now suppose I have perpendicular vectors, as in the previous situation which we showed you, where we had perpendicular unit vectors.
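The projection idea above can be sketched numerically. This is a minimal illustration, not part of the lecture; the specific vectors are made-up examples chosen so the geometry is easy to check by hand.

```python
import math

# Dot product of two 2-D vectors: v . u = |v| * |u| * cos(theta)
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

v = (3.0, 4.0)
u = (1.0, 0.0)        # a unit vector along the first axis

# Since u is a unit vector, v . u directly gives the component of v along u.
component_along_u = dot(v, u)            # -> 3.0

# If the direction vector is not a unit vector, divide by its magnitude
# to recover the same component:
w = (2.0, 0.0)                           # same direction as u, magnitude 2
component_along_w = dot(v, w) / norm(w)  # -> 3.0 again
```

Dividing by the magnitude of w is exactly the "create a unit vector first" recipe described above, just carried out algebraically.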
So, looking at these two directions, you can find the two red components shown here individually, each by taking the dot product with a unit vector in the corresponding direction. So there is a certain decoupling: if you have perpendicular directions, the calculation decouples, and you can compute the components individually. On the other hand, you could also project a vector V along two directions which are not perpendicular. We shall not go into that at this point; suffice it to say that the calculation is not as easy as it is when the two unit vectors are perpendicular. So, for now, let us look for situations where we can create such perpendicular vectors. Now, we say this paper on which we are writing is two dimensional because I need only two mutually perpendicular unit vectors to completely span the page; that means by taking linear combinations of those two unit vectors, I can create any vector on the page. In general, we say a collection of vectors spans a certain space if the linear combinations of that collection can give you every possible vector in that space. Further, suppose there is only one way in which you can linearly combine them to get a specific vector, and this is true for every vector in the space. What I mean is: give me an arbitrary vector V in that space, and suppose a collection of vectors u1 to un spans the space; if there is only one specific linear combination of u1 to un which gives me V, then u1 to un are, in some sense, independent, or linearly independent as we call them. There is then no ambiguity about the components. Another way to understand this is that there is no nontrivial way to linearly combine u1 to un to result in the zero vector.
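The decoupling described above can be checked with a small numeric sketch. The perpendicular pair here is a 45-degree rotated orthonormal pair, chosen arbitrarily for illustration; the point is that each component comes from its own independent dot product, and recombining them recovers the original vector.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Two perpendicular unit vectors -- a 45-degree rotated orthonormal pair:
s = 1.0 / math.sqrt(2.0)
e1 = (s, s)
e2 = (s, -s)

v = (5.0, 2.0)

# Decoupled calculation: each component is found by its own dot product,
# with no reference to the other direction.
c1 = dot(v, e1)
c2 = dot(v, e2)

# The linear combination c1*e1 + c2*e2 reconstructs v exactly.
reconstruction = tuple(c1 * a + c2 * b for a, b in zip(e1, e2))
```

Had e1 and e2 not been perpendicular, finding c1 and c2 would have required solving a coupled pair of equations rather than two independent dot products, which is the harder calculation the lecture alludes to.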
So, that uniqueness of decomposition is possible only if the vectors u1 to un are linearly independent, and then they are called a basis. Even further, if any two elements picked from that basis are mutually perpendicular, then we say it is an orthogonal basis, or a perpendicular basis. Perpendicular is a word used more in high school geometry; the more formal word, which we use in functional analysis, in the context of signals and systems, and in many courses that form the foundations of electrical engineering, is orthogonal. So what we have seen in this discussion is the importance of perpendicular vectors. Now, why are we talking about all this? Because we are going to build a relation between vectors and signals. In fact, let me just give you a hint here, and we will proceed in more detail in the next session. What really is a two-dimensional vector? It is something uniquely described by two components. What is a three-dimensional vector? Something uniquely described by three components. So if u1 to un form a basis for your space, then n is the dimension of that space; of course, here we are talking about finite dimensional spaces. Now, let us take a discrete signal first. A discrete signal has points at which values are defined. Suppose those points are finite in number; say you have only four points at which you have non-zero values. You can liken that to a four-dimensional vector. So it is very easy to see that a finite length discrete signal is like a vector of finite dimension. Think about this; we will come back to it in the next discussion. Thank you.
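As a concrete follow-up to the closing hint, here is a minimal sketch of a four-point discrete signal viewed as a four-dimensional vector. The sample values are hypothetical; the "impulse" basis vectors are the standard basis, which is orthogonal, so each coordinate falls out of an independent dot product just as with geometric vectors.

```python
# A discrete signal defined at four points, viewed as a
# four-dimensional vector: the samples are its coordinates.
x = [1.0, -2.0, 0.5, 3.0]      # hypothetical sample values x[0]..x[3]

# Standard basis "impulses": delta_k is 1 at index k and 0 elsewhere.
n = len(x)
deltas = [[1.0 if i == k else 0.0 for i in range(n)] for k in range(n)]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

# The component of the signal along delta_k is just the sample x[k],
# obtained by a dot product -- exactly as with geometric vectors.
components = [dot(x, deltas[k]) for k in range(n)]

# The unique linear combination of the impulses with these components
# reconstructs the signal.
reconstruction = [sum(components[k] * deltas[k][i] for k in range(n))
                  for i in range(n)]
```

The decomposition is unique because the impulses are linearly independent, which is exactly the basis property discussed above.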