A warm welcome to the fourth lecture on the subject of wavelets and multirate digital signal processing, in which we intend to build further the connection between signals, or functions in L2(R), and vectors. We wish to develop the idea of thinking of functions as belonging to linear spaces, and of characterizing them in a manner slightly different from what we did in the previous lecture. To put our discussion in perspective, let me put down the points of this lecture one by one. The first thing I wish to talk about today is thinking of functions as generalized vectors. This idea is going to be useful to us in many different contexts in this course, so we need to understand the connection between functions, or signals, and vectors in depth; we shall spend some time on it today. Similarly, we wish to understand the connection between L2(R) functions and sequences in greater depth. What we are going to show in the latter part of this lecture is that one can intimately relate the processing of a function to the processing of an equivalent sequence: whatever we do to gain information from, or to modify, a function can be done by equivalently processing or modifying the sequence corresponding to that function. Let us then embark on the first of these two objectives. Let us begin by asking: what characterizes a vector? After all, let us take a minute and reflect. A two-dimensional vector, for example, is essentially characterized by two independent coordinates. We often call them perpendicular coordinates, and indeed the idea of perpendicularity there is intimately related to the idea of independence.
So, for example, let me treat the plane of the paper as a two-dimensional space. Let us take any vector in this two-dimensional space; let this vector be V. There are many different ways to characterize this vector, in fact notionally an infinite number of ways, and one of them is to choose two so-called perpendicular axes. So we choose one axis like this and another axis like this, and choose a unit vector along each of them: say a unit vector u1 cap along this axis and another unit vector u2 cap along this axis. Then I can write the vector V uniquely as v1 times u1 cap plus v2 times u2 cap, whereby v1 and v2 characterize V uniquely in this two-dimensional space with respect to the coordinate system generated by u1 and u2. And there is an infinity of such coordinate systems. In fact, one infinity of coordinate systems can be generated simply by rotating the coordinate system of u1, u2: it is very easy to see that if I take the structure u1, u2 and rotate it by any angle in this two-dimensional plane, it gives me a new coordinate system. So there is an infinity of orthogonal coordinate systems in two-dimensional space, and in fact there is also a simple relation between all of them. Moreover, orthogonal coordinate systems are not the only kinds of coordinate systems for a two-dimensional vector. For example, the same two-dimensional space can be described by the following different coordinate system. I will draw the same vector V, and it is perfectly all right to choose a coordinate system something like this.
I could choose one coordinate axis like this and another like this, and of course I could again take unit vectors u1 cap and u2 cap in these two directions and express V in terms of u1 cap and u2 cap. Indeed, I could complete a parallelogram here: using the parallelogram law, I draw a line from the tip of the vector parallel to u2, and another from the tip parallel to u1, and it is very easy to see that this dashed vector plus this dashed vector gives me V. Let me call the first V1 tilde and the second V2 tilde; both are vectors, and of course V is V1 tilde plus V2 tilde. It is very easy to see that V1 tilde, as a vector, is some multiple of u1 cap and, similarly, V2 tilde is some multiple of u2 cap. Thereupon V is some multiple of u1 cap plus some other multiple of u2 cap, say k1 u1 cap plus k2 u2 cap. The only catch is that determining k1 and k2 is a little more difficult than determining the constants in the previous representation. In fact, let me go back to that previous representation, where V is v1 u1 cap plus v2 u2 cap. Remember, v1 and v2 there are constants and very easy to obtain, because I can obtain them simply by taking dot products: v1, as a coordinate, is V dot u1 cap, and v2 is V dot u2 cap. Simple enough. Such a simple relationship does not exist in the oblique case: while we are not hard put to describe the process by which we obtain k1 and k2 (it simply says: construct a parallelogram), expressing it analytically is a bit of work.
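The parallelogram construction for k1 and k2 can also be carried out numerically. Here is a minimal sketch, assuming NumPy; the particular vectors and the 60-degree angle are invented for illustration. For an oblique basis, the coordinates come from solving a small linear system rather than from dot products.

```python
import numpy as np

# Oblique (non-orthogonal) unit vectors u1, u2 in the plane
u1 = np.array([1.0, 0.0])
u2 = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])  # 60 degrees apart

v = np.array([2.0, 1.5])

# For an orthogonal basis the coordinates would just be dot products.
# Here we must solve the 2x2 system  [u1 u2] @ [k1, k2]^T = v,
# which is the analytic form of "completing the parallelogram".
B = np.column_stack([u1, u2])
k1, k2 = np.linalg.solve(B, v)

# Check the reconstruction v = k1*u1 + k2*u2
assert np.allclose(k1 * u1 + k2 * u2, v)
```

Note that the same code with an orthogonal pair u1, u2 would give k1 = v · u1 and k2 = v · u2, recovering the simpler dot-product rule.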
So it is definitely clear from this example that an orthogonal, or perpendicular, coordinate system has its advantages; it is always nice to have a perpendicular coordinate system in two-dimensional space to represent any two-dimensional vector. The same idea can of course be extended to three dimensions, and then one could also conceive of more than three dimensions: four dimensions, n dimensions, and then, in principle, an infinite number of dimensions too. Now, when we talk about infinite-dimensional situations we have the finer points of countably infinite versus uncountably infinite, but for the moment infinite is difficult enough. Infinite-dimensional vectors in fact lead us to the idea of functions. It is a little difficult to understand infinite-dimensional vectors all at once, so to progress towards them it is easier first to start from finite-dimensional vectors of larger and larger dimension, and all we need to do is understand that what characterizes the dimension of a vector is really the number of independent coordinates it has. For example, a three-dimensional vector has three independent coordinates, a four-dimensional vector four, an n-dimensional vector n, and a countably infinite dimensional vector would have a countably infinite number of coordinates. By countable we mean that we can put the coordinates, or dimensions, in one-to-one correspondence with the set of integers: we can talk about the zeroth coordinate, the first coordinate, the minus-first coordinate, the minus-second coordinate, and so on. What are we talking about, then, when we talk about such an infinite-dimensional vector? We are in fact talking about sequences. So we will build up the idea from there.
So here we are; let us make a note of this: a countably infinite dimensional vector is essentially a sequence. For example, we have a sequence x(n), where n runs over the set of integers; recall that the script Z is a notation for the set of integers, and n is called the index variable. So now we have a different interpretation of sequences: a sequence is like a vector, and each n is a different dimension of that vector. I think that is important enough to write down explicitly: a sequence is a vector; each n is a different dimension of the vector. Once we have this analogy, extending other ideas of vectors to this context is not difficult at all. For example, adding two vectors: simple, add the sequences point by point. Multiplying a vector by a constant: very simple, multiply each point of the sequence by that constant. What we would like to do now is extend some of the other, geometrical ideas of vectors to this context of infinite-dimensional vectors, and one very useful idea is that of a dot product. How do we take the dot product of two vectors in two-dimensional space? Let us recall. Suppose we choose a pair of orthogonal coordinates, u1 cap and u2 cap as we did some time ago, perpendicular to one another, and we have two vectors: e1, with coordinates e11 and e12, so e1 is e11 u1 cap plus e12 u2 cap; and similarly e2, with coordinates e21 and e22, so e2 is e21 u1 cap plus e22 u2 cap. Then the dot product of e1 and e2, written e1 dot e2, is essentially e11 e21 plus e12 e22. So it is the sum of products of corresponding coordinates. Two dimensions: easy to understand. Three dimensions: easy to extend. In fact, n dimensions: equally easy to extend.
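The sequence-as-vector operations just described can be sketched in a few lines; a minimal illustration, assuming finite-support sequences held as Python dictionaries (the particular values are invented):

```python
# Model a finite-support sequence as a dict mapping index n -> x[n];
# indices not stored are implicitly zero.
def seq_add(x, y):
    # vector addition: add the sequences point by point
    return {n: x.get(n, 0.0) + y.get(n, 0.0) for n in set(x) | set(y)}

def seq_scale(c, x):
    # scalar multiplication: scale each point of the sequence
    return {n: c * v for n, v in x.items()}

x = {-1: 2.0, 0: 1.0, 2: -3.0}
y = {0: 4.0, 1: 1.0}
z = seq_add(x, y)       # z[0] == 5.0
w = seq_scale(2.0, x)   # w[2] == -6.0
```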
Suppose we had two n-dimensional vectors: e1, characterized by coordinates e11, e12 up to e1n, and similarly e2, characterized by coordinates e21, e22 up to e2n. Then e1 dot e2 is easy to express if we generalize: it is essentially the summation, k from 1 to n, of e1k times e2k. So the dot product generalizes to n dimensions; of course, we assume these are orthogonal coordinates. Now we can even take this to infinite dimensions and think of the dot product of two sequences, say x1 and x2. So we have two sequences x1(n) and x2(n), defined over all the integers n, and their so-called dot product, or inner product as the formal name goes: instead of dot product, we would now like to use the term inner product, and we denote it with angle brackets. For the moment, let us assume these are real sequences. In that case, if we generalize, it is easy to see that the inner product of x1 and x2 is simply the summation over n, running all the way from minus infinity to plus infinity, of x1(n) times x2(n). And of course it is clear that the inner product, as we are going to call it in this generalized situation, is commutative: if I interchange the roles of x1 and x2, the result does not change. However, we would like this inner product notion to give us some of the powers and conveniences that the dot product offers in the context of vectors. One such convenience, or interpretation, that we derive from the dot product is the notion of magnitude. In fact, one could think of the notion of magnitude as induced from a dot product; in other words, one can calculate the magnitude of a vector by using the dot product.
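The generalized dot product above can be written out directly; a minimal sketch with finite-support real sequences stored as dicts (the values are invented for illustration):

```python
def inner(x, y):
    # standard inner product of two real finite-support sequences:
    # sum over all n of x[n] * y[n]; absent indices count as zero
    return sum(x.get(n, 0.0) * y.get(n, 0.0) for n in set(x) | set(y))

x1 = {-2: 1.0, 0: 3.0, 1: -1.0}
x2 = {0: 2.0, 1: 5.0}

ip = inner(x1, x2)   # 3*2 + (-1)*5 = 1.0

# commutativity for real sequences: <x1, x2> == <x2, x1>
assert inner(x1, x2) == inner(x2, x1)
```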
This is one path towards the calculation of magnitude. Incidentally, the word magnitude is used for small-dimensional vectors, like two- and three-dimensional ones; when we go to the generalized situations of n-dimensional or countably infinite dimensional vectors, we replace the word magnitude by the word norm. So we say that we would like the squared norm of x to be the dot product of x with x, as is the case with vectors. If you recall, a dot a, where a is a vector in two or three dimensions, is the magnitude squared of a. The same should hold good here: when we take the dot product of a sequence with itself, it should give us the squared norm of that sequence, where norm is the more general word for magnitude. In fact, in L2(R) the norm is representative of the energy. But at this moment we are not talking about L2(R), because we have not yet come to the situation where we are dealing with functions of a continuous variable; we will postpone that interpretation for a minute, not very far away from now, and once again come back to sequences. Even for sequences, when we take the dot product of a real sequence with itself, we indeed get something that we may liken to an energy of the sequence, and it is not uncommon to refer to the dot product of a sequence with itself as the energy in that sequence. Anyway, I kept emphasizing "real" for a good reason. When we talk about the magnitude of a vector, or the more general word norm, what is it that we expect of a magnitude? We want the magnitude, or norm, to be a non-negative number, and in fact strictly positive if the vector is non-zero. So there are certain things we demand of this concept of norm; let us write them down. This is a useful and powerful idea to have around us. So, what do we want of a norm?
So, if I have a vector x, essentially a sequence x(n) with n over the set of integers, then its norm, which we shall denote by double bars, should essentially be the square root of the dot product of x with x. Further, we want the norm of x to be non-negative, and if at all the norm of x is 0, that should imply, and be implied by, the sequence itself being 0 everywhere, that is, x(n) equal to 0 for all n belonging to the set of integers. This is important: we do not want the norm to be 0 unless the sequence itself is the zero sequence. A non-zero sequence, even if it is non-zero at just one point, must have a non-zero norm, and the zero sequence must have a zero norm. Does our dot product satisfy this? Well, for real sequences it does: if x1 and x2 are real and we take the definition that the dot product of x1 and x2 is the summation over n, from minus to plus infinity, of x1(n) x2(n), then the dot product of x with x is the summation over n of x squared of n, and as long as x(n) is real for all n in Z, this satisfies the requirements of a norm: it is non-negative, and it is 0 if and only if the sequence is identically 0. But what if the sequence is complex? We have to allow complex sequences too; any of the coordinates could be complex, and the situation could be such that x squared of n is plus 1 for one coordinate and minus 1 for some other coordinate, because when you square a complex number nothing guarantees that the result is non-negative; in fact, nothing even guarantees that the result is real, let alone non-negative. So this definition is not going to work when x1 and x2 are complex sequences in general, and we need to tweak the definition a little. Well, it is not that difficult after all: what we want is that for every coordinate you must get a non-negative quantity when you take the point-by-point products.
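The norm requirements for real (or complex) sequences can be checked on a small example; a minimal sketch, with sequences again held as dicts of their nonzero values:

```python
import math

def norm(x):
    # squared norm = <x, x> = sum over n of |x[n]|^2; the norm is its
    # square root (abs() also handles complex coordinates correctly)
    return math.sqrt(sum(abs(v) ** 2 for v in x.values()))

# a non-zero sequence has a strictly positive norm
assert norm({0: 3.0, 7: 4.0}) == 5.0
# the zero sequence (empty dict: all coordinates zero) has zero norm
assert norm({}) == 0.0
```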
So, all that we need to do for that purpose is to complex-conjugate the second argument in that summation. This small change will do our job for complex sequences: the dot product of x1 with x2 is the summation over all n of x1(n) times x2-bar(n), where the bar denotes the complex conjugate. Now, one point to note when we make this little change is that the commutativity property is lost: if I take the inner product of x1 with x2, and then the inner product of x2 with x1, there is a complex-conjugate relationship between them, and this is the more general requirement of a dot product. In fact, this is the simplest way in which one can define a dot product between sequences. There are many other ways, again an infinite number of ways, but at this moment we shall not go into them; they would only confuse us. This is what is called the standard inner product, but one can have many other, non-standard inner products which obey the following conditions. The first condition is the one we have just written down: the inner product of x1 with x2 is the complex conjugate of the inner product of x2 with x1. Secondly, the inner product is linear in the first argument: if I take a1 x1 plus a2 x2, where in general a1 and a2 could be complex, and take the inner product with x3, it is essentially a1 times the inner product of x1 with x3, plus a2 times the inner product of x2 with x3. This is the second requirement of an inner product, linearity in the first argument. The third requirement is what we have been building towards all this while, namely what is called positivity, or more appropriately positive definiteness: the inner product of x with x is always greater than or equal to 0, and x equal to 0 implies, and is implied by, the inner product of x with x being 0.
In fact, any operation between two sequences x1 and x2 which obeys these three conditions is called an inner product, and the standard inner product that we have just described is one such, which we shall use very frequently. So in the discussions henceforth, when we say inner product of sequences, we mean the standard inner product unless otherwise specified. For completeness, let us verify these conditions for the standard inner product. The inner product of two sequences x1, x2 is, by definition, the sum over n, from minus to plus infinity, of x1(n) x2-bar(n). The first property, conjugate commutativity, is easy to verify, and so is linearity in the first argument; I leave both to you as an exercise. But because it is so important, we shall verify the third property, positive definiteness. Indeed, if we take the dot product of x with x, it is the summation over n of x(n) times x-bar(n), which is the summation over n of mod x(n) squared, and it is very easy to see that this is equal to 0 if and only if x(n) is 0 for all n: even if just one of the coordinates is non-zero, that particular mod x(n) squared is non-zero, and it contributes a strictly positive term. So far so good. We have now built up the idea of the inner product between two sequences, which is going to be useful to us. We have moved from two dimensions to three, to n dimensions with n finite, and then to countably infinite dimension. Now let us move to uncountably infinite dimension.
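The three defining properties can be verified numerically for the standard inner product; a minimal sketch, assuming NumPy arrays as finite slices of the sequences (the random values are purely illustrative):

```python
import numpy as np

def inner(x, y):
    # standard inner product: conjugate the SECOND argument
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(1)
x1 = rng.standard_normal(8) + 1j * rng.standard_normal(8)
x2 = rng.standard_normal(8) + 1j * rng.standard_normal(8)
x3 = rng.standard_normal(8) + 1j * rng.standard_normal(8)
a1, a2 = 2.0 - 1.0j, 0.5j

# 1. conjugate commutativity: <x1, x2> = conj(<x2, x1>)
assert np.isclose(inner(x1, x2), np.conj(inner(x2, x1)))
# 2. linearity in the first argument
assert np.isclose(inner(a1 * x1 + a2 * x2, x3),
                  a1 * inner(x1, x3) + a2 * inner(x2, x3))
# 3. positive definiteness: <x, x> = sum |x[n]|^2 is real and > 0
#    for a non-zero sequence
assert np.isclose(inner(x1, x1).imag, 0.0) and inner(x1, x1).real > 0
```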
So, suppose I take a function of the continuous variable t; how can I extend these notions? This extension to uncountably infinite dimension is going to be very difficult in general, but very easy in particular if we simply accept that every real t is a different dimension. So if you have a function x(t), with t over the real numbers, then x(t) for a particular t is a coordinate, so to speak, and there is an uncountably infinite number of such coordinates, indexed by the real numbers. In principle, in a given function you have complete liberty to put down the value of x(t) at every different point t; the only catch is that we have agreed we would like the function to be square integrable, so that does put some restriction on x(t), though not a very serious one. Now, dealing with infinite-dimensional spaces very rigorously and very carefully, so as to satisfy the fastidious mathematician, is a difficult job, and we do not really intend to do that all the way in this course. If some of us do wish to take that puritanical perspective, one would of course benefit from it in some ways, and one could look up a book on functional analysis. What we wish to do instead is to give an intuitive understanding of some of the concepts at different places. The intuitive understanding will not differ from the more rigorous understanding in those specific situations, though it may not be complete; even so, we will not suffer too much in our study of wavelets, or in our applications of wavelets, if we take this intuitive path to some extent when dealing with infinite-dimensional spaces. So with that little prelude, let us come back to this uncountably infinite dimensional space of functions on the real line, in which we can generalize.
So we can generalize the notion of a dot product, or inner product, between two functions: if I take two functions x and y of the variable t, the dot product is not going to be a summation anymore but an integral, the integral of x(t) y-bar(t) dt. We take the idea of multiplying corresponding coordinates further, and instead of summing we now integrate; the integral replaces the operation of summation here. Now, it is easy to verify, and I leave it to you as an exercise, the properties of conjugate commutativity (if I interchange the order of the arguments, there is a complex conjugation involved), linearity in the first argument (if I take a linear combination of two functions in the first argument, the corresponding inner products are similarly linearly combined), and positive definiteness. What I wish to emphasize at this point is the famed Parseval's theorem, of which we are aware in the context of the Fourier transform. Let me recapitulate that very important theorem and also give an interpretation to it. I am going to use the Hertz frequency variable nu; what I mean by that is that the Fourier transform of x(t) is essentially the integral, over all time t, of x(t) e raised to the power minus j 2 pi nu t, dt. Recall that you can also have an angular frequency variable; for example, you could write x-cap of capital Omega, and I will use capital Omega when we are talking about continuous time.
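The integral inner product of two functions can be approximated numerically; a minimal sketch using two Gaussians, chosen for illustration because their inner product has a closed form, assuming NumPy (the interval and grid size are arbitrary choices):

```python
import numpy as np

def inner_fn(x, y, a=-10.0, b=10.0, num=20001):
    # <x, y> = integral of x(t) * conj(y(t)) dt, approximated by a
    # Riemann sum on [a, b]; both functions decay to ~0 well inside [a, b]
    t = np.linspace(a, b, num)
    dt = t[1] - t[0]
    return np.sum(x(t) * np.conj(y(t))) * dt

g1 = lambda t: np.exp(-t ** 2)
g2 = lambda t: np.exp(-(t - 1.0) ** 2)

ip = inner_fn(g1, g2)
# closed form of this particular integral: exp(-1/2) * sqrt(pi/2)
assert abs(ip - np.exp(-0.5) * np.sqrt(np.pi / 2)) < 1e-6
```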
We are going to follow some conventions of notation for continuous time and discrete time. We will use capital Omega as the angular frequency variable for continuous time, in which case x-cap of Omega is the integral of x(t) e raised to the power minus j Omega t, dt, and there is a simple relation between capital Omega and nu: Omega is 2 pi nu, angular frequency and Hertz frequency. Simple things, but we should put down all our cards in the beginning, so we do not get confused later. Now, again, this is a little bit of abuse of notation, because I am using x-cap of capital Omega here and x-cap of nu there; depending on the context, we must interpret the argument either as Hertz frequency or as angular frequency in radians per second. Normally it will be clear from the context, and if any confusion is likely, we will make it clear by explicit statement. Anyway, with these details, let us come back to Parseval's theorem. What does Parseval's theorem say? It says the following: if x(t) has the Fourier transform x-cap(nu), and y(t) has the Fourier transform y-cap(nu), where the arrow denotes the Fourier transform, then there is an equivalence of the Fourier-transform inner product and the time inner product. That is what Parseval's theorem says in our language now: the inner product in time is equal to the inner product in frequency, where you take x-cap and y-cap and construct their inner product in the same way, treating the frequency as the independent variable, or the argument. This is a very beautiful and very powerful interpretation of Parseval's theorem: the inner product perspective gives us a very different way of looking at it.
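Parseval's theorem for the continuous-time Fourier transform has a discrete analogue that is easy to check numerically; a minimal sketch with the DFT (NumPy's `fft`, which puts the 1/N factor on the frequency side), the random signals being purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)

X = np.fft.fft(x)
Y = np.fft.fft(y)

# time-domain inner product: sum of x[n] * conj(y[n])
ip_time = np.sum(x * np.conj(y))
# frequency-domain inner product, with the DFT's 1/N normalization
ip_freq = np.sum(X * np.conj(Y)) / N

# Parseval: the two inner products agree
assert np.allclose(ip_time, ip_freq)
```

Setting y = x gives the energy form of the theorem: the sum of |x(n)| squared equals the sum of |X(k)| squared divided by N.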
And in fact, if we think of it a little more deeply, Parseval's theorem becomes so much more intuitive when we talk in terms of inner products. Let me take a minute now to show you why. What Parseval's theorem says, in the language of inner products, is the following; let us see it in two dimensions, where it becomes amply clear. I have two vectors, call them x and y. What Parseval's theorem says is that x dot y is independent of the coordinate system. Simple enough: which coordinate system we chose to represent x and y does not affect the inner product. That is what Parseval's theorem says, in a way. Now, it may not be obvious to you why Parseval's theorem relates to this statement. It is obvious for two-dimensional vectors that the dot product is independent of the coordinate system; what is not obvious is why this is related to Parseval's theorem. Towards that, we need to go back to what x-cap(nu) really is, and that becomes clear if we write down the inverse Fourier transform: x(t) is the integral of x-cap(nu) e raised to the power j 2 pi nu t, d nu, where nu is the Hertz frequency variable again. In a way, we are reconstructing x(t) from its components: each x-cap(nu), for a different value of nu, is a component, and in the reconstruction we have used the functions e raised to the power j 2 pi nu t, each of which is like a vector, a function on the real axis. The only catch is that e raised to the power j 2 pi nu t is not an L2(R) function, so we have to deviate a little from our discussion there; but if we choose to ignore that fact, we have essentially taken these coordinates, multiplied them by the corresponding so-called functions along each of the coordinates nu, and added them to get the function x(t).
So each x-cap(nu) is like a different expression of the same vector x in a different coordinate system, and what we are saying in Parseval's theorem is that the dot product is independent of the coordinate system: whether we choose the standard coordinate system of time to represent the function, or the slightly less obvious coordinate system of frequency, the dot product remains the same. These, and other such interpretations, are what is offered when we represent functions in terms of vectors, or when we think of functions as generalizations of the ideas of vectors. And now for the last remark in this lecture, which we shall build on in even greater depth in the next: what is the connection between continuous functions and sequences? Just to initiate the discussion here, without completing it (taking it further we shall do in the next lecture), let us go back to the idea of piecewise constant approximation. Suppose we have a piecewise constant approximation of a function on intervals of length 1: I take the standard unit intervals and make a piecewise constant representation of the function. Let the values be, say, c(minus 1) here, c(0) there, c(1) there, and so on. Now, it is very easy to see that if I take the basic function phi(t), equal to 1 between 0 and 1 and 0 elsewhere, then this piecewise constant representation can be written as c(minus 1) times phi(t plus 1), plus c(0) times phi(t), plus c(1) times phi(t minus 1), and so on. So, to conclude this introduction of the correspondence, we can note that equivalent to this piecewise constant representation, this function in V0 that we talked about last time, is the set of values c(minus 1), c(0), c(1), and so on.
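The correspondence between the coefficients c(n) and the piecewise constant function can be sketched directly; a minimal illustration assuming NumPy, with invented coefficient values:

```python
import numpy as np

def phi(t):
    # the basic box function: 1 on [0, 1), 0 elsewhere
    return np.where((t >= 0) & (t < 1), 1.0, 0.0)

def from_coeffs(c, t):
    # reconstruct the V0 function: sum over n of c[n] * phi(t - n)
    return sum(cn * phi(t - n) for n, cn in c.items())

c = {-1: 2.0, 0: -1.0, 1: 0.5}

# from the sequence to the function: the value on [0, 1) is c[0]
assert from_coeffs(c, np.array([0.25]))[0] == -1.0
# and from the function back to the sequence, by sampling once
# inside each unit interval
recovered = {n: from_coeffs(c, np.array([n + 0.5]))[0] for n in c}
assert recovered == c
```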
So the sequence c(n), with n over all the integers, is equivalent to that piecewise constant function in V0: either one can be constructed from the other. From the piecewise constant function we can construct the sequence, and from the sequence, given phi(t), we can construct the piecewise constant function. This equivalence is what we shall take further and delve into more deeply in the next lecture, where we shall also build further on these ideas of vectors, functions and sequences. Thank you.