A warm welcome to the fifth lecture on the subject of wavelets and multirate digital signal processing. To put the current lecture in perspective, let us recapitulate what we did in the previous one. In the previous lecture, we looked at the equivalence between functions and vectors in depth. In fact, in that equivalence, we saw that by bringing in the notion of the inner product between functions and, of course, the inner product between sequences, and then noticing that Parseval's theorem is a version of a similar statement on the inner product, we understood that this analogy between functions and vectors gives us much to gain: it helps us picture very well, it helps us visualize very well, what we were talking about when we spoke of the ladder of subspaces. In functional analysis, this is a very serious analogy; it is not just a simile, so to speak, it is a metaphor. We can actually think of functions as generalized vectors and gain a lot from that kind of analogy, or rather that kind of equivalence or generalization. Now we need to bring in another dimension to this discussion, which we had briefly begun in the previous lecture: the dimension of replacing work with functions by work with sequences. Sequences are easier for us to deal with. In fact, sequences can be dealt with using a computer. You could store the samples of a sequence in memory point after point, process them in discrete time step by step, and produce an output which is again a sequence; and lo and behold, if whatever you are doing with the sequence maps exactly to what you wish to do with the original continuous-time functions, then this is an added advantage. In fact, this is true for the spaces V0 contained in V1 contained in V2, and V(-1) contained in V0, and so on, as we saw briefly in the previous lecture, but which we shall now go into in greater depth today. So, let us come back to that discussion.
How could we think of a function in V0 as a sequence, as an equivalent sequence? Very simply: essentially, what we are saying is, look at the coefficients in the expansion of that function with respect to the basis of the space V0. So, there you are. We have the following standard basis for V0: phi(t - n), with n ranging over all the integers. Let us sketch a typical phi(t - n): it is 1 between n and n + 1, and 0 everywhere else. This is how phi(t - n) looks. Now, in fact, this is also an orthonormal basis; let me introduce that term, orthonormal basis. What do we mean by that? If I take the dot product of two of them, phi(t - n) and phi(t - m), for any two integers n and m, then this dot product is equal to 0 if n is not equal to m, and 1 if n is equal to m. This is very easy to check. It does not require too much work to prove, and I leave it as an exercise for you to show; all that you need to do is to calculate the integral of the product. After all, phi(t - n) and phi(t - m) are non-overlapping when n is not equal to m, and if n is equal to m, they overlap completely, and then of course the integral is just the area of a rectangle of unit height over unit length, so it is 1. With that settled, let us consider a function in V0 again, to fix our ideas about the connection between functions and sequences. So, let us take this example, a little bit of repetition from the previous lecture, but let us fix our ideas with it. We have a function, say x(t), given by the following graphical representation.
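As a quick numerical sanity check of the exercise just posed (my own illustration, not part of the lecture; the function names here are assumed), one can verify the orthonormality of the translates phi(t - n) by approximating the integrals:

```python
def phi(t):
    """Haar scaling function: 1 on [0, 1), 0 elsewhere."""
    return 1.0 if 0.0 <= t < 1.0 else 0.0

def inner_product(f, g, a=-5.0, b=5.0, steps=20000):
    """Riemann-sum approximation of the integral of f(t) g(t) over [a, b]."""
    dt = (b - a) / steps
    return sum(f(a + k * dt) * g(a + k * dt) for k in range(steps)) * dt

# <phi(t - n), phi(t - m)> should be 1 when n == m and 0 otherwise.
for n in range(-2, 3):
    for m in range(-2, 3):
        ip = inner_product(lambda t, n=n: phi(t - n),
                           lambda t, m=m: phi(t - m))
        assert abs(ip - (1.0 if n == m else 0.0)) < 1e-2
print("orthonormality of {phi(t - n)} verified numerically")
```

The supports are disjoint for n not equal to m, which is why every cross term integrates to 0.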
So, let us take the value 1/2, for some variety, between -1 and 0; -3/4 in the region from 0 to 1; 3/2 in the region 1 to 2; and, say, 4 in the region 2 to 3. It is very easy to see that x(t) can be written as (1/2) phi(t + 1) + (-3/4) phi(t) + (3/2) phi(t - 1) + 4 phi(t - 2), and so on: you could continue this beyond 3, and you could continue it before -1. What we said in the previous lecture was that equivalent to this continuous function is the sequence constructed as ..., 1/2 at -1, -3/4 at 0, 3/2 at 1, 4 at 2, and so on. In other words, there is an equivalence between x(t) and the sequence. Now, again, just to recapitulate the notation for a sequence: we write the sample at n = 0 and mark it with an arrow to show that this is the sample at n = 0, and then of course you have 3/2 at 1, 4 at 2, 1/2 at -1, and so on. So, when we write a sequence in this notation, what we mean is that the marked value is the sample at n = 0, and the other samples are arranged in the right order around it: the sample at n = 1, the sample at n = 2, the sample at n = -1, and so on. As an alternative, we could just as well have written the same sequence by marking a different point; this is just to introduce the notation properly. For example, you could mark the sample at point number 1, and then the sample at point number 2 is 4, the sample at 0 is -3/4, the sample at -1 is 1/2, and so on. It is the same sequence written differently, just for some variety in notation; sometimes we might prefer to do something like this. Anyway, coming to the point and stressing it once again: there is an equivalence between the function and the sequence.
This function belongs to V0, and the sequence belongs to what is called the set of square-summable sequences. So, let us call the sequence x[n]. The square integrability of x(t), in other words the fact that the integral from minus to plus infinity of |x(t)|^2 dt is finite, means that the summation over n from minus to plus infinity of |x[n]|^2 is also finite, and therefore we use a term for this: we say that x(t) belonging to L2(R) implies x[n] belongs to small l2(Z). So we have introduced a new term, small l2(Z). Just as you have capital L to denote spaces of functions of a continuous argument, you have small l to denote spaces with a discrete argument, and again we would like to define lp(Z) in general; the Z of course refers to the set of integers. So lp(Z) is the set of sequences, in fact the linear space of sequences, written x[n], such that the summation over n from minus to plus infinity of |x[n]|^p is finite. In particular, if you put p = 2, you get small l2(Z). Now, what we have just shown is that there is a correspondence: if x belongs to V0, which in turn is a subspace of L2(R) (of course, here we are talking about a continuous-time function x(t)), then we have the corresponding x[n] belonging to small l2(Z). We could make a similar inference for other values of p, but that is not of consequence at the moment, so we shall not go into it. What is important is what happens to inner products when you have an orthonormal basis. Here, for example, we have an orthonormal basis, and x[n] is the sequence of coefficients of expansion with respect to that orthonormal basis. Now, you know, the word orthonormal is important here.
If the basis is not orthonormal, what we are now going to say is not going to be true; so the orthonormal basis is important here. If x[n] is the sequence of coefficients of expansion of x(t) with respect to an orthonormal basis, then there is also a mapping between the inner products, and that is what is interesting. So not only is there an equivalence between function and sequence, there is also a mapping of the other operations. Suppose you had two such functions in V0, say x(t) and y(t), both belonging to V0, which of course belongs in turn to L2(R), and they correspond to the sequences x[n] and y[n] (square brackets denoting sequences). The inner product of x(t) and y(t) is understood on the continuous-time axis, given by the integral from minus to plus infinity of x(t) y-bar(t) dt, and this is some constant times the summation over n from minus to plus infinity of x[n] y-bar[n]. This is important: the equivalence carries over to the domain of inner products as well. So whatever we are doing in the context of the continuous functions can be equivalently done, or equivalently derived, in the context of the associated sequences. We can go to the extent of forgetting about the underlying continuous functions and dealing with the sequences directly. Now, what is the physical or practical meaning of this? Well, let us go back to the first lecture, where we motivated the very idea of a piecewise-constant representation, and we started with a two-dimensional situation. We said, let us take images and divide those images into very small areas, which we represent using picture elements, or pixels.
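To see this correspondence concretely, here is a small sketch (the names and the specific coefficient values are my own, not from the lecture) checking that the continuous-time inner product of two V0 functions equals the inner product of their coefficient sequences; for this particular orthonormal basis the constant turns out to be 1:

```python
# Coefficient sequences; index 0 of each list is taken as the sample at n = 0.
x_seq = [0.5, -0.75, 1.5, 4.0]
y_seq = [1.0, 2.0, -1.0, 0.5]

def piecewise(seq):
    """Return f(t) = sum_n seq[n] * phi(t - n), phi = indicator of [0, 1)."""
    def f(t):
        n = int(t // 1)
        return seq[n] if 0 <= n < len(seq) else 0.0
    return f

def l2_inner(f, g, a=0.0, b=4.0, steps=40000):
    """Riemann-sum approximation of the L2 inner product over [a, b]."""
    dt = (b - a) / steps
    return sum(f(a + k * dt) * g(a + k * dt) for k in range(steps)) * dt

continuous = l2_inner(piecewise(x_seq), piecewise(y_seq))
discrete = sum(xv * yv for xv, yv in zip(x_seq, y_seq))
print(continuous, discrete)   # the two values agree
assert abs(continuous - discrete) < 1e-6
```

On each unit interval the product of the two functions is constant, so the integral is exactly the sum of products of the coefficients.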
On each of the pixels we put down one number, corresponding to the average intensity of the picture over that pixel area, and we said that this is reasonably representative of the image, or the picture, if those areas are small. In effect, we are representing the original image by a piecewise-constant function, and equivalently we could now think of the image as being represented by a two-dimensional sequence: a sequence indexed with two integer variables, say n1 and n2, with n1 going from 0 to 511 and n2 also going from 0 to 511, in case we have a 512 x 512 picture representation or resolution on the computer screen. Good. So what we are saying now is easy to understand: whatever we wanted to do with the picture, we can equivalently do with this two-dimensional sequence. If you have two pictures and you want to mix and match, or do whatever you want to do, you could do it equivalently with the two-dimensional sequences; if you want to draw some inferences, you could draw the same inferences by looking at these sequences. Good, but where does this take us? What is going to be useful to us is to see how we can move from one resolution to the next. That is what is of interest, you see: ultimately, our aim in all this discussion is to extract incremental information, and incremental information is extracted by going from one subspace of L2(R) to the next in the ladder. So let the function belong to V1: consider a function, say y(t), belonging to V1. Now V1, you will recall, is the space of functions piecewise constant on intervals of length half. Let us take one such function, say from -1 to 3, with breakpoints at -1, -1/2, 0, 1/2, 1, 3/2, 2, 5/2 and 3; I will not sketch it, I will just write down the piecewise-constant values in each of these intervals.
So, in the first interval let the value be, say, 4; in the next interval, 7; then 10; then 16; then 14, 11, 3 and -1, for some variety. These are the piecewise-constant values in the respective intervals. So, in fact, we have the corresponding sequence here. Using the notation with the standard marker at 0: at n = 0 I have 10, and then in order 16, 14, 11, 3, -1 to the right, and 7 and 4 to the left. This is my corresponding sequence. In effect, if I call the sequence y[n], what we are saying is that y(t) is the summation over all integers n of y[n] phi(2t - n); note the phi(2t - n), that is important. By definition, phi(2t - n) is equal to 1 between 2t - n = 0, that is t = n/2, and 2t - n = 1, which means t = (n + 1)/2. So, for example, phi(2t - 1) is going to be 1 between 1/2 and 1, and that agrees with what we just wrote down. Now, suppose we decompose: we know that we have the decomposition of V1 into W0 and V0. We said that we have the space W0, given by the span of psi(t - n) over all integers n, and V0 of course is the span of phi(t - n) for all integer n, and now we are going to introduce the notion of what is called an orthogonal complement. We are going to say that V1 is the orthogonal sum of V0 and W0, and we are going to explain this in more detail. We say that the space V1 is the orthogonal sum of the spaces V0 and W0 if there is a unique way to take a vector in V1 and decompose it as a sum of a vector in V0 and a vector in W0, with the vectors from V0 and W0 being perpendicular, being orthogonal; in other words, their inner product being 0. So, what is this idea of an orthogonal sum?
The idea of an orthogonal sum is to decompose a linear space into subspaces, of smaller dimension of course, where you also have mutual perpendicularity, mutual orthogonality, between the vectors in the two spaces of the decomposition. Now I shall give you a simple example from a three-dimensional situation. Let us take the three-dimensional space of the room that we are in at the moment, and let us take the two-dimensional space of the floor. The three-dimensional space of the room is the orthogonal sum of the two-dimensional space of the floor and a one-dimensional space comprised of all multiples of a vector perpendicular to the floor. So, as expected, a three-dimensional space is an orthogonal sum of a two-dimensional space and a one-dimensional space: the one-dimensional space is formed by all multiples of a vector perpendicular to the floor, and the two-dimensional space is formed by all vectors lying on the floor. So, if you take any vector lying on the floor and any vector in that one-dimensional space of multiples of a unit vector perpendicular to the floor, these two vectors are orthogonal, perpendicular in three dimensions; very easy to visualize. Now, you can always generalize to n dimensions. For example, a ten-dimensional space could be the orthogonal sum of a four-dimensional space and a six-dimensional space, where if you take a vector from the four-dimensional space and a vector from the six-dimensional space, they are perpendicular to one another; perpendicular as understood by taking the inner product between the two vectors in that ten-dimensional space. Now, at this point I must make a little remark. The inner product allows us to bring in the notion of an angle between functions, a more general version of orthogonality. We say two vectors are perpendicular if their inner product is zero; in general, we can also define the angle between two vectors using the inner product, and we shall do exactly that in a minute.
So, we can talk about the angle, and in fact, once we bring in the notion of the angle between functions, we can also bring in the notion of the angle between the corresponding sequences. If you have two functions x(t) and y(t), of course belonging to L2(R), the space we are going to confine ourselves to, then the angle theta between x(t) and y(t) is essentially defined as follows: cos theta is the inner product of x(t) with y(t), divided by the L2 norm of x and the L2 norm of y multiplied together. Now, this is very similar to the idea of a dot product between two vectors. If you recall, if you have two vectors, say v1 and v2, then v1 dot v2, divided by the magnitude of v1 times the magnitude of v2, gives us the cosine of the angle between v1 and v2. So, in a restricted sense, you do have the notion of an angle between functions, and whatever you did to construct the angle for the functions can also be done for the corresponding sequences associated with the functions; therefore, you have the notion of an angle even between the corresponding sequences. And I ask a question and leave it to you to ponder the answer: are those two angles the same, do they actually match? I think they should, should they not? I leave it to you as an exercise to show that they do. So, exercise: establish the correspondence between the angle between the functions and the angle between the corresponding sequences. The correspondence has become deeper and deeper, and whatever we have been doing with the functions, we discover, can now be done with the sequences. Now, the next step is to ask: can we also think of decomposition in terms of decomposition of the sequences? So, for example, let us go back to that function in V1 that we had a few minutes ago; we had this function in V1, and we could then decompose V1 into the orthogonal sum of V0 and W0.
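As a hedged illustration of the exercise just posed (the names and data below are my own, and a numerical check is of course not a proof), one can compute both angles for a pair of V0 functions and see that they agree:

```python
import math

# Coefficient sequences of two V0 functions; index 0 is the sample at n = 0.
x_seq = [0.5, -0.75, 1.5, 4.0]
y_seq = [1.0, 2.0, -1.0, 0.5]

def piecewise(seq):
    """f(t) = sum_n seq[n] * phi(t - n), phi = indicator of [0, 1)."""
    def f(t):
        n = int(t)
        return seq[n] if 0 <= t and n < len(seq) else 0.0
    return f

def l2_inner(f, g, a=0.0, b=4.0, steps=40000):
    dt = (b - a) / steps
    return sum(f(a + k * dt) * g(a + k * dt) for k in range(steps)) * dt

def angle_between_functions(f, g):
    """cos(theta) = <f, g> / (||f|| ||g||) in L2."""
    return math.acos(l2_inner(f, g) /
                     math.sqrt(l2_inner(f, f) * l2_inner(g, g)))

def angle_between_sequences(a, b):
    """The same formula, with the l2(Z) inner product and norms."""
    ip = sum(u * v for u, v in zip(a, b))
    return math.acos(ip / math.sqrt(sum(u * u for u in a) *
                                    sum(v * v for v in b)))

theta_f = angle_between_functions(piecewise(x_seq), piecewise(y_seq))
theta_s = angle_between_sequences(x_seq, y_seq)
print(theta_f, theta_s)
assert abs(theta_f - theta_s) < 1e-6
```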
Now, just for a minute, let us keep aside the discussion of this particular function and look at a typical function in V1, a typical function in V0 and a typical function in W0. Functions in V0 are piecewise constant on the standard unit intervals; functions in W0 are linear combinations of psi(t - n) for integer n; and functions in V1 are piecewise constant on [n/2, (n + 1)/2] for integer n. Suppose we take a given function in V0 and a given function in W0: what about their dot product? So, what I am saying is, suppose we consider x1(t) belonging to V0 and x2(t) belonging to W0, and let us focus our attention on a particular interval, say n to n + 1, drawing the function x1(t) with a solid line and x2(t) with a dot-dash line. The x1(t) function would be constant there, and the x2(t) function would take some value on the first half of the interval and its negative on the second half. If I multiply these functions together and integrate, you can visualize the integral piece by piece on each of these intervals n to n + 1: the integral on any one of these pieces is obviously 0, because the positive and negative areas are equal, and this can be seen to be true of all the intervals. Therefore, obviously, these two functions are orthogonal, because their dot product is 0: the dot product of x1(t) and x2(t) is 0. You know, we do not use the word perpendicular any more when we talk about functions; we should use the word orthogonal. What is more, now take any particular function in V1.
So, let us take a function in V1 and again, you know, focus on any one particular interval, say n to n + 1, and let the function have the value c1 on the first half interval and c2 on the second half. It is very easy to see that this can be treated as a function belonging to V0 plus a function belonging to W0, where the corresponding function belonging to V0 is the constant (c1 + c2)/2 over the interval n to n + 1, and the function belonging to W0 is (c1 - c2)/2 over the first half interval and the negative of the same thing over the second half interval. So, if you take the middle of the interval, so to speak, to be n + 1/2, the height there is (c1 - c2)/2. You see, what I am showing is one segment each of the function coming from V0 and the function coming from W0, and the same thing can be done for each of the intervals n to n + 1. What I am saying is that this function, whose segment over n to n + 1 is c1 on the first half interval and c2 on the second half interval, and which of course belongs to V1, is equal to the sum of the constant function (c1 + c2)/2 on the entire interval, belonging to V0, plus the function belonging to W0, which is (c1 - c2)/2 on the first half interval and the negative of the same thing on the second half interval. So, it is very easy to see that we can, in general, decompose a function in V1 into a function in V0 plus a function in W0 in a unique way, and therefore the orthogonal decomposition of V1 into V0 and W0 is easy to construct.
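The average-and-difference splitting just described can be sketched as follows (the function names are my own; the data is the example from earlier in the lecture):

```python
def decompose(v1_values):
    """v1_values: piecewise-constant values of a V1 function, two per unit
    interval (c1 on the first half, c2 on the second). Returns
    (v0_values, w0_values), one value per unit interval."""
    v0, w0 = [], []
    for c1, c2 in zip(v1_values[0::2], v1_values[1::2]):
        v0.append((c1 + c2) / 2)   # constant over the whole interval (V0 part)
        w0.append((c1 - c2) / 2)   # + on first half, - on second half (W0 part)
    return v0, w0

# The example from the lecture: 4, 7, 10, 16, 14, 11, 3, -1.
v0, w0 = decompose([4, 7, 10, 16, 14, 11, 3, -1])
print(v0)   # [5.5, 13.0, 12.5, 1.0]
print(w0)   # [-1.5, -3.0, 1.5, 2.0]

# Reconstruction: first half = average + difference, second half = average - difference.
rebuilt = []
for a, d in zip(v0, w0):
    rebuilt += [a + d, a - d]
assert rebuilt == [4, 7, 10, 16, 14, 11, 3, -1]
```

The uniqueness of the decomposition is visible here: given the pair (c1, c2), the average and the signed difference are determined, and they determine (c1, c2) right back.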
Now, can we also make a corresponding construction on the sequences? In fact, to some extent we have already answered the question. If you look at it here, c1 would be the value at 2n of the sequence corresponding to the function in V1, and c2 would be the value of that sequence at 2n + 1. Interestingly, the values of the sequence corresponding to the function in V1 at the points 2n and 2n + 1 relate to the values of the sequences corresponding to the functions in V0 and W0, but at the point n, not 2n. So, you have the value (c1 + c2)/2 for the sequence corresponding to the V0 function at the point n, not 2n, and you have the value (c1 - c2)/2 for the sequence corresponding to the function in W0, but again at the point n, not 2n. What I am saying is, let us now think in terms of sequences. So, you had this function in V1; for variety, let us call it p(t), belonging to V1, with corresponding sequence p[n]. You have the corresponding function p0(t), let us call it, with corresponding sequence p0[n], where p0 is the component in V0, so to speak, and you have q0, let us say, as the component in W0, with corresponding sequence q0[n]. Now, p(t) is of course equal to p0(t) + q0(t), but p[n] is not equal to p0[n] + q0[n]; that is not correct, and that is because the orthonormal bases are different. So now we need to establish a relation between p0[n], q0[n] and p[n]; that is the next task we would like to undertake. And in fact, we already have an answer to that question; let me just go back one step. We have the answer here: p at 2n is c1, p at 2n + 1 is c2, p0 at the point n is (c1 + c2)/2, and q0 at the point n is (c1 - c2)/2. Let me write down all of these formally; I have almost done my job. What have we said? We have said that p at 2n is c1,
p at 2n + 1 is c2, p0 at n is (c1 + c2)/2, and q0 at n is (c1 - c2)/2. Now, let us combine these equations. We have p0[n] = (p[2n] + p[2n + 1])/2 and q0[n] = (p[2n] - p[2n + 1])/2, and this brings before us a very beautiful perspective. When we talk about sequences, we can also extend that context to talk about discrete-time filters acting on sequences. Can we visualize what we have done here as discrete-time filters acting on the sequences? So, suppose you had the following discrete-time filter, with input x[n] and output y[n], where y[n] = (x[n] + x[n + 1])/2. You know, it is a non-causal filter; let us not worry too much about that for the moment, let us accept it even if it is non-causal. What have we done here in this relationship? Let us reflect on the connection between, for example, the relationship p0[n] = (p[2n] + p[2n + 1])/2 and the filter we just constructed. If you think of 2n as a variable, say l, then this is (p[l] + p[l + 1])/2, and in fact this is essentially the filter acting on the sequence p. So, what we are saying is, use the filter that we have here and put p[n] in; but then a little bit of work needs to be done at this point, because if you put in p[n], let us do that, into the filter we just wrote down, y[n] = (x[n] + x[n + 1])/2, what we would get is (p[n] + p[n + 1])/2. But we do not want p[n] and p[n + 1]; we want p[2n] and p[2n + 1]. So, what should we do? We should have another system now, following this one, with input x_in and output x_out, where x_out[n] = x_in[2n]; we want a system like this. Let us interpret this system.
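The non-causal averaging filter can be sketched like this (a dict-based sequence representation of my own, with missing indices treated as 0; this is an illustration, not the lecture's notation):

```python
def average_filter(x):
    """Return y with y[n] = (x[n] + x[n + 1]) / 2, where x is a dict
    mapping index -> value and absent indices are 0."""
    indices = set(x) | {n - 1 for n in x}   # every n where y can be nonzero
    return {n: (x.get(n, 0) + x.get(n + 1, 0)) / 2 for n in indices}

# The V1 sequence from the earlier example.
p = {-2: 4, -1: 7, 0: 10, 1: 16, 2: 14, 3: 11, 4: 3, 5: -1}
y = average_filter(p)
print(y[0])   # (p[0] + p[1]) / 2 = 13.0
```

As the lecture notes, y[n] depends on x[n + 1], which is why the filter is non-causal.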
So, let us write down n = -4, -3, -2, -1, 0, 1, 2, 3, 4 and so on, and place the sequence x_in at these points. As far as x_out goes: the sample at 0 comes from 0, the sample at 1 comes from 2, the sample at 2 comes from 4, the sample at -1 comes from -2, the sample at -2 comes from -4, and so on. So, in other words, what are you doing? You are retaining the samples at the even locations and throwing away the samples at the odd locations; not only that, after retaining the samples at the even locations, you are putting those samples at half the location number. So, let us summarize: retain the even samples and halve the location number. Now, this system is a new one as far as a basic course on discrete-time signal processing is concerned. We need to christen it, we need to give it a name. In fact, let us go back to that system and give it both a symbol and a name. The symbol that we shall give it is a down arrow followed by 2, and we shall call it a decimator. You know, the word decimate actually has a very cruel meaning. I am told that in the days of the wars of the Roman Empire, a very cruel thing that warriors used to do was to kill 1 out of every 10: take 10 of them and eliminate one from each group of 10. That was what was called decimation, a cruel way to deal with people, but the word decimate has also percolated down to the literature on digital signal processing. Here, decimation means retaining 1 out of so many samples. So, in this case, decimation by 2 means retaining 1 out of every 2 samples; in fact, the first of each pair, so out of 0 and 1 you retain 0, out of 2 and 3 you retain 2. Not only that: after you retain only 1 out of 2, compress, so that the sample number is halved; or, if you retain 1 out of 3 samples, then compress so that the sample number is multiplied by one third.
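A minimal sketch of the decimator (the dict-based representation and names are my own):

```python
def decimate(x_in, factor=2):
    """Downsample by `factor`: keep samples at indices divisible by
    `factor` and compress the index, so x_out[n] = x_in[factor * n]."""
    return {n // factor: v for n, v in x_in.items() if n % factor == 0}

# Retain even samples, halve the location number; odd samples are discarded.
x = {-4: 'a', -2: 'b', -1: 'c', 0: 'd', 1: 'e', 2: 'f', 4: 'g'}
print(decimate(x))   # {-2: 'a', -1: 'b', 0: 'd', 1: 'f', 2: 'g'}
```

With factor=3 the sample at 3 would land at 1, the sample at 6 at 2, and so on, matching the description above.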
So, if you are decimating by a factor of 3, for example, the sample at 0 will go to 0, the sample at 3 will come to 1, the sample at 6 will go to 2, the sample at -3 will come to -1, and so on. If you are decimating by a factor of 2, the sample at 0 will come to 0, the sample at 2 will go to 1, the sample at 4 to 2, the sample at -2 to -1, and so on, as we just showed. So, what do we have here? We have a filter followed by a decimator, and together they help us construct the sequence p0[n] from the sequence p[n]. Now, we shall see in the next lecture that we can similarly construct the sequence q0[n] from the sequence p[n] by using another filter and a decimator, and we shall build up further from there to reconstruct p[n] from p0[n] and q0[n]. All this together shall lead us to a totally different structure in discrete-time signal processing, which we shall call a two-band filter bank. With this little trailer for the next lecture, let us conclude the present lecture. Thank you.
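Putting the filter and the decimator together can be sketched as follows (my own notation throughout; the difference branch producing q0[n] is only a preview of the next lecture, under the assumption that the analogous filter is (x[n] - x[n + 1])/2):

```python
def apply_filter(x, taps):
    """y[n] = sum_k taps[k] * x[n + k], for a dict-based sequence x
    (absent indices are 0)."""
    indices = {n - k for n in x for k in taps}
    return {n: sum(c * x.get(n + k, 0) for k, c in taps.items())
            for n in indices}

def decimate(x, factor=2):
    """Keep samples at indices divisible by `factor`; x_out[n] = x[factor*n]."""
    return {n // factor: v for n, v in x.items() if n % factor == 0}

# The V1 sequence from the earlier example.
p = {-2: 4, -1: 7, 0: 10, 1: 16, 2: 14, 3: 11, 4: 3, 5: -1}

p0 = decimate(apply_filter(p, {0: 0.5, 1: 0.5}))    # averaging branch -> V0 part
q0 = decimate(apply_filter(p, {0: 0.5, 1: -0.5}))   # difference branch -> W0 part

# These match p0[n] = (p[2n] + p[2n+1])/2 and q0[n] = (p[2n] - p[2n+1])/2.
for n in (-1, 0, 1, 2):
    assert p0[n] == (p[2 * n] + p[2 * n + 1]) / 2
    assert q0[n] == (p[2 * n] - p[2 * n + 1]) / 2
print(p0[0], q0[0])   # 13.0 -3.0
```

This filter-then-decimate pair in each branch is exactly the analysis side of the two-band filter bank that the next lecture will develop.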