A warm welcome to the fourteenth lecture on the subject of wavelets and multirate digital signal processing. In this lecture we continue to discuss the Daubechies filter bank, which we had very briefly introduced in the previous lecture. I would like to put before you the salient points of that filter bank once again and then complete the design that I had begun in the previous lecture. So, recall that we based the construction of this series of Daubechies filter banks on the idea of annulling polynomials of higher and higher degree: we wanted more and more factors of the form (1 - z^-1) appearing in the high pass filter. So, for example, let me put down the structure of the next member in the family after the Haar case, where instead of one such factor you would have two. What we said was that the second member of the Daubechies family would have a high pass filter with the factor (1 - z^-1)^2. Remember that the high pass filter was essentially of the form z^-d times H0(-z^-1), where H0(z) is the corresponding low pass filter; of course, I am talking about the analysis side. Therefore the analysis low pass filter H0(z) would now have the factor (1 + z^-1)^2. Recall also that we had even lengths for the filters, so we have the situation that H0(z) has three zeros: two already specified, obviously at minus 1, and a third to be determined. And where do we determine this third root from? Well, we go back to the requirement of orthogonality to even shifts. Recall we had said that the impulse response is orthogonal to its even shifts; that means if this filter has the impulse response h0, h1, h2, h3 starting at 0, then this sequence is orthogonal to its shifts by 2, 4 and so on. The only non-trivial equation that we get is the following.
It comes from the shift by 2: overlap h0, h1, h2, h3 with its copy shifted by 2, and since there are only zeros before and after, all that survives is h0 h2 + h1 h3 = 0. This is the only non-trivial equation that we get. Now, from the location of the zeros and the free parameter, we need to express h0 through h3 in terms of the free parameter. So what do we have? We have h0 + h1 z^-1 + h2 z^-2 + h3 z^-3 of the form some constant, let us say c0 if you please, times (1 + z^-1)^2 times (1 + b0 z^-1), and we need to compare coefficients on both sides. Now, the constant c0 does not affect orthogonality, so we shall focus on the rest of the expression: we first need to satisfy the requirement of orthogonality to even shifts, and we will see later what helps us determine c0. So in fact, what we are asking for is the expanded form of (1 + z^-1)^2 (1 + b0 z^-1), where b0 needs to be determined. This can be expanded as (1 + 2 z^-1 + z^-2)(1 + b0 z^-1), and if we collect terms it gives us 1 + (2 + b0) z^-1 + (1 + 2 b0) z^-2 + b0 z^-3. So, except for the constant, h0 is essentially 1, h1 is 2 + b0, h2 is 1 + 2 b0 and h3 is b0, and now we can write down the orthogonality equation that we seek: h0 h2 + h1 h3 = 0, implying that (1 + 2 b0) + (2 + b0) b0 = 0, which is effectively saying 1 + 4 b0 + b0^2 = 0.
So, there we have an equation for b0, and it is easy to solve. b0 is constrained, as you have seen; not surprising, since there was one free parameter and one non-trivial constraint. Solving the equation, b0 = (-b ± sqrt(b^2 - 4ac))/(2a), which gives (-4 ± 2 sqrt(3))/2, and that gives us two solutions: -2 ± sqrt(3). Now, which of them should we choose? Well, what distinguishes these two solutions is very clear from the quadratic equation. Let me go back to it: the constant term tells us the product of the roots, and the product of the roots clearly has a magnitude of 1. So if one of them lies inside the unit circle, the other must lie outside. In fact, neither can lie on the unit circle, because neither of them has a magnitude of 1; therefore one must lie inside the unit circle and the other outside. Let us take note of that. In fact, it is very easy to see which lies inside and which lies outside: the -2 - sqrt(3) solution clearly has a magnitude greater than 1, and the other one has a magnitude less than 1. So, among the solutions, b0 = sqrt(3) - 2 lies inside the unit circle and b0 = -sqrt(3) - 2 lies outside the unit circle. Now, we have a choice; we shall choose the root inside, and there is a reason for it. Very often we like to choose what is called the minimum phase solution. I should not dwell too much on minimum phase at this moment; suffice it to say that it essentially means choosing all zeros inside the unit circle wherever possible. In fact, the idea of minimum phase comes from what is called minimum phase delay.
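The quadratic and the choice between its two roots can be checked numerically. A minimal sketch in Python (the variable names here are mine, not from the lecture):

```python
import math

# Quadratic from the orthogonality constraint: b0^2 + 4*b0 + 1 = 0
a, b, c = 1.0, 4.0, 1.0
disc = math.sqrt(b * b - 4 * a * c)          # sqrt(16 - 4) = 2*sqrt(3)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

# The product of the roots is c/a = 1, so one root lies inside the
# unit circle and the other outside.
inside = [r for r in roots if abs(r) < 1][0]   # minimum-phase choice
outside = [r for r in roots if abs(r) > 1][0]

print(inside)    # sqrt(3) - 2, about -0.268
print(outside)   # -sqrt(3) - 2, about -3.732
```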
So, you know, one interesting thing is that whether we choose the root inside the unit circle or outside, the magnitude of the frequency response would be the same; the magnitude would not differ. What would differ is the phase: when we put the root outside the unit circle there is going to be an increased phase delay, and very often that would also point to an increased group delay in the filter. So when we take the solution inside the unit circle, we are reducing the phase or group delay as much as we can in the filter. It is essentially a question of choosing the better solution in terms of phase. So, let us summarize: we take the minimum phase solution, that is, b0 inside the unit circle, and we have b0 = sqrt(3) - 2; obviously |b0| < 1 here. One can also evaluate this approximately: if you recall, sqrt(3) is about 1.73, so this would be about 1.73 - 2; one can come up with an approximate value, but that is not really an issue. Anyway, we can now put down the impulse response of the Daubechies low pass filter. Let me flash the impulse response before you once again, and then we substitute for b0: h0 = 1; h1 = 2 + b0, which is essentially sqrt(3); h2 = 1 + 2 b0, which is 1 - 4 + 2 sqrt(3), or 2 sqrt(3) - 3; and finally h3 = b0, which is just sqrt(3) - 2. So this is H0; of course, please remember this is only to within a constant. There is still that constant c0 that needs to be determined. How do we determine the constant c0? Well, we go back to the original equation for kappa0(z) that we had.
So, we wanted kappa0(z), defined by H0(z) H0(z^-1), to obey the following: kappa0(z) + kappa0(-z) is a constant, and in fact choosing c0 really means choosing this constant. The easy thing to do is to ensure that the impulse response has unit norm in the sense of l2, because if you look at it, the sequence corresponding to kappa0(z) at the zeroth location is essentially the dot product of the impulse response with itself, that is, the squared norm in l2(Z) of the impulse response. And we could as well make that 1 for convenience. So, we shall now choose c0 so that this becomes 1, and in fact I do not need to carry out the computations. Essentially, what I am saying is: choose c0 so that c0^2 times (h0^2 + h1^2 + h2^2 + h3^2) equals 1, with h0 through h3 as found before without c0. So, I leave that little calculation for you to do, and I would strongly recommend that students look at the various texts that list the Daubechies filter responses and verify that the tabulated Daubechies response for a filter length of 4 exactly coincides with what we calculate from here. So much for the Daubechies filter bank. Now, the next step is to build a phi(t) and a psi(t). How would we do that? There again, we go back to that iterative convolution that we talked about. So, essentially you have a situation where you need to compress and convolve, compress and convolve, as we did in the Haar case as well. You would start essentially by putting down impulses of strength h0, h1, h2, h3 like this.
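The little calculation left to the reader can be sketched as follows: normalize the coefficients we derived and compare them with the standard tabulated DAUB-4 values (1 ± sqrt(3))/(4 sqrt(2)) and (3 ± sqrt(3))/(4 sqrt(2)); this is exactly the verification against the texts suggested above.

```python
import math

s3 = math.sqrt(3)
# Un-normalised impulse response from the factorisation
# (1 + z^-1)^2 (1 + b0 z^-1) with b0 = sqrt(3) - 2:
h = [1.0, 2 + (s3 - 2), 1 + 2 * (s3 - 2), s3 - 2]   # [1, √3, 2√3-3, √3-2]

# Choose c0 so the impulse response has unit l2 norm
c0 = 1.0 / math.sqrt(sum(x * x for x in h))
h = [c0 * x for x in h]

# Standard tabulated DAUB-4 coefficients
ref = [(1 + s3) / (4 * math.sqrt(2)), (3 + s3) / (4 * math.sqrt(2)),
       (3 - s3) / (4 * math.sqrt(2)), (1 - s3) / (4 * math.sqrt(2))]

print(max(abs(a - b) for a, b in zip(h, ref)))   # essentially zero
print(h[0] * h[2] + h[1] * h[3])                 # orthogonality to shift by 2
```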
Now remember, what we are doing here is to construct phi(t), the scaling function or the so-called father wavelet. If you recall, the basic step every time was first to take the sequence and then to take the sequence squeezed on the time axis by a factor of 2. So if you put the sequence at certain locations, now also put h0, h1, h2, h3 at locations midway between them. Visualize impulses of strength h0, h1, h2, h3 at the original spacing, and again impulses at the halved spacing, again with strengths h0, h1, h2, h3, and convolve the one with the other. So, it is an iterative convolution, and you can visualize that in one step of this convolution you would get more impulses from the original four, and in fact the result would stretch a little beyond, as you can see. In the next step you would again convolve with the train squeezed by a further factor of 2. Now, in this case it is much more difficult to visualize where this convolution is leading. What I have given you is the mechanism, and in fact it is a very important exercise, a simple but very effective computer exercise in this course, to actually carry out this convolution. What kind of phi(t) would result? The answer is not trivial. It is unlike the Haar case, where you had a very neat answer: a nice, beautiful rectangular pulse of height 1. That is not the case here. The impulse response coefficients are somewhat complicated, and when we start convolving them, you are convolving four of them with four squeezed by a factor of 2, and this is going to go on. What you can see, of course, is that ultimately all these impulses are going to lie over a finite interval; that is a point of observation. I want to explain this point to you. It is a subtle point, but not very difficult to understand.
You see, let me go back to this drawing. The first time you carry out this convolution, there is an extension of the length over which these impulses lie: after this step, the spread goes beyond h3. The next time, half of that interval gets added on. So if I call the original interval L, then the next train adds an interval of L/2, and so on. Let me draw the situation carefully, for two or three steps. What I am saying is that it is a convolution like this: you start with a train of impulses spread over a length L. The next train is the same train squeezed by a factor of 2, so it spreads over an interval of L/2. The next is squeezed again by a factor of 2, so its length is L/4. When you convolve the train of length L with the train of length L/2 you get, at that stage, length L + L/2; and when you convolve that with the next train, you get L + L/2 + L/4, and so on and so forth. So, you can see that the length is not going to go to infinity: it is going to converge. And what is it going to converge to? Very simple: as we carry out this iteration to infinity, the length converges to L + L/2 + L/4 + ... It is a geometric series, and the sum is very easy to calculate: it is 2L. So, what we can see for sure is that this leads to what is called a compactly supported scaling function. When we iterate these impulses with a contraction every time, we ultimately get a function lying on a compact region; or, a more accurate way of saying it, the scaling function is compactly supported. There is a finite part of the time axis, the independent variable axis, on which it is non-zero. This is such an important thing that we should write it down and emphasize it.
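The compress-and-convolve iteration and the geometric growth of the support can be simulated. Below is a small numpy sketch (the grid bookkeeping is my own): at step k we work on a grid of spacing 2^-k, re-index the running product onto the finer grid, and convolve with the copy of the filter squeezed onto adjacent taps. The physical spread marches through L + L/2 + L/4 + ... toward 2L, with L = 3 for a length-4 filter.

```python
import numpy as np

# Unit-norm DAUB-4 low pass filter, from the b0 = sqrt(3) - 2 design
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))

g = h.copy()          # impulse train at unit spacing
L = len(h) - 1        # physical spread of the first train: 3
for k in range(1, 8):
    up = np.zeros(2 * len(g) - 1)
    up[::2] = g                       # running product on the 2^-k grid
    g = np.convolve(up, h)            # convolve with the squeezed copy
    support = (len(g) - 1) / 2.0**k   # physical spread on the time axis
    print(k, support)                 # L + L/2 + ... + L/2^k

print(2 * L)  # limit of the geometric series: the compact support
```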
We converge towards a compactly supported scaling function. And what does that mean? We are essentially saying that the region of the independent variable over which the scaling function is non-zero is finite. Now, you know, this is the most important contribution that Daubechies made. Before Daubechies introduced this set of filter banks, the idea of neatly constructing a family of compactly supported multiresolution analyses did not exist in the literature. So I would put this down as a very, very powerful contribution. That does not mean that the seeds were not there: in fact, the subjects of wavelets and of filter banks, filter banks as a multirate signal processing paradigm, had developed almost in parallel. But using filter banks in an effective way to construct compactly supported scaling functions and wavelets is an important contribution emerging out of the work of Daubechies. And not just compact support: successive scaling functions here have an additional property in terms of their behavior with respect to polynomials, as you can see. For example, what would happen in the second member of the Daubechies family? Let us reason it out a little better. This second member would annihilate, or kill, or do away with, or make zero, whatever you wish to call it, polynomials of degree 1: essentially, sequences of the form q0 + q1 n, where q0 and q1 are constants. What I mean by that is the following. If you look at the overall sequence given to the filter bank at the input, and if you think of that sequence as possessing one component of this kind and a residual component, only the residual component would come out on the high pass branch. The polynomial component would be present only in the low pass branch. So, another way of saying it is that a few more smooth terms are retained on the low pass branch and removed from the high pass branch.
Yet another way of saying it: the high pass branch becomes even more high pass, you know. What I am saying, effectively, is that it behaves more truly as a high pass filter than does the Haar case of length 2. The Daubechies filter of length 2 is the Haar filter, and it does not behave as well as a high pass filter as this one does. In fact, I would like to put down two exercises for the class now, following this discussion, and I strongly recommend as a part of this course that students work out both of these exercises to understand what I am saying. Working out these exercises does mean using a computer; it would help to use a computer to evaluate the expressions and get a good feel. It cannot easily be done by hand, but it is worth doing. So, the exercise is as follows. Exercise 1: work out the iteration to move towards the DAUB-4 scaling function, as they call it. You know, this is a nomenclature that we would like to introduce here: DAUB-4 means the Daubechies filter bank with filters of length 4. So, what we have just been talking about is the DAUB-4 set of filters, or the DAUB-4 filter bank. We have already explained how to carry out the iterated convolution, but it would be worth actually carrying it out to move towards the DAUB-4 scaling function. One would notice that the function that emerges is a continuous function, but it is not expressible in what is called closed form: one would not be able to express it as some sin(t) or e raised to the power t or something of that kind, but it would converge to a continuous function. Now, this is something important here. We have just set up some kind of filter bank and started iteratively convolving the impulse response with its own compressed versions. What is it that guarantees that when you carry this iteration to infinity, there is going to be some semblance of convergence in that process? Nothing, inherently.
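The annihilation of degree-1 polynomial sequences by the high pass branch is easy to demonstrate. In the sketch below I form a high pass companion as g[n] = (-1)^n h[3-n], one standard conjugate-quadrature choice consistent with the z^-d H0(-z^-1) relation quoted earlier; its double zero at z = 1 kills any sequence q0 + q1 n away from the edges.

```python
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
# High pass companion: g[n] = (-1)^n h[3-n], inheriting a double zero
# at z = +1 from the (1 + z^-1)^2 factor of the low pass filter
g = np.array([(-1) ** n * h[3 - n] for n in range(4)])

n = np.arange(64)
x = 5.0 + 0.25 * n                 # a degree-1 polynomial sequence
y = np.convolve(x, g)[3:-3]        # keep only the full-overlap part
print(np.max(np.abs(y)))           # essentially zero: the component is killed
```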
So, if we take an arbitrary filter bank like this, with an H0, H1, G0 and G1, and if you were to take the H0 and carry out an iterated convolution like this, you might land up with what is called a fractal function. In fact, when I say fractal function, the word function is a misnomer: it would mean that the iterated convolution process does not converge to a function at all, or at least definitely not to a continuous function. That could very well happen. In the Haar case, we had a neat, beautiful rectangular pulse to which it converged. In the DAUB-4 case, again, we are going to converge towards a continuous function; I am assuring you of this even before you carry out the exercise. We will soon see that for the higher order members of the Daubechies family you would again converge to continuous functions. However, if you just picked some arbitrary low pass filter and started iterating it like this, maybe even a low pass filter which satisfies that orthogonality to even translates that we have asked for, there is no guarantee that any such arbitrary filter would converge in this iterated convolution. So, what is it about this filter which allows convergence? In the literature on wavelets, they speak of a property called regularity, which the filters need to obey for the iterated convolution to converge. Converge to what? Well, converge to a function which is either continuous, or at least continuous almost everywhere except for an isolated, finite number of points. So, if you really want to look at it that way, the Haar scaling function is not continuous, but there are only two points at which it is discontinuous, not an infinite number of points. And we do not want the situation of this iteration taking us to a quote-unquote function, or object, which has infinitely many points of discontinuity. That is what we are trying to say. So, whatever it be, regularity in this case comes from the presence of zeros.
So, one guaranteed way of forcing regularity is the introduction of factors of (1 + z^-1): the more zeros you have at z = -1, in the low pass filter of course, and correspondingly the more zeros you would have at z = +1 in the high pass filter. Again, to give a physical significance: when you put z = -1, you are talking about e raised to the power j omega being equal to e raised to the power ± j pi, so omega equal to pi, the extreme high frequency. So, in the low pass filter we are saying: put zeros at the extreme high frequency. Correspondingly, in the high pass filter we put zeros at the extreme low frequency, namely omega = 0; omega = 0 corresponds to z = +1, since e raised to the power j0 is 1, simple. So, one way to force regularity is to put zeros at minus 1, and that is what we are doing in the Daubechies family: Haar has 1 such zero, DAUB-4 has 2 zeros, DAUB-6, the next member of the family, of length 6, would have 3 zeros, and so on and so forth. And you know what we say? We say that the higher you go in the Daubechies family in terms of length, the more regular your filters are. What that means is that the functions to which we converge on iterated convolution become smoother and smoother: they have more and more derivatives that are continuous. So, if you look at DAUB-4, its differentiability in the traditional sense is under the scanner; it is continuous, but as far as differentiability goes there are issues. When you go to the higher order Daubechies filters, that too is taken care of, and the functions become smoother and smoother. And now you can see how to do this for the higher order Daubechies filters. Whether it is length 6 or length 8 or length 10, we know exactly how to carry out the iterated convolution: put down the impulse response coefficients as uniformly spaced impulses, squeeze that set of impulses by a factor of 2, and convolve it with the first set.
Again squeeze by a factor of 2, convolve again, and this can continue and continue. So, I leave it, as I said, as an exercise to complete this iterated convolution. I repeat the exercise, which all of us must do: work out the iterated convolution to move towards the DAUB-4 scaling function. The second exercise which I would like to ask the class to do is the following: obtain the frequency response of the DAUB-4 low pass filter, and of course therefore also of the high pass filter. So, just for completeness, let me write down the expression for the frequency response. What I am saying is: obtain h0 + h1 e raised to the power -j omega + h2 e raised to the power -j 2 omega + h3 e raised to the power -j 3 omega, and evaluate it at many finely spaced omega. Maybe, you know, between 0 and pi one could take 1000 points and evaluate this expression to get a feel of the frequency response. And the idea is to compare it with the frequency response of the Haar filter. So, what we are specifically looking for is this: Haar gave us essentially a cos(omega/2) kind of response between 0 and pi. In DAUB-4, are we going closer to the ideal? What ideal are we talking about? You remember what the ideal was: an ideal low pass filter with a cutoff of pi/2. Now, let me tell you how to build the next member of the Daubechies family. To do that, we would put one more zero at z = -1 in the low pass filter. So, for the next member of the Daubechies family, take H0(z) to be of the form (1 + z^-1) cubed times (1 + b0-tilde z^-1 + b1-tilde z^-2). There are two more degrees of freedom now, which we call b0-tilde and b1-tilde just to distinguish them from the b0 that we have calculated here. Remember that this member would have a low pass filter of degree 5 and therefore length 6.
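The frequency response comparison in Exercise 2 can be sketched as follows, using a thousand points between 0 and pi as suggested; it also verifies that Haar traces the sqrt(2)|cos(omega/2)| curve.

```python
import numpy as np

s3 = np.sqrt(3.0)
daub4 = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
haar = np.array([1.0, 1.0]) / np.sqrt(2)

w = np.linspace(0.0, np.pi, 1000)

def mag(h, w):
    # |H(e^{jw})| = |sum_n h[n] e^{-jwn}|
    n = np.arange(len(h))
    return np.abs(np.exp(-1j * np.outer(w, n)) @ h)

H4, H2 = mag(daub4, w), mag(haar, w)
# Haar: sqrt(2)|cos(w/2)|. DAUB-4 starts at the same value sqrt(2)
# at w = 0 but has a double zero at w = pi, so it hugs the ideal
# brick-wall response with cutoff pi/2 more closely.
print(H4[0], H4[-1])
```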
And since it is of degree 5, or length 6, three of the zeros are constrained and two of them are free; so you have the two free parameters b0-tilde and b1-tilde. And what are the constraints in this case? The non-trivial ones are orthogonality to translation by 2 and by 4. In other words, let me put it down explicitly. You have h0, h1, h2, h3, h4, h5. Take the dot product of this sequence with its translation by 2 and set it equal to 0; take the dot product with its translation by 4 and set it equal to 0; the remaining shifts overlap only with zeros, so we do not need to bother. These are the two constraints, and I will read them off: h0 h2 + h1 h3 + h2 h4 + h3 h5 = 0 for the shift by 2, and h0 h4 + h1 h5 = 0 for the shift by 4. These are the only two non-trivial constraints that we have. Two constraints, two free parameters: one can determine them. They are simple quadratic equations, though this time they are simultaneous quadratic equations; a little more work is involved, but it is doable. And one can keep doing this for higher and higher order members. By the way, there are different ways of building this family of Daubechies filters; this is one way. There are more convenient ways too, or what might be seen as more convenient by some. It is not our objective to dwell on those methods in this lecture, but rather to make a more general remark now about the class of filter banks that we are talking about, namely the conjugate quadrature filters. So, we wish to put down what we call the minimal requirements of design in a conjugate quadrature filter bank. Incidentally, you might wonder where this name, conjugate quadrature, comes from. Actually, it is the word quadrature which is important there; it comes, in some sense, from the idea of a 90 degree shift. You know, in a certain sense, what we have done is to relate the high pass filter and the low pass filter frequency responses by a shift of pi on the frequency axis.
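The simultaneous quadratics for b0-tilde and b1-tilde can be solved numerically. The sketch below uses a small Newton iteration (the starting guess and the solver are my own choices, not part of the lecture) and then fixes the constant by the unit-norm condition.

```python
import numpy as np

# H0(z) = C0 (1 + z^-1)^3 (1 + b0~ z^-1 + b1~ z^-2); unknowns b = (b0~, b1~)
def h_of(b):
    return np.convolve([1.0, 3.0, 3.0, 1.0], [1.0, b[0], b[1]])

def F(b):
    h = h_of(b)
    return np.array([h[0]*h[2] + h[1]*h[3] + h[2]*h[4] + h[3]*h[5],  # shift by 2
                     h[0]*h[4] + h[1]*h[5]])                         # shift by 4

b = np.array([-0.5, 0.1])        # rough starting guess (my choice)
for _ in range(50):              # Newton iteration, numerical Jacobian
    J = np.empty((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = 1e-7
        J[:, j] = (F(b + e) - F(b)) / 1e-7
    b = b - np.linalg.solve(J, F(b))

h = h_of(b)
h = h / np.linalg.norm(h)        # fix C0 by the unit l2 norm condition
print(b)                         # the two free parameters
print(h)                         # a length-6 Daubechies low pass filter
```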
So, notionally what we are saying is that a low pass filter with cutoff pi/2 aspires to become a high pass filter with cutoff pi/2 in this relationship, essentially by replacing z by -z. This replacement of z by -z, to relate the low and high pass filters, brings in what is called a quadrature relationship; so much for the name, anyway. So, what we have is essentially the following equation. The principal equation governing the conjugate quadrature filter bank is this: kappa0(z) + kappa0(-z) is a constant, where kappa0(z) = H0(z) H0(z^-1). Therefore, if you look at it in the frequency domain, kappa0(e raised to the power j omega) = H0(e raised to the power j omega) times H0(e raised to the power -j omega), essentially. And with a real impulse response, what do we have? We have |H0(e raised to the power j omega)|^2 = kappa0(e raised to the power j omega). So, in other words, we now have a very clear design problem before us: designing a conjugate quadrature filter bank is equivalent to designing essentially one filter, kappa0(e raised to the power j omega). And if you look at kappa0(e raised to the power j omega), it is a non-negative frequency response, as you can see: it is |H0(e raised to the power j omega)|^2. And this non-negativity can only come from a real and even response. So, we are saying kappa0(z) corresponds to a real and even impulse response, with the constraint that the even-indexed samples must be 0 except the zeroth one. You know, this equation that we had here, kappa0(z) + kappa0(-z) = constant, essentially says that the even samples other than the zeroth sample are all 0. So, you are trying to design a low pass filter; in fact, we should qualify this further: a non-negative low pass frequency response, and with what kind of cutoff? A cutoff of pi/2.
So, now we have the design problem very clearly: design a real, even impulse response, aspiring to be a low pass filter with cutoff pi/2, with a non-negative frequency response, and with the constraint that the even-indexed samples of the impulse response are 0 except for the zeroth. There are many different ways in which one can design finite impulse response filters, and any optimization which allows us to design kappa0 with these constraints is acceptable. And once we have kappa0, we look at its roots: because of the form H0(z) H0(z^-1), the roots come in reciprocal pairs, and out of each pair of reciprocal roots we put one in H0(z), while the other automatically goes to H0(z^-1). So, this is the general strategy to design conjugate quadrature filters, and the Daubechies family is just one of many such families. So, with this then, we have put down a whole family of multiresolution analyses, or filter banks, for you. In the next lecture we shall ask what it is that we are looking for in these families; in other words, is there some fundamental limit, some fundamental two-domain requirement, that we are trying to seek and fulfill? Thank you.
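The structure of kappa0 can be verified for the DAUB-4 filter we designed: its autocorrelation has a unit tap at z^0, vanishing even-indexed taps elsewhere, and a non-negative frequency response, which is exactly what the spectral factorization strategy relies on. A sketch:

```python
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))

# kappa0(z) = H0(z) H0(1/z) is the autocorrelation of h; its taps run
# from z^{+3} down to z^{-3}, with the z^0 tap at the middle index.
kappa = np.convolve(h, h[::-1])
mid = len(kappa) // 2

print(kappa[mid])                       # z^0 tap: ||h||^2 = 1
print(kappa[mid - 2], kappa[mid + 2])   # the other even taps vanish

# The frequency response |H0(e^{jw})|^2 is real and non-negative.
w = np.linspace(0.0, np.pi, 512)
n = np.arange(-mid, mid + 1)
K = (np.exp(-1j * np.outer(w, n)) @ kappa).real
print(K.min())                          # >= 0 up to rounding
```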