A warm welcome to the 35th lecture on the subject of wavelets and multirate digital signal processing. In the last lecture, we developed a computationally efficient structure to realize an orthogonal filter bank, which we called the lattice structure. The word lattice refers to a uniform, periodic repetition of a given modular piece. In that lattice we had the minimum possible complexity: the two inputs to each stage interacted with one another to produce the two outputs. Let me put down the structure and also place before you the theme of today's lecture. The theme today is to build one step beyond the lattice. In the lattice, the two inputs were worked on simultaneously to produce the two outputs. What we are going to do today is to simplify that even further, to make the operations even more modular and elementary. We are going to decompose each lattice stage into two sub-stages involving even more elementary operations, and that is going to lead us to what we call a lifting structure. So today we intend to talk about the lifting structure, that is, a decomposition of the lattice into even more elementary operations; subsequently we shall introduce a greater generalization, moving from a specific modular structure to a more general formulation of what we expect from computationally efficient realizations. That will lead us to the idea of polyphase matrices, as mentioned in the title. Recall the structure we built last time: the unit or module had an appearance of this kind; this was what we called one lattice stage, and k was called the lattice parameter. The value of k is what distinguished one stage from another.
We also put down a systematic procedure for constructing the lattice, and in fact we illustrated the calculations involved for a length-4 orthogonal filter; I left it to you to generalize to longer lengths, having given the basic approach and shown that the generalization is possible. Now, looking at this lattice stage once again, notice that we are doing two computations at once: we take one combination of the two inputs to produce the first output, and another combination of the same inputs to produce the second output, so there is a crisscross involved. We wish to think of this in terms of matrices. So let us call the inputs in1 and in2, call the outputs out1 and out2, and put down a relation between them. What I wish to do is to think of a 2 x 2 matrix that relates the Z-transforms of the outputs to the Z-transforms of the inputs; this is what we are going to call a polyphase matrix. Let me introduce the idea of polyphase in a little greater depth before I get to the details. Remember, this stage is repeated with different values of k, one after the other in cascade, but at the beginning of all these stages is the following operation: you take the input, say x[n]; on one branch you downsample it directly, and on the other branch you subject it to a one-sample delay and then downsample. In the language of Z-transforms, if I denote the Z-transform of the input by X(z), one could use the symbol X0(z) for the Z-transform of the sequence that appears on the first branch and X1(z) for that on the second branch, and we would like to relate X0, X1 and X.
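Before moving on, it may help to see the crisscross of one lattice stage in code. The following is a minimal sketch of my own (the function name `lattice_stage` is just a label I have chosen, not something from the lecture): both outputs are formed at once from both inputs, with the single parameter k distinguishing one stage from another.

```python
# One lattice stage: the crisscross (butterfly) operation.
# out1 and out2 each depend on BOTH inputs simultaneously.
def lattice_stage(in1, in2, k):
    """Apply one lattice stage with lattice parameter k."""
    out1 = in1 + k * in2
    out2 = -k * in1 + in2
    return out1, out2

# Example: k = 0.5 applied to one input pair.
print(lattice_stage(1.0, 2.0, 0.5))  # (2.0, 1.5)
```

The point to notice, and the point the rest of the lecture will simplify, is that neither output can be computed without touching both inputs.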
So let us put down the problem: we seek this relation. Let us first put down graphically the sequences that produce X0 and X1. Write down the indices n for a few samples, beginning, say, with -6 (for variety I am putting more on the negative side), with the corresponding samples x[0], x[1], x[2] and so on, and x[-1] and so on behind. Now, what happens on the first branch? Graphically, on the x0 branch I get x[0] followed by x[2], x[4], x[6] and so on in one direction, and x[-2], x[-4], x[-6] and so on in the other. Similarly, on the x1 branch I get x[1] first (here again one has to be a little careful), followed by x[3], x[5], x[7] and so on in one direction, and x[-1], x[-3] and so on in the other; the only catch is where to put the sample at index 0. Now, rather than call the Z-transforms of these branch sequences X0 and X1 directly, we will say they are related to Z-transforms X0 and X1 to which we shall give a specific meaning. For X1 we consider the sequence which equals x[1] at index 0 and then x[3], x[5], x[7] as shown, with x[-1], x[-3] and so on behind. Similarly, on the x0 branch what I am asking for is the sequence that is x[0] at index 0, then x[2], x[4], x[6], with x[-2], x[-4], x[-6] and so on behind. What I am saying, in effect, is that the sequence x is obtained by interleaving the sequence on the x0 branch with that on the x1 branch.
So, essentially, if you want the sequence x, you take one point from the x0 branch, followed by one from the x1 branch, then the next from the x0 branch, and so on. An easy way of understanding this is to name these sequences: call them x0[n] and x1[n]. In other words, we can define x0[n] and x1[n] in terms of the sequence x[n]: x0[n] = x[2n] for all integer n, and x1[n] = x[2n+1] for all integer n. It is then very clear that X(z), which is the summation over all integers n of x[n] z⁻ⁿ, can be decomposed into two summations: X(z) = Σ x[2n] z^(-2n) + Σ x[2n+1] z^(-(2n+1)), each sum running over all integers n. Since x[2n] is, by definition, x0[n] and x[2n+1] is x1[n], let me put that definition back before you and substitute: this becomes X(z) = Σ x0[n] (z²)⁻ⁿ + z⁻¹ Σ x1[n] (z²)⁻ⁿ. All I have done is rewrite the two sums a little, noting that one involves x0[n] and the other x1[n]. Now, here I am trying to bring out a new idea, that of polyphase components, which I shall now define: the components x0[n] and x1[n] that we speak about here are called the polyphase components of x. The word polyphase comes from the idea of a number of phases being operated in parallel.
If you think about it in the sense of a machine: suppose the sequence x comes in sample after sample, and you want to extract the sequences x0 and x1. Think of two branches, two streams going out of x: x comes in, x0 goes on one branch and x1 goes on the other, as if you had a polyphase switch. The idea of polyphase comes from switching from one phase to the other: you switch to one phase for one sample, to the other phase for the next sample, back to the first phase for the sample after that, and so on. That switching mechanism between phases is essentially what is happening in constructing x0 and x1. It is very clear that the Z-transforms of the polyphase components relate to the Z-transform of the input as we have just shown: X(z) = X0(z²) + z⁻¹ X1(z²). So we have the relation between the Z-transforms of the polyphase components and the Z-transform of the input, and what we are doing in one stage or one module of the lattice is to operate on the polyphase components. In fact, you could think of the whole of the analysis filter bank, and for that matter the whole of the synthesis filter bank as well, as an operation on the polyphase components: instead of an operation on the original sequence, it is a 2 x 2 matrix operation on the vector of two polyphase component sequences. If you look at it carefully, when we perform the downsampling at the beginning and wish to express the two branch outputs in the language of Z-transforms, noting that X(z) = X0(z²) + z⁻¹ X1(z²), what comes out on the direct branch is essentially X0(z). On the delayed branch we have z⁻¹ X(z), and that z⁻¹ factor moves the X0(z²) part into odd locations.
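To make the relation X(z) = X0(z²) + z⁻¹ X1(z²) concrete, here is a small numerical sketch of my own, assuming finite causal sequences so that the Z-transforms are just polynomials in z⁻¹ (the function names are my own labels, not from the lecture).

```python
# Split a sequence into its two polyphase components and verify
# X(z) = X0(z^2) + z^{-1} X1(z^2) at a numeric value of z.
def polyphase_split(x):
    """Return the even- and odd-indexed polyphase components of x."""
    return x[0::2], x[1::2]   # x0[n] = x[2n], x1[n] = x[2n+1]

def ztransform(x, z):
    """Evaluate sum_n x[n] z^{-n} for a causal finite sequence."""
    return sum(xn * z ** (-n) for n, xn in enumerate(x))

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x0, x1 = polyphase_split(x)

z = 1.3
lhs = ztransform(x, z)
rhs = ztransform(x0, z ** 2) + z ** -1 * ztransform(x1, z ** 2)
print(abs(lhs - rhs) < 1e-12)  # True
```

Interleaving x0 and x1 recovers x, which is exactly the switching picture described above.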
So when we downsample, the odd-located samples, the z⁻¹ X0(z²) part, are all destroyed, but the remaining term, z⁻² X1(z²), survives, and after downsampling what we get on this branch is essentially z⁻¹ X1(z). That is interesting: what we are doing, essentially, is operating on X0(z) and X1(z). The first stage of the lattice, and for that matter any stage, is essentially an operation on these polyphase components, so we can think of each stage as a 2 x 2 matrix operation on the polyphase components. Let us try to speak this language of matrices. For example, when we simply delay the lower branch with a z⁻¹ and leave the upper branch unchanged, the corresponding matrix is a diagonal matrix. So the stage with the z⁻¹ on the lower branch is one sub-stage and the crisscross is another; we have two sub-stages, call them operation 1 and operation 2, with the parameter k appearing in the crisscross. The matrix corresponding to operation 1 says: keep the upper branch as it is and multiply the lower branch by z⁻¹. Simple enough. So if we call the inputs in1 and in2, what is produced at this intermediate point (not quite the final outputs) is described by the matrix [1 0; 0 z⁻¹]. Correspondingly, the matrix for operation 2 can just as easily be written down: we take the outputs of operation 1 and combine them, the upper branch being 1 times the upper plus k times the lower, and the lower branch being -k times the upper plus 1 times the lower; this gives out1 and out2. Look back at the drawing to convince yourself.
So when you take these two and produce out1 and out2, out1 is 1 times the upper plus k times the lower, and out2 is -k times the upper plus 1 times the lower, which is precisely what is reflected in the matrix [1 k; -k 1]. The composite operator between (in1, in2) and (out1, out2) is therefore [out1; out2] = [1 k; -k 1] [1 0; 0 z⁻¹] [in1; in2]. Now, this entire operator, a matrix operator acting on the polyphase components, is called a polyphase operator or polyphase matrix. In fact, each lattice stage has a polyphase matrix corresponding to it, and when we cascade lattice stages we place these polyphase matrices in cascade too: we are essentially multiplying the matrices one after another. That is the interpretation in the language of matrices. When we look at it from a matrix perspective, things become much easier to understand, and we also see what is so elementary about these operations. These matrices are indeed very simple: one of them is diagonal, and the other has only one parameter; two of its entries are 1, which means no multiplication is involved there, and the other two entries are identical up to a sign change. So these are very elementary matrices, and what we have done in constructing the lattice structure is to break down the entire low-pass and high-pass filtering on the analysis side into small matrix operators of this kind. What we are going to do now is ask whether we can simplify and break up these matrix operators even further. The diagonal operator is just too simple to break down any further; there is little we can do to simplify it. But we could consider simplifying the crisscross matrix.
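The cascade interpretation above can be checked numerically. The following is a sketch of my own, assuming the matrix conventions just described: operation 1 is diag(1, z⁻¹), operation 2 is the crisscross [1 k; -k 1], and cascading the two sub-stages is the same as applying their matrix product, the polyphase matrix of the stage.

```python
# Verify that cascading the two sub-stage matrices equals applying
# the single composite polyphase matrix, at a numeric value of z.
import numpy as np

k = 0.7
z = 1.5 + 0.5j

op1 = np.array([[1, 0], [0, 1 / z]])   # operation 1: delay the lower branch
op2 = np.array([[1, k], [-k, 1]])      # operation 2: crisscross with parameter k

vin = np.array([2.0, 3.0])             # polyphase inputs (in1, in2)

cascaded = op2 @ (op1 @ vin)           # apply the sub-stages one after the other
composite = (op2 @ op1) @ vin          # apply the product matrix in one go
print(np.allclose(cascaded, composite))  # True

# The composite polyphase matrix of the stage, written out explicitly:
print(np.allclose(op2 @ op1, np.array([[1, k / z], [-k, 1 / z]])))  # True
```

Multiplying out the stage matrices of a whole cascade in this way gives the polyphase matrix of the entire analysis bank.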
The crisscross matrix is still, in some sense, a full matrix, a matrix of full rank: all the entries are non-zero, and although two of them are trivially 1, there is still a full amount of computation to do. We can decompose this matrix into a product of two even simpler matrices. So we consider decomposing [1 k; -k 1] into two factors: one matrix with a zero below the diagonal, called an upper triangular matrix, and one with a zero above the diagonal, called a lower triangular matrix. What we are asking for, then, is a decomposition of this rather simple 2 x 2 matrix into even simpler 2 x 2 matrices, one upper triangular and one lower triangular. What does this mean in terms of computation? Take an upper triangular matrix and put symbols, if not numbers, into it: consider the form [a b; 0 c]. The computation it represents is the following: the upper output is a times the upper input plus b times the lower input, while the lower output is just c times the lower input. This is a so-called upper triangular computation, and similarly we can put down a lower triangular computation, with a typical lower triangular matrix [p 0; q r] and the corresponding computation. What is interesting about both the upper and the lower triangular computations is that we have further simplified what was a crisscross operation: this is the most elementary thing you can do and still make progress. It essentially means: form a combination that modifies only one of the branches, using the other. At any one time we do only one combination.
We do not let the branches interact with each other both at once. This is, in some sense, even more elementary than the lattice, and what we want to investigate is whether it is possible to decompose each lattice stage into these upper and lower triangular forms. Let us in fact assume that it is possible and then find a set of values. So let, if possible, [1 k; -k 1] = [a b; 0 c] [p 0; q r]: a lower triangular operation applied first, followed by an upper triangular operation. (One could conceive of a reversal of roles here, but that is a different issue.) Now we have four equations and six variables, so obviously there are degrees of freedom in the solution; we will exploit them in a minute, but first let us write down the equations explicitly. How would we get them? By comparing the product entry by entry: for example, the (1,1) entry, the dot product of the first row with the first column, gives ap + bq = 1. Carrying this out for all four entries, the four equations are: ap + bq = 1, br = k, cq = -k, and cr = 1. As expected, we have two degrees of freedom: we can only solve for four variables, so we are free to choose two of them in any way we deem appropriate. Now if we look at the first operation, the lower triangular one, what we would like to see is whether we can avoid some of its multiplications; we would like to make this lower triangular operation as simple as we can.
Going back to the actual computation: suppose we could ensure that at least some of these multipliers are just 1. If p could be made 1, the upper input would simply pass through as it is; if r could be made 1, the lower input would pass through as it is. So we exploit the degrees of freedom in these equations by choosing very simple values for some variables: let us choose p = 1 and r = 1 in the lower triangular form, making its diagonal entries 1. Substituting back into the equations, we have a + bq = 1, b = k, cq = -k and c = 1, which is very easy to solve: since c = 1, we get q = -k, and b = k, so the first equation collapses to a + k(-k) = 1, which means a = 1 + k². The others are known: b = k, c = 1, q = -k. Overall we have the decomposition [1 k; -k 1] = [1+k² k; 0 1] [1 0; -k 1], and it is very easy to verify this by multiplication: (1+k²)(1) + (k)(-k) is indeed 1; (1+k²)(0) + (k)(1) is indeed k; (0)(1) + (1)(-k) is indeed -k; and (0)(0) + (1)(1) is indeed 1. Verified. What we have done in this decomposition is to break up one lattice stage into even simpler stages, an upper triangular and a lower triangular operation. In effect, we have computationally broken down the crisscross operation of one lattice stage into a cascade of two operations.
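The verification by multiplication above can be repeated for several values of k at once. Here is a quick numerical check, my own sketch of the factorization just derived (the function names are mine).

```python
# Check the lifting factorization
#   [1  k; -k  1] = [1+k^2  k; 0  1] @ [1  0; -k  1]
# for a few values of the lattice parameter k.
import numpy as np

def lattice(k):
    """The crisscross matrix of one lattice stage."""
    return np.array([[1.0, k], [-k, 1.0]])

def upper(k):
    """Upper triangular factor, applied second."""
    return np.array([[1.0 + k * k, k], [0.0, 1.0]])

def lower(k):
    """Lower triangular factor, applied first."""
    return np.array([[1.0, 0.0], [-k, 1.0]])

for k in (0.3, -1.2, 2.5):
    assert np.allclose(lattice(k), upper(k) @ lower(k))
print("factorization verified")
```

Note the order: the lower triangular factor sits on the right, so it acts on the input first, exactly as in the derivation.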
In the first operation we have the matrix [1 0; -k 1]; obviously this matrix operates first. Its row (1, 0) is for the upper branch, which goes through as it is, and its row (-k, 1) means -k times the upper plus 1 times the lower: this is the lower triangular part. Then comes the upper triangular part, [1+k² k; 0 1], and I will just put a mark of separation between the two: its row (0, 1) means the lower branch simply passes through as it is, and its upper row means 1+k² times the upper plus k times the lower. Just for neatness we could make these separating lines vertical, so we will redraw it that way and also include the delay. If you remember, we had a delay before this; since the delayed signal is brought through unchanged, we might as well bring the delay here. Altogether, then, I have the following: a delay; the upper branch going through as it is, but also feeding the lower branch with a factor -k; followed by the upper branch going through with a factor 1+k², the lower branch feeding the upper with a factor k, and the lower branch itself passing through unchanged. What we have done here is to redraw the structure in a way that makes it very clear that at any one time we are doing just one combination operation, and this is the central idea behind what is called a lifting stage. In fact, this is called a lifting stage: the idea of lifting essentially means to lift from no transform at all to a meaningful transform. Suppose you were to take just the outputs of the downsamplers, upsample them, and recombine them, with a z⁻¹ inserted on the upper branch. What would you get? It is very easy to see that what we get at the first point is X0(z), where X0(z) has been explained earlier.
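The redrawn structure, one combination at a time, can be written directly in code. This is a sketch of my own (names mine) showing that the two lifting steps reproduce the original crisscross exactly.

```python
# One lattice stage computed two ways: directly as a crisscross,
# and as a cascade of two one-combination lifting steps.
def lattice_direct(in1, in2, k):
    """The crisscross computed in one go."""
    return in1 + k * in2, -k * in1 + in2

def lattice_lifted(in1, in2, k):
    """The same stage as two lifting steps."""
    low = -k * in1 + in2                # lower triangular step: [1 0; -k 1]
    up = (1 + k * k) * in1 + k * low    # upper triangular step: [1+k^2 k; 0 1]
    return up, low

for k in (0.3, -1.2, 2.5):
    direct = lattice_direct(2.0, -1.0, k)
    lifted = lattice_lifted(2.0, -1.0, k)
    assert all(abs(d - l) < 1e-12 for d, l in zip(direct, lifted))
print("lifting steps reproduce the lattice stage")
```

Each step touches only one branch, which is precisely what makes the lifting form attractive computationally.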
On the other branch we get z⁻¹ X1(z), and after upsampling each of these becomes the same expression with z replaced by z². So, all in all, on the upper branch (after the inserted z⁻¹) I get z⁻¹ X0(z²), and on the lower branch I get z⁻² X1(z²). If I add these two and call the sum y, then Y(z) = z⁻¹ X0(z²) + z⁻² X1(z²), which is easily seen to be z⁻¹ X(z): the input reconstructed with a delay of one sample. So in some sense what we have here is what is called a lazy wavelet transform: it does nothing at all. If I had no lattice stages at all, I would have a structure like this. From a structure that does nothing at all, we build up, stage by stage, to a structure with a meaningful frequency response; each of these lower and upper triangular forms builds a better and better frequency response. It lifts the wavelet transform from an ineffectual, useless wavelet transform to one which does a great deal both in time and frequency; that is one interpretation of why we use the term lifting. In the literature, people have talked about lifting implementations: you lift one step at a time. What we had in this structure is essentially two lifting steps: in the first, we combine the upper branch with the lower to modify the lower; in the second, we combine the lower with the upper to modify the upper. By alternately lifting the lower and then the upper, we improve the quality of the wavelet transform step by step. What we have built here is called the lifting structure in the context of discrete wavelet transforms. What I have shown you is a mechanism to obtain a lifting structure for an orthogonal wavelet transform; one can also build corresponding lifting structures for biorthogonal transforms.
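The lazy wavelet transform described above, split into even and odd samples, upsample, recombine, getting back the input with a one-sample delay, can be sketched as follows (my own illustration; the function names are mine).

```python
# The "lazy wavelet transform": analysis is just an even/odd split,
# and synthesis (upsample, delay, add) returns the input delayed by one sample.
def lazy_analysis(x):
    """Downsampled even and odd streams of x."""
    return x[0::2], x[1::2]

def lazy_synthesis(x0, x1):
    """Upsample, apply z^{-1} to the x0 branch and z^{-2} to the x1 branch, add."""
    n = 2 * max(len(x0), len(x1)) + 2
    y = [0.0] * n
    for i, v in enumerate(x0):
        y[2 * i + 1] = v      # contributes z^{-1} X0(z^2)
    for i, v in enumerate(x1):
        y[2 * i + 2] += v     # contributes z^{-2} X1(z^2)
    return y

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = lazy_synthesis(*lazy_analysis(x))
print(y[1:1 + len(x)] == x)  # True: input recovered with a one-sample delay
```

Nothing has been filtered; inserting lifting steps between analysis and synthesis is what turns this do-nothing transform into a useful one.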
So, for example, it is the lifting structure which is recommended for implementing the JPEG 2000 5/3 filter bank, which we described in elaborate detail a few lectures before this. In fact, a lifting implementation is possible for many other such biorthogonal filter banks, and one of the reasons why those particular filter banks, the 5/3 and the 9/7, were chosen in the JPEG 2000 context is that they are amenable to a lifting implementation: an implementation with very elementary operations. Indeed, part of the JPEG 2000 recommendation is to implement the filter bank with a lifting structure because of its computational efficiency. As part of the process of building this lifting implementation, we have also seen the idea of polyphase matrices: matrix operators on the polyphase components. We can see that an analysis filter bank is, in total, a 2 x 2 matrix operator in the z-domain acting on the polyphase components of the input. Similarly, the synthesis part can also be thought of as a 2 x 2 operator, one that constructs the polyphase components of the output starting from the outputs of the analysis polyphase operator. We shall build further on these ideas of polyphase decomposition and polyphase matrices in subsequent lectures. Thank you.