A warm welcome to this session, continuing the theme of wavelets, multiresolution signal processing and multirate filter banks. It is my proud privilege today to introduce a guest speaker in this series, and I would like first to mention some of his biographical details before bringing him before you, and also to stress and emphasize the very important role he has played in the construction of this course, in the construction of this series of sessions and lectures. Professor Aditya Abhyankar is going to come before you today, and his most important qualification is that he is very well known in the area of wavelets and filter banks in the country. It has been my proud privilege to have him as a reviewer for this series; he has carefully and meticulously given suggestions on the content and the manner in which the lectures have been constructed, and therefore it is both out of gratitude and pride that I have invited him today to speak on the theme and to present some of his own experience and thoughts in this broad field of wavelets, multiresolution signal processing, image processing, filter banks and so on.

Professor Aditya Abhyankar obtained his degree of Doctor of Philosophy from Clarkson University, New York, USA in 2005. He has for quite some time been on the faculty of the University of Pune; in fact, currently he is a professor and head of the Department of Technology at the University of Pune. In particular, as I said, he has worked to a great extent on the themes of signal processing and pattern recognition, and I know for sure that he has made some very important contributions to the area of biometrics, specifically fingerprints. You will recall that at some stage we said a little about the importance that wavelets have had in fingerprint analysis, fingerprint pattern recognition and so on; Professor Abhyankar has important contributions in this field. In addition, he has been leading several initiatives in the University of Pune, and, as I said earlier, a very important role that he has played in this series is to review the lectures and give us very valuable inputs on them. Therefore, without taking too much time in this session, I would like straight away to put before you Professor Aditya Abhyankar, who will now talk to you on some themes related to wavelets. Thank you.

Hello, and welcome to this session on the topic of wavelets and, in the broader perspective, the topic of joint time-frequency analysis. It is my proud privilege that I was called upon to review the beautiful video lectures given by Dr. Gadre, who is a professor in the Electrical Engineering department at IIT Bombay. And there is a reason I am saying this. We have quite a few books written on wavelets; in fact, today wavelets has become a buzzword. However, almost all the material written on wavelets has been written by mathematicians, and predominantly it has been written for mathematicians. For engineers to understand the concepts behind these mathematical formulas, someone needed to simplify them and bring out the physical significance associated with them. It was a pleasant experience going through the beautiful video lectures recorded by Dr. Gadre, because he has simplified so many beautiful concepts associated with the wavelet transform.
It is also my privilege that he has given me an opportunity to record a few of my thoughts on this beautiful subject of wavelets in this very first lecture. We have titled this lecture "Zoom in and Zoom out using the Wavelet Transform". Let's begin.

Nowadays the wavelet transform has become a buzzword. I honestly believe that if the last 100 years were the 100 years of the Fourier transform, the coming 100 years are going to be the 100 years of the wavelet transform. That is because the wavelet transform has so many beautiful facets, so many beautiful characteristics associated with it, and in this session we try to bring out one such beautiful facet: the zoom in and zoom out feature of the wavelet transform. If we compare the wavelet transform with the conventional methods of representing signals or functions, it is a relatively new field, and the questions we might pose about it will be answered by a beautiful property associated with the wavelet transform, known to us as multiresolution analysis, or, very popularly, MRA.

Let us pose a more fundamental question first: why transform at all? Not all operations are called transforms. For example, in image processing there are certain operations like histogram equalization, and we do not name that operation the "histogram equalization transform". Only a small class of operations is termed transforms. And why take the pain of transforming information from one domain to another at all? There are multiple reasons; however, the strongest reason is pure convenience. For the analyzer, it becomes extremely convenient to understand the representation of the information in one domain rather than another. We have already seen one example, the example of music. We do not understand music as a few time-domain signals with varying voltage; we understand music as a sequence of frequencies. So it makes sense to transform the signal into the frequency domain, where the analyzer will be more comfortable dealing with those frequencies. Ultimately, we want to design those beautiful filters, and it is more convenient to design filters in the frequency domain than in the time or spatial domain. So the main theme behind most of these transformations is purely the convenience of the analyzer.

The wavelet transform is, in a way, strikingly different from most of the conventional transforms. If we go through the conventional transforms we have studied, the likes of the Fourier transform, the Laplace transform, the Z transform, and probably the first transform to which we are formally introduced, the logarithmic transform, then for all of them the basis function comes out of one beautiful constant, the constant e. In a way, all these transforms share a common thread: all the basis functions, the kernel functions, are of the form e raised to some variable, whether that variable is a frequency or anything else. The wavelet transform, however, differs from all of these, and we have to go a little beyond their purview to really make sense of the wavelet framework.

So, why do this at all? We already know how to analyze linear time-invariant systems. As this slide shows, traditionally we have two methods, based on convolution or based on difference equations.
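Before moving on, here is a quick numerical aside illustrating that convenience argument from the music example. This is a minimal sketch, not from the original lecture; the two tone frequencies are hypothetical choices of my own.

```matlab
% Two mixed tones: hard to tell apart in the time domain,
% but they separate immediately in the frequency domain.
fs = 8000;  t = (0:fs-1)/fs;                   % 1 second sampled at 8 kHz
x  = sin(2*pi*440*t) + 0.5*sin(2*pi*1000*t);   % hypothetical "music": 440 Hz + 1 kHz
X  = abs(fft(x)) / length(x);                  % magnitude spectrum
f  = (0:length(x)-1) * fs / length(x);         % frequency axis in Hz
plot(f(1:fs/2), X(1:fs/2));                    % two clean peaks, at 440 and 1000 Hz
xlabel('frequency (Hz)');
```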
Typically, convolution comes out of the fact that, given any signal x of n, I can decompose that signal into a sequence of shifted impulses. And once I do that, I can very well write x of n as a scaled summation of shifted impulses. Remember, we can do this because we are talking about linear and shift-invariant systems. Because my system is linear, I can go on accumulating the information, I can add everything up, since the system follows the superposition principle, that is, the additivity and homogeneity properties. And because my system is time invariant, or shift invariant, I can go on shifting my impulses. Once I do this, what we say is: if I understand how my system reacts to these shifted impulses, I can characterize my system completely. This is what is known to us as the impulse response, and let us call this impulse response h of n.

Now, if I excite my linear shift-invariant system with an exponential, what kind of output does the system produce? That was the question posed by Dr. Fourier. If we call this output y of n, then we realize we are in a very interesting situation. We have a linear shift-invariant system, we have characterized that system by virtue of its impulse response, and we are stimulating it with an exponential of known frequency, e to the power j omega 0 n, where omega 0 is the known frequency. This sets up one very interesting eigenvalue and eigenfunction pair for linear shift-invariant systems. That is purely because, if I split the term e to the power j omega 0, n minus k, then the summation happens over the variable k, and I can take e to the power j omega 0 n outside the summation. And this is very interesting: the system was excited with e to the power j omega 0 n, and what comes out of the system is again the same exponential, with the same frequency, scaled by something. The exponential e to the power j omega 0 n is known to us as an eigenfunction, and the scale factor it produces along with it is known to us as an eigenvalue. And this eigenvalue, in the broader sense, is nothing but the Fourier transform.
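This eigenfunction property is easy to check numerically. Below is a minimal sketch, with an arbitrary, hypothetical impulse response h of my own choosing: exciting the system with e to the power j omega 0 n returns the same exponential, scaled by the eigenvalue, which is the DTFT of h evaluated at omega 0.

```matlab
% Complex exponentials are eigenfunctions of LTI systems:
% in steady state, output = H(e^{j w0}) * input.
h  = [0.2 0.5 0.3];                        % hypothetical FIR impulse response
w0 = pi/5;  n = 0:49;                      % known excitation frequency
x  = exp(1j*w0*n);                         % eigenfunction e^{j w0 n}
y  = conv(x, h);  y = y(1:numel(n));       % system output via convolution
H  = sum(h .* exp(-1j*w0*(0:numel(h)-1))); % eigenvalue: DTFT of h at w0
max(abs(y(numel(h):end) - H*x(numel(h):end)))  % ~0 beyond the initial transient
```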
What is really interesting is to observe the common thread in all these transforms. As we said before, for all the different transforms, the Fourier transform, the Laplace transform and the Z transform, the kernel function remains almost the same. In the case of the Z transform, the basis function is z to the power minus n, where, on the unit circle, z is again equal to e to the power j omega; the same holds true for the Laplace transform. So we are talking about essentially similar-looking kernel or basis functions, and where exactly this constant e comes from is an interesting thing to notice. It is a combination of the efforts put in by three great genius scientists.

There is a vague story about how Dr. Bernoulli discovered this constant e. For that matter, most inventions and discoveries are apparent accidents: they appear as accidents, but tremendous, dedicated effort by the respective scientists or researchers goes in first, and only then is the elevated mind able to grasp and capture that particular idea. Something similar happened in this case too. The story has no authentic source; however, it is very interesting. Dr. Bernoulli was trying to help out a banker friend whose business was not picking up, and he gave him a beautiful solution based on the formula for compound interest, and we know how that formula looks. If you invest 1 rupee as the principal amount, then the compound interest formula gives (1 + 1/n) raised to the power n, and Dr. Bernoulli simply brought in the limiting behaviour by taking the limit as n tends to infinity. We can very well evaluate this limit; let us very quickly do a small exercise in MATLAB. For n going from 1 up to some large number (we cannot go all the way to infinity), let us evaluate (1 + 1/n)^n, and we will see that after enough iterations it saturates to a value of 2.7183. This is the value known to us as the constant e. It is a simple compound interest formula, and Dr. Bernoulli was able to help out his banker friend, but in the meanwhile he also discovered this beautiful constant e.
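The small MATLAB exercise described above might look like the sketch below; the particular values of n sampled here are my own choice.

```matlab
% Bernoulli's compound-interest limit: (1 + 1/n)^n approaches e as n grows.
for n = [1 10 100 1000 1e4 1e5 1e6]
    fprintf('n = %9d   (1 + 1/n)^n = %.4f\n', n, (1 + 1/n)^n);
end
% The printed values saturate near 2.7183, the constant e.
```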
That was not enough, and then comes the second genius in this story. As can be seen from the slides, his name is Dr. Euler, and Dr. Euler gave a different meaning altogether to this constant e: he gave us the beautiful identity known to us as Euler's identity, which goes like this. He told us that e to the power i pi is equal to minus 1. For almost the last 300 years or so, mathematicians have celebrated this identity, which ties together e, i, pi and minus 1 in a single statement, as perhaps the most beautiful equation in mathematics. So, once again, an elevated mind was able to capture the essence of that particular idea. What makes this identity so useful is its extended version, Euler's formula: e to the power i theta can be decomposed into cos theta plus i sin theta, and we know that cos theta and sin theta together form an orthogonal system. The beauty associated with the exponential curve is that you can take any point on it and draw a tangent, and, writing the tangent as the straight line y = mx + c, the slope m equals the value of the function at that point. And Euler's formula tells us that the entity e to the power i theta can be resolved along two axes which are orthogonal in nature, cos and sin, and it is this beautiful decomposition that underlies the orthogonality of all these transforms.

That was also not enough, and then came the third genius scientist in the story; his name, of course, is Dr. Fourier, as can be seen from the slide, and Dr. Fourier told us how to analyze periodic as well as aperiodic functions or signals. The legacy of transforms that we have with us is the contribution of these three great genius scientists. We can summarize, as can be seen from the slide: we have this constant e, which creates an eigenvalue and eigenfunction system, and the eigenvalue is nothing but the Fourier transform, which can be implemented using convolution, the operation at the heart of any DSP processor. This convolution is possible only if we have a linear time-invariant system, and we are then talking about band-limited aperiodic signals, a specific class of signals. They should obey the sampling theorem, so that there is no aliasing in reconstruction, which guarantees a sparse representation, which in a way guarantees the inverse Fourier transform, which can then once again be implemented using convolution. Phase changes are marked as directional changes, once again a property of being an eigenfunction of the system that produces an eigenvalue, and that takes us back to this constant e.

So, at the heart of all these transforms we have this beautiful constant e, and we have this beautiful legacy of transforms: for periodic signals we have series representations, for aperiodic signals or functions we have transforms. If we already have series representations and transforms, then comes one important question: why the wavelet transform at all? Let us go back to the slides, and we will realize that one beautiful feature of the wavelet transform is that it decomposes a signal into two separate series: a single-indexed series to represent the coarsest version, which leads us to the scaling function, also very popularly called the father function, and a double-indexed series to represent the details, the refined version, which leads us to the wavelet function, popularly known as the mother wavelet. The father and the mother then, in a way, give us the whole wavelet family. However, there are two fundamental questions still to be answered. First, aren't the conventional methods of representing signals or functions good enough? Second, what is strikingly special about the wavelet representation?

Let us take these questions one at a time, and let us very quickly revisit the conventional methods we have with us. Probably the most basic way of representing signals or functions is the Taylor series expansion, which we learn at the beginning of our calculus course; this kind of signal representation has been known for a long time. One particular example, the Taylor series expansion of e to the power x about x0 = 0, gives infinitely many coefficients for the series: 1 + x + x^2/2! + ..., and it goes on. Every single coefficient can be looked upon as a decomposed piece, and we can make use of these decomposed pieces to reconstruct the original signal or function. If we make use of only a finite number of coefficients, say the first three terms of the series, 1, x and x^2/2, and try to reconstruct the original function using only these, then the result is shown in the diagram on the left-hand side: the dotted curve represents the reconstruction using only the first three coefficients, and the line in blue is the original function, e to the power x. Please ignore the discontinuities here; they appear only because the plotting resolution used is low, and with a higher resolution we could take care of that. Now, as against this, if instead of just the first three coefficients I use the first 12, then you can clearly see the representation comes very close to the actual function.
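The comparison on the slide is easy to reproduce. Here is a minimal sketch; the plotting range is my own choice.

```matlab
% Partial sums of the Taylor series of e^x about x0 = 0.
x  = linspace(-1, 3, 400);
p3 = zeros(size(x));  p12 = zeros(size(x));
for k = 0:2,   p3  = p3  + x.^k / factorial(k);  end  % first 3 terms: 1 + x + x^2/2
for k = 0:11,  p12 = p12 + x.^k / factorial(k);  end  % first 12 terms
plot(x, exp(x), x, p3, '--', x, p12, ':');
legend('e^x', '3 terms', '12 terms');  % the 12-term sum hugs e^x over this range
```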
There then comes the question of how the terms of the series expansion cooperate, and what we say is that in the Taylor series this cooperation to build a better representation is rigid. Why rigid? Because we have no freedom except to add a large number of terms: I cannot play around with an individual term, I am restricted by the scale and the position of every single term, and I have no freedom to change them. In contrast, in wavelet analysis a beautiful combination of the scaling function and its associated wavelet function makes the entire representation very flexible. I have flexibility in terms of selecting the scale and selecting the translation parameter, and then I can also bring in the dilation, by virtue of which I can create the nested subsets, and this is going to lead us to MRA, multiresolution analysis.

In wavelet analysis the scale, 1 upon 2 to the power j, is up to the analyzer, depending on the degree of refinement we require in the representation. One example: if we want to locate a spike in the signal, we can think of using a very high value of j, and then we can bring in the translation parameter, tau of j comma k, which is equal to k upon 2 to the power j, to focus on that specific part of the signal. This combination of scale and translation parameter is so beautiful that I can go and look at any particular part of the signal, change and alter the scale, and zoom in or zoom out; by making use of this beautiful combination of scaling and translation parameters, what we get is the zoom in and zoom out facility of the wavelet transform.

As can be seen from the slides, someone might argue that the Fourier series is a noteworthy advancement over the Taylor series, in that the elements of the Taylor series do not, in general, form an orthogonal set, whereas the Fourier set {1, cos(nx), sin(nx)}, with n ranging from 1 to infinity, is always orthogonal on the interval from minus pi to plus pi. This is absolutely true; however, the fundamental difficulty remains the same: I cannot change the scale and translation parameters associated with the basis or kernel functions when representing the information. Still, we can derive some useful insight from the Fourier series, and that is why they always say the wavelet transform, in a way, stands on the shoulders of the Fourier series and the Fourier transform. From the slides we can clearly see that a very special relationship exists between the sine and cosine parts of the Fourier series; a similar special relationship also exists between the scaling function and the wavelet function in the wavelet series. The relationship is quite simple but very interesting, and interested viewers can dig further into it.

So, in a way we have answered the first question, and in a way we have also answered the second; but we will take the second question further, in order to really bring out what is strikingly special about the wavelet representation. The scaling and translation parameters are indeed the hallmarks of the wavelet transform, and when we add the dilation parameter, in totality they lead us to the multiresolution analysis framework, popularly known as MRA. The central theme of MRA is as shown in the slide: we are talking about piecewise constant approximations on unit intervals. However, the wavelet transform is not just about finding piecewise constant approximations; otherwise there would be no difference between the wavelet transform and a simple process of quantization. Once we carry out the successive approximation, then comes the very interesting part, and that is filling in the details. The piecewise constant approximation is given by the scaling function phi of t, and the details are then added with help from the wavelet function psi of t. Filling in the details can then be called the zoom in feature; losing details can be called the zoom out feature.
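To see this concretely, here is a minimal sketch, with a hypothetical test signal of my own, of the piecewise constant approximation at level j, that is, the projection onto V j with analysis windows of width 1 upon 2 to the power j. Raising j zooms in; lowering j zooms out.

```matlab
% Piecewise constant approximation at level j: average the signal on each
% dyadic window [k/2^j, (k+1)/2^j). Raising j fills in more detail.
t  = (0:1023)/1024;                         % time axis on [0, 1)
f  = sin(2*pi*3*t) + 0.3*sin(2*pi*17*t);    % hypothetical test signal
j  = 4;                                     % 2^4 = 16 analysis windows
fa = zeros(size(f));
for k = 0:2^j - 1
    idx = floor(t * 2^j) == k;              % samples falling in window k
    fa(idx) = mean(f(idx));                 % constant value on that window
end
plot(t, f, t, fa);                          % try j = 6 to zoom in further
```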
Increasing the resolution leads us to zoom in, and decreasing the resolution leads us to zoom out. The very concept of zooming in and zooming out has become a well-known phenomenon these days, because we live in an era of digital cameras; in fact, most scanners and most sensors have gone digital. Using a digital camera we often capture digital images and then zoom onto a particular portion of the image to really understand what kind of activity is going on there. Correspondingly, we can also zoom out, losing details, if we want a generic feel for that particular picture. So we understand what zoom in and zoom out mean. However, when it comes to signal analysis, it is often a requirement that we have a framework by virtue of which we can zoom onto a specific part of the signal, so as to understand what is really going on there. Consider the case of an ECG signal: maybe we have a large recording of one hour, and in that large recording there are only a few samples showing some abnormality. As signal analyzers, we should be able to focus on just that part of the large recording we have with us. So this beautiful property of zooming in and zooming out is of great importance, of great significance.
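The ECG scenario can be sketched numerically as well. In this hypothetical example (the recording and the spike position are inventions of my own), one stage of Haar high-pass filtering and downsampling produces detail coefficients whose largest magnitude points straight at the abnormality, telling us where to zoom in.

```matlab
% Locating a transient via detail (wavelet) coefficients, Haar case.
N = 1024;  x = sin(2*pi*(0:N-1)/128);      % hypothetical smooth recording
x(600) = x(600) + 2;                       % hypothetical abnormality: a spike
g0 = [1 -1] / sqrt(2);                     % Haar analysis high-pass filter
d  = conv(x, g0);  d = d(2:2:end);         % filter, then downsample by 2
[~, m] = max(abs(d));                      % index of largest detail coefficient
fprintf('spike near sample %d\n', 2*m);    % reports near sample 600: zoom in there
```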
From the slide we will understand that the whole point of doing this exercise is the ability of the analyzer to go arbitrarily close to the original signal. In fact, in his book on this beautiful subject of wavelets, Stéphane Mallat uses one beautiful word: he says one should be able to go tantalizingly close to the original signal. Beautiful. How to achieve this is what we are going to uncover in the remainder of this particular lecture. So, in a way, the introductory part is over, and now we begin our journey towards understanding how we can achieve these zoom in and zoom out features. We have understood the requirement, we have understood why it is essential, and now it is time to get our hands dirty: to actually see the framework, and then to understand how we can carry out this zoom in and zoom out using the framework we are going to lay down.

From the slide, let us define a linear space, which we have named V 0, and this contains functions which are square integrable, that is, functions which obey the L2(R) norm, so to say. The piecewise constant approximation will be done on open intervals from n to n plus 1, where n is an integer. What is really interesting is the size of this interval. If we are in the linear space V 0, the size of this interval, which we can call the analysis window, is 2 to the power 0, that is, 1. From V 0 we can move on to V 1, and if we are in the linear subspace V 1, correspondingly the size of the interval is 2 to the power minus 1, that is, one half. We can continue this activity and generalize the notion: in a linear space V m, the size of the interval is 2 to the power minus m. That leads us to a very special relationship, called in general the nested subsets, and we have already seen in the lectures by Dr. Gadre that, looking at this ladder of nested subspaces, we can think of either moving up the ladder or moving down the ladder.

If we move up the ladder, the analysis window becomes smaller and smaller, we go on adding details, and in the limit we should be able to achieve the whole of L2(R). As against that, if we move down the ladder, the resolution gets coarser and coarser, eventually leading us to a trivial subspace where we have lost all the details. We can convey this mathematically using the formula on the slide; the phenomenon of moving up the ladder, with closure, as given by Dr. Ingrid Daubechies, is captured in that mathematical formula. What is really striking about wavelets is that any particular function, and its corresponding projection in any subspace, can be very neatly and nicely constructed using just one single function, psi of t, and we have already seen how to do that: using the hallmarks of the wavelet transform, that is, scaling, translation and dilation. We will talk about it more once we start putting down the framework. Well, all of this is true for psi of t, the wavelet function, with which we can very well span the W subspaces; but who will span the V subspaces? There has to be a function that does that task for us, and that function is phi of t, also known to us as the scaling function. Now, who gives us this scaling function? How can we go about finding the coefficients of the scaling function? That is an interesting question, and probably down the line in this series we are going to uncover this part as well. The whole point of having the scaling function, as can be seen from the slide, is to be able to span that particular subspace V m, and this in a way guarantees the generation of the ladder of subspaces. We keep saying this term again and again because this ladder of subspaces is eventually going to lead us to MRA, that is, multiresolution analysis.

Let us quickly run through the axioms of MRA, because the framework we are going to see in this particular lecture depends on these axioms. Once we understand how the ladder of subspaces is formed, the first axiom is obviously moving up the ladder, and the second axiom is obviously moving down the ladder. The third axiom guarantees the existence of a scaling function that helps us span all the V subspaces. Correspondingly, we have to ensure that phi of t generates an orthogonal set, and that leads us to axioms number 5 and 6, whose direct implications we are going to see in the framework ahead. Based on these axioms we have the MRA theorem, which tells us that, given these axioms, there exists a wavelet function psi, once again a square integrable function, and using this function I can span those W subspaces and bring out the details in the underlying signal or function. One typical way of implementing this MRA philosophy is using a two-band filter bank structure. If we have some input, let us say p of n, then I can very well go about the analysis by first passing it through the analysis low-pass filter followed by a downsampler, and through the analysis high-pass filter, shown as g 0 of z, followed by a downsampler.
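Here is a minimal sketch of that analysis stage for the Haar case. The input signal is hypothetical, and the filters are the standard Haar pair; g0 is named in the lecture for the high-pass, while h0 is an assumed name for the analysis low-pass.

```matlab
% One analysis stage of a two-band (Haar) filter bank.
p  = [4 6 10 12 8 6 5 5];             % hypothetical input p(n) in V_1
h0 = [1  1] / sqrt(2);                % analysis low-pass (scaling) filter
g0 = [1 -1] / sqrt(2);                % analysis high-pass (wavelet) filter
p0 = conv(p, h0);  p0 = p0(2:2:end)   % coarse part p0(n), lives in V_0
q0 = conv(p, g0);  q0 = q0(2:2:end)   % detail part q0(n), lives in W_0
```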
And if I want to reconstruct the original signal, then I can run the synthesis phase, where I have an upsampler followed by the synthesis low-pass filter, shown as h 1 of z, and the synthesis high-pass filter, which is g 1 of z; by virtue of the orthogonal summation we can very well reconstruct the original signal, as the analysis and synthesis filters are complements, or duals, of each other. With this introduction, let us now try to understand how to go about building the framework.

One serious drawback of this two-band filter bank is as follows. As can be seen from the slide, if we focus only on the analysis part, and say the input to our system is some signal p of n, a digital signal of finite duration belonging to some subspace V 1, then the moment you run this signal through the analysis low-pass filter followed by a downsampler by a factor of 2, and the analysis high-pass filter followed by a downsampler by a factor of 2, you in a way end up with signals p 0 of n, belonging to V 0, and q 0 of n, belonging to the subspace W 0. So let us really try to understand what exactly is happening in this process. Say you have some signal x of n, and you pass it through the two-band filter structure, each branch followed by a downsampler by a factor of 2. What we are saying is: if this signal x of n belongs to some subspace, say V of j, then by virtue of this arrangement we end up with x of j minus 1, n, which obviously belongs to V of j minus 1, and y of j minus 1, n, which belongs to W of j minus 1. If we look at the nested subsets we have already seen, and say this V of j is V of 0, meaning that is where we start, then by doing these operations we start moving in the leftward direction: starting in V of 0, I generate projections in V of minus 1 and W of minus 1. So, in a way, we start moving towards the left, we start moving down the ladder, and it is not always desirable to move in the downward direction. In fact, for many applications it is required to actually move up the ladder, and only by moving up the ladder can we go on adding details: from V of 0 we can move to V of 1, from V of 1 we can think of moving to V of 2, and so on, and then we can think of going tantalizingly close to the actual signal and its corresponding representation. This is one interesting journey, and we will have to build the whole framework in order to achieve it.

What we have seen in this particular lecture is how we can interpret the well-known two-band filter bank structure and understand it from the point of view of nested subsets, and how we typically end up only moving down the ladder, whereas for many applications it is required, it is desired, to actually move up the ladder. If we want to move up the ladder, we will have to design the framework, and that we are going to cover in the next class. However, we will write down one important mathematical formula, which is going to be at the heart of the framework we are going to build in the next class. By looking at this two-band Haar structure, we can very well write down that V of j is equal to V of j minus 1 orthogonally added with W of j minus 1, that is, V_j = V_(j-1) ⊕ W_(j-1). This is a formula of great importance, of great significance.
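This formula can be checked numerically for the Haar case: the coarse and detail parts computed in the analysis sketch above recombine exactly into the original signal. A minimal sketch follows; it performs the Haar synthesis by direct interleaving rather than drawing out the upsamplers and the synthesis filters h 1 of z and g 1 of z.

```matlab
% Haar synthesis: perfect reconstruction of p(n) from p0(n) and q0(n).
% Continues the analysis sketch above (p, p0, q0 already in the workspace).
pr = zeros(1, 2*numel(p0));
pr(1:2:end) = (p0 - q0) / sqrt(2);   % odd-indexed samples of the reconstruction
pr(2:2:end) = (p0 + q0) / sqrt(2);   % even-indexed samples
max(abs(pr - p))                     % ~0: V_j = V_(j-1) (+) W_(j-1) in action
```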
And the whole multiresolution analysis structure is, in a way, based on this formula. We will stop here, and we will continue building the framework by virtue of which we can think of moving up the ladder, adding the details, and really going tantalizingly close to the signal or function under analysis. Thank you.