Good morning. Let me introduce myself first. My name is Vikram M. Gadre and I am a professor in the department of electrical engineering at IIT Bombay. I am going to deliver a series of lectures on the subject of wavelets and multirate digital signal processing. In this first lecture or more appropriately exposition, I would like to bring before you the reason why we should be studying this subject or rather the reasons, the inspirations that have driven researchers and scientists to understand the concepts of wavelets and multirate digital signal processing, the broad themes that we are going to address in this course and finally, a broad outline or a structure of this course. So, before I begin a formal statement of structure, it would be appropriate to put before you the thrill or the fascination of wavelets that I have or the inspiration that leads me to look deeper at this subject, which I would also like to convey to you in a few words and inspire you to study this subject in depth. Typically, this subject follows a basic exposure to signals and systems theory, perhaps digital signal processing or discrete time signal processing and other basic courses in communication and signal processing. So, I would place this course at an advanced level in signal processing. I do not intend to convey from this that the concepts are difficult to understand. As you will see in some of the subsequent lectures, the concepts are actually very easy, very simple, perhaps even simpler than some of the concepts that we learn in a basic course. In a basic course, the whole subject is new to us. When we study signals and systems, the whole idea of abstraction is new. Abstraction of a signal, abstraction of a system, abstraction of a transform, abstraction of a domain. So, we assume that we have crossed that step. We have overcome that obstacle. We have agreed that abstraction is a good thing. In that sense, this course is simple.
We are moving one level higher, where we are moving a little closer to reality. In a basic course on signals and systems or in a basic course on discrete time signal processing, we tend to be idealistic. We tend to want to oversimplify. In this course, we will refrain from doing that, at least in total. What do I mean by this? In a basic course on signals and systems, we assume that signals last forever. For example, when we talk about the Fourier transform, we think of basis functions or functions on which the transform is based, which start from time t equal to minus infinity and go up to t equal to plus infinity, t denotes time. In this course, we shall recognize that this is a silly thing to do. No signal in the world, at least no signal that we can deal with realistically is going to last forever. It is going to have its lifetime. It is going to begin somewhere, end somewhere. After all, the analyzer, the person who analyzes is also going to begin somewhere and end somewhere. So, the first thing we understand is we must deal with finite domains. We understand the finite domain as far as the natural domain goes very well. What I mean by the natural domain is, suppose for example, we are talking about an audio signal. The natural domain is time. So, an audio signal begins somewhere in time, ends somewhere in time. We do not listen to a piece of music forever, but for that limited period of time in which we wish to listen to a piece of music, we would like to understand better the content of the music to which we listen. We also wish to be able to develop the ability in a signal processing setup to enhance what we want out of the music and to suppress what we do not. We also wish to be able to characterize a system which does so and all the while recognizing that we are not going to deal with the signal forever. We are going to deal with it over a limited range of the natural domain. So, here as I said in an audio signal, the natural domain is time. 
Let us take another example of a natural domain. Suppose I wish to deal with a picture. I wish to take a picture in which I have a face and naturally many facial features, the eyebrows, the forehead, the nose, the lips and other features that are associated with a typical face. When I wish to isolate a certain feature, I would imply being able to localize in the spatial domain. Now, this is an example of a two variable domain unlike the other example of audio where it was a one variable domain, only time. Here I have two spatial variables. Localization is the common thing. Let me spend a couple of minutes in explaining that a little better. Suppose I take a piece of audio. Let us assume that in that piece of audio a number of notes are sung. If I take recourse to the Indian music system of description, you could have a raga in which there are several different notes, the components of the raga. Suppose I wish to build a signal processing system that takes the rendition of this raga and identifies the notes that compose it, the different frequencies so to speak that come together and play to form that piece of audio. What do I need to do in such a circumstance from a signal processing perspective or from the point of view of analyzing signals? One thing is obvious. I need to segment in time, so I need to be able to say that in this part of the time axis I had this note prominently played, perhaps only that note was played. In a different part of the time axis, a different note was played. Now, when I say different parts of the time axis, they are not only separated but also might be different in length. So, one note could be longer, another note could be shorter. It is not just a question of which notes have been played but also for how long. Now, what exactly do we mean when we talk about notes?
If we take a signal processing perspective, I mean after an introduction to the fundamentals of signals and systems or even for that matter a basic course on discrete time signal processing, all of us are comfortable with the idea of a frequency domain. So, we accept that we can think of continuous signals or discrete sequences as having a collection of sine waves embedded inside them. If it is a continuous signal that we are talking about, we have continuous sine waves. If it is a discrete sequence that we are talking about, we have sampled sine waves, and the whole philosophy of the Fourier transform is that most reasonable signals that we deal with can be thought of as comprised of or composed of a collection of sine waves. In fact, in principle if the signal is not periodic, then we are talking about an infinity of sine waves with frequencies ranging all the way from 0 to infinity. If it is a periodic signal that we are talking about, then we have a discrete set of sine waves, possibly infinite, possibly finite, which we call the Fourier series representation. Anyway, what I am trying to emphasize here is that there is a different domain to which we go when we wish to analyze a signal better. So, in the language of signal processing, if I were to take that raga, that rendition of the audio piece, the elementary audio segment as understood in Indian music, and if I were to query what notes are being played, I am equivalently asking what is the frequency domain content of this audio piece. If I visualize a frequency axis, what points on that frequency axis are occupied, what are the locations where the transform is predominant, is prominent? Now, this is where the whole story starts and perhaps is the most fundamental inspiration for this course on wavelets and multirate digital signal processing. I will first talk about where wavelets come from. Well, Fourier transforms deal with waves, sine waves to be more precise.
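To make the idea of querying the frequency content of a clip concrete, here is a small illustrative sketch in Python (the "note", its 50 Hz frequency and the 400 Hz sampling rate are all made-up values, not anything from the lecture): a naive discrete Fourier transform locates the note as the peak of the magnitude spectrum.

```python
import cmath
import math

def dft_magnitudes(x):
    # Naive O(N^2) discrete Fourier transform; fine for short toy signals.
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
            for k in range(N)]

# A made-up "note": a 50 Hz sine sampled at 400 Hz for one second.
fs, f0, N = 400, 50, 400
x = [math.sin(2 * math.pi * f0 * n / fs) for n in range(N)]

mags = dft_magnitudes(x)
peak_bin = max(range(1, N // 2), key=lambda k: mags[k])
peak_hz = peak_bin * fs / N  # bin index converted back to hertz
print(peak_hz)  # → 50.0
```

A real system would use a fast Fourier transform; the naive sum appears here only because it is short and self-contained.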
We recognize the merits of sine waves. Sine waves have many nice properties. For one, they occur naturally in many different circumstances. For example, an electrical engineer recognizes the sine wave as naturally emerging from an electricity generation system when there is electromagnetic induction. If there is a perfectly circular rotating device in a magnetic field and if all is perfect in the generating system, we would be generating a perfect sine wave from the brushes. So, the sine wave is a good idealization to work with for an electrical engineer, but that is not the only point. Sine waves are, in some sense, the most analytic, the smoothest possible periodic functions. They also have the power of being able to express many other waveforms. That means, they form a good basis from which many other waveforms can be generated. They have many other nice mathematical properties. If I add two sine waves with the same frequency, possibly different amplitudes and phases, I would get back a sine wave of the same frequency of course, with a third amplitude and phase. If I differentiate a sine wave, I get back a sine wave of the same frequency, and naturally, if I make a combination of these operations, namely differentiation or even integration for that matter and linear combinations, and if I restrict myself to sine waves of a particular frequency, I remain within the domain of sine waves of that particular frequency. This is something beautiful about sine waves. It is not easy to find waveforms which obey this and, as I said before, sine waves form a good basis. So, they form good building blocks for being able to express a wide variety of signals. For all these reasons, the sine wave has been very popular in a first course on signals and systems, discrete time signal processing, communication and what have you.
But as I said, right in the beginning of this lecture, one of the reasons why we are not so happy with sine waves is that they need to last forever, begin from minus infinity and go all the way to plus infinity. Otherwise, if you truncate a sine wave, if it is one sided for example, and if you look at what happens to an electrical system when you apply a one sided sine wave (by a one sided sine wave, I mean a sine wave which is 0 up to some point in time and continues as a sine wave afterwards), the response is in general very different from what the response would be for a sine wave that started at minus infinity. There would be transients, which are not periodic, and then all these beautiful properties of sine waves and their responses go away. So, if I really wish to be able to apply the basic principles of signals and systems and discrete time processing that I have learnt in a basic course, I need something unrealistic. I need a sine wave that lasts forever. How can I be more realistic in my demands? By accepting that I cannot deal with waves, but that it is more appropriate to deal with wavelets. That is where the word wavelet comes from: small waves, waves that do not last forever, functions that are not predominant forever; they are significant in a certain range of time, perhaps only exist in a certain range of time, and insignificant outside. So, they have a certain support over which one might want to use them, one might want to consider them to exist and so on. A much more realistic assumption, and that really is what we call a wavelet: not a wave, but a wavelet. For example, you could, if you wish, think of truncating a sine wave to a rectangular region. That means, suppose we take a sine wave to last from 0 to 1 millisecond as an example.
It could be an example of a wavelet; later we will see this is not a very good example, but yes, in principle a wavelet, a wave that does not last forever, a simplistic explanation of what a wavelet means. But that is not the whole story; our whole objective was to talk about the other domain too. So, going back to the example of the audio signal, if I thought of the audio signal as comprising many sine waves that come together to form an audio piece, then I wish to be able to do something simultaneously in two domains and that is the key idea here. So, for example, to put it in plain language, I should be able to say well, there was this two second audio clip out of which there were five notes being played. Each note was played for a different interval of time; maybe the first note was played for 0.4 seconds, the second note was played for 0.7 seconds, the third note only for 0.2 seconds and so on. So, I need to be able to segment in time, but when I am talking about being able to identify notes, I am also talking about being able to segment in frequency and lo and behold, that is where the conflict arises. A very basic principle in nature says, if I wish to be able to segment in time and frequency simultaneously, I am going to run into trouble. Nature does not allow it beyond a point and that is something very fundamental. It pops up in many different manifestations in different subjects. In modern physics they call it uncertainty, the uncertainty of position and momentum. In signal processing we call it uncertainty, the uncertainty of the time and the frequency domain. So, to put it simply though not very accurately, the shorter you play a note, the more difficult it is to identify it, not very far from intuition. If you play a note for a long time and listen to it for a long time, you are likely to be able to identify it better.
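As a rough sketch of this idea (the 10 Hz frequency and the window width are arbitrary choices, and a smoothly decaying Gaussian window is used instead of the rectangular truncation just mentioned), one might write a toy wavelet as a windowed sine: significant near time zero, negligible outside.

```python
import math

def toy_wavelet(t, f0=10.0, sigma=0.05):
    # A sine wave of (made-up) frequency f0 hertz, damped by a Gaussian
    # window of width sigma seconds: a "small wave" that is significant
    # only within a few sigma of t = 0 and negligible outside.
    return math.exp(-(t * t) / (2.0 * sigma * sigma)) * math.sin(2.0 * math.pi * f0 * t)

near = abs(toy_wavelet(0.025))  # inside the support: clearly nonzero
far = abs(toy_wavelet(0.5))     # ten sigma away: essentially zero
```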
Common sense tells us that, but what common sense does not tell us is that you can never quite go down to identifying one particular frequency precisely. So, if I wish to be able to come down to a point on the time axis, then I need to spread all over the frequency axis, and if I wish to be able to come down to a point on the frequency axis, I need to spread all over the time axis. That is of course the strong version of this restriction, but there is a weaker and a little more subtle version and that is as follows. Even if I am not quite interested in coming down to a point on the time axis and am content with being in a certain region, as I said, in the first 0.4 seconds out of the 2 second clip I was playing note number 1, that means some frequency number 1, I would be able to say this at least to a certain degree of accuracy; that is what I am trying to point out here. What the principle of uncertainty tells us is that this can be done to a certain degree of accuracy; you can identify that note to a certain degree of accuracy. Well, what uncertainty also tells us in a more subtle form is that even if I choose to relax to a certain region of time, so I say well, in this region of time tell me the region of frequencies which were predominant, even then there is a restriction on the simultaneous length or measure of the time and frequency regions. And of course, they have a tussle with one another. The smaller I make that time region, the larger that frequency region becomes. That means the more I want to focus in time, the less I am able to do so in frequency. This is indeed something that arouses a lot of thought. It may seem something far from our interest at first glance, but when we look at it carefully, we realize it is something very fundamental to what we often desire, and that is what I am now going to explain to you with a couple of more examples. We live in an age where we use mobile telephones, in fact, more fundamentally, digital communication.
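This tussle can be observed numerically. The sketch below (with made-up pulse widths and a naive DFT) measures the root-mean-square spread of a sampled Gaussian pulse in time, and of its magnitude spectrum in frequency; making the pulse narrower in time visibly widens it in frequency.

```python
import cmath
import math

def rms_spread(weights, positions):
    # Root-mean-square spread of a non-negative weight profile.
    total = sum(weights)
    mean = sum(w * p for w, p in zip(weights, positions)) / total
    var = sum(w * (p - mean) ** 2 for w, p in zip(weights, positions)) / total
    return math.sqrt(var)

def time_freq_spreads(x):
    # Energy spread in time, and in frequency via a naive DFT
    # (the spectrum is rotated so it is centred about zero frequency).
    N = len(x)
    X = [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
         for k in range(N)]
    t_spread = rms_spread([v * v for v in x], range(N))
    Xc = X[N // 2:] + X[:N // 2]
    f_spread = rms_spread([v * v for v in Xc], range(-(N // 2), N - N // 2))
    return t_spread, f_spread

N = 128
def gaussian_pulse(sigma):
    return [math.exp(-((n - N / 2) ** 2) / (2 * sigma ** 2)) for n in range(N)]

t_wide, f_wide = time_freq_spreads(gaussian_pulse(16.0))
t_narrow, f_narrow = time_freq_spreads(gaussian_pulse(8.0))
# The pulse that is narrower in time is the wider one in frequency.
```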
What are we asking for in digital communication when we look at it from a signal or system perspective or transform domain perspective or time and frequency perspective? Going right down to brass tacks, what we are asking for in digital communication is that I should be able to transmit a sequence of bits, binary values 0 or 1, and how do I transmit that sequence of binary values? I choose maybe one of two possible waveforms in the simplest scheme: corresponding to the bit 0, I have one waveform; corresponding to the bit 1, I have a different waveform. To make life simple, the two waveforms occupy the same time interval. So, for example, all of us here talk about computer networks and about the speed of the network. So, they say well, this network can operate at a speed of 1 megabit per second. What does that mean? It means in 1 second I can transmit 10 to the power of 6 bits. So, you have 1 microsecond to allow each bit to be transmitted. Give it a thought; here we are talking about time. What are we saying about frequency? Now, let us go to the mobile communication context. I have so many different mobile operators. Obviously, each operator will want its own privacy. So, what is being communicated on the network of operator 1 should not interfere with what is being communicated on the network of operator 2. Now, where is this separation going to occur? Not in time. After all, there are many different people simultaneously using mobiles bought from both of the operators. So, the separation cannot be in time. We may argue the separation can be in space. So, in one region of space you may have mobiles from one operator and in another region, mobiles from the other operator. So far so good. But that is also not always true. It is very common to see mobiles purchased from different operators operating in the same room. So, there is no separation in time, no separation in space. So, where is the separation then?
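The bit-to-waveform idea can be sketched minimally as follows (rectangular plus-one and minus-one pulses and 8 samples per bit are arbitrary illustrative choices; real systems use carefully shaped waveforms):

```python
def bits_to_waveform(bits, samples_per_bit=8):
    # Bit 1 -> a +1 pulse, bit 0 -> a -1 pulse, each occupying
    # the same fixed time slot of samples_per_bit samples.
    wave = []
    for b in bits:
        wave.extend([1.0 if b else -1.0] * samples_per_bit)
    return wave

def waveform_to_bits(wave, samples_per_bit=8):
    # Receiver: sum each slot and threshold at zero.
    n = len(wave) // samples_per_bit
    return [1 if sum(wave[i * samples_per_bit:(i + 1) * samples_per_bit]) > 0 else 0
            for i in range(n)]

sent = [1, 0, 1, 1, 0]
received = waveform_to_bits(bits_to_waveform(sent))
```

At 1 megabit per second, each such slot would last 1 microsecond.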
The separation has to be in a domain which is not so easy to see, but once we have done a course on signals and systems, reasonably easy to understand, and that domain is frequency. So, we say well, operator 1 has this bandwidth allocated to him, operator 2 has the other bandwidth allocated to him. When we say this, we mean a certain region of the frequency axis of size, let us say, 2 megahertz. When we say this region of 2 megahertz is allocated to operator 1 and another region of 2 megahertz is allocated to operator 2, are we not talking about segmentation in a different domain? In fact, there we are talking about simultaneous segmentation. We have a segmentation in time because you want to transmit different bits in different time segments and you want to have a separation in frequency because what is transmitted by operator 1 should not interfere with what is transmitted by operator 2. So, here is a very common though not so obvious example of simultaneous desire for localization in time and frequency. Other than the audio example which is of course a little more obvious, a little easier to understand, this example is equally common at least in the scenario today, but perhaps not so easy to understand; a little reflection, however, makes it very clear to us. There is a desire to localize in two domains simultaneously. Let us go to a biomedical example. Very often when one analyzes an electrocardiographic waveform, what one wishes to identify are features in the ECG signal. Now, I do not intend to go into the medical details, but there are different segments in a typical ECG signal. They are often indexed by letters P, Q and so on. Without needing to focus on specific details of an ECG signal, let us try and understand the connection to time and frequency localization. When we talk about an ECG signal, all features are not of the same length in time. Some features are kind of shorter, some features are longer.
In fact, to go away from an ECG signal, biomedical engineers often talk about what are called evoked potentials. So, we provide a stimulus to a biomedical system or to a biophysical system and we evoke a response and the waveform corresponding to that response is called the evoked potential. It can be studied as an electrical signal. Now, the evoked potential again typically has quicker parts in the response and slower parts in the response. Naturally, we expect the slower parts of the response would be predominantly located, if you think of the frequency domain, in the lower ranges of frequency and the quicker parts of the evoked potential waveform would be located in the higher ranges of frequency. Now, here is again an example of time frequency conflict. Suppose I wish to be able to isolate the quicker parts. Is it alright simply to isolate the higher frequency content in a certain signal which comes from an evoked potential and suppress the low frequency part? Well, you see, if we try and suppress the low frequency part, then we have already suppressed the slower parts of the response and if we try and suppress the higher frequencies in a bid to emphasize the slower parts of the response, we have suppressed the quicker parts of the response. So, if we think conventionally in terms of the frequency domain, maybe high pass filtering or low pass filtering, nothing works for us. If we do high pass filtering, then we have effectively suppressed the slower parts of the response. If we do low pass filtering, we have suppressed the quicker parts of the response. So, we need a different paradigm or a different perspective on filtering. We need to identify in different parts of the time axis which regions of the frequency axis are predominant and therefore, in a certain sense, identify different parts of the frequency axis to be emphasized in different time ranges.
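A toy numerical version of this observation (with 4 Hz and 40 Hz components standing in, arbitrarily, for the slow and quick parts of a response): analyzing each time segment separately reveals a different dominant frequency in each part of the time axis.

```python
import cmath
import math

def dominant_bin(segment):
    # Magnitude of a naive DFT, searched over bins 1 .. N/2 - 1.
    # For a one-second, N-point segment, bin k corresponds to k hertz.
    N = len(segment)
    mags = [abs(sum(segment[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(1, N // 2)]
    return 1 + max(range(len(mags)), key=lambda i: mags[i])

fs = 256  # made-up sampling rate; each analysed segment is one second long
slow_part = [math.sin(2 * math.pi * 4 * n / fs) for n in range(fs)]    # 4 Hz
quick_part = [math.sin(2 * math.pi * 40 * n / fs) for n in range(fs)]  # 40 Hz
x = slow_part + quick_part  # slow first, then quick

first = dominant_bin(x[:fs])   # dominant frequency in the first second
second = dominant_bin(x[fs:])  # dominant frequency in the second second
```

A single DFT over the whole signal would show both frequencies but say nothing about when each occurred; segmenting first is the crude beginning of time-frequency analysis.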
This is another perspective again on the time frequency conflict and all this is going to lead us in the direction of building up this course on wavelets. We shall of course understand some of these concepts a little better as we progress in the lectures. But for the time being, I have given you these three examples with the intent of bringing before you, perhaps not completely, but at least in a way to inspire your imagination, the whole idea of time frequency conflict or more generally the conflict between two domains, domains of analysis and representation of a signal and of course, then going further, even of a system. In a first course, we understand the domains very well. We understand the time domain. We understand the frequency domain. We do well because we keep them apart. It makes life easier. But what we are trying to bring out through these three examples (the audio example, the digital communication example and the biomedical waveform example, whether it is electrocardiographic waveforms or the evoked potential waveform) is that one normally needs to consider the two domains, time and frequency, together. And when we try and do so, there is a certain very fundamental conflict that we have to deal with. That conflict, called uncertainty, appears as I said in different manifestations in different subjects. And we are going to look at that principle, the uncertainty principle as it applies to signal processing, in great depth at a certain stage in this course. But before that, we are going to consider one particular tool for analyzing signals, analyzing situations with the recognition that we need to be local and not global. So, we are going to build up the whole idea of wavelets by starting from what is called the Haar multi resolution analysis. Haar incidentally is the name of a mathematician. Call him a mathematician, call him a scientist, what you will.
But one of the beautiful things that this gentleman proposed was what is called a dual of the idea of Fourier analysis. What do we do in Fourier analysis? We allow even discontinuous waveforms, we allow non smooth waveforms, and we convert them into a sum or a linear combination of extremely smooth functions, namely the sine waves. Haar asked: can we do exactly the dual, can we in principle take smooth functions and convert them into a linear combination of effectively jagged or discontinuous functions? Why on earth would one like to do something like that? Again let us reflect for a minute. A few years ago this might have seemed a silly thing to do, but today it is not. What are you doing when you are doing digital communication? You are transmitting audio, you are transmitting pictures and you are doing all this actually with a large level of discontinuity. How does one record digital audio? One firstly samples. So, one takes values of the audio signal at different points in time, one digitizes them and one then records those digital values as a stream of bits. All of these are highly discontinuous operations. You are forcibly introducing discontinuity in time and on top of that you are introducing discontinuity in amplitude by quantization. So, wanting to represent the beautiful smooth audio in terms of very discontinuous bit streams is very, very beneficial to digital communication. And in fact, none of us complain when we have a good digital audio recording. Sometimes we even say that a digital audio recording is better than an analog recording as we had in the past. So, going from smooth to non-smooth has its place in modern communication and signal processing. And when Haar proposed that one should look at the whole philosophy and the whole principle of being able to go from smooth to non-smooth, perhaps he was looking into the future when this would be absolutely essential.
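The two forcible discontinuities, sampling in time and quantization in amplitude, can be sketched in a few lines (8 quantizer levels and 16 samples per cycle are arbitrary illustrative choices):

```python
import math

def quantize(v, levels=8, lo=-1.0, hi=1.0):
    # Uniform quantizer: snaps a continuous amplitude onto one of
    # `levels` discrete values, a forcible discontinuity in amplitude.
    step = (hi - lo) / levels
    idx = min(levels - 1, max(0, int((v - lo) / step)))
    return lo + (idx + 0.5) * step

# "Record" one cycle of a smooth sine: sample it (discontinuity in time)
# and quantize each sample (discontinuity in amplitude).
fs = 16
samples = [math.sin(2 * math.pi * n / fs) for n in range(fs)]
digital = [quantize(v) for v in samples]
# Every recorded value is one of only 8 possible levels.
```

Each quantized level could then be written out as a short binary code, giving the discontinuous bit stream described above.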
What we are going to do in the very first few lectures immediately following this is to look at one whole angle of wavelets and multirate digital signal processing based on the principles that Haar propounded. So, we are going to look at what is called the Haar multi-resolution analysis. And in fact, if we understand the Haar multi-resolution analysis in depth, we actually end up understanding many principles of wavelets and many of the essentials of multirate processing, specifically what is called two band processing, very well. So, we shall draw upon the Haar multi-resolution analysis to understand some of the basic concepts that underlie this course and of course, build upon them further later on. From the Haar we shall progress to slightly better multi-resolution analyses; better in what sense, we will understand. And there are many different families of such better multi-resolution analyses, one of them being what is called the Daubechies family. Daubechies is again the name of a mathematician and scientist who proposed that family of multi-resolution analyses. As I said, at a certain point in the course immediately following this we shall then look at the uncertainty principle fundamentally and in terms of its implications. From there we shall move to the continuous wavelet transform. So, in the Haar multi-resolution analysis we have a certain discretization in the variables associated with the wavelet transform. Later on we shall go to what is called the continuous wavelet transform, where the independent variables that are associated with the wavelet transform all become continuous. Following that we shall look at some of the generalizations of the ideas that we build up earlier in the course and towards the last phase of the course we shall look in depth at some of the important applications to which wavelets and multirate digital signal processing provide great advantages.
Now I would like to spend a little while in this lecture on building up in parallel some of the developments that took place to introduce the subject of multirate digital signal processing. What is multirate? What rate are we talking about here and why do we need to talk about multirate? Why is it connected with wavelets? Let us go back to the audio example, or maybe let us first go to the biomedical example. In the biomedical example we said we would have quicker parts in the response and slower parts in the response. The slower parts of the response are likely to last for a longer region in time. The quicker parts of the response are likely to last for smaller regions in time. So, here, other than the concept of being able to localize on a certain region of time and of course correspondingly on frequency, there is also the distinction between what kind of localization is required for higher frequencies and lower frequencies. If we spend a little bit of thought and time in understanding these two kinds of components, we will realize that most of the time when we are talking about slower parts of the response or lower frequencies, we are talking about compromising on what is called time resolution. So, I bring in the idea of resolution here. Resolution means the ability to resolve, the ability to identify specific components. So, for example, frequency resolution relates to being able to identify specific frequency components, and going further and being a little more pinpointed, when I am talking about frequency resolution what I am saying in effect is: suppose I have two sine waves whose frequencies start coming closer and closer together. Over what region of time do I need to observe them so that I can actually identify the two frequencies separately? How can I resolve the two frequencies? How much can I narrow down on the frequency axis? Now, what we are talking about is not so much how much we can narrow down, but how much we need to.
When we talk about higher frequency content or things that vary quickly, it is often though not always the case that we are willing there to compromise on frequency resolution, but we want time resolution. So, things that take place quickly and are transient, short lived, demand time resolution, and things that occupy the lower frequency ranges, which last for a long time, demand frequency resolution. So, very often it is true that when one goes down on the frequency axis one demands more frequency resolution, the ability to resolve frequencies more accurately, as opposed to time resolution, the ability to resolve in which time segment something occurs; and when one goes to higher frequencies one normally demands more time resolution and lesser frequency resolution. One is asking for how closely one can identify two segments or two parts of the waveform which vary quickly; that means one is trying to narrow down on the time axis, and in doing so one must then compromise on the frequency axis. So, this is what brings us to the idea of multirate processing. You see, it means that when I talk about bands of higher frequencies I must use smaller sampling intervals in a discrete time processing system, and when I am talking about lower frequency ranges I must use larger sampling intervals. Why must I do so? To be most efficient in the processing operation when I am talking about lower frequencies. So, in an evoked potential waveform, if I am trying to look at the slower components I should not unnecessarily sample too frequently; it only increases my data burden and does not offer me anything special. On the other hand, when I am analyzing the quicker components it is inadequate to use a low sampling rate. I would be doing injustice to the components. For those of us who might be exposed to the concepts of sampling and aliasing: if I am not faithful in my sampling rate for the quicker components, I would introduce aliases, I would introduce spurious effects which I do not want.
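The sampling-rate argument can be illustrated with a toy example (all the rates below are made up): keeping every fourth sample is harmless for a slow component, but folds a quick component down to a spurious low frequency, an alias.

```python
import math

def downsample(x, M):
    # Keep every M-th sample. Legitimate when the signal has no content
    # above (original rate) / (2 * M); otherwise higher frequencies alias.
    return x[::M]

fs = 1000  # made-up original sampling rate, in hertz
slow = [math.cos(2 * math.pi * 10 * n / fs) for n in range(fs)]   # 10 Hz
fast = [math.cos(2 * math.pi * 450 * n / fs) for n in range(fs)]  # 450 Hz

# Downsampling by 4 gives a 250 Hz rate (Nyquist 125 Hz).
y_slow = downsample(slow, 4)  # the 10 Hz component survives intact
y_fast = downsample(fast, 4)  # 450 Hz folds down to |450 - 2*250| = 50 Hz

# What y_fast actually looks like at the new rate: a spurious 50 Hz cosine.
alias_50hz = [math.cos(2 * math.pi * 50 * n / 250) for n in range(len(y_fast))]
worst_gap = max(abs(a - b) for a, b in zip(y_fast, alias_50hz))
```

In practice one would lowpass filter before downsampling; this sketch omits the filter precisely to expose the aliasing.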
So, all in all we recognize that it is not a good idea to be using the same sampling rate for all frequency components. So, unlike a basic course on discrete time signal processing where we assume all sequences are at the same sampling rate, here we need to deal with sequences that are effectively at different sampling rates in the same system. That means we also need to deal with systems that operate with different sampling rates, and that is why we talk about multirate discrete time signal processing. Now at a conceptual level we understand very well why there is a close relationship between multirate discrete time signal processing, the idea of uncertainty and the requirement of resolution; and if we go further, then when we do multirate discrete time signal processing we also bring in a new concept of filter banks versus filters. As I said, if we go back to the biomedical example, there is the need or the desire to separate components. So, when I wish to separate components, naturally I wish to have many different operators all at once. So, I need a system of filters which not only have certain individual characteristics, but which also have collective characteristics. So, I need to be able to analyze and then synthesize, and all this with localization included. This is what we mean by a filter bank. So, a bank of filters, as opposed to a single filter in discrete time signal processing, refers to a set of filters which either have a common input or a common point of output or summation output. This concept of a bank of filters, and in fact two banks of filters, an analysis filter bank and a synthesis filter bank, taken together is very central to multirate discrete time signal processing. We shall be looking at that concept in great depth. So, we shall be building up the idea of a two band filter bank in reasonably great depth in this course. The concept of a two band filter bank is of great importance in being able to construct wavelets.
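As a preview of where this is heading, here is a sketch of the simplest such pair of banks, the two band Haar analysis and synthesis filter banks (the input values are arbitrary illustrative numbers): a lowpass branch of pairwise averages, a highpass branch of pairwise differences, each downsampled by 2, and a synthesis bank that puts the signal back together exactly.

```python
import math

S = 1.0 / math.sqrt(2.0)

def haar_analysis(x):
    # Two-band analysis bank: lowpass branch (scaled pairwise sums) and
    # highpass branch (scaled pairwise differences), each downsampled by 2.
    low = [S * (x[2 * i] + x[2 * i + 1]) for i in range(len(x) // 2)]
    high = [S * (x[2 * i] - x[2 * i + 1]) for i in range(len(x) // 2)]
    return low, high

def haar_synthesis(low, high):
    # Synthesis bank: upsample and recombine the two bands.
    x = []
    for l, h in zip(low, high):
        x.append(S * (l + h))
        x.append(S * (l - h))
    return x

x = [4.0, 2.0, 5.0, 5.0, 1.0, -1.0, 0.0, 2.0]
low, high = haar_analysis(x)
rec = haar_synthesis(low, high)
# Perfect reconstruction: rec equals x up to floating point.
```

The sums and differences here are exactly the averaging and differencing operations that will reappear when the Haar multi resolution analysis is built up in the next lectures.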
In fact, we shall see even from the Haar multi resolution analysis example that there is an intimate relationship between the wavelet, or the Haar wavelet, and the two band Haar filter bank. So much so that if I construct a properly designed two band filter bank, I also construct a multi resolution analysis that goes with it, so to speak. All this is very exciting and what we intend to do in the lectures that follow from here is to take these concepts one by one. So, in the next few lectures we intend to talk about the Haar multi resolution analysis and to build up certain basic ideas from it. In the lectures that follow, as I said before, we intend to talk about the uncertainty principle and its implications. Following that we move from the discrete to the continuous wavelet transform and then generalize both the continuous and discrete wavelet transforms to a broader class of transforms. Finally, in the course we would talk about what is called the wave packet transform and some variants of the wavelet transform and variants of filter banks, and to end we shall look at several different applications of wavelets taken from different fields. With that then we come to the end of this first introductory lecture on the subject of wavelets and multirate digital signal processing and proceed therewith in the next lecture to talk about the Haar multi resolution analysis. Thank you.