The relative side lobe area for a given window shape reaches an asymptotic non-zero limit, and this is the origin of what is called the Gibbs phenomenon. We have already seen the Gibbs phenomenon in sinusoidal series approximations: when we decompose a periodic function into its Fourier series, the more Fourier series terms we retain in the expansion, the better the discontinuities are approximated. Recall the square wave. Take one cycle of a square wave; after all, what is an example of one cycle of a square wave? One period of the ideal DTFT. So take the ideal DTFT from minus pi to pi and look at the impulse response essentially as the Fourier series coefficients of this periodic function. That perspective is not difficult to see at all. Think for a minute: if this is H_ideal(omega), what is the impulse response h_ideal(n)? It is

h_ideal(n) = (1/(2 pi)) integral from -pi to pi of H_ideal(omega) e^(j omega n) d omega.

So effectively these are the Fourier series coefficients of the periodic DTFT in the variable omega; that is another perspective on the impulse response: the impulse response is like a Fourier series expansion of the periodic DTFT. All this while we thought of the DTFT as a consequence of the impulse response; now we are looking at it the other way, with the impulse response arising from the DTFT. And that is not unreasonable, because it is the ideal that we know in the frequency domain, and we want to find the impulse response coefficients that give us that ideal. That process of finding the ideal impulse response is akin to, in fact identical to, a Fourier series expansion.
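As a small numerical sketch of this Fourier-series view (assuming an ideal lowpass with an arbitrarily chosen cutoff of pi/4), the integral above can be evaluated numerically and compared with its closed form sin(wc*n)/(pi*n):

```python
import numpy as np

def h_ideal_numeric(n, wc=np.pi / 4, num=200001):
    # h[n] = (1/(2*pi)) * integral_{-pi..pi} H_ideal(w) * exp(j*w*n) dw.
    # For the ideal lowpass, H_ideal(w) = 1 only for |w| <= wc,
    # so only [-wc, wc] contributes to the integral.
    w = np.linspace(-wc, wc, num)
    f = np.exp(1j * w * n)
    dw = w[1] - w[0]
    integral = (f[0] / 2 + f[1:-1].sum() + f[-1] / 2) * dw  # trapezoidal rule
    return integral.real / (2 * np.pi)

def h_ideal_closed(n, wc=np.pi / 4):
    # The same Fourier-series coefficient in closed form.
    return wc / np.pi if n == 0 else np.sin(wc * n) / (np.pi * n)

for n in range(4):
    print(n, round(h_ideal_numeric(n), 6), round(h_ideal_closed(n), 6))
```

The two columns agree: the ideal impulse response really is the set of Fourier series coefficients of the periodic DTFT.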
And when we truncate this impulse response, we are effectively truncating the Fourier series of that periodic function, and that is what leads us to what is traditionally understood as the Gibbs phenomenon. What happens is this: the more Fourier series coefficients you retain, the more oscillations you get as you approach the discontinuity. The oscillations grow larger and larger near the discontinuity, with the largest ones right at it, and away from the discontinuity they settle down to a steady situation. The Gibbs phenomenon says that this overshoot does not reduce no matter how large you make the length; all that happens is that the oscillations get confined closer and closer to the discontinuity, but their height reaches an asymptotic limit. You cannot go below a certain value for the rectangular window. Of course, you could go below that value with a different window, but at the cost of the transition bandwidth.

So what I am trying to emphasize, let me say it very clearly, is that all windows have the tuning parameter given by the window length N. All windows can give you as small a transition band as you desire; what they cannot do, once you have chosen a window, is change the maximum deviation in the passband and stopband. The tolerance of the passband and the stopband cannot be influenced beyond a point: you can influence the transition band, but you cannot influence the tolerance. And, unfortunately or fortunately, whatever you want to call it, the passband and stopband tolerances are equal. Now the Kaiser window allows you one more tuning parameter, a shape parameter, because it is the shape that governs the maximum deviation, the relative side lobe area.
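A minimal sketch of the Gibbs phenomenon (a truncated Fourier series of a unit square wave; the term counts below are arbitrary) shows the overshoot refusing to shrink as more terms are kept:

```python
import numpy as np

def square_partial_sum(x, n_terms):
    # Truncated Fourier series of a unit square wave (levels -1 and +1):
    # f(x) ~ (4/pi) * sum_{k=0}^{n_terms-1} sin((2k+1) x) / (2k+1)
    k = np.arange(n_terms)
    return (4 / np.pi) * np.sum(np.sin(np.outer(x, 2 * k + 1)) / (2 * k + 1), axis=1)

x = np.linspace(1e-4, 0.4, 8000)     # dense grid just to the right of the jump at x = 0
for n_terms in (25, 100, 400):
    overshoot = square_partial_sum(x, n_terms).max() - 1.0
    # Stays near 0.18, i.e. about 9% of the full jump of 2, however many terms we keep;
    # only the location of the peak moves closer to the discontinuity.
    print(n_terms, round(overshoot, 4))
```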
So it helps you control the shape, and the shape parameter gives you a trade-off between the main lobe width and the relative side lobe area. The Kaiser window is described by

w[k] = I_0(beta N sqrt(1 - (k/N)^2)) / I_0(beta N), for -N <= k <= N, with beta >= 0.

It is a complicated expression, but let us not be bothered too much by that. Beta goes from 0 to infinity, and of course we have written the expression for I_0 before: it is the modified Bessel function of the first kind and order 0. It is very easy to see that beta equal to 0 corresponds to the rectangular window. Beta is what is called a shape parameter; as it varies, the window takes different shapes. Now of course you must not forget that this window is defined only between minus N and plus N. The modified Bessel function itself exists for all values of its argument, but the window is defined only between minus N and plus N; that is to be understood. And of course you can then draw the envelope of the window and resample it to get an even-length window.

So the Kaiser window is in some sense optimal in this game of compromise between the main lobe width and the side lobe area. It was studied in depth by Kaiser, and typically it is the Kaiser window that is used when one designs FIR filters with windows. In fact, in the assignment that all of you will do, to design an FIR filter using the window approach, you would use the Kaiser window. Now, how one chooses the shape parameter and the length is only by empirical experiments done by Kaiser and other researchers; there is no closed-form choice. What I shall do is put down certain guidelines, so to speak, for the choice of length: the empirical design equations for the Kaiser window.
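As a minimal sketch (using NumPy's np.i0 for the modified Bessel function, and writing the shape parameter as alpha = beta * N), the window can be written down directly; it agrees with NumPy's built-in np.kaiser, which is parameterized by that same alpha:

```python
import numpy as np

def kaiser_window(N, alpha):
    # w[k] = I0(alpha * sqrt(1 - (k/N)^2)) / I0(alpha), for k = -N..N,
    # where alpha = beta * N is the shape parameter.
    k = np.arange(-N, N + 1)
    return np.i0(alpha * np.sqrt(1 - (k / N) ** 2)) / np.i0(alpha)

print(kaiser_window(10, alpha=0.0))   # alpha = 0: reduces to the rectangular window
print(kaiser_window(10, alpha=5.0))   # tapered; the ends fall to 1/I0(5)
```

Sweeping alpha from 0 upward morphs the window continuously from rectangular to increasingly smooth tapers.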
We will take the case of the low-pass filter here, but a very similar argument can be used for other kinds of filters. The specification runs from 0 to pi, with passband edge omega_p and stopband edge omega_s; the passband response lies between 1 - delta and 1 + delta, and the stopband response between -delta and +delta. Notice that the passband and stopband tolerances are the same; we have no choice on that front.

So the empirical design equations are as follows. Define Delta omega_t, the transition bandwidth, to be omega_s - omega_p. Define A to be the decibel value of the tolerance, A = -20 log10(delta). We define alpha to be beta times N, and therefore the first step becomes the choice of N. N is chosen so that 2N + 1, essentially the window length (I am talking about an odd-length window here; if you want an even length, just choose one more or one less, after all these are empirical expressions), satisfies

2N + 1 >= 1 + (A - 8) / (2.285 Delta omega_t).

These numbers have been arrived at by empirical considerations. This is the guideline for the choice of N; for more about these steps one could look up any standard text on DSP, for example the text by Oppenheim and Schafer. Of course, we also need to determine the shape parameter, which we could call either alpha or beta, with alpha of course equal to beta times N. So we will write an expression for alpha, and you could correspondingly write an expression for beta. Alpha is equal to 0 for A less than 21.
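A small sketch of the length step, assuming the standard Kaiser numerator (A - 8) in the rule above; the tolerance and band edges in the example are assumed values, not from the lecture:

```python
import math

def kaiser_length(delta, w_p, w_s):
    # A = -20*log10(delta): tolerance in dB; dw = w_s - w_p: transition bandwidth.
    A = -20 * math.log10(delta)
    dw = w_s - w_p
    # Empirical rule: 2N + 1 >= 1 + (A - 8) / (2.285 * dw), so N >= that quantity / 2.
    N = math.ceil((A - 8) / (2.285 * dw) / 2)
    return N, 2 * N + 1

# Example: delta = 0.01 (A = 40 dB), w_p = 0.4*pi, w_s = 0.6*pi
N, length = kaiser_length(0.01, 0.4 * math.pi, 0.6 * math.pi)
print(N, length)
```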
So you notice that the choice of alpha is governed only by the passband and stopband tolerance, and that is to be expected: having chosen the length, the shape parameter alpha, which is of course beta times N, has only to do with the tolerance. What alpha = 0 for A less than 21 means is that you could just as well use a rectangular window, because beta equal to 0, alpha equal to 0, essentially implies the rectangular window; in effect, the rectangular window already gives you a tolerance of about 21 decibels. Now if A is between 21 and 50 (whether you say less than or less than or equal to does not matter), then we choose, again empirically,

alpha = 0.5842 (A - 21)^0.4 + 0.07886 (A - 21),

so it is the excess of A above 21 that enters the expression. Finally, when A is greater than 50, the expression to be used is

alpha = 0.1102 (A - 8.7),

and of course let us not forget that beta is equal to alpha divided by N. The beauty is that for every kind of window there is an equivalent alpha which gives the same tolerance. What I mean by that is: take any window shape, the triangular window for example. The triangular window reaches an asymptotic limit in terms of its passband and stopband tolerance; after a while, as you keep increasing the length, you cannot reduce the tolerance beyond a certain point. That is the asymptotic tolerance, the asymptotic delta, for that window. The delta leads to an A, and the A leads to an alpha in the Kaiser window. So for every window there is an equivalent alpha in the Kaiser
window which gives the same tolerance, and for that alpha of course you also have a corresponding length. The beauty is that the Kaiser window does better in terms of transition bandwidth than most windows of the same alpha: on a comparative platform, once you have chosen alpha to match the asymptotic passband and stopband tolerance of a given window, the Kaiser transition bandwidth comes out slightly smaller. So one can construct a table: for each window you put down how the transition band varies as a function of length, together with its asymptotic tolerance; from the asymptotic tolerance you find the equivalent alpha, and then you study how the transition bandwidth behaves for the Kaiser window with that alpha. The transition bandwidth is typically better for the Kaiser window than for the original window; that is interesting, and that is why the Kaiser window is preferred.

Let us take just one example to illustrate this. For the Hann window, the main lobe width is 4 pi / N, and the peak error in approximation, essentially the A that we have talked about, is 44 dB; for the Hamming window it is 53 dB. The Hann and Hamming windows have the same main lobe width; in fact I can write it down here: Hann 44, Hamming 53 (I am taking this data from a standard table). If you take the equivalent Kaiser window, with the same alpha as the Hann window, the corresponding transition width becomes 5.01 pi / 2N; for the Hamming window it becomes 6.27 pi / 2N. And a greater A is better; a greater A means a smaller delta, you must understand
this. So greater A is better, and in the sense of A the Hamming window is better than the Hann window; in fact, that is why you see those odd-looking numbers, 0.54 and 0.46, instead of half and half. The two are equal in the sense of main lobe width, but Hamming is better in terms of tolerance, and that is why the Hamming window would be preferred over the Hann window. In a sense the Hann window is easier to write down, but it is not as good as the Hamming window in terms of its tolerance. Of course, in the corresponding Kaiser window you see the compromise: the transition bandwidth increases a little. So as you go from beta equal to 0 towards beta equal to infinity, it is a perpetual trade-off: at the rectangular window you have the very best transition width and the very worst tolerance, and as you go towards larger and larger beta the transition bandwidth grows, but you do better and better in terms of tolerance. This is how we design FIR filters using the Kaiser window, and now you are all equipped to complete your individual assignment on the design of filters, both IIR and FIR, with the specifications that you have been given. Yes, there is a question.
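A small sketch pulling these empirical formulas together: the piecewise choice of alpha from A, plus a cross-check that the tabulated Kaiser-equivalent transition widths (5.01 and 6.27 in units of pi/2N) are consistent with the length rule, assuming the standard (A - 8) numerator:

```python
import math

def kaiser_alpha(A):
    # Empirical shape parameter alpha = beta * N from the tolerance A in dB.
    if A < 21:
        return 0.0                                    # rectangular window suffices
    if A <= 50:
        return 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21)
    return 0.1102 * (A - 8.7)

# Equivalent alphas for the asymptotic tolerances of the Hann and Hamming windows:
print(round(kaiser_alpha(44), 2))   # about 3.86 (Hann-equivalent)
print(round(kaiser_alpha(53), 2))   # about 4.88 (Hamming-equivalent)

# Inverting 2N + 1 >= 1 + (A - 8)/(2.285 * dw) gives dw = (A - 8)/(2.285 * 2N);
# the coefficient of pi/(2N) reproduces the tabulated transition widths.
for A in (44, 53):
    print(round((A - 8) / 2.285 / math.pi, 2))   # 5.01, then 6.27
```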
"We are trying to approximate something that resembles e raised to x, so why not use e raised to x directly?" So the question is this: it seemed that as we went from the rectangular to the triangular window, then to the Hann and Hamming windows, and further to the Kaiser window, we were making the function smoother and smoother, and when we looked at the Kaiser window it seemed to temptingly resemble e raised to x; so why not use an exponential itself? The answer is that it is just a resemblance. The series is not really the expansion of an exponential; it looks temptingly like that, but it is different, because each term is squared: (x/2) to the power L, divided by L factorial, the whole thing squared. So it is not quite an exponential, and the exponential is not quite optimal; this modified Bessel function has been chosen deliberately. Alright, so with that we conclude today's lecture. We will see a little more about FIR filter design and some more things about FIR filters in the next lecture. Thank you.
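As a quick check of that remark, here is the squared-term series for I_0, verified against NumPy's np.i0; the squaring is exactly what makes it differ from the Taylor series of the exponential:

```python
import numpy as np

def i0_series(x, n_terms=40):
    # I0(x) = sum_{L=0}^{inf} ((x/2)**L / L!)**2  -- each term is squared,
    # so this is NOT the Taylor series of exp(x).
    total, term = 0.0, 1.0          # term holds (x/2)**L / L! before squaring
    for L in range(n_terms):
        total += term ** 2
        term *= (x / 2) / (L + 1)
    return total

# I0(2) is about 2.2796, while e^2 is about 7.389: clearly not the same function.
print(round(i0_series(2.0), 4), float(np.i0(2.0)), round(np.exp(2.0), 4))
```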