So, what we need to do is put down some advantages of FIR filter design by windowing. First, symmetry or antisymmetry of the impulse response can be maintained, and therefore linear phase, or pseudo-linear phase, can be maintained as well. Let me spend a minute explaining this once again; it is a very important idea. Suppose you have an impulse response h[n] which is real and satisfies h[n] = h[-n] or h[n] = -h[-n], that is, there is either even symmetry or odd symmetry. Then the corresponding frequency response is of the form H(ω) = Σ, n going from -N to +N, of h[n] e^(-jωn). Here I am assuming odd length; the same argument can be extended to even length. We will take the plus and minus cases separately. Take h[n] = h[-n] first. By clubbing the terms for +n and -n together, we can rewrite this as H(ω) = h[0] + Σ, n going from 1 to N, of (h[n] e^(-jωn) + h[-n] e^(jωn)). And of course these coefficients are equal, so this becomes H(ω) = h[0] + Σ, n from 1 to N, of h[n] times 2 cos(ωn). Therefore, when you have a real and even impulse response, the frequency response is also real and even, and this is called the pseudo-magnitude. It is called the pseudo-magnitude because if you now delay the impulse response by N samples to make the FIR filter causal, the only change in the frequency response is a factor e^(-jω) raised to the delay, and that contributes only a linear phase. So you have a pseudo-magnitude multiplied by a linear phase. The only catch is that it is a pseudo-magnitude, not quite the magnitude: it can be positive or negative, and wherever it is negative, you are also putting in an additional phase of π.
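The clubbing argument above can be checked numerically. Here is a small sketch (the coefficient values are made up purely for illustration): an even-symmetric h[n] gives a frequency response that is purely real and matches h[0] + Σ 2h[n]cos(ωn).

```python
import numpy as np

# Even-symmetric impulse response on n = -N..N (hypothetical values): h[n] = h[-n]
N = 3
h = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0])
n = np.arange(-N, N + 1)

w = np.linspace(-np.pi, np.pi, 512)
# Direct evaluation: H(w) = sum_n h[n] e^{-jwn}
H = (h[None, :] * np.exp(-1j * np.outer(w, n))).sum(axis=1)

# Clubbed form: h[0] + sum_{k=1}^{N} 2 h[k] cos(wk)  (h[N + k] is h[k] in array indexing)
H_formula = h[N] + sum(2 * h[N + k] * np.cos(w * k) for k in range(1, N + 1))
```

The imaginary parts of the exponentials cancel pairwise, so H comes out real (up to floating-point residue) and equal to the cosine form.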
So, you can call the resultant causal FIR filter, the one obtained by delaying by N samples, a pseudo-linear-phase filter: pseudo in the sense that it is linear phase up to an additional phase of 0 or π. A similar argument applies in the other case, and this I leave to you as an exercise: reason out what happens when h[n] = -h[-n]. Here you would find that the pseudo-magnitude carries an additional factor of either +j or -j, that is, a phase of ±π/2. After delaying, you again have the linear-phase term, but the so-called pseudo-magnitude now has a phase of either +90 degrees or -90 degrees. So this is what we mean by FIR filters allowing us linear phase: when you maintain symmetry or antisymmetry in the impulse response, you are guaranteed a pseudo-magnitude plus linear phase, which is pseudo-linear phase. It is the closest to linear phase that we can get; that is what it means. Now, this is one of the advantages. The second advantage is that FIR filters are unconditionally stable: the impulse response is always absolutely summable, even in the presence of numerical inaccuracies. You see, when we realize the coefficients in finite precision, there is likely to be inaccuracy in the representation of the coefficients; but even in the presence of those inaccuracies, the stability of the filter is unaffected. Now, this is not the case with IIR filters. If the poles of an IIR filter happen to be close to the unit circle and there are numerical inaccuracies in realizing the coefficients, there is a possibility that those poles may migrate outside the unit circle.
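For the odd-symmetric exercise, here is a sketch of what you should find (again with made-up coefficient values): an antisymmetric h[n] gives a purely imaginary pseudo-magnitude, so the phase is +π/2 or -π/2 wherever the response is nonzero.

```python
import numpy as np

# Odd-symmetric impulse response on n = -N..N (hypothetical values): h[-n] = -h[n], h[0] = 0
N = 3
h = np.array([-1.0, -2.0, -3.0, 0.0, 3.0, 2.0, 1.0])
n = np.arange(-N, N + 1)

w = np.linspace(-np.pi, np.pi, 513)
H = (h[None, :] * np.exp(-1j * np.outer(w, n))).sum(axis=1)

# Clubbing the +n and -n terms now gives sine terms with a factor -j:
#   H(w) = -j * sum_{k=1}^{N} 2 h[k] sin(wk)
H_formula = -1j * sum(2 * h[N + k] * np.sin(w * k) for k in range(1, N + 1))

# Phase is +-pi/2 wherever H is not (numerically) zero
mask = np.abs(H) > 1e-6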
And then we have trouble with stability. Perhaps it is not correct to say that it does not remain a filter; it remains a filter, but you have this trouble that you are not sure whether a bounded input will result in a bounded output or not. Incidentally, IIR filters can never give you linear phase, and in fact I pose this as a challenge to you: show that causal IIR filters can never give you linear phase. I believe I have posed this challenge before, but I am repeating it, and I also give you a hint: the hint lies in showing that causality and symmetry cannot go together; causality, symmetry, and an infinite impulse response cannot all go together. Anyway, we have seen this universal principle of engineering that nothing comes for free, and that is also true here. So FIR filters seem to have everything that we would want them to. In fact, one more thing they have is that there is at least a design approach for FIR filters whose ideal response is not piecewise constant. For example, we know how to realize an approximation to the discrete-time differentiator using FIR filters: simply find the ideal impulse response and truncate it, or find the ideal impulse response and then window it. So we know at least one way to do it. We do not know in advance how well that approach will work, but experience tells us that it works reasonably well; we have an approach. We do not have one for IIR filters at all. There is no easy way to design an IIR discrete-time differentiator, and definitely not one based on the bilinear transform, because the bilinear transform is going to distort the frequency axis; so it cannot be used for responses that are not piecewise constant.
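The truncate-and-window recipe for the differentiator can be sketched as follows. The ideal discrete-time differentiator has H_d(ω) = jω on (-π, π), whose impulse response works out to h[n] = (-1)^n / n for n ≠ 0 and h[0] = 0. The half-length N and the choice of a Hamming window below are arbitrary illustrative choices, not prescriptions:

```python
import numpy as np

# Ideal differentiator impulse response: h[n] = cos(pi*n)/n = (-1)^n / n, h[0] = 0
N = 25  # half-length of the truncated response (hypothetical choice)
n = np.arange(-N, N + 1)
h = np.where(n == 0, 0.0, np.cos(np.pi * n) / np.where(n == 0, 1, n))

# Window instead of plain truncation to tame the Gibbs ripple
h_windowed = h * np.hamming(len(h))

# Evaluate away from the discontinuity of the periodic ideal response at w = pi
w = np.linspace(0.05 * np.pi, 0.9 * np.pi, 200)
H = (h_windowed[None, :] * np.exp(-1j * np.outer(w, n))).sum(axis=1)

# Antisymmetric h => purely imaginary H; it should stay close to j*w in this band
err = np.max(np.abs(H - 1j * w))
```

Delaying h_windowed by N samples then gives the causal (pseudo-linear-phase) FIR differentiator.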
You see, when you now reflect on the bilinear transform with the benefit of hindsight, you realize that the reason the bilinear transform worked, even though it made a nonlinear distortion of the frequency axis, is that the bilinear transform in frequency was a monotonically increasing transform. As capital Ω increased, small ω also increased. As the analog frequency Ω went from minus infinity to plus infinity, the discrete-time frequency ω went from -π to π, and therefore contiguous pieces of the analog frequency axis mapped to correspondingly ordered contiguous pieces of the discrete frequency axis. Pieces went to pieces, and therefore the bilinear transform, in spite of the nonlinearity of the frequency transformation, was employable for piecewise-constant filter design. But it would not be applicable for designing a discrete-time differentiator, because there, even if you happened to design a very good analog band-limited differentiator, when you transformed it with the bilinear transform the frequency response would be completely distorted from linear. And therefore, right now we have not talked about any meaningful way to design discrete differentiators, or similar responses that are not piecewise constant, in the IIR context. So that is another reason why FIR filters are attractive. So then, where is the price that we are paying? The price, and that is what we will now write down, is this: the same specifications, when realized with FIR filter designs, demand more resources. In fact, you would want to verify this when you carry out the design that you have been assigned: for the same magnitude specifications, when realized using an FIR filter, you would typically find that the FIR filter is much longer; it requires many more additions and delays. So nothing comes for free.
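The monotonic-but-nonlinear nature of the bilinear frequency map can be seen directly. The map is ω = 2 arctan(ΩT/2); the sampling interval T = 2 below is an arbitrary choice that gives the common form ω = 2 arctan(Ω).

```python
import numpy as np

T = 2.0  # sampling interval (chosen so that w = 2*arctan(Omega))
Omega = np.linspace(-50, 50, 10001)   # analog frequency axis (a finite slice of it)
w = 2 * np.arctan(Omega * T / 2)      # digital frequency under the bilinear transform

# Monotonically increasing: ordering of frequencies (pieces to pieces) is preserved,
# and the entire analog axis is squeezed into (-pi, pi).
# But the map is nonlinear: an analog differentiator response j*Omega would become
# j*(2/T)*tan(w/2) after the transform, not j*w -- distorted away from linear.
```

This is why piecewise-constant specs survive the transform (each flat piece maps to a flat piece) while a linear-in-frequency spec does not.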
Anyway, so much about the relative behavior of IIR and FIR filters. We have been talking about resources all this while, and we must now actually come down to the issue of realizing filters. But there is one little thing I would like to mention before I go to realization, in the context of FIR filter design. You see, one might wonder why one should use the rectangular window at all when you have so many other windows to choose from. Of course, one argument is that its transition bandwidth is the minimum; so if transition bandwidth is the issue, then the rectangular window is a good choice. But more importantly, there is a fundamentally different reason why the rectangular window is attractive. When we have talked about passband and stopband tolerance all this while, what we have been talking about is what is called the L-infinity tolerance, or the maximum deviation. What I am saying is, when we put down the specs for a low-pass filter, for example, we said something like this: there is a passband tolerance and there is a stopband tolerance, meaning that the magnitude in the passband must lie within one shaded region and the magnitude in the stopband within another. However, we are saying nothing at all about the extent to which, or the frequencies at which, the response deviates from the ideal in the passband and the stopband. It is quite possible that in the passband it is only at one frequency that the response really goes all the way up to the tolerance, while everywhere else it stays close to the ideal. So this is called the L-infinity notion of error. Now, "L-infinity" may seem a strange word at the moment, but it will become clearer when we come to another notion of error, the L2 notion. The L2 notion is the mean squared, or sum squared, error, and it will be very clear where the number 2 comes from.
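As a concrete illustration of the L-infinity tolerance, here is a sketch that designs a windowed-sinc low-pass filter and measures its maximum deviation from the ideal in each band. The cutoff π/2, the band edges, and the Hamming window are all hypothetical choices for illustration:

```python
import numpy as np

# Windowed-sinc lowpass with cutoff pi/2 (np.sinc(x) = sin(pi x)/(pi x))
N = 30
n = np.arange(-N, N + 1)
h = 0.5 * np.sinc(0.5 * n) * np.hamming(2 * N + 1)

w = np.linspace(0, np.pi, 4000)
H = (h[None, :] * np.exp(-1j * np.outer(w, n))).sum(axis=1).real  # symmetric h => real H

# L-infinity (maximum) deviation over each band (band edges are hypothetical)
passband = w <= 0.4 * np.pi
stopband = w >= 0.6 * np.pi
delta_p = np.max(np.abs(H[passband] - 1.0))  # worst deviation from 1 in the passband
delta_s = np.max(np.abs(H[stopband]))        # worst deviation from 0 in the stopband
```

Only the single worst frequency in each band determines delta_p and delta_s; how often the response comes near the tolerance is invisible to this measure.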
You see, the sum squared error, or L2 error as you might want to call it, is essentially this: take the desired frequency response minus the actual frequency response, take the absolute value of this, square it, and integrate from -π to π. And this is where the 2 comes from: the 2 comes from the square. So when we talk about L2 error, what we are really talking about is the magnitude squared of the error accumulated over the whole band. Actually, if you really want to understand why it is called L2, one should define the L2 error to be the square root of this integral of the squared error. It does not matter much for minimization, since the square root is a monotonically increasing function: if the squared error is minimized, so is its square root. But if you do take the square root, it explains the "infinity" concept. If instead of 2 you had 3 there, you would call it the L3 error; if you had 1 there, meaning you just took the absolute value and integrated, it would be called the L1 error. You can now conceive of the L-infinity error: you raise the error to a larger and larger power, but then do not forget to take 1 over that power outside, that is, take the corresponding root as well. So if you calculated the L10 error, for example, you raise the modulus of the error to the power 10, integrate, and then raise the result to the power 1/10. Now visualize this being taken to infinity: as you do so, it is only the maximum that survives; all the other values are suppressed. That is why we call it the L-infinity error. Yes, there is a question. The question is: would you consider the error only over the passband, or everywhere? The answer is everywhere.
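The claim that only the maximum survives as the power grows can be checked numerically. This sketch computes the L_p "norm" of a made-up error curve for increasing p, using the normalized measure (1/2π)∫ so that every L_p value stays below the maximum:

```python
import numpy as np

# ||E||_p = ( (1/2pi) * integral_{-pi}^{pi} |E(w)|^p dw )^(1/p)
# approximated on a uniform grid by a mean; as p -> infinity this tends to max|E|.
w = np.linspace(-np.pi, np.pi, 100001)
E = np.sin(5 * w) + 0.2 * np.cos(17 * w)   # a made-up error curve for illustration

def lp_norm(E, p):
    return np.mean(np.abs(E) ** p) ** (1.0 / p)

linf = np.max(np.abs(E))                   # the L-infinity error
norms = [lp_norm(E, p) for p in (1, 2, 10, 100, 500)]
# norms climbs monotonically toward linf as p grows
```

The L1 and L2 values average over the whole curve, while by p = 500 the result is already within a percent or so of the pure maximum.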
So, you have a desired response everywhere. Now, of course, you may ask what happens in the transition band. Well, that is an important point: the transition band actually does not have a desired response specified. Interestingly, it does not matter much. You could, for example, take the middle of the transition band as a point of separation, and take the desired response to be 1 up to the middle and 0 beyond the middle, in the case of a low-pass filter, and use that as the desired response. You could also, if you wish, take only the passband and the stopband and compute the error there; that would also be a meaningful L2 error. But here the error is calculated over the whole band from 0 to π. Yes, there was also a question about how to take the desired response: it is 1 in the passband and 0 in the stopband. Anyway, the point is that the rectangular window actually minimizes the L2 error. So in addition to the transition bandwidth being optimized, the rectangular window minimizes the L2 error, even though its L-infinity error is the worst among the windows. So the rectangular window is not without advantages. You see, that also tells us that L2 error minimization is not the same as L-infinity error minimization, and that is not too difficult to understand. It is quite possible, as I said, that at one frequency the response deviates very far from the ideal, but stays pretty close to the ideal at many other places. Then the L2 error could be low, because on average the squared error is low, but the L-infinity error is significant, because at that one place the response deviates very far. So that is about L-infinity and L2 error.
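The claim that the rectangular window minimizes the L2 error follows from Parseval's relation: the L2 error in frequency equals the sum of |h_ideal[n] - h_approx[n]|² over all n, which is minimized by simply keeping the ideal coefficients for |n| ≤ N and dropping the rest, which is exactly rectangular truncation. A small numerical sketch (cutoff π/2 and lengths chosen arbitrarily) compares this against a Hamming-windowed design in the time domain:

```python
import numpy as np

N = 15
n = np.arange(-200, 201)           # long enough to stand in for the ideal response
h_ideal = 0.5 * np.sinc(0.5 * n)   # ideal lowpass impulse response, cutoff pi/2

keep = np.abs(n) <= N
h_rect = np.where(keep, h_ideal, 0.0)            # rectangular window: just truncate
h_hamm = np.zeros_like(h_ideal)
h_hamm[keep] = h_ideal[keep] * np.hamming(2 * N + 1)  # Hamming-windowed version

# By Parseval, these time-domain sums are (proportional to) the L2 frequency errors
l2_rect = np.sum((h_ideal - h_rect) ** 2)   # just the truncated tail energy
l2_hamm = np.sum((h_ideal - h_hamm) ** 2)   # tail energy PLUS in-band distortion
```

Any non-rectangular window perturbs the kept coefficients as well, so its L2 error can only be larger; the Hamming window buys its better L-infinity behavior at the cost of L2.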
What I have also tried to illustrate here is that there is not just one notion of error, even though we have been using the L-infinity error all along in our discussion without having explicitly realized it.