A warm welcome to the 26th session in the second module of the course Signals and Systems. We shall spend this entire session recapitulating some of the ideas we learned in the last session and explaining them a little better, since they were rather difficult ideas. The first idea was about the sin x by x pattern, so let us plot it. We had x0 given by omega 1 into (t minus t1), and we plotted the pattern sin x0 by x0, but scaled: what we plotted was essentially 2 kappa 0 omega 1 times sin x0 by x0, with x0 defined as above. Later we said we wanted to take omega 1 tending to infinity, so we would like to plot this not as a function of x0 but as a function of t minus t1. Let us do that. As a function of t minus t1, the nulls depend on omega 1: the first positive null is at pi by omega 1 and the first negative null is at minus pi by omega 1, so the width of the main lobe is 2 pi by omega 1, while the height at the point t minus t1 equal to 0 is 2 kappa 0 omega 1. So as omega 1 tends to infinity, approximate each lobe by a triangle: the main lobe and every one of the side lobes. You then have a situation with several triangles. This is an informal explanation; whatever shape you choose, asymptotically the height diverges to infinity while the width goes to 0, so a non-zero but constant area gets encapsulated in each lobe, the main lobe and the side lobes. Here you just have to take my word; if one really wants to do a precise calculation, one would have to use numerical integration or other such methods.
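The lobe geometry described above can be checked numerically. This is a sketch of my own, not part of the lecture; the variable names and the choice kappa 0 = 1 by 2 pi (which the lecture settles on later) are assumptions made for illustration. It evaluates the scaled pattern 2 kappa 0 omega 1 times sin(x0)/x0 with x0 = omega 1 (t minus t1), and shows that the peak height grows like omega 1 by pi while the first null moves in like pi by omega 1:

```python
import numpy as np

KAPPA0 = 1.0 / (2.0 * np.pi)  # illustrative choice; the lecture fixes this value later

def pattern(tau, omega1):
    # 2*kappa0*omega1 * sin(x0)/x0 with x0 = omega1*tau, where tau = t - t1.
    # np.sinc(x) = sin(pi*x)/(pi*x), so np.sinc(omega1*tau/pi) = sin(omega1*tau)/(omega1*tau).
    return 2.0 * KAPPA0 * omega1 * np.sinc(omega1 * tau / np.pi)

for omega1 in (10.0, 100.0, 1000.0):
    peak = pattern(0.0, omega1)        # height 2*kappa0*omega1 = omega1/pi, diverges
    first_null = np.pi / omega1        # main-lobe half-width, shrinks like 1/omega1
    print(f"omega1={omega1:7.1f}  peak height={peak:9.3f}  first null at {first_null:.5f}")
```

The printout makes the squeeze visible: each tenfold increase in omega 1 multiplies the height by ten and divides the main-lobe width by ten, exactly the height-times-width trade-off the triangle argument relies on.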
In fact I encourage you to do that: those who really want to understand this better would do well to numerically integrate these lobes and find out how the areas converge as you take omega 1 tending to infinity. If you look at the figure, we have several triangles, or several shapes; asymptotically we have a sum of constant areas, some positive and some negative. The beauty is that the areas remain constant asymptotically. You can visualize this as the height growing to infinity in each case while the width tends to 0. I have reasoned this out for the main lobe, but similar reasoning applies to the side lobes. You need not worry too much about the side lobes, because the main lobe dominates anyway: the side lobes alternate in sign, one positive, the next negative, so their contributions largely cancel. I am not evaluating it formally here, but I am stating that the side-lobe contribution in area will never overtake the main-lobe contribution. So it is the main-lobe area which dominates, and to get a feel for what is happening you can be quite content just looking at the main-lobe area. The whole situation, then, is that all the area gets concentrated around the centre, t minus t1 equal to 0, and we have the flexibility to choose kappa 0. In fact the choice kappa 0 equal to 1 by 2 pi makes this tend asymptotically to an impulse, delta of (t minus t1), as omega 1 tends to infinity. So the question is: where is the orthogonality issue here? We need to understand that. Where does orthogonality come in? Now look carefully: what we are integrating is this quantity, and we can rewrite it.
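As a concrete version of the numerical-integration exercise suggested above, here is a sketch, assuming the choice kappa 0 = 1 by 2 pi; the window size and grid are my own choices for illustration. It integrates the pattern over a wide window with the trapezoidal rule, and the total area stays close to 1 for every omega 1, while the main-lobe area between the first nulls stays at the same constant regardless of omega 1:

```python
import numpy as np

KAPPA0 = 1.0 / (2.0 * np.pi)  # the choice that makes the limit an impulse of unit area

def pattern(tau, omega1):
    # 2*kappa0*omega1*sin(omega1*tau)/(omega1*tau); np.sinc(x) = sin(pi*x)/(pi*x)
    return 2.0 * KAPPA0 * omega1 * np.sinc(omega1 * tau / np.pi)

def trapezoid(y, dtau):
    # Simple uniform-grid trapezoidal rule
    return np.sum(0.5 * (y[:-1] + y[1:])) * dtau

def total_area(omega1, half_width=50.0, n=1_000_001):
    # Area over a wide symmetric window [-half_width, half_width]
    tau = np.linspace(-half_width, half_width, n)
    return trapezoid(pattern(tau, omega1), tau[1] - tau[0])

def main_lobe_area(omega1, n=100_001):
    # Area between the first nulls at +/- pi/omega1 only
    tau = np.linspace(-np.pi / omega1, np.pi / omega1, n)
    return trapezoid(pattern(tau, omega1), tau[1] - tau[0])

for w in (10.0, 50.0, 100.0):
    print(f"omega1={w:6.1f}  total area={total_area(w):.4f}  main lobe={main_lobe_area(w):.4f}")
```

The total area comes out near 1 for each omega 1, and the main-lobe area sits near the same constant (about 1.18) every time: exactly the "constant area, shrinking width" picture, with the alternating side lobes supplying the small negative correction that brings 1.18 back down to 1.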
Now when you write it like this, it is obvious that we are talking about an inner product, so let us think of it as one and look at the expression. It is the inner product of e raised to the power j omega t and e raised to the power j omega t1, treating omega as the independent variable. But you could just as well write the inner product of e raised to the power j omega 1 t and e raised to the power j omega 2 t, treating t as the independent variable. That would be the integral from minus infinity to plus infinity of e raised to the power j omega 1 t times the complex conjugate of e raised to the power j omega 2 t, dt, which is the integral from minus infinity to plus infinity of e raised to the power j (omega 1 minus omega 2) t, dt: a very similar integral. Essentially what we have shown is that kappa 0 times this integral tends to delta of (omega 1 minus omega 2), and here we have the answer to the question we have been asking: where does the orthogonality of these rotating complex exponentials come into the picture? It comes in here. Look at it: the inner product is an impulse. Strictly, this inner product is a divergent quantity, because the integrand has a magnitude of 1; but if you allow generalized functions, the inner product goes to an impulse. The impulse sits at omega 1 minus omega 2 equal to 0, and it should be thought of as a function of omega 1 minus omega 2, the difference between the frequencies, not of omega. As the difference between the frequencies approaches 0, you have a non-zero area concentrated there; as omega 1 minus omega 2 moves away from 0, you have nothing, if you think of the impulse informally. So the inner product is an impulse: that is how we should understand this orthogonality. It is all concentrated around omega 1 minus omega 2 equal to 0.
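This concentration can also be seen numerically. The sketch below is my own illustration, not from the lecture: it computes the truncated inner product kappa 0 times the integral from minus T to T of e raised to the power j (omega 1 minus omega 2) t, dt, for the assumed frequencies omega 1 = 5 and omega 2 equal to 5 or 7. When omega 1 equals omega 2 the value grows like T by pi without bound; when they differ, it stays bounded by 1 by (2 pi times the frequency difference), which is the impulse-like behaviour in the limit:

```python
import numpy as np

KAPPA0 = 1.0 / (2.0 * np.pi)

def truncated_inner_product(omega1, omega2, T, n=200_001):
    # kappa0 * integral_{-T}^{T} exp(j*(omega1 - omega2)*t) dt, trapezoidal rule
    t = np.linspace(-T, T, n)
    y = np.exp(1j * (omega1 - omega2) * t)
    dt = t[1] - t[0]
    return KAPPA0 * np.sum(0.5 * (y[:-1] + y[1:])) * dt

for T in (10.0, 100.0, 1000.0):
    on_peak = truncated_inner_product(5.0, 5.0, T)   # grows like T/pi as T increases
    off_peak = truncated_inner_product(5.0, 7.0, T)  # stays bounded by 1/(2*pi*|5-7|)
    print(f"T={T:7.1f}  equal frequencies: {on_peak.real:9.3f}  "
          f"different frequencies: {abs(off_peak):.4f}")
```

The equal-frequency value diverges linearly in T while the unequal-frequency value merely oscillates inside a fixed bound, which is precisely the sense in which the rotating complex exponentials with different angular frequencies are orthogonal.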
If you think of the impulse as the limiting case of a pulse, the pulse dies down very quickly except at the point where it lies. So the inner product dies down very quickly as you move away from omega 1 minus omega 2 equal to 0. In this reconstruction, that is how we have indirectly brought in the orthogonality of the rotating complex exponentials with different angular frequencies. We have spent three sessions explaining this inverse. It is slightly involved; you will need to think about it again and again to understand and appreciate these ideas better. We will also do some examples and exercises on the Fourier transform, and that will make it easier for you to appreciate how the Fourier transform is used. Now if you really want a rigorous understanding of the Fourier transform, one would have to go into functional analysis. But what I have tried to do here, from the point of view of a fundamental course on signals and systems, is to give a somewhat rigorous and somewhat informal explanation of the Fourier transform and its inverse. We will see more in the next session. Thank you.