A warm welcome to the 18th session in the third module on signals and systems. In this session, we will again try to make the whole business of sampling and reconstruction more practical.

One of the things we did in the previous session was to make the reconstructor somewhat more practical. We saw that even a very crude reconstructor, just that R-C circuit, could serve reasonably well under two conditions. One is that the signal is placed well within what I call the flat zone around 0 in the reconstructor's frequency response. The other is that the sampling process is modified so that the reconstructor's non-zero falloff as omega tends to infinity does not allow too much of the carbon copies, the spectral replicas, to come into the output. Now, how do we do that? We do it by using non-ideal sampling.

So what we now want to do is write down a more formal proof for the whole process of sampling, whether ideal or non-ideal, and reconstruction. A few sessions ago we discussed the Shannon sampling theorem; we would now like to write down a more general version of it in which the sampling is not ideal, and in fact the proof is very similar whether it is ideal or non-ideal. Let us get down to that.

So, let us write down a proof of this more general form. We are not necessarily going to sample with ideal impulses; we shall instead sample with what is called a pulse train. Let us denote the pulse train p(t): within each period of length Ts, the pulse occupies the region from 0 to Δ (capital delta). For now we can assume this pulse is reasonably flat; as we will see after a while, it need not be. We can now write down the complex Fourier series expansion of this pulse train. That is very easy.
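As a concrete picture of pulse-train sampling, here is a minimal numerical sketch. The period Ts, pulse width Δ, and the 5 Hz test tone below are illustrative assumptions of mine, not values from the lecture.

```python
import numpy as np

# A minimal sketch of non-ideal (pulse-train) sampling on a dense time
# grid. The period Ts, pulse width Delta, and the 5 Hz test tone are
# illustrative assumptions, not values taken from the lecture.
Ts = 0.01            # sampling period, seconds
Delta = 0.002        # pulse width, with Delta < Ts
fs_sim = 100_000     # dense simulation grid, 100 kHz

t = np.arange(0.0, 0.1, 1.0 / fs_sim)
x = np.sin(2 * np.pi * 5 * t)        # signal to be sampled

# The pulse train p(t) is 1 on [0, Delta) within each period of length
# Ts and 0 elsewhere; non-ideal sampling multiplies x(t) by p(t).
p = ((t % Ts) < Delta).astype(float)
xs = x * p

# p is 0/1 valued and is "on" for a fraction Delta/Ts of the time
print(p.min(), p.max())              # -> 0.0 1.0
print(round(float(p.mean()), 2))     # -> 0.2, i.e. Delta/Ts
```

Unlike ideal sampling, nothing here is an impulse: xs simply holds short slices of the signal, one per period.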
p(t) takes the value 1 between 0 and Δ (capital delta, the pulse width) and 0 elsewhere in the period, so the coefficients are very easy to write down:

C_k = (1/Ts) ∫_0^Δ e^(−j2πkt/Ts) dt = (1 − e^(−j2πkΔ/Ts)) / (j2πk).

Of course, this holds except for k = 0; we write a separate expression for k = 0. Let us simplify this. Factoring e^(−jπkΔ/Ts) out of the numerator leaves e^(jπkΔ/Ts) − e^(−jπkΔ/Ts) = 2j sin(πkΔ/Ts), so we can strike off the two j's from the numerator and denominator, strike off the 2, and divide and multiply by Ts in the denominator:

C_k = e^(−jπkΔ/Ts) · (1/Ts) · sin(πkΔ/Ts) / (πk/Ts).

Now, how does this look? It looks like a sinc function, does it not? Multiply numerator and denominator by Δ, so a Δ appears in the numerator and a Δ in the denominator, and the whole expression can be written in terms of sinc:

C_k = e^(−jπkΔ/Ts) · (Δ/Ts) · sinc(kΔ/Ts),

where sinc(r) = sin(πr)/(πr). You can sketch the sinc function: a main lobe of height 1 around 0, with decaying side lobes. Now observe what is happening: the C_k are essentially samples of the sinc function. By the way, I leave it to you to show that C_0 is the limit of this expression as k tends to 0; it is very easy to show. You can also calculate C_0 separately, which gives C_0 = Δ/Ts, and verify that the two agree.

But observe this carefully. C_0 sits at the peak of the sinc, and if Δ/Ts is very small, as is the case with ideal sampling, the samples are taken very close to 0. Let me zoom in and explain what I am trying to say. In ideal sampling we multiply p(t) by 1/Δ and let Δ tend to 0; this 1/Δ is why the factor of Δ went away in the ideal case. The exponential factor then goes to 1, and you have 1/Δ multiplying Δ/Ts.
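These coefficients can be checked numerically. Below is a minimal sketch, with illustrative values of Ts and Δ that I have assumed, comparing direct numerical integration of C_k = (1/Ts) ∫_0^Δ e^(−j2πkt/Ts) dt against the closed form (Δ/Ts)·sinc(kΔ/Ts)·e^(−jπkΔ/Ts):

```python
import numpy as np

# Check of the pulse-train Fourier coefficients; Ts and Delta here are
# illustrative values, not taken from the lecture.
Ts, Delta = 1.0, 0.25

def c_k_numeric(k, n=100_000):
    # Midpoint-rule approximation of (1/Ts) * integral over [0, Delta]
    # of e^{-j 2 pi k t / Ts} dt (p(t) = 1 there, 0 elsewhere).
    dt = Delta / n
    t = (np.arange(n) + 0.5) * dt
    return np.sum(np.exp(-2j * np.pi * k * t / Ts)) * dt / Ts

def c_k_closed(k):
    # np.sinc(r) is sin(pi r)/(pi r), the same definition as in the text.
    return (Delta / Ts) * np.sinc(k * Delta / Ts) * np.exp(-1j * np.pi * k * Delta / Ts)

for k in range(4):
    print(k, np.round(c_k_numeric(k), 6), np.round(c_k_closed(k), 6))
```

Note that the sinc form needs no special case at k = 0: sinc(0) = 1 gives C_0 = Δ/Ts directly, which is exactly the separate k = 0 calculation mentioned above.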
So the Δ goes away and you are left with sinc(kΔ/Ts), and as Δ tends to 0, sinc(kΔ/Ts) tends to 1 for every k. Thus all the C_k become essentially equal, each being 1/Ts.

But now take the situation where Δ is not 0; we have intentionally kept Δ non-zero. What is the situation? C_k is of the form e^(−jπkΔ/Ts) · (Δ/Ts) · sinc(kΔ/Ts), and just as in the ideal case, there is no problem in multiplying by 1/Δ here too if you want to keep the area of the pulse the same. Then the 1/Δ cancels the Δ, so that at least the Δ no longer sits outside, and you get C_k = e^(−jπkΔ/Ts) · (1/Ts) · sinc(kΔ/Ts).

Now, remember what you are doing: you are essentially sampling the sinc function, taking its values at every multiple of Δ/Ts. Let us sketch that. This is the sinc function; you take the sample at 0 for C_0, then at Δ/Ts for C_1, at 2Δ/Ts for C_2, and so on, the spacing of the samples being Δ/Ts. If Δ is very small, the spacing goes to 0 and all the samples come from right around 0 itself. But if the spacing is non-negligible, what happens is quite interesting: C_1, C_2 and so on are actually decreasing at first. So you have the situation where Δ/Ts is not 0, but not too large either: the samples come from the main lobe of the sinc, and they are of decreasing magnitude. Why is that a good thing? We will see in the next lecture that when the Fourier series coefficients decrease in magnitude as a function of k, at least for the initial coefficients, it is actually good for a non-ideal reconstructor.
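The two regimes just described can be made concrete. With the pulse scaled by 1/Δ to keep unit area, the coefficient magnitudes are |C_k| = |sinc(kΔ/Ts)|/Ts. A small sketch, with illustrative values of Ts and Δ that I have assumed:

```python
import numpy as np

# Scaled pulse (multiplied by 1/Delta): |C_k| = |sinc(k*Delta/Ts)| / Ts.
# Ts and the Delta values below are illustrative assumptions.
Ts = 1.0

def scaled_ck_mag(k, Delta):
    return abs(np.sinc(k * Delta / Ts)) / Ts

# Ideal-sampling limit: as Delta -> 0, every coefficient tends to 1/Ts.
for Delta in (0.1, 0.01, 0.001):
    print(Delta, [round(float(scaled_ck_mag(k, Delta)), 4) for k in range(4)])

# Finite Delta: the samples walk down the main lobe of the sinc, so the
# first few |C_k| are strictly decreasing (here k*Delta/Ts stays below 1).
mags = [scaled_ck_mag(k, 0.2) for k in range(5)]
print(all(mags[i] > mags[i + 1] for i in range(4)))   # -> True
```

As Δ shrinks, the printed rows flatten toward 1/Ts for every k, which is exactly the ideal-impulse behaviour; for the finite Δ, the coefficients taper off just as the sketch of the main lobe suggests.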
We shall continue this discussion in the next session. Thank you.