A warm welcome to the 41st session in the first module of the course Signals and Systems. In this session, we shall discuss the typical kind of linear shift invariant system description that we encounter in many different situations, as far as the input-output description goes. Let us now come to it: the typical linear shift invariant input-output description.

Recall the example that we began with, the RC circuit. We had x(t), the input voltage, and y(t), the output voltage. The current is C dy(t)/dt; multiplying it by the resistance gives the voltage across the resistor, and the capacitor voltage plus the resistor voltage equals the input voltage:

y(t) + RC dy(t)/dt = x(t)

This is what we called the system description for the RC circuit. Of course, we could always abbreviate: putting RC = tau, we can replace this by

y(t) + tau dy(t)/dt = x(t)

Now, what is the typical form that we see here? It is the following: the output is a combination of a finite number of its own derivatives, each multiplied by a constant, together with the input and potentially its derivatives, again each multiplied by a suitable constant. So we could have, say,

y(t) = a1 dy(t)/dt + a2 d²y(t)/dt² + ... + b0 x(t) + b1 dx(t)/dt + ...

with any number of terms you want, but a finite number. This form occurs very frequently in realistic systems, whether they be electrical systems or mechanical systems or hydraulic systems or abstract systems. These are, of course, differential equations, but differential equations of a special kind. Let us identify what that kind is. The first thing we see is that the coefficients are constant, and further, the equation is linear, linear in all terms.
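A first-order LCCDE like the RC equation can also be simulated numerically. Here is a minimal sketch in Python (not part of the lecture): a forward-Euler discretization of y(t) + tau dy(t)/dt = x(t), with the function name, step size and test input chosen purely for illustration.

```python
import numpy as np

def simulate_rc(x, dt, tau):
    """Simulate y(t) + tau * dy(t)/dt = x(t) with a forward-Euler step.

    x   : samples of the input voltage x(t)
    dt  : sampling interval
    tau : time constant RC
    """
    y = np.zeros_like(x, dtype=float)
    for n in range(1, len(x)):
        # dy/dt = (x - y) / tau, so one Euler step gives:
        y[n] = y[n - 1] + dt * (x[n - 1] - y[n - 1]) / tau
    return y

# Step input: the output should rise toward 1 with time constant tau
dt, tau = 1e-3, 0.1
x = np.ones(2000)
y = simulate_rc(x, dt, tau)
# After one time constant (n = 100), y is near 1 - 1/e; by the end it is near 1
```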
So, this is what is typically called a linear constant coefficient differential equation, and we can abbreviate it as LCCDE, or L double C D E, where D stands for differential.

Now, let us look at a typical discrete system. In fact, for a typical discrete system we have the same abbreviation, but a different interpretation. Let us see. Here I am talking about a causal system; of course, you could have non-causal ones too, but we do not want to go into that right now. A typical causal discrete LSI system description, to be more precise, would look like this:

y[n] = a1 y[n-1] + a2 y[n-2] + ... + aN y[n-N] + b0 x[n] + b1 x[n-1] + ... + bN x[n-N]

Here again, the coefficients are constant and the equation is linear in all terms. So here again we have an L double C D E. The only change is that we are now talking about a linear constant coefficient difference equation; difference, not differential. There it was differential, here it is difference.

So, the output at a particular n is a linear combination of the past outputs at n-1, n-2 up to n-N, the present input x[n], and the past inputs x[n-1] up to x[n-N]. You are involving N past outputs and N past inputs, together with the current input, to calculate the current output, and all of these enter linearly.

Now, I am mentioning this without going further to analyze these systems right now; we have done little segments of such analysis here and there in our discussion. I am mentioning it at this time because we are going to see these systems in greater detail in the second course later.
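A causal difference equation of this kind can be evaluated directly by iterating over n. Below is a minimal Python sketch (not from the lecture): the coefficient lists a (feedback, starting at a1) and b (feedforward, starting at b0) are hypothetical names, and samples with negative indices are taken as zero (initial rest).

```python
def lccde(a, b, x):
    """Evaluate the causal difference equation
        y[n] = a[0]*y[n-1] + ... + a[N-1]*y[n-N]
             + b[0]*x[n]   + ... + b[M-1]*x[n-M+1]
    with all terms at negative indices taken as zero (initial rest).
    """
    y = []
    for n in range(len(x)):
        # Feedback part: linear combination of past outputs
        acc = sum(a[k] * y[n - 1 - k] for k in range(len(a)) if n - 1 - k >= 0)
        # Feedforward part: current and past inputs
        acc += sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        y.append(acc)
    return y

# First-order example: y[n] = 0.5*y[n-1] + 0.5*x[n], driven by a unit impulse
out = lccde([0.5], [0.5], [1, 0, 0, 0])
print(out)  # [0.5, 0.25, 0.125, 0.0625]
```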
But at the moment, in this first module, we must know the typical system descriptions that we often encounter, and these are the two. It is very easy to remember: L double C D E, linear constant coefficient differential equation for a continuous independent variable, and linear constant coefficient difference equation for a discrete independent variable.

With this, we have come to a good point in our whole discussion of signals and systems. We began with the meaning of abstraction: going from many systems which have similar input-output behaviour to one abstraction that describes all of them. We saw that in the earlier part of the course, and then we realized that one needed to ask certain questions in the context of the abstraction, which leads to properties of systems. So we identified additivity, homogeneity and shift invariance; we also had the notion of memory (or lack of memory), causality and, finally, stability. The main properties were additivity, homogeneity or scaling (scaling of the dependent variable, I mean), shift invariance, causality and stability, and an auxiliary property was memory.

We said that additivity, homogeneity and shift invariance often come together in many of the system models that we wish to analyze, and they make the system what is called linear and shift invariant. Since then we have been talking about just linear shift invariant systems; we have been doing all kinds of things with them, and there is a reason for it. I also explained the philosophical reason. Let us quickly recapitulate, because we are now going to move on to a slightly different trail of discussion.

You see, additivity essentially says that if I have two different things happening to the system, I can analyze the effect of the system on the two different things separately, and then put those effects together to see the effect of the things happening together.
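These three properties can be checked numerically on any candidate system. A small Python sketch (not from the lecture), using a hypothetical 3-point moving average as the system under test:

```python
import numpy as np

# A hypothetical system for illustration: 3-point moving average (an LSI system)
def system(x):
    return np.convolve(x, np.ones(3) / 3.0)

rng = np.random.default_rng(0)
x1 = rng.standard_normal(16)
x2 = rng.standard_normal(16)
alpha = 2.5

# Additivity: the response to x1 + x2 equals the two responses put together
assert np.allclose(system(x1 + x2), system(x1) + system(x2))

# Homogeneity: stepping the input up by alpha steps the output up by alpha
assert np.allclose(system(alpha * x1), alpha * system(x1))

# Shift invariance: delaying the input by one sample delays the output by one
shifted = np.concatenate(([0.0], x1))
assert np.allclose(system(shifted)[1:], system(x1))
```

A single random input cannot prove these properties, of course; it can only fail to falsify them. That is why we reason about them analytically from the system description.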
Homogeneity says that if I step the input up or down, the output is also stepped up or down, so it tells us whether the system behaves the same way at all scales. If I increase the input but keep the nature of the input the same, does the output retain its nature, even though the quantum of the effect might increase? And shift invariance is essentially a statement of whether the system behaves like that forever. If the system behaves like that for some time and then loses that behaviour, it is not shift invariant. So, does the system retain its input-output relationship forever? That is what is essentially being tested when we talk about shift invariance.

In a simple model, we would like all of these things to hold. We would like the combined effect due to two individual causes to be the same as the two individual effects put together; we would like the effect to increase in the same proportion as the cause; and we would like the system to continue behaving as it does for as long as we wish. That is what linearity and shift invariance mean.

And once you make an assumption of linearity and shift invariance, you have the very simple one-experiment description of the system: apply an impulse. In fact, the impulse was itself an interesting idea. We brought in the idea of an impulse in continuous time, which required us to go to generalized functions, and we also had the notion of an impulse in discrete time, which is very easy to understand: it is just the sequence which is one at one point and zero everywhere else. So the impulse response told us everything about a linear shift invariant system, whether the independent variable was continuous or discrete.
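The one-experiment idea can be seen concretely in discrete time. A short Python sketch (the moving-average system here is an illustrative stand-in, not from the lecture): feed the unit impulse into the system to obtain h[n], and then check that convolving any other input with h reproduces the system's output.

```python
import numpy as np

# Discrete-time impulse: one at n = 0, zero everywhere else
delta = np.zeros(8)
delta[0] = 1.0

# A hypothetical LSI system for illustration: 3-point moving average
def moving_average(x):
    return np.convolve(x, np.ones(3) / 3.0)

# The one experiment: applying the impulse recovers the impulse response h[n]
h = moving_average(delta)

# For any input, the output equals the input convolved with h
x = np.array([1.0, 2.0, 3.0, 4.0])
y_direct = moving_average(x)
y_via_h = np.convolve(x, h[:3])   # h is zero beyond its first 3 samples
```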
And we also saw that when we proved this, the proof was constructive, meaning I could tell you how to obtain the output from the input if I were given the impulse response. The "how" came through an operation called convolution. We looked at various properties of convolution; we have been discussing them for quite some time in this module. We looked at several important properties, notably commutativity and associativity, and we worked out a few examples of convolution, both for continuous and for discrete time. In fact, we brought in what we call the train-platform analogy to explain convolution.

Now, if the impulse response tells us everything about a linear shift invariant system, we should also be able to read off the causality and the stability of the system from the impulse response, and yes, indeed we can, both for continuous and discrete time. The system is causal if and only if the impulse response is zero all the way from minus infinity up to the point where the independent variable equals zero. So causality is equivalent to the impulse response being absent up to zero and present, if at all, afterwards.

And stability corresponds to absolute summability or absolute integrability of the impulse response, depending on whether you are talking about discrete or continuous variable systems. For discrete independent variable systems, an LSI system being stable is equivalent to the impulse response being absolutely summable, and for continuous independent variable systems, an LSI system being stable is equivalent to the impulse response being absolutely integrable. We proved this.

We have now come to a good point where we can conclude the discussions in this module. In this module, we were looking at signals and systems in their natural domain, the domain in which they occur naturally, which we call the independent variable.
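The stability criterion can be illustrated numerically in discrete time. A Python sketch (a hypothetical example, not from the lecture), assuming a causal one-pole impulse response h[n] = a^n for n >= 0: the absolute sum converges to 1/(1 - |a|) when |a| < 1, and the partial sums grow without bound when |a| >= 1.

```python
import numpy as np

def geometric_h(a, length):
    """Causal impulse response h[n] = a**n for n = 0, 1, ..., length - 1."""
    return a ** np.arange(length)

# |a| < 1: absolutely summable, sum of |h[n]| -> 1 / (1 - |a|) = 2, so stable
h_stable = geometric_h(0.5, 200)
s = np.sum(np.abs(h_stable))   # close to 2

# |a| >= 1: the partial sums of |h[n]| diverge, so the system is unstable
h_unstable = geometric_h(1.1, 200)
s_bad = np.sum(np.abs(h_unstable))   # already enormous after 200 terms
```

A bounded input such as x[n] = 1 then produces a bounded output in the first case and an unbounded one in the second, which is exactly what BIBO stability rules out.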
What we now need to do is to develop the idea of looking at signals and systems in a domain different from their natural domain, because we could try putting on glasses, so to speak, or taking up an instrument and looking at the system through it: an instrument which gives us certain insights, which brings us knowledge of the system that we did not have when we were looking at it in the natural domain. We will take the first step in that direction in the next module of this course. So it is with pleasure that I conclude this first module, and we shall soon proceed to the second module, where we will take one of the steps in going to what we call a transform domain, namely the Fourier transform. Thank you.