A warm welcome to this discussion session, in which I have requested one of my teaching associates to convey to me some of the ideas that were raised in the discussion forum and some of the issues that came up in the examinations and in the quizzes. I am very happy to introduce Bijoy Singh Kochal, who is one of our teaching associates in this course. He has been keenly following what you are doing in module 2, and he has taken the trouble of identifying some important questions and issues which seem to be recurrent in the discussion. In fact, let me begin by showing you a very interesting answer that Bijoy has given to one of your queries, and we will take that discussion a little further right now.

Here you will notice that a question was posed about linearity when you have higher-order linear constant-coefficient differential equations. SS3 said that higher-order derivatives like d²y/dt² did not seem linear to him, or to her, I do not know which. Chaitanya and Bijoy answered this question, but I would request Bijoy to explain to me succinctly what the issue was, so that we can take it further. So, Bijoy?

So, the question asks what linearity means in general, for all functions: is giving a fixed set of functions and calling them linear sufficient, or does the definition have a broader aspect to it?

Yes. You see, I think in this question the confusion arose because taking higher-order derivatives gives the impression of raising to a power. You are indeed raising the operator to a power, but the question of linearity or otherwise is about the relation between y(t) and x(t). Let me emphasize that by writing it down.

Take, for example, an equation of second order in y and, of course, also of second order in x; you might have something like

a₂ d²y/dt² + a₁ dy/dt + a₀ y(t) = b₂ d²x/dt² + b₁ dx/dt + b₀ x(t).

This is a second-order constant-coefficient differential equation, but it is still linear. It is linear for the following reason, which I can show in the same equation without rewriting it. First, in one colour, say red, I put x₁ in as the input and I get y₁ as the output. Then, in a different colour, say blue, I put x₂ into the equation and I get y₂. Now, in green, if I put α times x₁ plus β times x₂ into the equation, the equation is still obeyed, with α y₁ + β y₂ as the output, and that completes the discussion.

This is what is important: I am writing down what we want to get if linearity is to hold, and how would you arrive at it? You could arrive at it by taking α times the equation written in red plus β times the equation written in blue. And you see, what is linear here is the relation between the input and the output, and it remains linear even when you take higher-order operators.
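Before going on, here is a small numerical sketch of exactly this point (an illustration added to these notes, not something worked out in the lecture; the coefficients, the test inputs, and the constants α and β are arbitrary choices). It simulates the second-order LCCDE y'' + 3y' + 2y = x'' + 4x' + x with zero conditions and checks that the response to αx₁ + βx₂ is indeed αy₁ + βy₂:

    import numpy as np
    from scipy import signal

    # Assumed illustrative second-order LCCDE:  y'' + 3 y' + 2 y = x'' + 4 x' + x
    # (the coefficients are arbitrary; any constant coefficients would do)
    system = signal.TransferFunction([1, 4, 1], [1, 3, 2])

    t = np.linspace(0, 10, 2001)
    x1 = np.sin(2 * np.pi * 0.5 * t)      # two arbitrary test inputs
    x2 = np.exp(-0.3 * t)
    alpha, beta = 2.0, -1.5

    _, y1, _ = signal.lsim(system, U=x1, T=t)                       # x1 -> y1
    _, y2, _ = signal.lsim(system, U=x2, T=t)                       # x2 -> y2
    _, y12, _ = signal.lsim(system, U=alpha * x1 + beta * x2, T=t)  # alpha x1 + beta x2 -> ?

    # With zero conditions, superposition holds even though the operator is second order
    print(np.allclose(y12, alpha * y1 + beta * y2))                 # expect True

The same check goes through for any order of the equation, because the simulated response with zero initial state is a linear function of the input.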
So, even if the operator is raised to a higher power, that higher power of the operator, applied to a linear combination of inputs, still gives you the same linear combination of the respective outputs. One should not confuse a higher power of the operator with non-linearity: there is no non-linearity in a higher, repeated application of the operator; linearity is still preserved. Is that right?

So, that was answered very well by Bijoy and by Chaitanya. In fact, if you look at it, Chaitanya answered the question initially by explaining what linearity, that is linear dependence, means, and later Bijoy clarified again when the student SS3 still had some confusion. I hope it is now clear. Yes.

Also, in an LCCDE, can you elaborate on the initial conditions of the system?

You see, in a linear constant-coefficient differential equation, the moment you put down conditions which are non-zero, whether they are initial conditions or conditions at any other point in time, that is, the moment you insist that the output be equal to a certain non-zero value at a particular point, you are holding the output to a value, and linearity is destroyed. When I give an input x₁ and require that value of y at that point, and I give another input x₂ and require the same value, then linear combinations of x₁ and x₂ are not going to give me that value; the response is no longer the corresponding linear combination of the individual responses. For example, suppose you want y at the point t = 5 to be 6. You want x₁ to produce 6 at t = 5, and you want x₂ to produce 6 at t = 5. Now, if I give 3 times x₁ plus 4 times x₂, linearity would have demanded that the output at t = 5 be 3 × 6 + 4 × 6 = 42, but you are not producing that; you are insisting that it still produce 6. So linearity is destroyed; it is very simple to see, the moment you insist on a non-zero value.

However, if the value you force at a point is zero, there is no problem. It is interesting: with a zero forced condition linearity can still be preserved, but with a non-zero forced condition linearity is destroyed. And of course, if there is no constraint at any point on the real axis of time, then linearity is preserved by virtue of the structure of the linear constant-coefficient differential equation itself. So I hope that clarifies the doubt that many people raised.

Also, sir, can you explain the idea behind positive and negative frequencies? Do negative frequencies have a physical interpretation?

Very good, that is in fact a very important question, and by the way, this question plagues many a student of signals and systems even after he or she completes the course. So I am not surprised that it is being raised; many people wonder what negative frequency means, what at all one can mean by a negative frequency.

Let me emphasize that negative or positive has no meaning for sinusoidal frequencies. Sinusoidal frequencies are not going to be negative or positive; it is complex phasors which can have a positive or a negative frequency. If a complex phasor rotates anticlockwise, it is said to have a positive angular frequency; if it rotates clockwise, it is said to have a negative angular frequency. And one anticlockwise-rotating phasor and one clockwise-rotating phasor come together to make a sinusoid.
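Coming back for a moment to the point about non-zero conditions, a minimal numerical sketch (again an added illustration, assuming the simple first-order system y' + y = x) shows how forcing a non-zero value at t = 0 destroys superposition, while forcing a zero value preserves it:

    import numpy as np
    from scipy import signal

    # Assumed first-order system for this sketch: y' + y = x, realised in state-space
    # form dz/dt = -z + u, y = z, so the initial state X0 is simply the initial output y(0).
    sys_fo = signal.StateSpace([[-1.0]], [[1.0]], [[1.0]], [[0.0]])

    t = np.linspace(0, 5, 1001)
    x1 = np.ones_like(t)                  # two arbitrary test inputs
    x2 = np.sin(t)
    alpha, beta = 3.0, 4.0

    def respond(u, y0):
        """Response of the system to input u, with the output held to y0 at t = 0."""
        _, y, _ = signal.lsim(sys_fo, U=u, T=t, X0=[y0])
        return y

    # Forcing the non-zero value y(0) = 1 on every response: superposition fails
    print(np.allclose(respond(alpha * x1 + beta * x2, 1.0),
                      alpha * respond(x1, 1.0) + beta * respond(x2, 1.0)))   # False

    # Forcing the zero value y(0) = 0: superposition holds
    print(np.allclose(respond(alpha * x1 + beta * x2, 0.0),
                      alpha * respond(x1, 0.0) + beta * respond(x2, 0.0)))   # True

The first comparison fails already at t = 0: the combined response is held to 1 there, whereas 3 times 1 plus 4 times 1 would have to be 7.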
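And on the frequency question itself, a small numerical check (with an arbitrary amplitude, angular frequency, and phase, chosen only for illustration) confirms that one anticlockwise and one clockwise phasor add up to a real sinusoid, and that negating the frequency of a sinusoid merely flips the sign of its phase:

    import numpy as np

    # Arbitrary illustrative amplitude, angular frequency, and phase
    A, omega, phi = 2.0, 3.0, 0.7
    t = np.linspace(0, 10, 2001)

    # A real sinusoid is the sum of one anticlockwise (+omega) and one clockwise (-omega) phasor
    sinusoid = A * np.cos(omega * t + phi)
    phasor_pair = (A / 2) * np.exp(1j * (omega * t + phi)) + (A / 2) * np.exp(-1j * (omega * t + phi))
    print(np.allclose(phasor_pair.imag, 0.0))            # True: the pair is purely real
    print(np.allclose(phasor_pair.real, sinusoid))       # True: and it equals the sinusoid

    # "Negating the frequency" of a cosine only flips the sign of its phase
    print(np.allclose(A * np.cos(-omega * t + phi), A * np.cos(omega * t - phi)))   # True

    # For a sine, an overall minus sign comes out as well
    print(np.allclose(A * np.sin(-omega * t + phi), -A * np.sin(omega * t - phi)))  # True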
So, in fact, in some sense both positive and negative frequencies are equally real, or equally unreal, in the context of complex phasors. However, when it comes to sinusoids, you may either say that there are only positive frequencies, or you may say that no sign needs to be attached to the frequency at all.

In fact, let us look at an expression; take A cos(ωt + φ). Now, if I replace ω by −ω here, it gives me A cos(−ωt + φ), and that amounts to A cos(ωt − φ), because you can negate the whole argument of a cosine. So it only amounts to changing the sign of the phase; there is no real concept of a negative frequency of a sinusoid. In fact, if you were to repeat the same exercise for a sine, something similar would happen, except that an overall negative sign would come out as well.

Now, about this notion of negative and positive frequency: a lot of people, perhaps because we tend to think of frequency as positive, are quite willing to accept positive phasor frequencies, but they are a little hesitant to accept negative phasor frequencies. And they confuse positive phasor frequencies with positive sinusoidal frequencies; they are not the same thing. It is basically the same pair of rotating phasors, one rotating anticlockwise and one rotating clockwise, which come together to form a sinusoid of that frequency. The frequency itself runs through all of it, but the positivity and negativity have to do only with the complex phasors, not with the sinusoid. I hope that clarifies the question.

Also, can you please elaborate on the eigenfunction and eigenvalue properties of LSI systems?

Very good. You know the notion of an eigenfunction and an eigenvalue: the basic idea is that if you take an LSI system, put in some x(t), and get α x(t) as the output, we say x(t) is an eigenfunction with eigenvalue α. So an eigenfunction is the system's very own function, the function which goes into the system and comes out unchanged in form, only multiplied by a constant; and the eigenvalue is the constant by which that input is multiplied.

Now, what can happen at times is that you could have a class of functions which go into the LSI system and come out not quite unchanged in form, but still belonging to a similar set. So let us take, for example, a situation where you first give e^{jωt} to an LSI system with impulse response h(t). By convolution you get

∫ h(τ) e^{jω(t−τ)} dτ = e^{jωt} ∫ h(τ) e^{−jωτ} dτ,

with both integrals running from −∞ to +∞, and we must make sure the integral converges. If it converges, that integral is the eigenvalue and e^{jωt} is therefore an eigenfunction, because this function has gone into the system and come out unchanged in form, multiplied by a constant, and that constant can be related explicitly to the impulse response.

Now put in a linear combination of such terms. For example, into the same LSI system, or any other LSI system with impulse response h(t), put a combination in which one term has ω = 0: say some constant κ₀ plus κ₁ e^{jωt}. Here, of course, the two terms would come out multiplied by different constants: you would have κ₀ times H evaluated at 0, plus κ₁ e^{jωt} times H evaluated at ω, where H(ω) is by definition ∫ h(τ) e^{−jωτ} dτ, and we assume it converges for these two values of ω.
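To make this concrete, here is a minimal numerical sketch (an added illustration; the impulse response h(t) = e^{−t}u(t) is assumed purely for convenience, and for it H(ω) = 1/(1 + jω) and H(0) = 1). It checks both the eigenfunction relation for e^{jωt} and the behaviour of the combination κ₀ + κ₁ e^{jωt} by direct numerical convolution:

    import numpy as np

    # Assumed illustrative impulse response: h(t) = e^{-t} u(t), so H(omega) = 1/(1 + j*omega)
    dt = 2e-3
    tau = np.arange(0, 10, dt)            # h(t) has essentially decayed to zero by t = 10
    h = np.exp(-tau)

    omega = 2.0
    t = np.arange(0, 30, dt)
    H0 = 1.0                              # H(0) for this h
    H_omega = 1.0 / (1.0 + 1j * omega)    # H(omega) for this h

    def lsi(x):
        """Approximate y(t) = integral of h(tau) x(t - tau) d tau by a discrete sum on the grid t."""
        return np.convolve(x, h)[:len(t)] * dt

    settled = t > 12                      # region where the start-up transient has passed

    # 1) e^{j omega t} is an eigenfunction: the output is H(omega) times the input
    x = np.exp(1j * omega * t)
    print(np.allclose(lsi(x)[settled], (H_omega * x)[settled], atol=5e-3))        # expect True

    # 2) kappa0 + kappa1 e^{j omega t} comes out as kappa0 H(0) + kappa1 H(omega) e^{j omega t}:
    #    still in the same "space", but the two terms are scaled by different constants
    kappa0, kappa1 = 2.0, 0.5
    x2 = kappa0 + kappa1 * np.exp(1j * omega * t)
    expected = kappa0 * H0 + kappa1 * H_omega * np.exp(1j * omega * t)
    print(np.allclose(lsi(x2)[settled], expected[settled], atol=1e-2))            # expect True

Since the inputs in this sketch are switched on only at t = 0, the comparison is made after the start-up transient has passed; for the eternal complex exponential of the argument above, the relation holds for all t.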
So, you see, with this combination the output is not quite unchanged in form; the output is not simply the input multiplied by a constant. However, the output is indeed in the same set, so to speak. If you take linear combinations of a constant term and one specific exponential and think of all of them as a space of functions, then the output belongs to the same space, you could say. But that is not the same as being an eigenfunction; it is something slightly weaker than an eigenfunction. The input does not come out unchanged in form: the form changes, in that the relative presence of the two terms can change. Of course, if it just so happens that H(0) is equal to H(ω) for whatever reason, then the combination happens to become an eigenfunction too, but in general that is not true. Is that right? So, does that clarify the question?

Yes, and that is all the questions we have for this discussion video.

Very good. Thank you so much, Bijoy. It was very good that you identified these points, and do come forward with more points later if there are any. Thank you.