A warm welcome to the 13th session of the fourth module of Signals and Systems. We have been looking at the process of inverting the Laplace transform, and one important property required for inversion is the differentiation property. We had seen what happens when we differentiate a Laplace transform in s, and what the consequence of that differentiation is in the time domain. In fact, let us review that. The differentiation property says that if x(t) has the Laplace transform X(s) with region of convergence R_x, then t x(t) has the Laplace transform -dX(s)/ds, with region of convergence essentially R_x; we may need to worry about the extremities, but at the moment let us not bother about that remark. I am making it just for the sake of completeness. What is important to note is that on the basis of this property, we were able to deal with Laplace transforms of the form 1/(s + alpha)^m, m being a positive integer, with either Re(s + alpha) > 0 or Re(s + alpha) < 0. So we have two possible regions of convergence here. Now, there is a very simple principle to associate these with the so-called right-sided and left-sided signals. So let us sketch. Alpha lies somewhere in the s-plane; let us say it is somewhere here. I am not necessarily talking about a real alpha, you see; it has a real part and an imaginary part. And now the condition is Re(s + alpha) greater than or less than 0, which means either Re(s) > -Re(alpha) or Re(s) < -Re(alpha).
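The differentiation property just reviewed can be checked symbolically. Here is a minimal SymPy sketch (the symbol names are my own, not from the lecture) verifying that t x(t) transforms to -dX(s)/ds for the right-sided signal x(t) = e^{-at} u(t):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.symbols('a', positive=True)

# X(s) = 1/(s + a) for the right-sided signal x(t) = e^{-a t} u(t)
X = sp.laplace_transform(sp.exp(-a*t), t, s, noconds=True)

# differentiation property: t x(t)  <->  -dX(s)/ds
lhs = sp.laplace_transform(t * sp.exp(-a*t), t, s, noconds=True)
rhs = -sp.diff(X, s)

print(sp.simplify(lhs - rhs))  # 0, so the two sides agree
```

Applying the property m - 1 times in succession is what produces the transform pair 1/(s + a)^m for t^{m-1} e^{-a t} u(t) / (m-1)!.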
So, instead of alpha, let us mark -alpha on the sketch; you just need to put a minus sign everywhere. Based on this, you can now identify two clear regions: the region I am shading in red, where Re(s) > -Re(alpha), and the region I am shading in blue, where Re(s) < -Re(alpha). You can think of the blue region as the so-called left-sided region, because it lies to the left of the critical line, which I mark in green; the red region lies to the right of that critical line. And therefore we have two possibilities as far as the corresponding signal goes: it can be right-sided or left-sided. The mnemonic is very simple: the right-sided region of convergence corresponds to the right-sided signal, and the left-sided region of convergence corresponds to the left-sided signal. It is that simple. Let us write that down: right-sided to right-sided, left-sided to left-sided. By right-sided, I mean a signal of the form something multiplied by u(t), and by left-sided, of the form something multiplied by u(-t). There could, of course, be a finite shift on these two signals, backwards or forwards; whatever it is, essentially a right-sided region of convergence corresponds to a right-sided signal, and a left-sided region of convergence corresponds to a left-sided signal. So, in that sense, the choice between the two for a given such term is not difficult. But what happens when a Laplace transform has multiple such terms? Let us look at that possibility now. Suppose a Laplace transform is of the following kind.
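As a quick illustration of the mnemonic: SymPy's inverse Laplace transform always assumes the right-sided region of convergence, so it returns the right-sided signal. A small sketch, with symbol names of my own choosing:

```python
import sympy as sp

t, s = sp.symbols('t s')
a = sp.symbols('a', positive=True)

# SymPy assumes the right-sided ROC Re(s) > -a, so 1/(s + a)
# inverts to the right-sided signal e^{-a t} u(t)
x = sp.inverse_laplace_transform(1/(s + a), s, t)
print(x)  # e^{-a t} times the unit step u(t)
```

The left-sided counterpart -e^{-a t} u(-t) has the very same algebraic expression 1/(s + a), but with ROC Re(s) < -a; only the stated region of convergence tells the two signals apart.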
Now, you see, without loss of generality, let us assume that -Re(alpha) < -Re(beta); it is the negatives of the real parts that matter here. How do you deal with this kind of expression, whatever the regions of convergence may be? In fact, there are only three possible regions of convergence here, and they are based on the signs of Re(s + alpha) and Re(s + beta). Given the condition -Re(alpha) < -Re(beta), in the s-plane we have a situation like this: -Re(alpha) somewhere here, let us say, and -Re(beta) somewhere to its right. Either we could have Re(s + alpha) < 0 and Re(s + beta) < 0, which amounts to saying Re(s) < -Re(alpha), because that automatically ensures the second condition; since -Re(alpha) is, of course, less than -Re(beta), nothing more needs to be said. This is the region I am now going to shade in red. Or I could have the region I am shading in blue: I draw a vertical line here and shade a wavy blue region, in which Re(s) lies between -Re(alpha) and -Re(beta). And finally, we have the third possible region, which I shade in green, corresponding to Re(s) > -Re(beta). There are only these three possible regions. Of course, for two of them, the ones which extend to the so-called infinity contours, you have to worry about whether the infinity contour is included or not. It is a minor detail, but an important one.
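The three-way classification of candidate regions can be phrased as a tiny helper. This is purely illustrative code of my own (the function name and the test numbers are not from the lecture), assuming the ordering -Re(alpha) < -Re(beta):

```python
def roc_region(s0: complex, alpha: complex, beta: complex) -> str:
    """Classify which of the three candidate ROCs contains the point s0,
    assuming -Re(alpha) < -Re(beta)."""
    left, right = -alpha.real, -beta.real  # the two critical vertical lines
    if s0.real < left:
        return "left-sided"    # red region:   Re(s) < -Re(alpha)
    if s0.real > right:
        return "right-sided"   # green region: Re(s) > -Re(beta)
    return "two-sided"         # blue strip:  -Re(alpha) < Re(s) < -Re(beta)

# with alpha = 2 and beta = -1 the critical lines sit at Re(s) = -2 and Re(s) = 1
print(roc_region(-5 + 0j, 2 + 0j, -1 + 0j))  # left-sided
print(roc_region(0 + 0j, 2 + 0j, -1 + 0j))   # two-sided
print(roc_region(3 + 0j, 2 + 0j, -1 + 0j))   # right-sided
```

The middle strip corresponds to a two-sided signal: right-sided with respect to the pole at -alpha and left-sided with respect to the pole at -beta.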
But apart from that, there are essentially three possible regions of convergence. Now, it is very clear that any one of these regions falls cleanly to one side of each specific vertical line. Take, for example, the blue region of convergence: it is clearly to the right of the vertical line through -Re(alpha), and to the left of the vertical line through -Re(beta). So there is a clear right-or-left demarcation with respect to each vertical line. The same thing holds for the red region, which is clearly to the left of both lines, and the green region, which is clearly to the right of both. So with respect to any particular vertical line, it is very clear whether your region of convergence lies to the left or to the right; no ambiguity on that. So now, if I can break this Laplace transform into terms that each involve only one of these vertical lines, let us do that. That is done by a process called partial fraction expansion. Here it is very easy to make a partial fraction expansion: it is expected to be of the form A/(s + alpha) + B/(s + beta) for some constants A and B. And how do we determine A and B? It is very simple: multiply both sides by (s + alpha) and put s + alpha = 0. You get X(s)(s + alpha) = 1/(s + beta), and this should be equal to A + B(s + alpha)/(s + beta). When you put s + alpha = 0, that is, s = -alpha, the B term vanishes and A survives, and on the left you get -alpha in place of s. So we get A = 1/(beta - alpha). We can similarly find B, which you can do as an exercise. So you see, when I have simple factors like this, there is no problem.
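The cover-up computation of A above, and the exercise of finding B, can be checked symbolically. A minimal SymPy sketch, with my own symbol names:

```python
import sympy as sp

s, alpha, beta = sp.symbols('s alpha beta')
X = 1/((s + alpha)*(s + beta))

# cover-up: multiply by the factor, cancel it, then set that factor to zero
A = sp.cancel((s + alpha)*X).subs(s, -alpha)  # = 1/(beta - alpha)
B = sp.cancel((s + beta)*X).subs(s, -beta)    # = 1/(alpha - beta)

# the two simple-pole terms reassemble into the original X(s)
print(sp.simplify(A/(s + alpha) + B/(s + beta) - X))  # 0
```

Note the `sp.cancel` call: it performs the cancellation of the common factor explicitly, so that the substitution s = -alpha does not produce a 0/0 form.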
Breaking it into a partial fraction decomposition is no problem. But what do we do when we have repeated factors? Let us see that case too. For example, you could have X(s) = 1/((s + alpha)^2 (s + beta)), in which case there are different ways to decompose into partial fractions, but the type that we wish to use is of the form A0/(s + alpha) + A1/(s + alpha)^2 + B/(s + beta). Now, B is not a problem at all: it can be determined by multiplying both sides by (s + beta) and setting s = -beta. A1 can also be determined with ease, by multiplying both sides by (s + alpha)^2. In fact, let us do that. If we do, we get X(s)(s + alpha)^2 = A0(s + alpha) + A1 + B(s + alpha)^2/(s + beta). Of course, all this while we are assuming that beta is not equal to alpha; that is implicit. Now, if we put s = -alpha, we can obtain A1. What about A0? For A0, we will need to differentiate. We differentiate both sides with respect to s and then put s + alpha = 0, that is, s = -alpha. You see, X(s)(s + alpha)^2 is essentially 1/(s + beta), so we have 1/(s + beta) = A0(s + alpha) + A1 + B(s + alpha)^2/(s + beta). When we differentiate both sides with respect to s and put s = -alpha, the A0 term survives, the constant A1 vanishes under differentiation, and the B term vanishes because it retains a factor of (s + alpha). So by making one differentiation, we can solve for A0. This is the idea: for higher multiplicities, we have to differentiate successively. I recommend you review the whole idea of partial fraction expansion. I have given some pointers in this session.
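The repeated-factor recipe, cover-up for A1 and one differentiation for A0, can likewise be verified symbolically. A sketch with my own symbol names, assuming alpha is not equal to beta:

```python
import sympy as sp

s, alpha, beta = sp.symbols('s alpha beta')
X = 1/((s + alpha)**2 * (s + beta))

F = sp.cancel((s + alpha)**2 * X)    # F(s) = X(s)(s + alpha)^2 = 1/(s + beta)
A1 = F.subs(s, -alpha)               # cover-up at the double pole
A0 = sp.diff(F, s).subs(s, -alpha)   # one differentiation yields A0
B = sp.cancel((s + beta)*X).subs(s, -beta)

# reassemble the expansion and compare with the original X(s)
recon = A0/(s + alpha) + A1/(s + alpha)**2 + B/(s + beta)
print(sp.simplify(recon - X))  # 0, so the expansion is correct
```

For a pole of multiplicity m, the same pattern continues: the k-th derivative of F(s), evaluated at the pole and divided by k!, gives the coefficient of the (m - k)-th power term.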
We shall see more in the next session. Thank you.