Welcome everyone to this next lecture on describing functions. We began last time by defining the gain of an operator, possibly nonlinear, and for that purpose we introduced the notion of quasi-linearization: the word "quasi" refers to the fact that the linearization depends on the input. Among the various possible linearizations we fixed a notion of optimality, and we said that the describing function is the optimal gain with respect to a reference input, the reference input being A sin ωt. Because the linearization depends on the input it is called a quasi-linearization, and the word "gain" is what makes it a linearization. So the describing function is nothing but an optimal quasi-linearization of a nonlinear system. Let us recapitulate this in a little more detail. What we did last time: we let the reference r enter the nonlinear system N, producing the actual output, and we compare this with the output of a linear approximation, which we called the approximant H. We assumed that r is a signal of finite average power. What is finite about it, and what is average about it? We integrate r(t)² from 0 to T; that gives energy. Dividing by T gives average power, and then we let T tend to infinity. If you do not let T tend to infinity, this quantity is always finite: for a function r that is bounded at every time instant, there is no way it becomes infinite over a finite window. The word "finite" refers to the limit as T → ∞ of (1/T) ∫₀ᵀ r(t)² dt being finite; to say this limit exists as T tends to infinity is to say that the signal r has finite average power. We took such an r, and we assumed that N is bounded-input bounded-output stable, which means that the actual output y_actual is also of finite average power.
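The average-power definition above can be checked numerically for the reference sinusoid. This is a minimal sketch of my own (not from the lecture): for r(t) = A sin ωt the average power over one period is A²/2, independent of ω, so the limit over T → ∞ is A²/2 as well.

```python
import numpy as np

# Approximate the average power (1/T) * integral_0^T r(t)^2 dt
# for r(t) = A*sin(omega*t); one full period suffices for a periodic signal.
A, omega = 2.0, 3.0
T = 2 * np.pi / omega
n = 100000
t = np.linspace(0.0, T, n, endpoint=False)
r = A * np.sin(omega * t)
P = np.sum(r**2) * (T / n) / T  # Riemann sum for (1/T) * integral of r^2
print(P)                        # close to A**2 / 2 = 2.0
```

The same computation over longer and longer windows converges to the same value, which is what "finite average power" asks for.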
Then we asked which H should be fitted so that the error e has the least average power. Since y_actual has finite average power, one can always take H = 0, in which case the error e equals y_actual and is itself of finite average power; so one can try to minimize over H, and at the best approximation H the error e is also of finite average power. We want to find the H for which this average power of e is least. That minimization problem is what makes the describing function the best solution, and it turns out that the optimum H is not unique: the optimal value of the error power is unique, but the optimizing H is not. Any stable linear system whose transfer function, evaluated at s = jω, equals the first-harmonic Fourier coefficients for this particular sinusoid also turns out to be an optimal quasi-linearization. Let us see this in a little more detail. To find the optimal quasi-linearization we assume N is BIBO stable and time invariant. Time invariance is required because we want y_actual to also be a periodic signal when r(t) = A sin ωt. Once we calculate y(t), we expand it into its Fourier series: a1r sin ωt + a1i cos ωt + a2r sin 2ωt + a2i cos 2ωt, and so on. We take a1r and a1i and use them to define the so-called describing function: the describing function of that nonlinear system, which depends on both A and ω, is defined as η(A, ω) := (a1r + j·a1i)/A, the first-harmonic coefficients divided by the amplitude of the input sinusoid. This is the definition of the describing function. Now take any stable transfer function H, that is, one with all its poles in the left half complex plane, and evaluate it at s = jω; if H(jω) = η(A, ω), then any such H is an optimal quasi-linearization for that nonlinear system. So one can notice that there is a lot of non-uniqueness here.
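The definition η(A, ω) = (a1r + j·a1i)/A can be computed numerically for any memoryless nonlinearity. This is a sketch of my own (the function name `describing_function` is mine, not the lecture's), validated against the signum nonlinearity, whose describing function the lecture derives from first principles as 4/(πA).

```python
import numpy as np

def describing_function(N, A, omega=1.0, n=200000):
    """Estimate eta(A, omega) = (a1r + j*a1i)/A from the first-harmonic
    Fourier coefficients of y(t) = N(A*sin(omega*t)) over one period."""
    T = 2 * np.pi / omega
    t = np.linspace(0.0, T, n, endpoint=False)
    y = N(A * np.sin(omega * t))
    a1r = (2.0 / n) * np.sum(y * np.sin(omega * t))  # coefficient of sin(omega t)
    a1i = (2.0 / n) * np.sum(y * np.cos(omega * t))  # coefficient of cos(omega t)
    return (a1r + 1j * a1i) / A

# Sanity check on the signum nonlinearity: eta(A) = 4/(pi*A), purely real.
A = 2.0
eta = describing_function(np.sign, A)
print(eta)  # real part close to 4/(pi*A) ~ 0.6366, imaginary part ~ 0
```

Because signum is odd and memoryless, the imaginary part vanishes and the result depends on A alone, exactly as stated above.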
In other words, you are given one specified point in the complex plane. Any stable transfer function H whose Nyquist plot passes precisely through this point, and passes through it precisely at s = jω and not at any other frequency, qualifies as an optimal quasi-linearization. Now we are going to see the describing functions of some more nonlinearities. We already evaluated, from first principles using the Fourier coefficients, the describing function of the signum nonlinearity, whose input-output graph looks like a step. We had some debate about whether at u = 0 the output should be 0, +1 or −1; as I said, the Fourier coefficients do not depend on the value of y at just one point, but on it in an aggregate sense. Now, for the purpose of calling this particular nonlinearity odd: it is already memoryless, and because it is memoryless we could express y as a graph over u instead of as a dependence on time t, in addition to it being time invariant. In addition, it is also an odd function. That is what was helpful in saying that the describing function in such a case is a real function of A and ω; and because it is memoryless, it is a function of the amplitude A alone. We already saw that the graph of the describing function in terms of A looks like this: for a very small amplitude of the incoming signal the amplification is very high, while for a large amplitude the amplification is very low. So let us see some more examples, for example the saturation nonlinearity. Consider the saturation nonlinearity: what do we expect its graph to be? It has slope 5 as long as the input u varies within the range ±δ.
Beyond that it is saturated, to the value 5δ. Let us try to draw the graph of the describing function as a function of A. Do we expect the describing function to be real? Yes, because this is an odd function of u: if you change any input value to its negative, the output value becomes just the negative. In addition to being time invariant and memoryless, it is also odd. Because it is memoryless, the describing function is a function of A only, and because it is an odd nonlinearity there is only a real part; the imaginary part is zero. Since the slope is 5, for all amplitudes up to δ, if you give A sin ωt as the input with A ≤ δ, the output is just scaled by 5, and we expect the describing function to equal the constant 5. But beyond that there is more and more clipping. What exactly is the clipping? For A equal to 10, say, the output saturates, as we saw briefly in the previous lecture, over a larger and larger fraction of the period of the signal as the amplitude grows. That is why the describing function is monotonically decreasing for amplitudes A greater than δ. We will see an exact formula for this, in fact with a little more generality, even though the derivation is fairly cumbersome; with a lot of careful manipulation, keeping track of the time instants t at which the output saturates, one can integrate explicitly and find it. So let us reproduce from Vidyasagar a formula for the nonlinearity that has slope M1 inside ±δ and slope M2 beyond. This is an odd nonlinearity.
The two outer lines have slope M2, and in the middle the slope is M1. For such an input-output map, the saturation nonlinearity is the special case in which M2 = 0, and the dead zone is the special case in which M1 = 0. This can also be thought of as a hardening spring: a spring whose spring constant goes on increasing or decreasing. If the slope M2 is larger than M1, and M1 has the interpretation of a spring constant, then one can think of this as a spring that hardens when extended: as it is extended more and more, the spring hardens. Of course, this is an approximation, since real hardening is gradual; here, suddenly, for amplitude larger than δ some aspect of the signal encounters an amplification of just M2, while for all lower values of amplitude the amplification is M1. So how do we expect the describing function to look? Up to A = δ it equals M1, after which it moves towards M2: either it comes down or goes up depending on whether M2 is smaller or larger than M1. What exactly is the closed-form expression? For this we reproduce a formula that has been calculated carefully. So we decided to reproduce the formula for the describing function. It involves a good amount of careful calculation, but the formula itself is not very unexpected. It is for this nonlinearity that we are finding the describing function: for input u outside the range (−δ, δ) the slope is M2, on both sides, and for u inside that range the slope is just M1. For this particular example, the formula goes as follows.
It has been the practice to use a lookup table, where one uses a readymade formula and applies it to one's example. The formula takes a good amount of labour to prove, but once proved it is extremely handy, and one often uses a lookup table of describing functions of many standard nonlinearities. This one I have taken from Vidyasagar's book on nonlinear systems analysis. Of course, one should ask: for what range of A does this formula hold? That is not difficult to answer. When the amplitude is at most δ, the describing function is just M1, that is, η(A) = M1 for A ≤ δ; the formula is applicable for A > δ. One could check whether the two expressions give the same value at A = δ. We do not expect the describing function of this nonlinearity to become discontinuous as a function of A. Why? Because when A = δ, a zero amount of the signal gets magnified by the slope M2, and for A slightly more than δ only an infinitesimally small amount gets amplified by M2; hence we expect continuity at A = δ. That is indeed what one can verify by putting A = δ into the formula and checking that it equals M1. The formula looks pretty complicated, but it turns out to simplify if we introduce the function f(x) := (2/π)(sin⁻¹x + x√(1 − x²)) for x ≤ 1, and f(x) := 1 for x > 1. The entire difficult part then gets absorbed into f(δ/A), and the describing function becomes η(A) = M2 + (M1 − M2) f(δ/A). It is expected that δ/A plays a role, because the input value at which the new slope starts acting determines the amplitude at which the formula changes.
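The formula above can be coded directly. This is a sketch of my own (the names `f` and `eta` are mine), checking the continuity claim at A = δ and the monotone decrease for large A in the saturation case M1 = 5, M2 = 0.

```python
import numpy as np

def f(x):
    """f(x) = (2/pi)*(asin(x) + x*sqrt(1 - x^2)) for x <= 1, and 1 for x > 1."""
    if x >= 1.0:
        return 1.0
    return (2.0 / np.pi) * (np.arcsin(x) + x * np.sqrt(1.0 - x * x))

def eta(A, delta, M1, M2):
    """Describing function of the odd piecewise-linear nonlinearity:
    slope M1 on [-delta, delta], slope M2 outside."""
    return M2 + (M1 - M2) * f(delta / A)

# Continuity at A = delta: f(1) = 1, so the formula reduces to M1 exactly.
print(eta(1.0, 1.0, 5.0, 0.0))   # 5.0
print(eta(10.0, 1.0, 5.0, 0.0))  # much smaller: heavy clipping at A = 10
```

Since f(1) = 1, the A > δ branch joins the constant value M1 continuously, as argued in the lecture.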
Let us see what this evaluates to for the saturation nonlinearity: the case M2 = 0, M1 = 1. We take the standard saturation nonlinearity, whose slope equals 1 over the range (−δ, δ), with δ also equal to 1. The formula we have written then gives η(A) = f(1/A), where f(x) = (2/π)(sin⁻¹x + x(1 − x²)^(1/2)). So one can apply the formula to this special case. Let us just draw a graph: f(x) itself increases from 0 to 1 as a function of x, and what we have is f(1/A); for A larger than 1, this is the describing function of the saturation. One can plot this explicitly, for example in Scilab, and check that this is indeed the case; if time permits, we will plot this in Scilab in this course. One can also verify the describing function of the so-called dead-zone nonlinearity. What is the dead-zone nonlinearity? A nonlinearity which is again memoryless, time invariant and odd. It has some slope, say equal to 1, outside a certain range, but within the range ±1 it is just dead: there is no output response as long as the input magnitude is smaller than 1. Here we expect the gain to be 0 initially, and then to tend to the slope 1: for a very large amplitude, the zone over which the nonlinearity is dead is a very small fraction of the signal, and hence we expect the describing function to tend eventually to 1. This one can check by evaluating the same formula with M1 = 0. The next thing we will do is take an example of a nonlinearity which has some memory, the jump hysteresis, and derive the formula for that case.
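In place of the Scilab plot, here is a quick numerical cross-check, a sketch of my own with helper names of my own: the first-harmonic computation should agree with f(1/A) for the saturation and with 1 − f(1/A) for the dead zone with unit slope outside ±1.

```python
import numpy as np

def numeric_df(N, A, n=200000):
    """First-harmonic describing function of y(t) = N(A*sin t)."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    y = N(A * np.sin(t))
    return (2.0 / n) * (np.sum(y * np.sin(t)) + 1j * np.sum(y * np.cos(t))) / A

def f(x):
    return 1.0 if x >= 1.0 else (2.0 / np.pi) * (np.arcsin(x) + x * np.sqrt(1 - x * x))

saturation = lambda u: np.clip(u, -1.0, 1.0)     # slope 1 on [-1, 1], then flat
dead_zone = lambda u: u - np.clip(u, -1.0, 1.0)  # dead for |u| <= 1, slope 1 beyond

A = 4.0
print(numeric_df(saturation, A).real, f(1.0 / A))       # agree: eta_sat = f(1/A)
print(numeric_df(dead_zone, A).real, 1.0 - f(1.0 / A))  # agree: eta_dz = 1 - f(1/A)
```

Note that saturation plus dead zone is the identity map u ↦ u, so their describing functions sum to 1 for every A, another instance of the additivity discussed shortly.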
That is our first example where the describing function turns out to be complex: it has an imaginary part as well. So consider the so-called jump hysteresis. What is the jump hysteresis? The input, which we call x for this purpose, enters the system and the output is y. Whether x is increasing or decreasing decides whether we are on the upper branch or the lower branch: one branch is for ẋ > 0, the other for ẋ < 0. One might ask what happens when ẋ = 0. For the reference signal x(t) = A sin ωt, ẋ is zero only at isolated time instants, and at those instants the output jumps from one branch to the other. The slope of each branch is m, and the jump amount is 2b. So y(t) = m·x(t) + b for ẋ > 0 and y(t) = m·x(t) − b for ẋ < 0. Let us evaluate the describing function from first principles for this hysteresis, which we call the jump hysteresis. What is "jump" about it? When x has increased fully and starts decreasing, the output jumps down by the amount 2b; on the other hand, after x has decreased fully and starts increasing again, the output jumps up by the amount 2b. For this particular nonlinearity we will derive both the real and the imaginary part of the describing function, and while doing so we will note some very important properties of the describing function. So let us recap the computation procedure: evaluate the output y(t), and find its Fourier series, in particular the first-harmonic coefficients.
We are not interested in the other harmonics; the first-harmonic coefficients a1r and a1i are the two things we require. Then we define the describing function, which in general can depend on both A and ω, as (a1r + j·a1i)/A. Now notice: if two nonlinearities are added, so that the outputs add to y1 + y2, then the Fourier series of the sum of two signals is just the sum of their Fourier series. The Fourier-series extraction procedure is linear in its argument: when you add two signals y1 + y2, you just add their Fourier coefficients. That is what makes this particular step in the procedure linear in the nonlinearity as well. And the linearity applies not just to the addition of two signals but also to the scaling of a signal by a static constant: if a nonlinearity's output gets scaled by a constant k, then the Fourier coefficients just get scaled by k, and hence the describing function also gets scaled by the same amount k. What does this property mean? If you have nonlinearities N1 and N2, and you have done a lot of work to calculate their describing functions, and you are now told that the output is the sum of the outputs of the two nonlinearities, then the describing function of this big block, call it the describing function of N1 + N2, again a function of A and ω, turns out to be nothing but η_N1(A, ω) + η_N2(A, ω). Why did we conclude this? Because, as we said, the output of the big block is nothing but the sum of the two individual outputs.
Now, if you are given the Fourier coefficients of the two outputs, is it very difficult to extract the Fourier coefficients of their sum? No. Call F the map that takes a periodic signal y and gives you a0, a1r, a1i, a2r, a2i, and so on. This map is linear in the signal y. If you multiply y by a constant k, meaning that at every time instant the value y(t) is scaled to k·y(t), then k·y goes to k·a0, k·a1r, k·a1i, and so on. Similarly, y1 + y2 goes to a0⁽¹⁾ + a0⁽²⁾, and so on, where a0⁽¹⁾ and a0⁽²⁾ are the first entries for y1 and y2. To say that the map F, which takes a periodic signal y and gives you its Fourier coefficients, is linear in the signal means exactly that these Fourier coefficients can just be added and scaled. Of course, one should check that this space of periodic signals is a vector space before introducing the sum: if you take two signals that are periodic with the same period T and add them, you again get a signal periodic with period T, and if you scale one by a constant k it is again periodic with the same period. That is what allows us to say that if one has done a lot of work to calculate the describing functions of nonlinearities N1 and N2, the describing function of the net nonlinearity N1 + N2, as defined in this block diagram, is just the sum of the describing functions. This comes about because the procedure for calculating the describing function goes through the Fourier-coefficient extraction, and that extraction happens to be linear. Why is it linear? Because it is an integration operation, and integration is linear in its argument, the signal y. We will see quickly how this is a very useful property.
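This additivity can be checked numerically. A minimal sketch of my own (helper names mine), using a pure gain of 10 and the standard saturation as the two nonlinearities:

```python
import numpy as np

def numeric_df(N, A, n=200000):
    """First-harmonic describing function of y(t) = N(A*sin t)."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    y = N(A * np.sin(t))
    return (2.0 / n) * (np.sum(y * np.sin(t)) + 1j * np.sum(y * np.cos(t))) / A

n1 = lambda u: 10.0 * u               # pure gain of 10 (a linear block)
n2 = lambda u: np.clip(u, -1.0, 1.0)  # standard saturation

A = 3.0
df_of_sum = numeric_df(lambda u: n1(u) + n2(u), A)
sum_of_dfs = numeric_df(n1, A) + numeric_df(n2, A)
print(df_of_sum, sum_of_dfs)  # identical up to rounding: the DF is additive
```

The two results match to machine precision, because every step of the computation (evaluating outputs, summing, extracting harmonics) is linear in the output signal.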
Suppose we have calculated the describing function for the standard saturation nonlinearity, over the range (−1, 1). Now suppose we are told that this is not quite what we wanted, but that we in fact wanted the output scaled by a constant k, say k = 2. Notice that the breakpoints at ±1 on the input axis are not changed by that; the change is in the output: this is the nonlinearity y2, whose slope is 2. So notice that this nonlinearity and the standard saturation are closely related: the output just has to be scaled by k = 2 to get from one to the other. Because of the scaling property, if the describing function of the standard saturation has been calculated like we did before, a curve starting at the value 1 and decreasing beyond A = 1, then all we have to do is scale it by 2: the range over which it is constant is A ≤ 1 again, but it starts from the value 2. This one is just the other multiplied by 2. That was an example of pure scaling; let us take another one, where we add two nonlinearities. One nonlinearity happens to be just multiplication by 10; the other happens to be the standard saturation. The input A sin ωt produces y1 through the first and y2 through the second, and the two are added to give the net output y. Let us plot both describing functions on the same graph; both blocks are odd, memoryless, time-invariant nonlinearities (the first, of course, is actually linear). The first gives the constant at the value 10, while the second starts at 1 and comes down towards 0; note the graph is not to scale. This is for nonlinearity N1 and this for nonlinearity N2. What about the net one? What is the describing function of the nonlinear map from the reference input to y? It is just the sum of both, which starts at 11 and comes down towards 10.
To say that we can just add these two describing functions as functions of A is what makes the describing function linear in its argument, the nonlinearity. We are going to use this very crucially to find the describing function of the jump hysteresis. Of course, one can do it from first principles also, but we are going to do almost that; this way, we also benefit by understanding the structure that describing functions have between each other. So take the jump hysteresis graph again. Recall: when the input is increasing, the output follows the amplification by m, except shifted up by the amount b, and when the input is decreasing, the shift is −b, again with scaling by m. Let us first consider the case m = 1, with input A sin ωt. The output has two parts. One is the input itself; the other is the shift. After the input has gone up to its peak, there is a shift down by the amount 2b: one amount b brings it back to the unshifted graph, and another amount b brings it b lower. At the trough it shifts back up. So this is how the graph looks, and notice that it is the superposition of two graphs, which we draw on another page against the time axis. One of them is just A sin ωt; the full output is A sin ωt + b for ẋ > 0 and A sin ωt − b for ẋ < 0. Notice that we have taken the amplification equal to 1; that is why we can see that it shifts by the amount 2b, i.e. +b on one side and −b on the other. So we write the output as the sum of two signals: the sinusoid, plus a jump term that is +b, then −b, then +b again. When does this term switch sign? It switches sign when the derivative of sin ωt changes sign.
The derivative of sin ωt is, up to the factor ω, nothing but cos ωt, so notice that this jump term is actually b times the signum function applied to cos ωt. The conclusion we are drawing from the figure is: we have the signal A sin ωt, and what we have added to it is b·sgn(cos ωt). How did we conclude that? We noted that the original signal is A sin ωt. When A sin ωt is increasing, the output is shifted up by the amount b. Why by b? Recall the jump hysteresis figure: when ẋ was positive, the signal was scaled by m, with m = 1 for now, and shifted up by the amount b. As soon as ẋ goes from positive to negative, the output jumps down by 2b and follows the lower branch, which is again a scaling by m but the amount b below the line through the origin with slope m. The upper branch is the amount b above that line, so the total jump is exactly 2b, and the picture is symmetric, symmetric about the origin, not really about the x or y axis, because of the dependence on ẋ: there is the amount b on one side and the amount b on the other. In that sense it is symmetric. So what does this mean? Coming back to the time plot: after sin ωt has reached its peak and starts decreasing, the shift goes down by 2b, so the output is b below A sin ωt. But the derivative of sin ωt changing sign is nothing but cos ωt changing sign.
So we are adding +b when cos ωt is positive and −b when cos ωt is negative: this is nothing but b times the signum function operated on cos ωt. What does this mean? It means the describing function of the jump hysteresis can be very easily calculated: it equals the constant m, the scaling, which we had taken equal to 1, plus j times the describing function of the signum nonlinearity scaled by b. The describing function of the signum nonlinearity we found earlier is 4/(πA), so η(A) = m + j·4b/(πA). Why did we bring the j in? Because the signum nonlinearity is being applied to the cos ωt signal, and its contribution comes precisely as the first-harmonic coefficient of cos ωt: it is the coefficient of cos, which is nothing but a1i, while the m·A sin ωt part gives a1r. The 4/(πA) factor is nothing but the describing function of the signum nonlinearity, but there the nonlinearity was odd and driven by sin ωt, hence it contributed to the real part only; here it comes with the cosine term, and hence we have multiplied by j. And note the scaling by the amount b: the jump is not between +1 and −1 but between +b and −b, hence the factor b. So this is how the describing function of the jump hysteresis looks. We are no longer able to plot it as a real function of A; we have to plot it in the complex plane.
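As a first-principles check of η(A) = m + j·4b/(πA), here is a small numerical sketch of my own, using sgn(cos t) to encode the branch switching of the jump hysteresis:

```python
import numpy as np

A, m, b = 2.0, 1.0, 0.5
n = 200000
t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
x = A * np.sin(t)
# Upper branch (x increasing, cos t > 0): y = m*x + b; lower branch: y = m*x - b.
y = m * x + b * np.sign(np.cos(t))
a1r = (2.0 / n) * np.sum(y * np.sin(t))  # first-harmonic sine coefficient
a1i = (2.0 / n) * np.sum(y * np.cos(t))  # first-harmonic cosine coefficient
eta = (a1r + 1j * a1i) / A
print(eta)  # close to m + 1j * 4*b/(pi*A)
```

The sine part recovers m exactly, and the cosine part recovers 4b/(πA), confirming that the imaginary part comes entirely from the jump term b·sgn(cos ωt).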
If m is positive, the plot can be thought of like this: for A close to 0 the imaginary part is very large, and the curve comes down like this as A tends to infinity; as the amplitude of the signal tends to infinity, the jump amount is relatively very small, the block amounts to just amplification by m, and the describing function comes down onto the real axis. Here is the imaginary axis of the complex plane, and here the real part. The significance of plotting the describing function as a curve in the complex plane will become very clear very soon, when we use the describing function for finding periodic orbits. Because the describing function is complex here, it is no longer real as it was for the odd memoryless nonlinearities we studied so far. Here the describing function depends on m, A and b: for the jump hysteresis, 2b was the amount by which the output jumps, m was the slope for the case that ẋ is positive or negative, and A was the amplitude of the input signal A sin ωt. When we plot this, with m some positive number, which is what I have taken here, the curve starts, for A close to 0, at a point with a very large imaginary part; the imaginary part decreases as A increases, and the curve finally comes down to the real axis as A tends to infinity. What is the reason that it comes to the real axis for A tending to infinity? Go back to the plot of the jump hysteresis: when the amplitude is very large, the jump amount 2b is a very small fraction of the total signal, because the output switches by 2b no matter whether the amplitude is 10 or 100 or 1000.
So the fraction of the signal due to the jump is very small, and hence the imaginary part is very small. The imaginary part arises in the first place because the block is no longer memoryless: the output depends on whether ẋ is positive or negative, and it is not an odd memoryless nonlinearity. Now we are going to see how this describing function is to be used for finding periodic orbits. That is the next important topic, and historically it has been the reason that describing functions were investigated: finding the amplitude and frequency of limit cycles. As we noted at the very start of this lecture, and also a few lectures ago, robust sustained oscillations can be implemented only by nonlinear circuits, and we saw how the saturation nonlinearity, for example, can give robust sustained oscillations with a third-order linear system in the forward path. Let us come back to that example: we have G(s) and the saturation nonlinearity in the loop, with output y, where G(s) = 1/((s + 1)(s + 2)(s + 3)), and some signal a sin ωt circulating in the loop. Let us first take the case that the nonlinearity is just a pure constant; that can be thought of as the case when the amplitude is smaller than the range over which the saturation is linear. As long as the amplitude is within that range, the system behaves as a linear system, and one can think of the saturation as just a constant k. When would we have periodic orbits in the closed loop? Assume the external input is 0: then we have some signal r here, it gets amplified by k, goes through the rest of the loop, comes back, and must equal r again.
So notice: r(t) gets multiplied by k, then G(s) acts on it, and there is also a minus sign from the feedback, applied before G(s) operates, and that gives you back r; this r and that r are the same signal. Please ignore the small font difference between the two in the figure. This is nothing but to say that (1 + G(s)·k) r(t) = 0. We will say a signal r(t) = a sin ωt is a periodic solution if it satisfies this differential equation. What is the meaning of that? When we substitute a sin ωt into it, we get 1 + G(jω)·k = 0 at that ω. For linear systems, this is the necessary and sufficient condition for a sin ωt to be a solution. Of course, one might say that a sin ωt is a solution even if this is not zero, because we could just take the amplitude a equal to 0; so we should insist that a ≠ 0, that is, a non-trivial periodic solution. So when do we have non-trivial periodic orbits in the system? We have them if 1 plus the product of the gains around the loop equals 0. It is an extremely important equation. What does it mean? Consider the gains around the loop: −1 from the feedback sign, k from the constant, G(jω) from the linear system. If you start from r, the net gain around the loop should be equal to +1, equivalently k·G(jω) = −1; it depends on whether your definition of loop gain takes the minus sign into account or not. So r(t) goes around the loop and comes back as the same signal. This is a very hand-waving way of understanding the argument: take a signal at some point in the loop; it undergoes a gain of k, then a gain of −1, and then G(jω). Why is G(jω) the gain?
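Under the assumption G(s) = 1/((s + 1)(s + 2)(s + 3)) from the example, the condition 1 + k·G(jω) = 0 can be solved directly. The denominator at s = jω is (6 − 6ω²) + j(11ω − ω³), so the imaginary part vanishes at ω = √11, where G = −1/60, giving k = 60. A sketch of my own verifying this:

```python
import numpy as np

def G(s):
    return 1.0 / ((s + 1.0) * (s + 2.0) * (s + 3.0))

# 1 + k*G(j*omega) = 0 requires G(j*omega) real and negative: the denominator
# (6 - 6*omega**2) + 1j*(11*omega - omega**3) has zero imaginary part at
# omega = sqrt(11), where G(j*omega) = 1/(6 - 66) = -1/60.
omega = np.sqrt(11.0)
k = -1.0 / G(1j * omega).real
print(omega, k)                    # ~3.3166 and 60
print(abs(1 + k * G(1j * omega)))  # ~0: the harmonic-balance condition holds
```

So for the pure-gain case, a non-trivial sinusoidal solution is possible only at this particular frequency and this particular loop gain, for any amplitude a.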
Because that is the meaning of a transfer function: a transfer function is precisely the gain when you feed an exponential signal into the system, and when you feed sin ωt the amplification is exactly G(jω), at steady state of course, which requires that all the transients have died. How this translates to the describing function, and why for linear time-invariant systems the amplitude does not play a role, is what we will see in the following lecture.