Welcome to this next lecture, on the describing function. We saw some motivation in the previous lecture, and we also saw that for linear time-invariant systems the real and imaginary parts of the frequency response correspond to the sinusoidal and cosinusoidal parts of the output for a sinusoidal input. We are going to use that to define the describing function in this lecture. Recall that for an LTI system with transfer function G(s), if you give A sin ωt as input, the output has a transient part, which we are not going to worry about from now on, and a steady-state part: A Re G(jω) sin ωt + A Im G(jω) cos ωt, which we write together as A G(jω) sin ωt. The output is a real signal with a sinusoidal part and a cosinusoidal part; the cos ωt term, carried by the factor j, we absorb into the complex number G(jω) multiplying sin ωt. Making the gain complex has helped us eliminate the explicit cos ωt term. Does that mean everything in the output is in phase with sin ωt? No: there is an imaginary part hidden inside G(jω). That imaginary part does not mean the signal is complex. We measure only real signals; in the real world we have no oscilloscope that measures an imaginary part. It is just notation: the imaginary part inside G(jω) is to be understood as the cosinusoidal part, the part that is 90 degrees out of phase with sin ωt. Whether it is 90 degrees leading or 90 degrees lagging is extremely important, and that depends on whether the imaginary part is positive or negative. For what class of systems we can do this, we will worry about very soon. Now consider the saturation nonlinearity and give A sin ωt as input: the output equals b sin ωt + c cos ωt plus many more terms. What are these many more terms? Are they transients?
No, this system is memoryless. We can have transients only when the system has memory, where it takes some time for the output to settle for a given input. Here, you suddenly start the input and the output immediately takes its value. What the output has in addition is higher harmonics. These are also missing in linear time-invariant systems: in an LTI system the forced part has only the input frequency; it does not have higher harmonics or subharmonics. In our very first lecture on nonlinear dynamical systems, we said that nonlinear systems can have output harmonics different from the input frequency, including higher harmonics. But we are interested only in b and c. So the describing function of this saturation nonlinearity is η(A, ω) = (b + jc)/A. Why divide by A? Because, as in the LTI situation, the output carries a factor of A, and we are going to think of the describing function as a gain. For an LTI system, b and c each carry a factor of A, those factors cancel neatly, and the net gain is independent of A; but in general the describing function can depend on both A and ω. Of course, we will see that for the saturation nonlinearity there is no dependence on ω; in fact, c will also turn out to be 0, so the describing function is real, the imaginary part is missing. In other words, cos ωt will never appear: for this input, the higher harmonics will all have only sinusoidal terms. There will be no cos 2ωt, no cos 4ωt, and so on. So, how do you find b and c?
One finds the Fourier series of the output, picks out the first-harmonic coefficients, puts the sinusoidal coefficient in the real part, puts the cosinusoidal coefficient, multiplied by j, in the imaginary part, and divides by A. This is nothing but the first-harmonic amplification, and it defines the describing function. What will we do first? We will do this calculation for the sign nonlinearity. Which nonlinearity? The one that just outputs the sign of its input. Then we will analyze a ready-made formula for the saturation nonlinearity, and then we will also see for what class of nonlinearities one can define the describing function this way and what its properties are: for example, when is it real, and when is it independent of ω? All this we will explore in the rest of today's lecture. So first consider the nonlinearity that takes an input and gives you plus or minus 1. What does it do? It just gives the sign of the input at that time instant: y(t) = +1 if u(t) ≥ 0, and y(t) = −1 if u(t) < 0. One might ask why, when u(t) equals 0, we bias the output towards +1. That does not matter: u(t) equals 0 only at isolated instants. Most of the time the input is either positive or negative; for the input A sin ωt it is zero only at t = 0, π/ω, 2π/ω, and so on, and on such a set of measure zero, the value you assign to y(t) makes no difference. We will call this the sign nonlinearity. Some people also call it the signum nonlinearity; signum nonlinearity is the same as sign nonlinearity. Its graph gives output +1 if the input is positive and −1 if the input is negative. So, let us find the Fourier series of the output.
How does the output look for a sinusoidal input? Let us plot the output together with the input. The way I have drawn this figure, A is greater than 1, which is why the input peak exceeds +1, but the output is in phase with the input u(t) = A sin ωt. The first important thing to note is that the output y(t) is periodic, and periodic signals are the ones that have a Fourier series. Let us find its coefficients. We write y(t) = a0 + a1r sin ωt + a1i cos ωt + a2r sin 2ωt + a2i cos 2ωt + …, where a0 is the DC term. To find a0 one integrates y(t) over one period T = 2π/ω. Over one period the positive and negative halves of y(t) cancel, so a0 = 0. Now, how do you find a1r? The basis functions sin ωt, cos ωt, sin 2ωt, … form an orthogonal basis, so to get the component of y along sin ωt all you have to do is project y onto that particular signal: integrate y(t) sin ωt over one period. Since y(t) = +1 when sin ωt is positive and −1 when sin ωt is negative, the product y(t) sin ωt equals |sin ωt|, and the integral reduces to the integral of |sin ωt|. So we will just evaluate this quickly.
We will just go ahead and evaluate this integral, which gives us the coefficient, except that we have to be careful about normalizing factors. With the expansion written as above, the first-harmonic coefficients carry a factor 2/T; without these factors we will not get the correct describing function. So what we are going to calculate is a1r = (2/T) ∫₀ᵀ |sin ωt| dt. Why the absolute value? Because we are multiplying sin ωt by y, and y = ±1 changes sign exactly when sin ωt changes sign. Over half the period sin ωt does not change sign, so this equals (4/T) ∫₀^{T/2} sin ωt dt, with T = 2π/ω and T/2 = π/ω. The integral of sin ωt is −cos ωt / ω, evaluated from 0 to π/ω; the factor ω cancels, and we get a1r = −(2/π)(cos π − cos 0) = −(2/π)(−1 − 1) = 4/π. This is a1r. Recall the expansion: a1r is the part that comes as sin ωt and a1i is the part that comes as cos ωt. The higher harmonics we have decided not to calculate, because the describing function requires only these two.
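As a sanity check, the first-harmonic coefficients can be estimated numerically by sampling one period and projecting the output onto sin ωt and cos ωt. This is a minimal sketch, not from the lecture; the function name `first_harmonic` and the sample count are my own choices:

```python
import numpy as np

def first_harmonic(nl, A, omega=1.0, n=200_000):
    """Estimate a1r (sin) and a1i (cos) first-harmonic coefficients of
    nl(A*sin(omega*t)) over one period, by projection onto the basis.
    The endpoint is excluded so the sample mean equals the period average."""
    t = np.linspace(0.0, 2 * np.pi / omega, n, endpoint=False)
    y = nl(A * np.sin(omega * t))
    a1r = 2.0 * np.mean(y * np.sin(omega * t))  # coefficient of sin(omega*t)
    a1i = 2.0 * np.mean(y * np.cos(omega * t))  # coefficient of cos(omega*t)
    return a1r, a1i

# Sign nonlinearity: +1 for positive input, -1 for negative input.
a1r, a1i = first_harmonic(np.sign, A=2.0)
print(a1r, 4 / np.pi)  # a1r is approximately 4/pi = 1.2732..., independent of A
print(a1i)             # approximately 0
```

The projection uses the orthogonality of sin ωt and cos ωt over one period, exactly as in the hand calculation above.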
Now we are going to find the reason why a1i = 0. Let us draw the input and output again. Notice that the output y(t) is an odd function of time. When do we call a function odd? If f(−x) = −f(x) for all x, we say f is an odd function. Of course, oddness requires f(0) = 0, while we have defined the sign nonlinearity to output +1 when the input is 0; for the purpose of oddness one can put this value to 0. Another important thing to note: the Fourier series is unaffected if you change the value of the periodic signal at just a few points. Why? Because the Fourier coefficients are given by integral formulas, an integral measures area under a curve, and the area contributed by a single point is its (finite) height multiplied by its width, and the width of a point is 0. Hence the Fourier series is unaffected by changing the periodic signal's value at a few points, in other words on a set of measure zero; that is the technical term. So when you give sin ωt as input, the output is an odd function. For the purpose of making it odd, one should perhaps define the sign nonlinearity with value 0 at input 0; but even if you define it as taking the value 1 there, the Fourier series is unaffected.
An odd function has a Fourier series with only sine terms. Why? Because cos nωt, for every n, is an even function. Of course, not all functions are either odd or even. Any integer somebody gives us is either an odd integer or an even integer, but that is not the case with functions: a general function has an odd part and an even part, and you need both to reconstruct it. In our case, all the sine terms together comprise the odd part and all the cosine terms together comprise the even part. Since the output is already odd, you do not require any even function like cos nωt to synthesize it. What this means is that a1i = 0; in fact, for every harmonic the cosine term is not required. So now we are ready to calculate the describing function for this example. The describing function η(A, ω) is allowed to depend on both the amplitude A and the frequency ω of the input sinusoid, and equals (a1r + j a1i)/A. We already calculated a1r = 4/π, so this turns out to be 4/(πA). Notice that this is, first of all, independent of ω; second, it is real. Let us see the significance of this. We have just calculated the describing function of the ±1 signum nonlinearity and found it equal to 4/(πA).
What I have drawn here is not the graph of y(t) versus t; it is the graph of output versus input, y against u. For any positive value of the input the output is +1; for any negative value the output is −1. In that sense, this nonlinearity gives the sign of its input. It does not bother whether the input is increasing or decreasing, or at what frequency; it just tells you whether the input is positive or negative. Now we have calculated a notion of gain for this system: the gain associated to the input A sin ωt, which turns out to be 4/(πA), a function of A. We are all used to Bode magnitude plots, where we speak of low-pass and high-pass filters; there the independent quantity on the horizontal axis is the frequency ω. Here also we are looking at a gain, a complex gain in general, although luckily it turns out to be real for the signum nonlinearity; when it is real in general, we will see very soon. But now we are plotting the gain as a function of amplitude, and we have found the graph 4/(πA). For large values of A it is very small, and that is expected: you give A sin ωt and what you get out is only ±1. However high an amplitude you give as input to the signum nonlinearity, the output is always ±1. So what magnification does this system cause? The magnification is very small if the input has a very high amplitude, because most of the input is, in a sense, being diminished down to just ±1.
In that sense, this graph is reasonable: for large amplitude the gain is very small, because the output always has to be ±1. Gain is like output divided by input, measured in some sensible sense to get one scalar (a complex number in general, but real in our case), and we have decided to quantify both using the first-harmonic coefficients. Using that method, the describing function of the signum nonlinearity is decreasing. When would it be constant? We will see various other examples, including linear systems. But for the signum nonlinearity this decrease makes sense: for very high amplitude the output is just ±1, and for very small amplitude the output is also ±1. For very small amplitude, in other words, there is high magnification, because the output jumps between ±1 even if the amplitude is 0.0001, as long as the input changes sign. Hence the magnification is small for large amplitude and very high for small amplitude. This is the graph of the describing function, the gain of the system, as a function of A. Let us see some more examples. In every nonlinear dynamical systems course, the first example one should deal with is linear systems, but we took the signum nonlinearity because of the ease with which we can calculate its Fourier series. Let us take a constant gain: output y(t) = 5u(t). Here is a system that just takes the current value of the input and multiplies it by 5; an easy rule to implement, no complicated saturation or sign operation. So, what is the gain? The gain is 5.
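The decreasing gain curve just described can be traced numerically; this sketch (my own check, not from the lecture) compares the projected first-harmonic gain η(A) = a1r/A with the closed form 4/(πA) across amplitudes:

```python
import numpy as np

# Describing function of the signum nonlinearity across amplitudes:
# eta(A) = a1r/A, estimated by projection, versus the closed form 4/(pi*A).
t = np.linspace(0.0, 2 * np.pi, 100_000, endpoint=False)
etas = []
for A in [0.1, 1.0, 10.0, 100.0]:
    y = np.sign(A * np.sin(t))
    eta = 2.0 * np.mean(y * np.sin(t)) / A   # first-harmonic gain
    etas.append(eta)
    print(A, eta, 4 / (np.pi * A))           # the two columns agree

# The gain strictly decreases with amplitude, as argued in the text.
```

Small amplitudes give huge magnification and large amplitudes give almost none, which is exactly the shape of the plotted curve.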
How do we see this? The input is A sin ωt and the output is 5A sin ωt. It is a good exercise to calculate the Fourier series: a0 = 0, a1r turns out to be 5A, and a1i turns out to be 0. Hence the describing function is just 5. The important things to note: it is real, independent of A, and independent of ω as well. That is expected, since this system just multiplies by 5. How does the describing function graph look as a function of A? Constant. We prefer plotting only for positive values of A, because the amplitude A of A sin ωt is a positive number. So the describing function, the gain of the system, is independent of A; that is expected for LTI systems. If η depended on ω also, then this single graph would not suffice, because it would depend on both A and ω; but here it depends only on A, and it is a constant, because we have a static linear system that just amplifies by 5. Amplification by any positive constant gives a describing function equal to that number. If it is amplification by a negative number, nothing prevents the describing function from being negative: it will equal −3 for the system that takes input u and gives output y = −3u; its graph sits at −3. The describing function can be positive or negative; it is only a convention to plot it for positive values of the input amplitude A. Now we will see some more examples. Let us try to guess already how the saturation nonlinearity's describing function is expected to look. After that, we will consolidate some properties of the describing function: when it is expected to be independent of A, when independent of ω, when the imaginary part is 0; all those we will quickly see. So, the saturation nonlinearity: this is our input, and this is our saturation nonlinearity.
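The same projection applied to static linear gains confirms the constant describing function; a sketch using the gains 5 and −3 from the text:

```python
import numpy as np

# Describing function of a static linear gain y = k*u is just k,
# independent of the amplitude A (and of the frequency).
t = np.linspace(0.0, 2 * np.pi, 100_000, endpoint=False)
for gain in [5.0, -3.0]:
    for A in [0.5, 2.0, 50.0]:
        y = gain * A * np.sin(t)
        eta = 2.0 * np.mean(y * np.sin(t)) / A  # a1r/A, with a1i = 0 here
        print(gain, A, eta)                     # eta equals the gain every time
```

The factor A in a1r = kA cancels against the division by A, which is exactly why the curve is flat for any static linear system.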
At any input value u, the output is as drawn. Now we plot things as a function of time; the input is always a sinusoid as far as the describing function definition is concerned. If the input amplitude is less than 1, the input never crosses 1. Suppose the input is always varying between +0.6 and −0.6. Since the input varies only inside this range, the output never saturates, and what comes out is again a pure sinusoid, exactly equal to the input, because in this range the output equals the input. So for this range the system is as good as a system with gain 1: the gain of the system is 1 as long as the input amplitude is at most 1. Beyond that, suppose the amplitude is more than 1, so that the level 1 is crossed by u(t). Then whenever the input exceeds 1 the output gets saturated at 1, and something gets cut off at the top of the sinusoid; that is the meaning of the saturation nonlinearity here. When the amplitude is larger and larger, the output is saturated at ±1 most of the time; only for a very small duration of time is it inside the linear range. For very large amplitude, the output equals ±1 for most of the time while the input is actually much larger for most of the time. Then we expect that the magnification is very small.
Why is the magnification very small? Take a very large amplitude, say A = 1000. The output is saturated at 1 most of the time while the input is much larger than 1 most of the time. Even when you integrate and find the exact Fourier coefficient, you will get that the describing function, the magnification, is very small. Of course, there is still a small duration of time over which the output equals the input, but during that time the input is also small, less than 1. Let us draw a figure for A = 100: the saturation level 1 is far below the input peak at 100. Plotting y and u together, the output sits at ±1 most of the time while the input is most of the time much larger than 1. So what is the net amplification? A very small quantity. On the other hand, if the input amplitude were itself less than 1, say 0.6, then the output reproduces the input exactly. This justifies our graph of the describing function as a function of A: it equals 1 up to amplitude 1, like it would for any static unit-gain linear system, and then keeps decreasing; it never touches 0, but tends to 0 for large amplitude. We will check from a formula from the book that it is indeed like this.
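The closed-form describing function of the unit saturation (slope 1, limits ±1) can be checked against a direct numerical projection. The formula below is the standard textbook expression, stated here from memory rather than taken from this lecture, so treat it as an assumption that the code itself verifies:

```python
import numpy as np

def sat(x):
    """Unit-slope saturation at +/-1."""
    return np.clip(x, -1.0, 1.0)

def eta_sat_formula(A):
    """Standard closed-form describing function of unit saturation:
    eta(A) = 1 for A <= 1, else (2/pi)*(arcsin(1/A) + (1/A)*sqrt(1 - 1/A^2))."""
    if A <= 1.0:
        return 1.0
    r = 1.0 / A
    return (2.0 / np.pi) * (np.arcsin(r) + r * np.sqrt(1.0 - r * r))

t = np.linspace(0.0, 2 * np.pi, 200_000, endpoint=False)
results = []
for A in [0.6, 1.0, 2.0, 10.0, 100.0]:
    y = sat(A * np.sin(t))
    eta_num = 2.0 * np.mean(y * np.sin(t)) / A  # projected first-harmonic gain
    results.append((A, eta_num, eta_sat_formula(A)))
    print(A, eta_num, eta_sat_formula(A))       # the two values agree
```

The numbers show exactly the shape argued above: flat at 1 up to A = 1, then decreasing towards 0 without ever reaching it.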
How do we get this? One would have to calculate the Fourier series of a sinusoid that gets chopped like this: explicitly integrating over the rising part, then the constant saturated part, then the falling part. For what range it is constant and where it rises like a sinusoid depends on the value of A, of course; if A is very large, the unsaturated time is itself much smaller. Using a careful calculation like that, one finds the describing function explicitly as a closed-form formula in terms of A. So let us see the general picture. In general, suppose the nonlinearity is memoryless. What is the meaning of memoryless? The nonlinearity N is an operator acting on the input signal u and producing the output y = N(u). We call N memoryless if the way the output depends on the input can be captured by a fixed function n(x): the output at time t is y(t) = n(u(t)). In that sense, n does not even know at which time instant this is happening; it depends only on the value of u at that instant. That is the meaning of memoryless. Of course, we have been doing all this for time-invariant systems, meaning that if the input is delayed by a certain amount of time, the output gets delayed by the same amount. Time-invariant systems are a big class, both linear and nonlinear, and among the nonlinear time-invariant systems we have singled out the memoryless ones. What are examples of memoryless systems? The saturation nonlinearity and the dead-zone nonlinearity.
These are all examples of memoryless time-invariant nonlinear systems. What are examples of nonlinear time-invariant systems with memory? Systems with hysteresis; we will see two or three types of hysteresis. Hysteresis is a nonlinearity that is of course time-invariant: the way the output depends on the input does not explicitly depend on time, but it does depend on whether the input is increasing or decreasing, and in that sense there is a notion of memory associated with it. Hysteresis is an example of a time-invariant nonlinearity with memory, in other words one that is not memoryless. If the time-invariant nonlinearity is memoryless, then the describing function depends at most on A, meaning it is independent of ω: the frequency at which the input is changing cannot affect the gain, the magnification from input to output. To say that we have dropped ω means that the describing function does not depend on the frequency of the sinusoidal input. So now, under what conditions is it real? The describing function being real is the same as a1i = 0, no cosine term; in other words, the output is in phase with the input. Notice that the input A sin ωt is an odd function of time. Now suppose your nonlinearity is also an odd function: for example, the saturation nonlinearity, or the signum nonlinearity when at input 0 you take the value 0. These all satisfy the property n(−x) = −n(x); they are the so-called odd nonlinearities. Within the class of memoryless time-invariant nonlinearities, we have classified some of them as odd nonlinearities.
If the dependence n(x) is an odd function, meaning it satisfies this property, then the describing function is going to be real: there is no imaginary part because there is no cosine part. Why does this happen? The input A sin ωt is an odd function of time, and the nonlinearity is also odd, so the output is again an odd function of time; it will not have any even part. This ensures that a1i, and in fact the cosine coefficients of all the higher harmonics, are not required, and hence the describing function is a real function of A only. Why is it not a function of ω? Because the nonlinearity is memoryless. Why does it not have an imaginary part? Because the nonlinearity is odd: when you give A sin ωt as input, you get an output that is an odd function, and no cosine terms are required to synthesize such a signal. So we have described some conditions under which the describing function is independent of ω, and conditions under which it is real, that is, the imaginary part equals 0. Before we see more examples of describing functions of various nonlinearities, we will try to see whether the describing function has some rigorous development behind it. One might ask: why study the first harmonic? Why aren't the other harmonics important? In what ways does the describing function carry important properties of the nonlinearity? After all, it is an approximation; I have already used that word. An approximation in what sense? Is it, in some sense, a best approximation? What property of the nonlinear system has been captured by taking just the first harmonic?
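For a concrete odd memoryless example beyond the ones in the lecture, take the cubic y = u³. Since sin³θ = (3 sin θ − sin 3θ)/4, its describing function works out to 3A²/4: real, ω-independent, and with only odd sine harmonics. A quick numerical sketch confirming both claims:

```python
import numpy as np

# Cubic nonlinearity y = u^3: odd and memoryless, so its describing
# function is real and depends only on A. Here eta(A) = 3*A^2/4.
t = np.linspace(0.0, 2 * np.pi, 100_000, endpoint=False)
checks = []
for A in [0.5, 1.0, 3.0]:
    y = (A * np.sin(t)) ** 3
    a1r = 2.0 * np.mean(y * np.sin(t))  # sine (real) part of first harmonic
    a1i = 2.0 * np.mean(y * np.cos(t))  # cosine (imaginary) part
    checks.append((A, a1r / A, a1i))
    print(A, a1r / A, 0.75 * A**2, a1i)  # a1r/A matches 3*A^2/4, a1i ~ 0
```

The vanishing a1i is exactly the in-phase property discussed above: an odd nonlinearity driven by the odd signal A sin ωt produces an odd output, which needs no cosine terms.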
This is what we will see in a little more detail, before we see how the describing function is used for calculating approximate values of periodic orbits. So consider a system like this: we have a nonlinear system and we are trying to linearize it. We already saw one notion of linearization, for a nonlinear differential equation: we linearize the differential equation about an equilibrium point, draw conclusions about the stability of the linearized system, and ask whether that can be utilized for concluding stability properties of the equilibrium point of the original nonlinear system. That is not the linearization we are talking about now. Here we are speaking about linearizing with respect to an input: a reference input r(t) = u(t), for which we look at the output of the nonlinear system and try to find a linear system whose output is as close as possible to it. This is different from linearizing the differential equation for analyzing the stability properties of an equilibrium point; these are two different notions of linearization. This one is better called quasi-linearization. Why? Because it will turn out that the optimal linearized system depends on which reference input you use. When there is such a dependence, it is not really linearization in the full sense, because you get a different linearized system depending on which reference input you took.
That lack of independence is sometimes conveyed by the prefix quasi or pseudo. So, in the context of describing functions, one says that the describing function is an optimal quasi-linearization. What is optimal about it? That you will see now. What is quasi about it? The linearized system depends on the reference input used for the purpose of linearization. So take a reference input r(t) and feed it both to the original nonlinear system N, which produces the actual output y_actual, and to a candidate linear system, which produces y_approx. What is approximate about it? We are trying to find a linear system approximation to the nonlinear system such that the error e = y_actual − y_approx is minimized. Minimized in what sense: at every time instant, or in some total? Here is the criterion: take the reference input r(t) and find the linear system such that ∫₀^∞ e² dt is minimized. Minimized over what? Over all linear systems that you could take. When we give a reference input, we are saying this reference input is very important; it is not a problem that the nonlinear system has different linearizations for different inputs. With respect to this reference input, we compare the actual output of the nonlinear system with y_approx, the output of the approximating linear system, and the difference is what we call the error. We are trying to minimize the energy in the error; why energy? Because we have taken the square and integrated over time from 0 to +∞. So: find the best linear system, best in the sense that this error energy is minimized.
So, of course, the first question is: is this integral even finite? It is possible that the integral is unbounded — it does not exist, it is arbitrarily large. e² is nonnegative, agreed, so the integral cannot be negative, but it could be +∞. For example, if the error is a nonzero constant, say always equal to 5, then integrating from 0 to infinity gives something unbounded. So the objective is: minimize the integral, provided it is finite. That is our performance objective. It turns out — and this is a very important result — that for the same fixed nonlinear system N, different inputs r(t) give different best linear approximants. That is the first important point. Second, it is not even reasonable to call any single one "the" best: the best linear approximant is not unique. What are all those different linear system approximants which are all best, and what freedom is there among them? Assume N is BIBO stable. What is BIBO stable? Bounded input, bounded output stable: whenever you give a bounded input, the output is guaranteed to be bounded. So, assume that this nonlinearity N is BIBO stable, so that you can give a sin ωt and be sure the output has finite power. Note that a sin ωt is not a finite-energy signal — integrate its square from 0 to +∞ and the result is unbounded — but it does have finite power. So we are now dealing with finite-power signals: we are finding a linear system approximation with respect to reference inputs that all have finite power. This will all be made more precise in the following lecture. But assume N is BIBO stable; under these conditions, what are all the linear systems that are best approximants?
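The finite-power-versus-finite-energy distinction for a sin ωt can be made concrete with a quick numerical check (a sketch of my own; `avg_power` is a hypothetical helper): the energy integral grows without bound as the horizon T grows, while the average power settles at a²/2.

```python
import numpy as np

a, omega = 3.0, 2.0

def avg_power(T, n=2_000_001):
    """Average power (1/T) * integral_0^T r(t)^2 dt of r(t) = a*sin(omega*t)."""
    t = np.linspace(0.0, T, n)
    r = a * np.sin(omega * t)
    return np.trapz(r * r, t) / T

horizons = (100.0, 1000.0, 10000.0)
powers = [avg_power(T) for T in horizons]          # settles at a^2 / 2
energies = [p * T for p, T in zip(powers, horizons)]  # keeps growing with T
print("powers:", powers, "limit:", a * a / 2)
```

The energy (the integral without the 1/T factor) is roughly (a²/2)·T, so it diverges; dividing by T gives the finite power a²/2 in the limit.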
Let r(t) = a sin ωt. Then any H(s) such that the real part of H(jω) equals a1r and the imaginary part of H(jω) equals a1i — where a1r and a1i are the first-harmonic coefficients of y_actual — is a best approximant. What does this mean? We write y_actual as a0 + a1r sin ωt + a1i cos ωt + higher harmonics: we expand the actual output in a Fourier series, where the actual output is the response to the reference input a sin ωt. Any such transfer function H also has to be BIBO stable. So: any stable transfer function. This is the extremely important theorem that makes the describing function rigorous, and of course it has always proved very useful; a derivation can be found in Vidyasagar's book on nonlinear systems analysis. Any stable transfer function H(s) whose real part at jω coincides with a1r and whose imaginary part coincides with a1i — here "a1" refers to the first harmonic, and "r" and "i" refer to the sin ωt part and the cos ωt part respectively. Take any transfer function which matches only there; as long as it matches there, this H is a best approximant. What is best about it? It minimizes the squared error. Actually, there is a division by a that I missed — sorry, it should be divided by a, where a is the amplitude of the input. If the input is large, we expect a1r and a1i to also be large, but H will itself apply the amplification a in producing its output y_approx; hence the division by a is required, and the matching condition is Re H(jω) = a1r/a, Im H(jω) = a1i/a. Clearly, if you only specify a1r and a1i, there can be many transfer functions which take the same value when evaluated at jω.
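The first-harmonic coefficients a1r and a1i are ordinary Fourier coefficients, so for a memoryless saturation they can be computed directly (a minimal sketch, assuming a unit-limit saturation; the names here are my own). As claimed in the lecture, the cos ωt coefficient comes out zero, so the describing function value (a1r + j·a1i)/a is purely real.

```python
import numpy as np

def sat(x, delta=1.0):
    """Ideal saturation: identity for |x| <= delta, clipped outside."""
    return np.clip(x, -delta, delta)

a, omega = 2.0, 1.0
T = 2.0 * np.pi / omega                    # one period of the input
t = np.linspace(0.0, T, 100001)
y = sat(a * np.sin(omega * t))             # y_actual for r(t) = a sin(omega t)

# First-harmonic Fourier coefficients of y_actual over one period:
a1r = (2.0 / T) * np.trapz(y * np.sin(omega * t), t)   # sin(omega t) part
a1i = (2.0 / T) * np.trapz(y * np.cos(omega * t), t)   # cos(omega t) part

N = (a1r + 1j * a1i) / a                   # describing function value at amplitude a
print(f"a1r = {a1r:.4f}, a1i = {a1i:.2e}, N = {N:.4f}")
```

For a = 2 and saturation limit 1, N agrees with the closed-form value (2/π)(arcsin(1/2) + (1/2)·√(3)/2) ≈ 0.609, with zero imaginary part.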
So, this is what brings in the non-uniqueness in H, but the key point is that all these best approximate linear systems evaluate to the same complex number at jω — they must, if they are to be best linear approximants to that nonlinear system. Once it is a linear system, how does the output look? We use the standard impulse-response formula. Call the transfer function H(s); then y_approx(t) is the integral from 0 to t of h(t − τ) r(τ) dτ, where h(t) is the impulse response of H(s). We have assumed only that H is stable: no poles in the right half plane, and no poles on the imaginary axis either — that is when H is BIBO stable. This is how one calculates y_approx for any linear system. Notice that H can have memory also; its output is computed by this regular procedure, the way we do for linear time-invariant systems. When does the error get minimized? It gets minimized in the average sense — in the power sense. There is a small mistake in what I said on the previous slide, which I will correct right now: the minimization is with respect to power, not energy. Why did we shift to power instead of energy? Because r is not a signal with finite energy; these things are made precise in Vidyasagar's book. When r(t) = a sin ωt, the integral from 0 to infinity of r(t)² is not finite. But divide by T and look at the power in the signal: the limit as T tends to infinity of (1/T) times the integral of |r(t)|² from 0 to T — taking the square is the same as taking the absolute value and then squaring, since the signal is real — and this is finite. So we say the signal has finite power: it is not finite energy, but it is finite power.
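The convolution formula for y_approx, and the fact that the transient dies out leaving only the H(jω)-determined steady state, can be sketched numerically. This is my own illustration with a hypothetical stable approximant H(s) = 1/(s + 1), not a transfer function from the lecture.

```python
import numpy as np

omega = 2.0
dt = 1e-3
t = np.arange(0.0, 50.0, dt)
h = np.exp(-t)                       # impulse response of H(s) = 1/(s + 1)
r = np.sin(omega * t)                # reference input

# y_approx(t) = integral_0^t h(t - tau) r(tau) d tau, discretized as a convolution.
y_approx = np.convolve(h, r)[: len(t)] * dt

# Steady-state prediction from H(j omega) alone: |H| sin(omega t + angle(H)).
H = 1.0 / (1j * omega + 1.0)
y_ss = np.abs(H) * np.sin(omega * t + np.angle(H))

# After the transient dies, the convolution output matches the H(j omega) prediction.
tail = t > 10.0
tail_err = np.max(np.abs(y_approx[tail] - y_ss[tail]))
print(f"max mismatch after transient: {tail_err:.2e}")
```

Near t = 0 the two curves differ (that difference is the finite-energy transient, with zero average power), while for large t only the value of H at jω matters — which is exactly why all best approximants need only agree at jω.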
So, our input has finite power, and for a BIBO stable system — let us come back to this example — the output y_actual will also have finite power. If we are to minimize the power in the error, then y_approx also has to have finite power. The transients of h all die to 0, and because they die to 0, the average power in them is 0: the transients have finite energy and hence zero average power. So only the value of H evaluated at jω decides the average power in y_approx, and when the two signals are subtracted, this choice is what minimizes the average power in e. That is what makes the describing function an optimal quasi-linearization of a nonlinearity N. This is the theory behind using the describing function, and it is why, even though the values it gives are approximations, it gives us the required values in a very useful sense. We will see this in more detail in the following lecture.