Welcome everyone. This is a course on non-linear dynamical systems taught by Madhu Belur, that is me, and my colleague Harish K. Pillai; we are both in the Control and Computing group of the Department of Electrical Engineering, IIT Bombay. This course is primarily meant for postgraduate students interested in non-linear dynamical systems; senior undergraduate students are also eligible. The most important prerequisite is some amount of linear algebra: essentially eigenvalues, the concept of null space, and positive definite matrices. We will also require some basics of differential equations, in particular linear differential equations and their homogeneous and particular solutions. From linear systems and control we will require transfer functions, some state-space concepts, and the Nyquist criterion for stability. Some useful books for this course are the book by Hassan Khalil, Nonlinear Systems; the book by M. Vidyasagar, Nonlinear Systems Analysis; and the book by Shankar Sastry, Nonlinear Systems: Analysis, Stability, and Control. These books will be very useful. The outline of the course is as follows. We will first begin with some properties of linear systems, both input-output systems and autonomous systems. Then we will see some features that are present only in non-linear systems. Then we will move to existence and uniqueness of solutions to non-linear differential equations. We will also see the notions of stability and linearization, the Lyapunov theorem for stability, and the LaSalle invariance principle. Then we will see input-output systems, in particular L2 stability. We will also see sector-bounded nonlinearities, in particular the Lur'e problem.
The Nyquist criterion for stability, even though it is applicable only to linear systems, we will review, because it plays an extremely important role even for non-linear systems. We will then cover the passivity and small gain theorems as the main results for sector-bounded nonlinearities, and more generally the circle and Popov criteria. We will also see the describing function method in this course. The outline of today's lecture is: the definition of linear systems; a review of the principle of superposition; some examples of non-linear systems; some features that characterize only non-linear systems; autonomous systems and their definition; and finally the notion of an equilibrium point, or equilibrium position. So we begin with the definition of a linear system. When do we call a system linear? A system with input u and output y is called linear if it satisfies the principle of superposition. What is the principle of superposition? Suppose the inputs u1 and u2 give outputs y1 and y2 respectively. Then we can ask: what output does the input u1 + u2 give? In this context, when we say the input u1, we mean the trajectory u1. I would like to spend a few minutes on the notation we will use in this course. One should keep coming back to this particular lecture in the midst of the course, because it also contains important notational conventions. When we say the input u1, we mean the entire trajectory, the complete function u1 as a function of time; that is, the values of u1 at all time instants. On the other hand, when we are interested in the value of u1 at a particular time instant t0, where t0 is some real number, some time value, we denote that as u1(t0). This will be the notation throughout this course.
So coming back to the principle of superposition, we can ask: can the output of the system for the input u1 + u2 be obtained by superimposing y1 on y2, or in other words, superimposing y2 on y1? We will use this ability to superimpose as the definition of linear systems: is the output of the system for input u1 + u2 precisely equal to y1 + y2? This equality should be understood as equality at every time instant, because u1 and u2 are complete trajectories. At every time instant, we want the output to equal the trajectory y1 + y2, the sum of the corresponding outputs for inputs u1 and u2. Moreover, we would like the output to equal y1 + y2 for arbitrary inputs u1 and u2, not just for some carefully chosen inputs; it is important that this ability to superimpose works for arbitrary inputs. We would also like that if we scale the input by a real number alpha1, then the output is the same output scaled by alpha1. In other words, for any real number alpha1, the output of the system for input alpha1 u1 is precisely alpha1 y1. These two properties, the sum of outputs and the scaling of the output, can be captured in just one sentence. In short, a system is said to be linear if the output of the system for the input alpha1 u1 + alpha2 u2 is precisely equal to alpha1 y1 + alpha2 y2. Again, coming back to notation: when we say a system "is said to be linear if" so-and-so property holds, then, being a definition, it is an if-and-only-if statement. We can restate the same definition in a few equivalent ways: a system with input u and output y is linear if and only if the output for alpha1 u1 + alpha2 u2 equals alpha1 y1 + alpha2 y2.
So please note that we have this if-and-only-if implication. In particular, we put a colon on the left side, which means the left side of the statement is being defined by the right side of the if-and-only-if sign. And, as I said, this should hold for all functions u1 and u2 and for all real numbers alpha1 and alpha2. For a linear system, we can ask what happens when we give the input 0, meaning the zero function: the function u(t) = 0 for all time t, written u(t) ≡ 0. This is different from a function that equals 0 only at a few time instants. For example, u(t) = sin(t) or u(t) = t² − 3 become 0 only at specific time instants, not at all time instants, unlike the zero function. So what happens to the system when we give the zero input? We would like to say that the output is 0. How do we obtain this as a consequence of the definition we already saw? Take the definition of linearity with any input u and alpha1 = 0. Recall that we had the sentence: for any real number alpha1, the output of the system for input alpha1 u1 equals alpha1 y1. Substituting alpha1 = 0, we obtain that a linear system always gives the output 0 when the zero function is given as input. Let us see some examples of linear and non-linear systems. Consider a static input-output system: static means that the output depends only on the current value of the input and not on its derivatives or integrals. In such a situation we can draw a graph of the output versus the input. Here are three examples where the force f is plotted against the input v: v is the input, f is the output. The first one is clearly non-linear.
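The superposition test described above can be sketched numerically. The lecture itself contains no code; the following Python snippet is purely illustrative (the function name `is_linear` and the two sample static maps are my own choices). It checks the combined additivity-and-scaling condition on randomly chosen inputs for a static system:

```python
import random

def is_linear(system, trials=100, tol=1e-9):
    """Numerically test the superposition principle on a static (memoryless)
    system: output(a1*u1 + a2*u2) should equal a1*y1 + a2*y2."""
    for _ in range(trials):
        u1, u2 = random.uniform(-10, 10), random.uniform(-10, 10)
        a1, a2 = random.uniform(-5, 5), random.uniform(-5, 5)
        if abs(system(a1 * u1 + a2 * u2)
               - (a1 * system(u1) + a2 * system(u2))) > tol:
            return False
    return True

print(is_linear(lambda x: 3 * x))      # line through the origin: linear
print(is_linear(lambda x: 3 * x - 2))  # line NOT through the origin: not linear
```

Note that the affine map 3x − 2 fails the test, anticipating the "a line is not equivalent to a linear system" point made just below.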
The last one, which corresponds to a saturation non-linearity, is also non-linear. The middle one is non-linear too, but that requires a more careful look. Please note that just because the graph is a line, it does not mean the system is linear; that is what is written here: a line is not equivalent to a linear system. Now consider this system, with input x and output f; both x and f are functions of time. This is another example of a static system: the input-output relation does not involve derivatives or integrals of the variables x and f, so we can plot the output variable against the input variable at any time instant. Does this line pass through the origin? When we ask this question, we are checking that the zero input gives the zero output, and that important property must also be satisfied by the line; only then can we call the system linear. So the system with input x and output f is a linear system. This has nothing to do with the graph of the variable x as a function of time t being a line. Please note that f and x are variables of the system, one the input and one the other output, and it is the graph of f versus x that is incidentally a line; if this line passes through the origin, then the system is linear. More generally, for systems that do not have a static relation between inputs and outputs, we do not draw such a graph; in that situation, we have to go back to the principle of superposition to check whether the system is linear. We now move to autonomous systems. When do we call an autonomous system linear? For that question, we first quickly see the definition of an autonomous system. A system is autonomous if there are no inputs. In other words, once the initial conditions are specified, a trajectory evolves, a unique trajectory.
There is no room for shaping or controlling, because there are no inputs to the system. To what extent the trajectory is unique is one of the important questions we will analyze in detail. So consider the differential equation x_dot = 5x. Its solutions look like x(t) = a e^(5t). Once the value of x at some time is specified, say x(6) = −3.4, we are able to get a unique value of a, and this value of a corresponds to that initial condition. So we see that this is an autonomous system. Another important example is dx/dt = −3x + 2. This too is an autonomous system. These are systems for which, once the initial condition is specified, there are no inputs and hence the trajectory is fully determined. More generally, let x be a map from R to R^n, in which the domain R we interpret as time and R^n is a vector space with n components; at each time instant, x(t) is an element of R^n. Suppose the differential equation is x_dot(t) = f(x(t)), or in short, suppressing the variable t, x_dot = f(x). The dot means the derivative with respect to time. So here at every time instant x(t) is an element of R^n. This shorthand notation x_dot = f(x) actually contains n equations: the first is x1_dot = f1(x), the second is x2_dot = f2(x), and so on. So f also is a map from R^n to R^n: f takes a value of x, an element of R^n, and gives out another vector, again in R^n. For each initial condition, suppose there is a trajectory x(t) satisfying the above differential equation.
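As a quick check of the claim that the initial condition pins down the constant a uniquely, here is a small Python sketch (not from the lecture; the variable names are mine) for x_dot = 5x with the initial condition x(6) = −3.4:

```python
import math

# For x_dot = 5x the general solution is x(t) = a * exp(5t).
# The initial condition x(6) = -3.4 determines a uniquely: a = x0 * e^(-5*t0).
t0, x0 = 6.0, -3.4
a = x0 / math.exp(5 * t0)

def x(t):
    return a * math.exp(5 * t)

print(a)        # a tiny negative number, unique for this initial condition
print(x(6.0))   # recovers -3.4 (up to floating-point rounding)
```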
So, to what extent can we say that for every initial condition there exists a solution, and to what extent is it unique? These are important questions we will address. For the time being, please assume that for each initial condition there is a trajectory that evolves from it. This trajectory is a vector-valued function x(t) of time. Then we can ask: is this map linear? Which map? The map that sends each initial condition to a trajectory. In other words, if b1 and b2 are two vectors in R^n, and with the initial conditions b1 and b2 we have, respectively, the solutions x1 and x2 as functions of time, does the initial condition alpha1 b1 + alpha2 b2 result in the trajectory alpha1 x1 + alpha2 x2? If this property is true, we will say that the map is linear, and in such a situation we will also say that the autonomous system is a linear autonomous system. Again, as I said, this is required to hold for any real numbers alpha1, alpha2 and any two vectors b1, b2 in R^n. This is equivalent to saying that the set of solutions to the differential equation forms a vector space over R: if x1 and x2 are two solutions, then alpha1 x1 + alpha2 x2 also satisfies the differential equation; it is also a solution. We can ask: is the trajectory 0, again meaning the zero function, a solution to the differential equation? Consider the system dx/dt = −3x + 2. Here we can substitute the trajectory x(t) ≡ 0 and check whether the zero function satisfies the differential equation. We obtain that 0 is not a solution, since the left side is 0 while the right side is 2, and hence this autonomous system is not a linear autonomous system. So we come to some features that are present only in non-linear systems.
This makes the study of non-linear systems extremely interesting, and challenging too. So what are some features present only in non-linear systems? The first important feature is what we call finite escape time. Let me explain these terms one by one. Escape in this situation means escape to infinity: does the solution x(t) become unbounded in R^n? Can a solution become unbounded in finite time, that is, while t itself is finite? Or can the solution x(t) become unbounded only as t tends to infinity? This is the important question we are going to address first for a linear unstable system; the exact definition of unstable we will see later. For now, consider the differential equation x_dot = x, in which at any time instant x has only one component, x(t) ∈ R. Solving this differential equation, we get x(t) = x(0) e^t. We see that x(t) becomes unbounded as t increases, for a non-zero initial condition x(0). But we also see that x(t) becomes unbounded only as t tends to infinity; it does not become unbounded at any finite time t. If anybody gives us a finite value of time t, we can evaluate x(0) e^t and see that it is again a finite number. But for non-linear systems, the escape time can be finite, which is not possible for linear systems. Another important feature: for a linear system, we can ask, if initially the system is at equilibrium, meaning all the forces acting on the trajectory are in balance, is the system going to remain in equilibrium?
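The lecture does not give a specific non-linear example of finite escape time here; a standard one (my addition) is x_dot = x² with x(0) = x0 > 0, whose exact solution x(t) = x0 / (1 − x0·t) blows up at the finite time t* = 1/x0. A simple forward-Euler simulation in Python illustrates the blow-up numerically:

```python
# Sketch: finite escape time for x_dot = x**2 (a standard example,
# not stated in the lecture). Exact solution: x(t) = x0 / (1 - x0*t),
# which escapes to infinity at the finite time t* = 1/x0.
def euler(f, x0, dt, t_end):
    x, t, traj = x0, 0.0, []
    while t < t_end:
        traj.append((t, x))
        x += dt * f(x)
        t += dt
    return traj

x0 = 1.0                       # escape time t* = 1/x0 = 1
traj = euler(lambda x: x * x, x0, dt=1e-4, t_end=0.999)
print(traj[-1][1])             # already very large as t approaches 1
print(1.0 / (1.0 - x0 * 0.5))  # exact solution at t = 0.5 is 2.0
```

Contrast this with x_dot = x just above, where x(0)·e^t is finite for every finite t.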
If we are at an initial condition such that the system is at equilibrium, does it mean the system will remain at equilibrium for all future time? This is a question of uniqueness and non-uniqueness of trajectories. For linear systems, it turns out that if initially the system is at equilibrium, then it cannot emanate out of equilibrium without perturbations. A non-linear system, however, can exhibit this phenomenon even without perturbations: a trajectory can emanate out of equilibrium in the absence of perturbations. In that sense, we see that we can also have non-uniqueness of solutions in non-linear differential equations. The analogue of this situation is the question: can we reach an equilibrium point in finite time? Under a suitable non-linearity, it turns out that we can reach the equilibrium point in finite time, and this aspect too is absent in linear systems. For linear systems, the solution can reach the equilibrium position only asymptotically, as t tends to infinity, not in finite time. Another important feature of non-linear systems concerns the notion of equilibrium point. While equilibrium points are also present in linear systems, we will see that the equilibrium points of a linear system are all connected, unlike non-linear systems, which can have multiple equilibrium points that are not connected, in which case we call them isolated. We will see this in detail now. An equilibrium point is a point b such that if initially the trajectory is at b, then it remains at b for all future time. This definition we will see more carefully very soon. For now we understand an equilibrium point as an initial condition: if we start there, we remain there for all future time. So if b is an equilibrium point, are there other equilibrium points? This is the question we can ask.
The next question we can ask is: if there are other equilibrium points, are they close by, and can they be arbitrarily close, in other words, connected? Suppose the closest other equilibrium point is at least some non-zero distance away; in other words, there is some small distance within which there is no equilibrium point other than b. In such a situation, we call b isolated. So when do we call b isolated? If, in the situation that there are other equilibrium points, every other equilibrium point is at least some non-zero distance away from b. In that case there are multiple equilibrium points, but b is an isolated equilibrium point. For linear systems, in case there are multiple equilibrium points, they are all non-isolated; in other words, they are all connected to each other. Take any equilibrium point of a linear system, and within any small enough distance we will find another equilibrium point in its vicinity. Another important feature of non-linear systems is that we can have periodic orbits that are isolated, just like equilibrium points can be isolated. This is relevant in the context of robust sustained oscillations; we will look at these terms carefully now. Why are robust sustained oscillations important? If their amplitude is fixed and the frequency is fixed, then they are very relevant for building oscillators in a laboratory. It turns out that such a situation is not possible with linear systems: if a certain linear system has periodic orbits, then the amplitude will depend very crucially on the initial conditions. If the initial conditions are different, the amplitude will no longer be the same; it is very unlikely that by changing the initial condition we will get the same amplitude.
Also, the frequency of the periodic orbit depends crucially on the system parameters: small perturbations in the parameters can change the frequency, and in fact could destroy the property of periodic orbits altogether. Why is this not acceptable? In laboratory conditions, we would like oscillators that are simply switched on and give fairly reliable amplitude and frequency of oscillation, so that we can build an oscillator using a given differential equation. This is possible only using non-linear systems. Now let us see what an equilibrium point is. Consider the differential equation x_dot = f(x), in which at any time t, x(t) is an element of R^n; there are n components in the vector x. A point a is said to be an equilibrium point if the constant trajectory x(t) ≡ a is a solution of the differential equation. So take the trajectory x(t) ≡ a; if this is a solution to the differential equation, then the point a is said to be an equilibrium point. What does this require from f? If x is to remain at a, then the rate of change of x with respect to time must equal 0 when evaluated at x = a. At x = a, x_dot is nothing but f(a); in other words, f evaluated at a must be 0, the zero vector. Now we can ask: is the converse true? What is the converse of this statement? Suppose a is a vector in R^n such that f(a) = 0; does that mean that x(t) ≡ a is a solution to the differential equation? This really suggests that the converse should also be true, and we will see that under some fairly mild assumptions it is indeed true, and it is also the only solution. So what are these mild conditions?
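Finding equilibrium points thus means solving f(a) = 0. For the scalar example used later in this lecture, f(x) = (x − 3)(x − 9), this can be sketched numerically; the grid-scan-plus-bisection method below is my own illustrative choice, not something the lecture prescribes:

```python
# Equilibrium points of x_dot = f(x) are the points a with f(a) = 0.
# Scan a grid for sign changes of f and refine each by bisection.
def f(x):
    return (x - 3.0) * (x - 9.0)

def equilibria(f, lo, hi, steps=1000, iters=60):
    pts, h = [], (hi - lo) / steps
    for i in range(steps):
        a, b = lo + i * h, lo + (i + 1) * h
        if f(a) == 0.0:
            pts.append(a)
        elif f(a) * f(b) < 0:            # sign change: a root lies in (a, b)
            for _ in range(iters):       # bisection refinement
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            pts.append(0.5 * (a + b))
    return pts

print(equilibria(f, 0.0, 12.0))  # approximately [3.0, 9.0]
```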
We will see that there is an important condition called the Lipschitz condition, and we call the Lipschitz condition mild because this is how most functions f really look; but this is a topic we will do in detail later. Now let us quickly see another important interpretation of a differential equation. Consider x_dot = f(x) with x(t) ∈ R^n. At each point a in R^n, f(a) is a vector starting from a: we evaluate f at a, which is also an element of R^n, and place that vector as starting from a. What does this vector denote? It denotes where the point a is evolving towards, both in direction and in magnitude. So f(a) = 0 means the arrow there has length 0; in other words, there is no evolution from that point, the rate of change at that point is 0. This is what we also call stationary: if we start there, everything is stationary and the system does not evolve. This is also what we call an equilibrium point. So what is a vector field? At each point in R^n, we are attaching a vector. This is unlike a scalar field, where at each point we specify a scalar value; for example, the temperature at every point in a room is a scalar field. In our situation, at every point a in R^n we have a vector with an equal number of components; hence we say this is a vector field, and moreover this vector at every point is precisely the rate of evolution when we are at that point. This is what a differential equation is: a first-order differential equation is exactly this notion, where at every point a we attach a vector, and this vector denotes the rate of change of that point under the action of the differential equation. We will end today's lecture by beginning the topic of scalar systems.
So, consider a scalar differential equation. What is scalar about it? x_dot(t) = f(x(t)), where x(t) has only one component; it is a real number. This is also called a one-dimensional system. In such a situation, f is a map from R to R: for example, f(x) = 3x − 2, or f(x) = (x − 3)(x − 9), or x², or sin x. These are the examples of f we will see today. This situation is best seen using a figure: we are going to attach a vector at each point. Consider f(x) = 3x − 2 and suppose we take the point x = 4. For this point we evaluate f at 4 and obtain 10. What does this mean? At this particular point there is an arrow starting from the point 4 and pointing to the right. Why to the right? Because the number 10 we obtained is positive; moreover, in addition to pointing to the right, towards increasing x, it is a vector of length 10. At another point, for example x = 0, we can check what f evaluates to, and we get −2. So at 0 we draw a vector towards the negative direction of x, of length 2. So for a scalar differential equation x_dot = f(x), at each point we can draw a vector to the right or to the left depending on whether f at that point is positive or negative. In this particular situation, we see that f(x) = 0 precisely at x = 2/3. Suppose 2/3 is a point here, and f(x) is a line: we now plot the graph of f versus x. Even though x itself is a function of time, we are plotting f as a function of x, and we get a line that passes through the point x = 2/3. At this point f becomes 0, so the vector has length 0. Everywhere to the right of this point, the vector points to the right.
Why is it to the right? Because f to the right of this particular point is all positive. Let me draw a slightly bigger figure. We are considering the differential equation x_dot = 3x − 2, and we are going to plot f versus x; even though x itself is a function of time, and we will later also plot x as a function of time, for now we are interested in drawing the vector field. We took the point 4, there is the point 2/3 here, and there is the point 0. At the point 4, we already saw that the vector is directed to the right. At the point 2/3, the vector has length 0, and at the point 0, the vector is directed to the left, towards decreasing x. So what does this mean? When we plot f(x) versus x, to the right of the point 2/3 the arrows are marked to the right. Why? Because at a particular point there, say x = 1, f is positive; f being positive means x_dot is positive, in other words x is increasing. More generally, given a function f, if f is scalar, we can draw its graph and decide at which points x is increasing and at which points it is decreasing, just by seeing whether f is positive or negative at that value of x. This is how we will analyze the other examples we saw. So consider the differential equation x_dot = (x − 3)(x − 9), which we call f(x). The roots are at 3 and 9, and the graph of this function looks roughly like a parabola passing through 0 precisely at x = 3 and x = 9. If we take the point 3, the vector there has length 0, and hence we plot it pointing neither right nor left. On the other hand, consider the point 6. At x = 6 we expect f to be negative; we can check that f(6) = 3 × (−3) = −9.
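This sign-of-f rule for drawing the one-dimensional vector field can be sketched in a few lines of Python (a text stand-in for the blackboard figure; the helper name `sketch` and the arrow notation are mine):

```python
# Text sketch of a scalar vector field: at each sample point, print an
# arrow to the right if f > 0 (x increasing), to the left if f < 0
# (x decreasing), and 'o' at an equilibrium point (f = 0).
def sketch(f, points, tol=1e-9):
    for p in points:
        v = f(p)
        mark = "o" if abs(v) < tol else ("-->" if v > 0 else "<--")
        print(f"x = {p:5.2f}: f(x) = {v:7.2f}  {mark}")

sketch(lambda x: (x - 3) * (x - 9), [0, 3, 6, 9, 11])
```

Running this reproduces the arrows described in the lecture: rightward for x < 3, an equilibrium at 3, leftward between 3 and 9, an equilibrium at 9, and rightward for x > 9.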
So, since f(6) is negative, which we can also see from the graph here, the vector there points to the left, towards decreasing x. And that is consistent with x_dot = f(x): at x = 6, x_dot being negative, x would start decreasing, and that is precisely what this arrow shows; the arrow shows the direction in which x will evolve. On the other hand, suppose we take x = 11. At x = 11 we can easily draw the arrow to the right. Why? Because at x = 11 the function f takes a positive value. So we are able to draw all the arrows for this particular example. All we have to do is find where the equilibrium points are, and then, to the left and the right of the equilibrium points, draw the arrows towards increasing or decreasing x, depending on whether f takes positive or negative values there. Another important point: if we start at the equilibrium point x = 9, then we will of course remain at 9, because x_dot evaluated at x = 9 is 0 and hence x does not change at all; it will remain at the point 9. Similarly, x = 3 is also an equilibrium point. But as we can see, we have made a mistake here: the arrows for x less than 3 cannot point towards decreasing x, because f is positive there; all these directions have to be reversed, they all point towards increasing x. So let me quickly draw this figure again. This is the differential equation x_dot = (x − 3)(x − 9). Another important feature we can see here is that if we start slightly to the right of the equilibrium point 9, then x is going to increase and go further away from 9.
So a very small perturbation of 9 to the positive side takes that initial condition further away from 9. Even though we noted that at the point 9 the trajectory remains at 9 for all future time, slightly to the right, for a very small perturbation, the trajectory goes away from 9. Slightly to the left of the point 9 as well, the trajectories are directed away from 9. Why? Slightly to the left of 9 the function f is negative, so x will decrease further, moving further away from 9. So we say that this particular point is an equilibrium point, but an unstable equilibrium point: for very small perturbations both to the right and to the left, trajectories move away from this equilibrium point. On the other hand, please note that 3 is also an equilibrium point, but for very small perturbations to the right, all the arrows point towards the point 3, and we expect that for small perturbations in the positive direction the trajectories move back towards 3. If we move slightly to the left of 3, meaning we start from an initial condition such as x = 2.9 at t = 0, then x_dot is greater than 0, which is why the arrow is marked to the right, and hence x will increase and approach 3 again. This equilibrium point we call a stable equilibrium point. In the context of Lyapunov stability, we will see more precise definitions of stable, unstable, and asymptotically stable equilibrium points. For now, for a scalar system, by looking at the graph of f versus x we are able to decide which points are equilibrium points, and also whether these equilibrium points are stable or unstable. Now we will see another example from the list of examples we saw: f(x) = x². Consider its graph. We see that f(0) = 0. Why? Because 0 is a root of this function.
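The graphical rule just described, checking the sign of f immediately to the left and right of an equilibrium, can be turned into a small Python sketch (my own illustrative helper; the lecture's more precise Lyapunov definitions come later):

```python
# Classify a scalar equilibrium a of x_dot = f(x) by the sign of f on
# either side: f > 0 just left of a and f < 0 just right of a means
# nearby trajectories move towards a (stable, like x = 3 below);
# any side pointing away makes it unstable (like x = 9, or x = 0 for x**2).
def classify(f, a, eps=1e-3):
    left, right = f(a - eps), f(a + eps)
    if left > 0 and right < 0:
        return "stable"
    if left < 0 or right > 0:
        return "unstable"
    return "inconclusive"

f = lambda x: (x - 3) * (x - 9)
print(classify(f, 3))                # arrows on both sides point towards 3
print(classify(f, 9))                # arrows point away from 9
print(classify(lambda x: x * x, 0))  # right side moves away from 0
```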
So we see that the point 0 itself is an equilibrium point: if we start there, we remain there. Slightly to the right, f is positive, so the arrows point to the right. Slightly to the left of the point, f is again positive, so again the arrows point to the right. In other words, the value of x keeps increasing whether we are to the right or to the left of the point 0; only at the point 0 itself, f being 0, does x not change, x_dot = 0. So now we ask: is this equilibrium point stable or unstable? The property of being stable or unstable we attribute only to equilibrium points, and x = 0 is an equilibrium point. Slightly to the left, when x is slightly negative, for example x = −0.1, the value of x_dot is positive, so x increases and approaches 0. Can we call 0 a stable equilibrium point? We can answer only after we analyze the right of the point 0: to the right, x_dot is again positive, so x increases further and moves away from 0. We see that for certain perturbations the trajectory comes back to 0, and for certain other perturbations it goes away from 0. So we can say that there exist some perturbations such that those initial conditions do not come back to the equilibrium point. In such a situation, we say that the equilibrium point is unstable. When do we call it unstable? When there are some bad perturbations: there exist perturbations such that those initial conditions do not come back to the equilibrium point, and they go away. In that situation, the equilibrium point is unstable. We are not going to be satisfied with just some perturbations coming back to the equilibrium point.
We are unhappy that there are some perturbations that move away from that equilibrium point, and hence that equilibrium point is classified as unstable. We will see more about stability, instability, and asymptotic stability in the following lectures. But we will end this lecture with a similar graph that we would like you to work through as homework. We have already seen 3x − 2, and we have also seen x². Now we will quickly decide what the equilibrium points are for the function sin x, and whether they are stable or unstable. There are several equilibrium points for the differential equation x_dot = sin x. Please note that x itself is not a sinusoidal trajectory; it is a differential equation in which sin appears. Here we see that all the zero crossings of sin x are equilibrium points, and depending on whether sin x is positive or negative before and after each equilibrium point, we are able to classify it as stable or unstable. So here we can draw the arrows to the right, and here to the left; similarly, here again we can draw them. We see that this equilibrium point is unstable, this one is stable, this is unstable, this is stable. So for this particular differential equation, there are several equilibrium points, and they are alternately stable and unstable. This is something that we expect the viewers to carefully verify. With this we end today's lecture; we will continue with these aspects in more detail from the next lecture. Thank you.
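For viewers checking the homework, the same left/right sign test used for the earlier examples can be applied to sin x; the equilibria sit at x = k·pi, and the sketch below (illustrative Python, not part of the lecture) confirms that they alternate between unstable and stable:

```python
import math

# Homework check: the equilibria of x_dot = sin(x) are x = k*pi.
# Classify each by the sign of sin(x) just to the left and right of it.
def classify(f, a, eps=1e-3):
    left, right = f(a - eps), f(a + eps)
    if left > 0 and right < 0:
        return "stable"
    return "unstable" if (left < 0 or right > 0) else "inconclusive"

for k in range(-2, 3):
    print(f"x = {k}*pi: {classify(math.sin, k * math.pi)}")
```

The output alternates, with even multiples of pi unstable and odd multiples stable, matching the alternating pattern stated at the end of the lecture.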