Welcome to lecture number 12 on nonlinear dynamical systems. We have seen a detailed analysis of the Lotka-Volterra predator-prey model and of the Van der Pol oscillator. Today, we will see how we can visualize the dynamics of these two important nonlinear systems using simulations done in the package called Scilab. Scilab is free and open source, and it is helpful for understanding how the dynamics of various systems governed by differential equations can be simulated. So, let us briefly revisit the Lotka-Volterra predator-prey model. Even though the word predator is commonly used, we will use "hunter" to denote the predator species. So, we have two species here: one is the prey and the other is the predator, the hunter. The equations we have already seen look like this: at any time instant, x_h is the amount of hunter specimens and x_p is the amount of prey specimens. We will see how the solution looks for a certain initial condition. We are right now in Scilab and we are going to execute a file. Very soon I will show the code of the file also, but we will begin by executing a file called lotkamain. This calls another function called lotkavolterra. When we execute this, the first important thing we require is the vector field. The vector field, as we can see, consists of arrows at various points. We see that the point (1, 1) is an equilibrium point, while all around it we have arrows suggesting that it is a center. This particular plot can be obtained by using the champ command in Scilab. Very soon we will open and see the Scilab code that generates this. While this only indicates the vector field at various points, we would also like to see a trajectory. What is a trajectory? It is the solution to the differential equation.
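The actual Scilab files discussed above will be provided on the course website; as a sketch of the same computation, here is a hypothetical Python analogue using scipy. The normalized predator-prey equations used here (ẋ_h = −x_h + x_h x_p, ẋ_p = x_p − x_h x_p, with equilibrium at (1, 1)) are an assumption; the lecture's exact coefficients may differ.

```python
# Hedged sketch: a Python analogue of the Scilab simulation described in the
# lecture.  The normalized equations and the function names are assumptions,
# not the actual course files.
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, x):
    """Vector field f(x): x[0] = hunter (predator), x[1] = prey."""
    xh, xp = x
    return [-xh + xh * xp, xp - xh * xp]

# (1, 1) is the equilibrium point: the vector field vanishes there.
assert np.allclose(lotka_volterra(0.0, [1.0, 1.0]), [0.0, 0.0])

# Integrate from an initial condition near the equilibrium (the analogue of
# Scilab's ode call); the orbit is a closed curve around (1, 1).
x0 = [1.1, 1.1]
sol = solve_ivp(lotka_volterra, (0.0, 20.0), x0, rtol=1e-9, atol=1e-12)

# V = xh - ln(xh) + xp - ln(xp) is conserved along trajectories, which is
# why the orbits are closed curves around the center (1, 1).
V = lambda s: s[0] - np.log(s[0]) + s[1] - np.log(s[1])
assert abs(V(sol.y[:, -1]) - V(x0)) < 1e-4
```

In Scilab itself the vector field is drawn with champ/champ1 and the trajectory with ode followed by a plot command; the corresponding Python plotting calls would be matplotlib's quiver and plot.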
It is a curve which has these arrows precisely as the tangents at each point on the trajectory. So, in order to see a trajectory, suppose we start at some particular point; then we have this curve. The arrows are not drawn on that particular curve, but from the vector field we can see that they are tangent to it. We can also change the initial condition and see how the solution looks for a different initial condition. For that purpose, we will modify the code and change the initial condition. So, let us have a look at the code. This code was developed as a part of the Talk to a Teacher project at IIT Bombay, and it is also being used now for the NPTEL course. There are some initializations, like clearing the figure, and the initial condition is being specified here. Let us make the initial condition, for example, (1, 1). This is where the initial condition is specified, and this is the time for which we are going to perform the simulation. The differential equation itself is specified inside the other file, called lotkavolterra.sci; we will open and see that also. The same differential equation has also been specified here for the purpose of the champ command. The champ and champ1 commands together draw the vector field at various points. We can specify how finely we want the vectors plotted: do we want only a few vectors, at widely spaced points, or a finer grid of points with a vector drawn at each point? So, the main part, up to drawing the vector field, is up to here. Then the plot also carries some additional information, like which axis corresponds to the prey population and which axis corresponds to the hunter population; that information is being put here. And finally, the lotkavolterra function file itself is being called from inside this particular function called ode.
So, the function ode in Scilab solves the ordinary differential equation with this initial condition, from this initial time, up to this time duration. And finally, we are going to plot this on the same graph on which we had drawn the vector field. So, let us go back. We have now given the initial condition as (1, 1), so let us run this code. Let us close this figure. We have again the vector field. Now, we see that there is no trajectory. Why? Because the entire trajectory has collapsed to just one point: we have given (1, 1), the equilibrium point itself, as the initial condition. So, as expected, we do not see any trajectory; the trajectory is a single point. Let us now give some initial condition which is very close to the axis. This is very close to the y-axis because the x-component has been made very small. It turns out that the arrows we had drawn were for a very small region, while the trajectory itself goes through a very large area, and this is how the trajectory looks. In fact, if we give a point on one of the axes, then we have already seen that the solution just blows up: if the hunter population is equal to 0 for some reason, then the prey population just goes on increasing exponentially and eventually becomes unbounded. So, here we see that by starting very close to the y-axis, we also eventually reach a similar situation. We could now give an initial condition that is fairly close to the equilibrium point. So, here we have given this as the initial condition, and we see that there is a very small circle here. It is a little too small compared to the rest of the region and hence it is not very visible; we could consider zooming in to see this particular region. So, this is as far as the Lotka-Volterra simulation is concerned. We were also going to see this other file. So, this is just a function.
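The blow-up along the axis mentioned above can be checked directly: with the hunter population set to zero, the (assumed normalized) prey equation reduces to ẋ_p = x_p, whose solution is x_p(0)·e^t. A minimal sketch, again in Python rather than the course's Scilab:

```python
# Hedged sketch (assumed normalized Lotka-Volterra model): starting exactly
# on the y-axis, the prey population grows exponentially without bound.
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, x):
    xh, xp = x
    return [-xh + xh * xp, xp - xh * xp]

# Hunter population 0, prey population 1.
sol = solve_ivp(lotka_volterra, (0.0, 5.0), [0.0, 1.0], rtol=1e-9, atol=1e-12)

# The hunter stays at 0 and the prey grows like e^t.
assert abs(sol.y[0, -1]) < 1e-9
assert abs(sol.y[1, -1] - np.exp(5.0)) < 1e-3 * np.exp(5.0)
```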
It defines, given x, what ẋ is. ẋ is nothing but f(x), so f is getting defined inside this function: it takes any time t and value of x, and it gives us f(x) at that point. These equations are nothing but the Lotka-Volterra predator-prey model, and we see that time does not play a role explicitly. Now, we will see the Van der Pol oscillator for different initial conditions. Let us recall that the Van der Pol oscillator equations look like this. We had a second-order differential equation which we had converted to a first-order differential equation. We also saw an RLC circuit in which we could interpret x1 as the voltage across the three components. The three components, the resistor, the inductor and the capacitor, are all in parallel. The resistor was not the simple resistor we had seen; it was a special one which could either absorb or give energy, in which sense it was an active device. So, x1 was the voltage across these three devices connected in parallel, and x2 could be given the interpretation that it is the current through the inductor. Then the second-order differential equation can be written in this form, in which H is a little special because it captures the characteristic of the resistor. So, let us see a simulation of this particular model. Like before, we have this particular Scilab code already written. This also was written as a part of the Talk to a Teacher project at IIT Bombay, and it is also being used for the NPTEL course. We have the initial condition being specified here. The function for the differential equation itself is specified inside another file, called vanderpol.sci, which we will open and see in a minute. First, we are interested in seeing the vector field, so let us run this. This is how the arrows look.
We have the origin (0, 0) as the equilibrium point, the only equilibrium point, and we have these arrows going all around, suggesting that it is a center. Whether it is a center or not, we can come to know only by linearizing and looking at the eigenvalues at this equilibrium; we had already seen that the origin is an unstable equilibrium point, but these arrows alone are not telling us much. We will also see, depending on whether we start from outside a certain limit cycle or from inside it, that these arrows tell us whether we converge to the limit cycle from the inside or from the outside. We have already specified an initial condition. First we have seen only the arrows, and the program is waiting for a button to be pressed. So, for this particular initial condition, the trajectory comes closer, encircling round and round, and eventually converges to this limit cycle. That is for the case that we start from this particular point. We can also take a couple of points from outside this limit cycle. The limit cycle seems to lie between −1 and 1 on the horizontal axis and between −1 and 1 on the vertical axis also. So, let us take another initial condition, say (−2, −2), and run this program again. We have just the vector field now, and then the trajectory starts from here, encircles like this and converges to the limit cycle. We can take a point, let us say here, corresponding to (3, −4), and see whether this also encircles. This is also expected; it would not take long to check (3, −4) against the vector field. So, after having started here, we see this particular trajectory, and we see that the arrows are all tangent to it. We will now take a point from inside the limit cycle. First, we could take just the origin.
We know that, the origin being an equilibrium point, we do not expect to see any trajectory. And indeed, we do not see any trajectory; there is actually just one very small point, which is the entire trajectory, because that is the equilibrium point. So, let us also take a point inside this limit cycle but not the origin; let us pick a small nonzero point. Here is the vector field; let us again blow this up. When we start from here, the trajectory seems to encircle, go outward and outward, and eventually converge to this particular limit cycle. So, let us open and see the file vanderpol.sci, because that is what is being called by the ode function in Scilab. The file vanderpol.sci is just defining the first component of ẋ as x2 minus epsilon times H(x1). H itself is a function which we had seen on the slide: we start with h(v), and H is its integral with H(0) = 0. That defines H uniquely, and H evaluated at x1 is what H(x1) is. That is being defined here, and it is put in to define the first component of ẋ. The second component of ẋ is nothing but minus x1. So, the output of this particular function file is f1 and f2, which denote nothing but ẋ1 and ẋ2. This is utilized by the ode command in Scilab. So, this explains how we can use Scilab to visualize the dynamics of a differential equation: a second-order differential equation can be visualized using the vector field, via the champ and champ1 commands, and more generally we can use the ode command to obtain the solution to a differential equation, an initial value problem in particular. So, like we did for the Lotka-Volterra Scilab code, we will also see this Scilab code completely. Of course, all four Scilab programs will be made available on the website; it is not required to copy them from the screen.
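As before, here is a hedged Python analogue of vanderpol.sci, the Liénard form of the Van der Pol oscillator. The cubic characteristic H(v) = v³/3 − v and ε = 1 are assumptions (the lecture's resistor characteristic h may differ in its constants); the check below illustrates the stable limit cycle: trajectories starting outside and inside settle onto the same periodic orbit.

```python
# Hedged sketch of vanderpol.sci in Python: Lienard form of the Van der Pol
# oscillator.  H(v) = v**3/3 - v and EPS = 1.0 are assumed values.
import numpy as np
from scipy.integrate import solve_ivp

EPS = 1.0

def H(v):
    # Integral of h(v) = v**2 - 1, normalized so that H(0) = 0
    return v**3 / 3.0 - v

def vanderpol(t, x):
    x1, x2 = x
    return [x2 - EPS * H(x1), -x1]

def late_amplitude(x0, t_end=60.0):
    """Integrate from x0 and return max |x1| over the tail of the run,
    i.e. the amplitude of the orbit the trajectory has settled onto."""
    t_eval = np.linspace(40.0, t_end, 2000)
    sol = solve_ivp(vanderpol, (0.0, t_end), x0, t_eval=t_eval,
                    rtol=1e-9, atol=1e-12)
    return np.max(np.abs(sol.y[0]))

# A trajectory from outside (-2, -2) and one from inside (0.2, 0.2) the
# limit cycle both converge to the same periodic orbit: same amplitude.
a_out = late_amplitude([-2.0, -2.0])
a_in = late_amplitude([0.2, 0.2])
assert abs(a_out - a_in) < 0.05 and a_out > 1.0
```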
This part of the code sets parameters of the plot. All these commands can be seen in detail, and used more effectively, by consulting the help command in Scilab. As far as we are concerned, these codes will be made available. This completes this topic; we now continue to the next topic. We are going to see something about linearization about a trajectory. We have already seen linearization about a point; now, we will also see linearization about an operating trajectory. The significance of that, why it is important to study it, is what we will begin seeing. So, we will quickly review linearization. After that, we will see what is meant by an operating point and an operating trajectory. We will also see a definition of stability, the notion of stability of close-by trajectories. When we speak of linearization, till now we spoke only about linearization about an operating point. From now on, when we say linearization, we will distinguish between linearization about an operating point and linearization about an operating trajectory. So, let us quickly see the need for linearization. While we are studying nonlinear systems, it is acknowledged that nonlinear systems are harder both for analysis and for controller synthesis. But it is also true that the interest of the analysis or the controller design is often limited to a small region of the state space, and to a small set of initial conditions close to a certain important point. Also, the input functions may not be of very large amplitude. Given these situations, it is often helpful to linearize. The linearized system could then serve the required purposes. Which purposes? The analysis of the nonlinear system and controller synthesis for the nonlinear system; both purposes can perhaps be met by the linearized system. To what extent they are met, we have only partially seen: we already saw that this holds under certain conditions.
So, we will quickly review these conditions. Linearization helps for concluding about the stability of an equilibrium point. More precisely, when we are speaking about an equilibrium point, we take the nonlinear system, linearize it about that equilibrium point, and obtain the linearized system matrix A. Suppose A satisfies this condition: the intersection of the set of eigenvalues of A with the imaginary axis is empty. The "= ∅" has been missed on the slide here; I will just write the condition in. What is the meaning of this? σ(A) is the set of eigenvalues of A and iℝ is the imaginary axis; if the intersection of these two sets is the empty set, it means, in this context, that there are no eigenvalues of A on the imaginary axis. If this condition, σ(A) ∩ iℝ = ∅, is satisfied, then the conclusion — which conclusion? whether the system is asymptotically stable or unstable — for the linearized system also implies the same conclusion for the nonlinear system. This is one of the most important results in the context of linearization of nonlinear systems about an operating point. A nonlinear system could have many equilibrium points, and we could consider the linearization about different operating points; as far as a particular equilibrium point is concerned, we can obtain this matrix A, and if there are no eigenvalues of A on the imaginary axis, then the linearized system may be unstable or may be asymptotically stable. For example, it is asymptotically stable if all the eigenvalues of A are in the open left half complex plane; it is unstable if there are one or more eigenvalues of A in the right half complex plane.
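As a small concrete check of this condition, the following hypothetical Python fragment takes the Jacobian of the (assumed cubic, ε = 1) Van der Pol vector field at the origin and verifies that no eigenvalue lies on the imaginary axis while one has positive real part, so the linearization lets us conclude that the origin of the nonlinear oscillator is unstable:

```python
# Hedged sketch: eigenvalue test for the linearization at an equilibrium.
# Jacobian A = df/dx of the assumed Van der Pol field at the origin, eps = 1:
#   f1 = x2 - (x1**3/3 - x1)  ->  df1/dx1 = 1 - x1**2 = 1 at 0, df1/dx2 = 1
#   f2 = -x1                  ->  df2/dx1 = -1,                 df2/dx2 = 0
import numpy as np

A = np.array([[1.0, 1.0],
              [-1.0, 0.0]])
eigs = np.linalg.eigvals(A)

# Condition of the result above: sigma(A) intersect iR is empty.
assert all(abs(lam.real) > 1e-9 for lam in eigs)
# An eigenvalue in the open right half plane -> the origin is unstable,
# for the linearized and hence also for the nonlinear system.
assert any(lam.real > 0 for lam in eigs)
```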
So, this conclusion, which we can draw for the linearized system using the matrix A, is the same for the nonlinear system about that equilibrium point, under which condition? Under the condition that there are no eigenvalues of A on the imaginary axis. What is the significance of this? It is an example of how the nonlinear system does not have to be analyzed much: as far as the conclusion of whether the nonlinear system about an equilibrium point is unstable or asymptotically stable goes, it is enough to study the linearized system. And this conclusion is the same only when certain conditions are satisfied; if those conditions are not satisfied, then it is not possible to say that the same conclusion holds. Now, we can ask: is this something that is just for analysis, or is it going to help for controller synthesis also? Can such a result be utilized for controller synthesis? This is something we will see in detail in the next few lectures. Another important question concerns the converse: if you know that the nonlinear system is unstable, is it true that the linearized system is also unstable? That is the other question, what we refer to here as the converse. So, these are important questions that we will address in the next few lectures. We will address controller synthesis for the linearized system, and whether that controller will work for the nonlinear system also. So, let us come back to this question: controller synthesis for stabilizing a nonlinear system at an operating point. As I said, in the context of linearization we now need to distinguish between an operating trajectory and an operating point. So, let us first ask this question: controller synthesis for stabilizing a nonlinear system about an operating point.
So, the to-be-controlled nonlinear system gives a to-be-controlled linearized system. Upon linearization, the nonlinear system, which had some inputs, gives us a linearized system also with some inputs. We will do this more precisely in the next few lectures; right now I am just motivating how a controller for the linearized system may work for the nonlinear system also. Suppose A and B are the matrices for the linearized system, let us call it system-lin, and these matrices A and B come from the nonlinear system ẋ = f(x, u). Till now we had been studying only autonomous systems; there was no input. Now, we are speaking of a system which allows us to apply a control input. So, the linearized system is ż = Az + Bv, in which the state has been called z, and the original nonlinear system is ẋ = f(x, u). We will see how the A and B matrices are to be obtained from this particular function f, and under what conditions. What is the controller synthesis question? Under what conditions does a controller designed for system-lin also work for the nonlinear system? Given that A and B were obtained from this nonlinear system, it is possible that linear control theory for the linearized system might automatically carry over to the nonlinear system also. In order to answer this question, we have to understand in a little more detail linearization about an operating trajectory. In the context of linearization, we will also speak about the stability of the operating trajectory. We have already seen, for an autonomous system, what stability of an equilibrium point means; we also saw Lyapunov stability of an equilibrium point. Now the next question is: what is the meaning of stability of an operating trajectory, of a trajectory?
So, let us first consider the case when that trajectory is a periodic orbit. Suppose this is a periodic orbit; we know that if we start on this periodic orbit, then we will keep going along it. We could ask the question: is this periodic orbit stable, in the sense that trajectories that start close to it converge to it? This is the question we asked in the context of the Van der Pol oscillator, for example: we asked whether this periodic orbit is a stable limit cycle. What does it mean to be a stable limit cycle? We have these other trajectories that are converging to it; a stable limit cycle means that we take such a cut, look at all initial conditions close to this periodic orbit, and ask: are all the trajectories that start close to this periodic orbit going to converge to it? That is ideally the case. That is what we want for robust sustained oscillations of an oscillator that we build in a laboratory: no matter what the initial condition, it should converge to that periodic orbit, because that periodic orbit perhaps has been carefully designed to have the right period and the right amplitude. So, this is what we will call a stable limit cycle: a periodic orbit, the cycle itself, such that trajectories that are close by come closer and closer to it. Of course, we know that they cannot come and intersect it at any particular point. Why is it that two trajectories cannot intersect? Such a possibility can arise only if the function is not Lipschitz at that point; if the function is Lipschitz, then we know that two trajectories cannot intersect. So, we have all trajectories coming closer and closer to the stable limit cycle, also from the inside when they start there. We have already seen results in this context.
So, we already know what it means for a trajectory to be stable; we know it informally for the limit cycle, and for the case of periodic orbits we already speak about stability of periodic orbits. What about non-periodic orbits? Is that relevant? For periodic orbits, in the case of autonomous systems, we have already seen that this is very important for oscillators; but should we study this for non-periodic orbits also? We have an example here. Suppose, for a nonlinear system, the optimal trajectory has been computed by optimal control. We have a whole nice theory of optimal control, and using that theory, suppose we compute the optimal trajectory for a particular system. This optimal trajectory might be, for example, the trajectory along which the fuel consumed is minimum: a minimum-fuel-consumption trajectory for a rocket going up into space. This is a very nonlinear system, with many inputs and many outputs. This rocket could be taken from the ground to space along various trajectories, and by using a lot of computation, perhaps, we compute the trajectory by which the least amount of fuel is consumed in taking the rocket from the ground up to space. This particular trajectory computation can be done offline; we assume this. Or, for example, this trajectory might be a so-called minimum-time trajectory: the trajectory that assures us that this rocket, going from the ground into a particular orbit in space, takes the least amount of time. So, we assume that this trajectory computation is done offline, on a computer with lots of time allowed. Before the rocket goes up in the air, we already have a trajectory that will take it into space in the least amount of time, or maybe in the least fuel-consuming way.
So, this is what optimal control is, and once we have found this optimal trajectory, the optimal input required for staying on it is pre-decided, for example by intensive computation. Before we ask what happens when the system does not exactly follow this optimal trajectory, suppose the open-loop control input is pre-decided. The meaning of that is: the input you should give so that the performance objective is met in an optimal way has been computed in advance, and you just give this input to the system. This is the same as saying that the control input is open loop and pre-decided. But it is also true that there are various uncertainties in the system: the fuel may not be of the exact quality that was assumed when you were computing the control input, and there might be other uncertainties in space; when the rocket is going up, it may encounter a situation that is not exactly accounted for in the model. Because of these uncertainties, it is difficult to realize exactly that particular optimal trajectory using this pre-decided control input. Then we could ask the question: what about trajectories that start close by? Do the trajectories that are close to this optimal trajectory stay close by? This is a natural question that arises in this context. If the close-by trajectories do not stay close by — if the optimal trajectory is very good, but trajectories close to it move away from it — then, as we will very soon define, that is an unstable optimal trajectory. Then perhaps we can use feedback to stay close to this optimal trajectory. What is feedback here? Our control input was pre-decided; that pre-decided control input is indeed optimal if all the modeling has been done to account for all uncertainties.
But in the presence of uncertainties, the actual input that we are giving is no longer optimal, because there are some model uncertainties which we have not accounted for. So perhaps we could use feedback: we could measure the actual sensor values and add the required correction to the pre-decided control input. What is this linear controller supposed to do? We design a linear controller to bring close-by trajectories back to the optimal trajectory. If the trajectories that are close by are not coming back to the optimal trajectory, then we would like to design a controller that achieves this. As far as this particular problem is concerned, even though the optimal input was pre-decided, if the trajectory is not exactly the optimal trajectory, then a linear controller may suffice for stabilizing the system back to this optimal trajectory. So the linearization approach is not restricted to a reference state, to an operating point; it can also be applied to a reference trajectory, to an operating trajectory. In the previous example, it was the optimal trajectory. This is what we will now study in detail. The procedure that we will see very soon is based on a Taylor series expansion and knowledge of the nominal system trajectory. This reference state or reference trajectory we will call the nominal state value or nominal system trajectory; the nominal system trajectory has a corresponding nominal input also. So what is the procedure now? Consider the first-order nonlinear dynamical system, with known initial conditions: ẋ = f(x, u). Often we suppress the t; it is implicit that x and u are functions of time, and u is the input to the system. Suppose, for the input ū, the system operates along the trajectory x̄. This is our reference input and reference trajectory.
This is precisely the situation we were describing: x̄ is perhaps an optimal trajectory, optimal in the sense that the least fuel is consumed or the time taken is least, for example, and ū is the input that we give for achieving this optimal trajectory. When we say that this input achieves this trajectory, of course the differential equation is satisfied: x̄̇ = f(x̄, ū). So we will call ū the nominal system input and x̄ the nominal system trajectory. This nominal input is assumed to exactly reproduce the nominal system trajectory. Why? Because ū is pre-calculated; it is like an open-loop control input. Now we will consider a perturbation in the neighborhood of the trajectory: x(t) = x̄ + δx, where δx is a small perturbation, a small state deviation. We will also add a small quantity to the nominal system input: u = ū + δu. Please note: x is the actual state, x̄ is the nominal state and δx is the deviation; ū is the nominal input, δu is the deviation, and together they give the actual input. So we are assuming that the deviations in the state and input are small. Now we can take the derivative: the derivative of x(t) is nothing but the derivative of x̄ plus the derivative of δx. That is what has been written on the left-hand side here, and on the right-hand side we have to keep x̄ + δx and ū + δu inside f, because f is not a linear map. This right-hand side, the right-hand side of this equation, is what we will call the RHS on the next slide. The right-hand side can be expanded into a Taylor series, about what? About the nominal system trajectory. So, the right-hand side begins with f(x̄, ū).
This f(x̄, ū) is the 0th-order term in the Taylor series expansion. Next comes the derivative of f with respect to x, times the deviation. This derivative of f with respect to x again depends on both x and u, and we evaluate it at (x̄, ū). Why? Because our nominal system trajectory and nominal system input are x̄ and ū, and we are seeking a Taylor series expansion about this nominal trajectory and nominal input. So this is the 0th-order term, and this is the first-order term as far as δx is concerned, as far as x is concerned. We also differentiate f with respect to u, again evaluate it at (x̄, ū), and this is the term corresponding to the deviation δu in the input. Plus, there are some higher-order terms also. These higher-order terms involve second partial derivatives of f with respect to x and u, and they can be neglected under the assumption that the deviations are small. Since we are asking whether trajectories close to the operating trajectory come back, we are seeking an analysis only for small deviations, and hence these higher-order terms can be neglected. After neglecting them, we get this particular differential equation. How have we got it? Directly from here: we have substituted the right-hand side into the differential equation, as we will see on the next slide, and we have also used that the reference trajectory and the reference input satisfy the differential equation x̄̇ = f(x̄, ū); this is what we also called the nominal trajectory and input. Then we analyze, not at x̄ and ū, but at x = x̄ + δx and u = ū + δu.
It is about this particular x and u that we are going to analyze, and for that purpose we wrote f(x, u) = f(x̄, ū) + (∂f/∂x)δx + (∂f/∂u)δu, with both partial derivatives evaluated at (x̄, ū). So, we have expanded f(x, u) as the 0th-order term, plus the first-order terms, plus higher-order terms, which we have decided to neglect because the deviations δx and δu are small. Now we substitute this back into the differential equation. The derivative operator is a linear operator, because of which we get δẋ on the left, and on the right, after neglecting the higher-order terms, (∂f/∂x)δx + (∂f/∂u)δu, in which both partial derivatives have been evaluated at (x̄, ū). Strictly speaking, this is only approximately equal; why? Because we have decided to ignore the higher-order terms. So, we will now define the first constant matrix, ∂f/∂x evaluated at (x̄, ū), as A, and the second, ∂f/∂u evaluated at (x̄, ū), as B. This is the most important thing that we do when we linearize the system, and this is what we will continue in the next lecture. So, we just see the definitions here before we stop for today's lecture: we define A as this particular matrix and B as this matrix. With that we end lecture number 12. In the next lecture we will see more about the matrices A and B and why they are time-varying. Thank you.
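Collecting the spoken derivation above into one place, a reconstruction of the slide's formulas:

```latex
\begin{align}
\dot{\bar{x}} + \delta\dot{x}
  &= f(\bar{x}+\delta x,\ \bar{u}+\delta u)
   = f(\bar{x},\bar{u})
   + \left.\frac{\partial f}{\partial x}\right|_{(\bar{x},\bar{u})}\delta x
   + \left.\frac{\partial f}{\partial u}\right|_{(\bar{x},\bar{u})}\delta u
   + \text{h.o.t.}
\intertext{Using $\dot{\bar{x}} = f(\bar{x},\bar{u})$ and neglecting the
higher-order terms:}
\delta\dot{x}
  &\approx A(t)\,\delta x + B(t)\,\delta u,
\qquad
A(t) = \left.\frac{\partial f}{\partial x}\right|_{(\bar{x}(t),\bar{u}(t))},
\quad
B(t) = \left.\frac{\partial f}{\partial u}\right|_{(\bar{x}(t),\bar{u}(t))}.
\end{align}
```

Since the partial derivatives are evaluated along the nominal trajectory x̄(t) and nominal input ū(t), which vary with time, A and B are in general time-varying matrices, which is the point previewed for the next lecture.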