Welcome everyone to lecture number 9 on nonlinear dynamical systems. We had started with LaSalle's invariance principle in the last lecture, so let us quickly review it. Suppose Omega is a compact set that is positively invariant, and suppose we have found a function V that is C^1 (differentiable with a continuous derivative) and satisfies V̇ <= 0 on the set Omega. For this V, we now find the set E of points of Omega where V̇ = 0, and we let M be the largest invariant set in E. "Largest invariant set in E" means that M is invariant under the dynamics of the dynamical system, that it is contained in E, and that it is the largest such set: any other subset of E that satisfies these properties is contained in M. If these conditions are satisfied, then every solution starting in Omega approaches the set M as t tends to infinity.

Approaching a set is something we had seen as a definition. The distance of a point p from the set M is defined as the shortest of the distances from p to the different points of M: dist(p, M) = inf over m in M of ||p - m||. Now, as x evolves as a function of time, x(t) is a point, and we look at the distance of x(t) from the set M; LaSalle's invariance principle states that this distance goes to 0 as t tends to infinity. In other words, every solution starting in the set Omega approaches M: for every initial condition, the trajectory converges to the set M. A tiny sketch of this distance-to-a-set notion is given below.
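To make the distance definition concrete, here is a small sketch of my own (not from the lecture), where the set M is represented by finitely many sample points and the distance is the smallest point-to-point distance:

```python
import numpy as np

def dist_to_set(p, M_points):
    """Distance of a point p from a set M: the smallest distance from p
    to the points of M (here M is given by finitely many sample points)."""
    p = np.asarray(p, dtype=float)
    return min(np.linalg.norm(p - np.asarray(m, dtype=float)) for m in M_points)

# Example: distance of p = (1, 1) from the two-point set M = {(0, 0), (pi, 0)}
print(dist_to_set([1.0, 1.0], [[0.0, 0.0], [np.pi, 0.0]]))  # sqrt(2) ~ 1.414
```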
We had already encountered, in the pendulum example, the situation where the natural energy function does not satisfy V̇ strictly less than 0. So let us do this example again, the pendulum with friction. The original differential equation is of second order, ẍ + b ẋ + sin x = 0 in normalized form. We converted this second-order differential equation into two first-order differential equations by introducing x2: x1 is the same as x, and x2 is the derivative of x1, giving ẋ1 = x2 and ẋ2 = -sin x1 - b x2. For this dynamical system we now find the equilibrium points and study their stability properties. Setting (x2, -sin x1 - b x2) = (0, 0) gives x1 = 0 and x2 = 0 as one of the equilibrium points. Of course, we could also have x1 = pi, which corresponds to the pendulum standing upright; we know that one is unstable, and we can also obtain that as a conclusion by linearizing about that point and checking that at least one of the eigenvalues is in the open right half plane. That we keep as an exercise. What we check now is whether the equilibrium point (0, 0) is stable and whether it is asymptotically stable.

For this purpose, we take the Lyapunov function from the energy of the system. Take P(x) = 1 - cos x1. Why? Because this is the potential energy: when the pendulum undergoes a deviation of angle x1, the amount by which the bob gets raised gives the potential energy accumulated in the system, and that turns out to be 1 - cos x1, of course multiplied by the mass and the gravitational acceleration g. But we have considered a model where those parameters do not arise; this can be seen as a normalization of the equations, or as normalizing the mass to 1.

This is only the potential energy. The other energy term is actually x2^2/2, but let us first check what happens if we take just x2^2, i.e., V(x) = (1 - cos x1) + x2^2. This is then not really the energy: the second term is not the kinetic energy but twice the kinetic energy. Let us compute V̇(x) = (∂V/∂x) f(x). Here ∂V/∂x is a row vector whose first component is the derivative of V with respect to x1, which is exactly sin x1, and whose second component is the derivative with respect to x2, which is 2 x2. And f(x) has first component x2 and second component -sin x1 - b x2. Multiplying out this product, which is like an inner product, we get V̇(x) = x2 sin x1 - 2 x2 sin x1 - 2b x2^2 = -x2 sin x1 - 2b x2^2.

Is this quantity positive or negative? That is the next thing to investigate. The term -2b x2^2 is well behaved: it does not change sign; it is always negative or equal to 0. But look at the other term on the (x1, x2) plane: close to x1 = 0, sin x1 has the same sign as x1, but x2 sin x1 changes sign depending on which quadrant we are in. Hence, for small values of x1 and x2, in other words close to the origin, we are not able to say that V̇(x) <= 0; the condition is not satisfied close to (0, 0). One can check this oneself, and to do so one could first ignore the friction term. Why can we ignore it? Because putting b = 0 just means a pendulum without friction, and for the pendulum without friction we know by intuition that the system is stable; we want to obtain that as a conclusion from Lyapunov's stability theorem. In that case the remaining quantity -x2 sin x1 certainly changes sign: it has a different sign on each of the four quadrants around the origin. Hence this V̇ does not satisfy V̇ <= 0, and this V is not a valid Lyapunov function. It is a Lyapunov candidate, since it is positive definite, but it is not non-increasing along trajectories near the origin.

So let us go back to our Lyapunov candidate and make a small change: divide the second term by 2 (this we have perhaps already verified once). Now V(x) = (1 - cos x1) + x2^2/2, whose second term is indeed the kinetic energy. By doing this, the factor 2 no longer appears in ∂V/∂x2, the cross terms cancel out, and we get V̇(x) = -b x2^2, which is indeed less than or equal to 0. This at least proves that (0, 0) is stable. A symbolic check of both computations is sketched below.
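Both V̇ computations can be verified symbolically; here is a minimal SymPy sketch (the helper name `vdot` is mine, not from the lecture):

```python
import sympy as sp

x1, x2, b = sp.symbols('x1 x2 b', real=True)
f = sp.Matrix([x2, -sp.sin(x1) - b * x2])   # pendulum vector field with friction

def vdot(V):
    # Vdot(x) = (dV/dx1) * f1 + (dV/dx2) * f2, the row of partials times f
    return sp.expand(sp.diff(V, x1) * f[0] + sp.diff(V, x2) * f[1])

V_bad  = 1 - sp.cos(x1) + x2**2        # second term is twice the kinetic energy
V_good = 1 - sp.cos(x1) + x2**2 / 2    # second term is the kinetic energy

print(vdot(V_bad))    # -2*b*x2**2 - x2*sin(x1): sign-indefinite near the origin
print(vdot(V_good))   # -b*x2**2: never positive
```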
However, this has not helped us prove that the origin is asymptotically stable, even though by intuition we know that the equilibrium point is in fact asymptotically stable, because the friction continuously dissipates the energy. So how do we obtain that? This particular Lyapunov candidate does not help us directly, but we can use LaSalle's invariance principle with the same function. Construct the set E = {x : V̇(x) = 0}; in other words, -b x2^2 = 0, which gives the set of all points where x2 = 0, and that is nothing but the x1 axis. The x1 axis is the set of all points where the rate of change of the Lyapunov function is 0; this is our set E.

Now we want the set M which is a subset of E, is invariant under the dynamics of the system, and is the largest such set. How do we find it? We look for all points (x1, x2) that lie in E and remain in E under the dynamics; this automatically yields the set of all such points, and hence the largest such set. Membership in E is the requirement x2 = 0; now put this into ẋ1 = x2 and ẋ2 = -sin x1 - b x2. A trajectory x(t) that remains in E has x2(t) equal to 0 always: uniformly equal to 0, identically equal to 0; these are the different ways we read this equation. If x2 as a function of time is always equal to 0, it is a constant function, which automatically means that ẋ2 is also identically 0. Putting x2 = 0 and ẋ2 = 0 into the second equation, the friction term is already 0 and we are left with sin x1 = 0, which of course happens either at the vertically down position x1 = 0 or at the vertically up position x1 = pi. Since we are interested in the stability properties of the point (0, 0), we take our compact positively invariant set Omega to be a small enough neighbourhood of the origin, for instance a sublevel set of V around the origin, that excludes the point (pi, 0); then the only point of E inside Omega that is invariant is given by x1 = 0, x2 = 0. Any other candidate set would fail these equations; we looked for all the points that satisfy them and got just this point. In other words, M is just the set containing the single point (0, 0).

So, by LaSalle's invariance principle, x(t) converges to the set M, which is just the origin. This proves that the origin is not only stable, which we had already concluded from Lyapunov's stability theorem, but in fact asymptotically stable. And note by which principle: not by Lyapunov's theorem of asymptotic stability, but by LaSalle's invariance principle, which we used to conclude that M consists of just the origin and that the trajectories x(t) converge to M. A small numerical illustration of this convergence follows.
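This is my own numerical sketch of the conclusion, assuming a friction value b = 0.5 and an arbitrary initial condition; the distance of x(t) from M = {(0, 0)} is simply the Euclidean norm of x(t):

```python
import numpy as np
from scipy.integrate import solve_ivp

b = 0.5  # assumed friction coefficient; any b > 0 behaves similarly

def pendulum(t, x):
    # x1' = x2,  x2' = -sin(x1) - b*x2
    return [x[1], -np.sin(x[0]) - b * x[1]]

sol = solve_ivp(pendulum, (0.0, 40.0), [1.0, 0.0],
                t_eval=np.linspace(0.0, 40.0, 9))
# dist(x(t), M) with M = {(0, 0)} is just ||x(t)||
for t, d in zip(sol.t, np.hypot(sol.y[0], sol.y[1])):
    print(f"t={t:5.1f}   dist(x(t), M) = {d:.5f}")
# The printed distances tend to 0, as LaSalle's invariance principle predicts.
```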
Now let us investigate whether the linearized system at this equilibrium point is also asymptotically stable. Consider again ẋ1 = x2 and ẋ2 = -sin x1 - b x2; written as a vector, ẋ = f(x) with components f1(x) and f2(x). What is the linearization? We have already checked that the point of interest, (0, 0), is an equilibrium point. The linearized system is given by the Jacobian matrix of f evaluated at x = (0, 0), so let us find this matrix. The (1,1) entry is the derivative of f1 with respect to x1; x1 does not appear in f1 at all, so this entry is 0. The (1,2) entry is the derivative of f1 with respect to x2, which is exactly 1, because f1(x) = x2. The (2,1) entry is the derivative of f2 with respect to x1; x1 appears only in the term -sin x1, so this entry is -cos x1. The (2,2) entry is the derivative of f2 with respect to x2; x2 appears only in the term -b x2, so we put -b here. As expected this is a matrix, and in general it depends on x1 and x2; in this case it depends only on x1. We now evaluate it at the origin: the first-row entries involve neither x1 nor x2, and putting x1 = 0 in -cos x1 gives -1. So A = [0 1; -1 -b], with b > 0.

Let us check how the eigenvalues of this matrix look. Doing the calculation, det(sI - A) = s^2 + b s + 1, and using the fact that b > 0, one can check that both eigenvalues of A are in the open left half complex plane. In other words, A is Hurwitz, so the origin of the linearized system is asymptotically stable; and if the origin of the linearized system has all eigenvalues in the open left half plane, then we know that the equilibrium point of the nonlinear system is also asymptotically stable. A quick numerical check of this eigenvalue claim is sketched below.
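A sketch of my own; the b values are arbitrary positive samples:

```python
import numpy as np

# Eigenvalues of A = [[0, 1], [-1, -b]] for a few positive friction values.
# det(sI - A) = s^2 + b*s + 1, so both roots should have negative real part.
for b in [0.1, 1.0, 5.0]:
    A = np.array([[0.0, 1.0], [-1.0, -b]])
    eig = np.linalg.eigvals(A)
    print(f"b={b:4.1f}  eigenvalues={eig}  max real part={eig.real.max():+.4f}")
# The maximum real part is negative in each case: A is Hurwitz for b > 0.
```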
However, our energy Lyapunov function could not give us asymptotic stability directly. So can we find another Lyapunov function? After all, the Lyapunov theorem gives only sufficient conditions for stability and asymptotic stability, and since we already know that the equilibrium point is asymptotically stable, we can look for another Lyapunov function that shows asymptotic stability; the energy function helped us prove only stability. So we will find a Lyapunov function for the linearized system. In other words, find P > 0 such that A^T P + P A = Q for a negative definite matrix Q. This is the problem we will solve now. Why? Because this Lyapunov function for the linearized system will also serve as a Lyapunov function for the nonlinear system. For linear systems, because A is Hurwitz, for any such Q we will be able to find such a P. So take Q = [-1 0; 0 -1]. This Q corresponds to our V̇: for the linear system, V̇(x) is nothing but x^T Q x, so with this particular Q we get V̇(x) = -x1^2 - x2^2, which we know is negative definite; it is strictly less than 0 for all (x1, x2) except x1 = 0 and x2 = 0.

For this particular Q, we now look for a P such that A^T P + P A = Q. Because that A is Hurwitz, the P we obtain from this equation will turn out to be a positive definite matrix. Recall that A = [0 1; -1 -b]; for the purpose of solving, we take b = 1 (b is the rate at which energy is dissipated by friction, and it is only required to be positive). Write P = [p1 p2; p2 p3]: P is a symmetric matrix, hence both off-diagonal entries equal p2. A theorem we already saw guarantees that the resulting system of equations is solvable. Now compute: A^T = [0 -1; 1 -1], so A^T P = [-p2 -p3; p1-p2 p2-p3] and P A = [-p2 p1-p2; -p3 p2-p3]. Adding these two matrices, we finally get A^T P + P A = [-2p2, p1-p2-p3; p1-p2-p3, 2p2-2p3]. Since P is symmetric, the matrix we obtain is also symmetric, and that is the reason Q should also be chosen symmetric; we have chosen Q = [-1 0; 0 -1].
When we equate this matrix to Q, it appears like four equations: the (1,1) entry equal to -1, the (1,2) entry equal to 0, the (2,1) entry equal to 0, which is the same equation again, and the (2,2) entry equal to -1. So it is not really four equations but three: -2 p2 = -1, p1 - p2 - p3 = 0, and 2 p2 - 2 p3 = -1. The first equation gives p2 = 1/2. Substituting into the last equation, 2 p3 = 2 p2 + 1 = 2, so p3 = 1. Substituting p2 and p3 into the second equation, p1 = p2 + p3 = 1.5. So what is our matrix P as a result? P = [p1 p2; p2 p3] = [1.5 0.5; 0.5 1].

The claim is that this P is positive definite, because A was Hurwitz and Q was negative definite. We can check this directly. How do we check P > 0? One way to check that a matrix is positive definite is that all its leading principal minors are positive. For our square symmetric matrix: the first leading principal minor is the 1x1 determinant formed from the top-left entry, a principal minor because the same rows and columns are taken to construct the submatrix, and a leading one because it is the top-left block; the next leading principal minor is the determinant of the whole matrix P. Each of these determinants being positive is the necessary and sufficient condition for the symmetric matrix P to be positive definite. Let us do that check here, where only two determinants are needed. The first 1x1 determinant is p1 = 1.5, which is greater than 0. What about the determinant of the whole matrix? It is 1.5 x 1 - 0.5^2 = 1.5 - 0.25 = 1.25, which is positive. Because both the leading 1x1 minor and the leading 2x2 minor are positive, the matrix P is positive definite.

So suppose we take the Lyapunov function coming from this P for the linearized system, namely V(x) = x^T P x, with P the matrix we just obtained. With this Lyapunov function, the origin turns out to be asymptotically stable by Lyapunov's theorem of asymptotic stability, and the same Lyapunov function also proves asymptotic stability of the nonlinear system's equilibrium point, which again is the origin. Had we started with this Lyapunov function, we would not have needed LaSalle's invariance principle, because the Lyapunov theorem itself would have stated that the equilibrium point is asymptotically stable, unlike the physical energy function we took, which helped us prove only stability via Lyapunov's stability theorem. This completes the Lyapunov analysis; we have seen some solved examples, and another set of problems will serve as exercises. A numerical cross-check of the computation above is sketched below.
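These hand computations can be cross-checked in a few lines (my sketch; note that SciPy's `solve_continuous_lyapunov(a, q)` solves a X + X aᵀ = q, so we pass A transpose to obtain Aᵀ P + P A = Q):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-1.0, -1.0]])   # b = 1
Q = -np.eye(2)                             # Q = -I, negative definite

# Passing A.T turns the solver's equation into A^T P + P A = Q.
P = solve_continuous_lyapunov(A.T, Q)
print(P)                      # [[1.5, 0.5], [0.5, 1.0]]
print(np.linalg.eigvalsh(P))  # both eigenvalues positive: P > 0
```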
We will now move on to the next topic, periodic orbits. Why? Because periodic orbits are an important ingredient in the context of building oscillators. But before we move to that topic, there is one slide that we had skipped. LaSalle's invariance principle, which we saw in detail along with an example, differs from Lyapunov's theorem in two ways. First, unlike the Lyapunov theorem, LaSalle's invariance principle does not require the function V to be positive definite; notice that we did not assume V to be positive definite. Second, in the proof of the Lyapunov theorem, the positively invariant set was constructed using the Lyapunov function V itself, whereas here we assume that we already have a positively invariant set Omega. In fact, that is exactly why we need not assume V to be positive definite: on a compact set Omega, V always achieves its minimum, and we can subtract that minimum from V, obtaining another function that is nonnegative on Omega.

We will also see an application of LaSalle's invariance principle: there are well-known results that turn out to be special cases of it. One of them is the Barbashin-Krasovskii theorem. What is its statement? Suppose ẋ = f(x) is a system in which x can have many components, x an element of R^n, and suppose the origin is an equilibrium point. Suppose there exists a continuously differentiable function V from a domain D to R that is positive definite, in other words V(0) = 0 and V(x) > 0 for all x except the origin, and that satisfies V̇ <= 0 on the domain D. Construct the set S of all points of D where V̇ = 0 (the set we called E earlier). If S has the property that the only solution that can remain identically inside S is x(t) ≡ 0, then the origin is asymptotically stable. Notice that this is precisely the situation that occurred in the pendulum example with friction; the Barbashin-Krasovskii theorem is a general statement to this effect.

What can we say about global asymptotic stability of the equilibrium point we just claimed to be asymptotically stable? If, in addition to the above assumptions, D is all of R^n and V is radially unbounded, then the origin is in fact globally asymptotically stable. For the pendulum, asymptotic stability is exactly what we obtained this way, but the pendulum is of course not globally asymptotically stable, simply because there are other equilibrium points. Consistently with the theorem, one can check that the Lyapunov function V we used for the pendulum with friction is not radially unbounded; otherwise the origin would have been globally asymptotically stable according to this theorem, which it is not. A quick check of this is sketched below.
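A small sketch of my own: along the x1 axis the norm of x grows without bound while the pendulum's energy function stays below 2, so V is not radially unbounded and the global version of the theorem cannot be invoked:

```python
import numpy as np

# V(x1, x2) = 1 - cos(x1) + x2**2 / 2, the energy function used earlier
V = lambda x1, x2: 1.0 - np.cos(x1) + 0.5 * x2**2

# ||x|| -> infinity along the x1 axis, yet V remains bounded by 2.
for x1 in [10.0, 1e3, 1e6]:
    print(f"||x|| = {x1:>9.1f}   V = {V(x1, 0.0):.4f}")
```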
Now we come to the other topic, periodic orbits. For this purpose we will study periodic orbits in more detail in the plane, where the trajectory evolves in a plane: at each time instant, x(t) is an element of R^2, so it has two components only. What is the objective? The objective is to design robust and stable oscillators. What is robust and what is stable about this? We want the amplitude and the frequency of the oscillations to be not too sensitive to the initial conditions and not too sensitive to the system parameters. As we noted at the beginning of these lectures, only nonlinear systems can help here. Why is that? Because linear systems, first of all, are very sensitive to initial conditions: if we start with a different initial condition, the amplitude is different; for linear systems the frequency remains the same, but the amplitude is different. Moreover, the very existence of periodic orbits is extremely sensitive to the system parameters. The eigenvalues lie on the imaginary axis, and for small changes in the system parameters the eigenvalues move into the right half plane or the left half plane, which means we either have no periodic orbits and all trajectories go to 0, or the trajectories become unbounded and there is again an absence of periodic orbits. In other words, linear systems are at the brink of instability, and hence their periodic orbits are extremely sensitive to the system parameters.

For nonlinear systems the question arises of how to even claim that a given system of equations has periodic orbits. One extremely important result in this context is the Poincaré-Bendixson criterion. What does the criterion say? Consider the system ẋ = f(x), where x has only two components, and suppose there is a set M which is compact, i.e., a closed and bounded subset of the plane. Suppose M has the property that it contains no equilibrium points, or that it contains at most one equilibrium point which, upon linearization, is an unstable focus or an unstable node; in other words, both eigenvalues of the matrix A obtained by linearizing at that equilibrium point are in the open right half complex plane. Further, suppose M is also positively invariant. If M satisfies these three conditions, compactness, positive invariance, and containing either no equilibrium point or at most one which is unstable in this sense, then these conditions are sufficient to ensure that M contains a periodic orbit. Under these assumptions, the Poincaré-Bendixson criterion guarantees that M contains a periodic orbit.

What is the intuition behind this? M is positively invariant and compact: trajectories that start inside M remain inside M for all future time, and since M is bounded, these trajectories are all bounded; they cannot become unbounded because they never even leave M. Further, in the plane, such bounded trajectories, which exist for all future time, must as t tends to infinity approach either equilibrium points or periodic orbits; these are the only two possibilities. Now, if we rule out the existence of any equilibrium points inside M, we are forced to have a periodic orbit, and this is what the Poincaré-Bendixson criterion says. Even if M has an equilibrium point, if it is unstable in the above sense, trajectories cannot be converging to it, so again we have a periodic orbit. These three conditions on M therefore ensure that there is a periodic orbit. Please note that this is only a sufficient condition for the existence of a periodic orbit; also, periodic orbits need not be unique, and there can even be a continuum of periodic orbits, which we will see very soon. A concrete example is sketched below.
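As an illustration, consider a standard textbook system (not from this lecture; I am assuming it here for concreteness) whose polar-coordinate form is r' = r(1 - r^2), theta' = 1. The origin is an unstable focus, and the annulus 0.5 <= ||x|| <= 2 is compact, positively invariant, and free of equilibria, so the Poincaré-Bendixson criterion guarantees a periodic orbit inside it; here it is the unit circle:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # In Cartesian coordinates: r' = r(1 - r^2), theta' = 1
    x1, x2 = x
    r2 = x1**2 + x2**2
    return [x1 - x2 - x1 * r2, x2 + x1 - x2 * r2]

# Start on the inner edge of the annulus 0.5 <= ||x|| <= 2; since r' > 0
# there and r' < 0 on the outer edge, trajectories cannot leave the annulus.
sol = solve_ivp(f, (0.0, 30.0), [0.5, 0.0], rtol=1e-8)
print(np.hypot(sol.y[0, -1], sol.y[1, -1]))  # radius approaches 1: the periodic orbit
```

Alternatively, the full disk ||x|| <= 2 also satisfies the criterion through its second case, since its only equilibrium, the origin, is an unstable focus.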
Another important criterion in this situation is the so-called Bendixson criterion. What does this criterion say? Please keep track of which conditions are necessary and which are sufficient: the Bendixson criterion is a sufficient condition for the non-existence of periodic orbits that are fully contained inside a region. What is the criterion? If, on a simply connected region D of the plane (we will see shortly what simply connected means), the expression ∂f1/∂x1 + ∂f2/∂x2 is not identically 0 and does not change its sign, then the system of equations ẋ = f(x) has no periodic orbits lying entirely in D. So inside the region D we check that this quantity is not always equal to 0 and that it does not change sign; if these two properties are satisfied, then there cannot be any periodic orbit lying completely in D. Please note that it is only the non-existence of periodic orbits fully contained inside D that is guaranteed by the criterion.

Here ẋ = (ẋ1, ẋ2) = (f1(x), f2(x)) is our dynamical system; as said, we are considering the evolution of trajectories in a plane, so x has only two components, x1 and x2, and this differential equation consists of two scalar equations. We differentiate f1 with respect to x1 and to that we add the partial derivative of f2 with respect to x2; note that f1 can depend on both x1 and x2, and similarly f2, which is why partial derivatives appear. This quantity is some function, call it G, which depends on x1 and x2: G(x1, x2) = ∂f1/∂x1 + ∂f2/∂x2. What does the Bendixson criterion ask of G? First, that G is not identically 0 in D: there is at least one point (x1, x2) of D where G is not equal to 0. As soon as G is nonzero at even a single point, it is not identically 0 in D; it is allowed to be 0 at a few points, even at several points, but it should not be 0 at all points of D. Second, we look at the sign of G, which at any point can be 1, -1, or 0; this is possible in general, but we require that the sign does not change: it should not go from 1 to -1 or from -1 to 1 as we check different points of D. As long as it is always 1 or always -1, possibly becoming 0 at some points, the condition holds. If G satisfies these two conditions over D, then the Bendixson criterion concludes that D cannot fully contain a periodic orbit. An application to the damped pendulum from earlier is sketched below.
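Tying this back to the earlier example, here is a minimal SymPy sketch of my own applying the criterion to the damped pendulum:

```python
import sympy as sp

x1, x2, b = sp.symbols('x1 x2 b', real=True)
f1 = x2
f2 = -sp.sin(x1) - b * x2   # damped pendulum from earlier in the lecture

# G = df1/dx1 + df2/dx2
G = sp.diff(f1, x1) + sp.diff(f2, x2)
print(G)   # -b: strictly negative for b > 0, so G never changes sign
```

Since R^2 itself is simply connected and G = -b is nonzero and of one sign everywhere for b > 0, the Bendixson criterion rules out any periodic orbit of the damped pendulum lying entirely in the plane, consistent with the convergence behaviour we established.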
Another assumption that we had made was that D is a simply connected region. What is a simply connected region? In the (x1, x2) plane, consider a region D. Connected means, first of all, that D is not made up of two separate pieces: to say that D is connected, we take any two points of D and there should be a path between them all of whose points also lie in D, and this should be possible for every two points in D. If D had two components, then while certain pairs of points are connected by a path lying in D, not every pair is: take a point in one component and a point in the other, and any path between them is forced to go outside the set D; such a set would not even be called connected.

For a region that is connected, what do we mean by simply connected? Take a closed curve inside D. Suppose this closed curve can be shrunk to a point: we can take a slightly smaller curve, then a smaller one still, and this shrinking eventually leads to a point, and at no stage of the process does the curve have to leave the set D. If every closed curve in D can be shrunk to a point while staying inside D, then we say D is simply connected. All the regions we usually think of are indeed simply connected. An example of a set that is connected but not simply connected is a region with a hole: take a shaded region and remove a piece from its interior; the shaded region without the hole is our D. It is connected, because any two points of D can be joined by a path lying entirely in D. But consider a closed curve in D that encircles the hole: it cannot be shrunk smaller and smaller to a point with the whole curve remaining in D, precisely because the hole is inside the curve but not part of D. We may be able to shrink other closed curves to a point, but for D to be simply connected, every closed curve must shrink to a point, and here there are curves that cannot. Hence this D with the hole is not simply connected, while the region without the hole is. The Bendixson criterion requires that the region D we are checking is a simply connected region.
So, on the simply connected region D we check the function G, obtained from f1 and f2 by the partial-derivative operation above: G should not be identically 0 on the region, and its sign should not change from 1 to -1 or from -1 to 1; it is allowed to be 0 at a few points, where its sign is 0, and that is of no concern. If G satisfies these two properties at all points of D, then the Bendixson criterion says that there is no periodic orbit lying entirely in D.

What the Bendixson criterion does not say is the following. Suppose D is a simply connected region and suppose the G of the previous slide changes sign: when we take different points (x1, x2), its sign is 1 at certain points and -1 at certain other points. Then the criterion says nothing; there could still be a periodic orbit that is not lying entirely in D, one that is partly inside D and partly outside. The Bendixson criterion does not rule out the existence of such a periodic orbit; it rules out only periodic orbits lying entirely in D. Please note the subtle difference between lying entirely in D and merely passing through D: the criterion only says that if G does not change its sign on D (and is not identically 0 there), then there is no periodic orbit contained in D.

Let us take the example of a linear system: ẋ1 = x2, ẋ2 = -x1. This can also be written as ẋ = Ax, where A = [0 1; -1 0]. Let us check the eigenvalues of this matrix: det(sI - A) = s^2 + 1 (please verify that the characteristic polynomial turns out to be this), so the eigenvalues of A are +j and -j. In other words, there are two eigenvalues, both on the imaginary axis, which suggests that there are periodic orbits; the equilibrium point (0, 0) is a center for this particular A. Now let us see what happens to the Bendixson criterion. Our G was defined as ∂f1/∂x1 + ∂f2/∂x2, and here f1 = x2 and f2 = -x1. The derivative of f1 with respect to x1 is 0, and since x2 does not even appear in f2 (only x1 appears), the derivative of f2 with respect to x2 is also 0. So G(x1, x2) is identically equal to 0: no matter which simply connected region we take, it is 0 without our even having to specify at which point (x1, x2) we are checking. This is a situation where the Bendixson criterion is not applicable: its assumptions are not satisfied. Does that mean that there are no periodic orbits lying entirely inside such a simply connected region D? No, it does not. A short symbolic check of this example is sketched below.
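A short symbolic check of this example (my own sketch):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f1, f2 = x2, -x1           # x' = A x with A = [[0, 1], [-1, 0]]

A = sp.Matrix([[0, 1], [-1, 0]])
print(A.eigenvals())       # {I: 1, -I: 1}: both eigenvalues on the imaginary axis

G = sp.diff(f1, x1) + sp.diff(f2, x2)
print(G)                   # 0: identically zero, so Bendixson's criterion does not apply
```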
It only means that, because the assumptions of the Bendixson criterion are not satisfied, we cannot go ahead and conclude anything: the statement simply does not apply. However, we know that in this particular case G is identically 0 and there are indeed periodic orbits; in fact, the trajectories are nothing but periodic orbits: from any initial condition on the plane, there is a periodic orbit passing through it. In other words, every simply connected region containing the origin contains plenty of periodic orbits. The Bendixson criterion cannot tell us that, because it assumes that G is not identically 0, and that assumption fails here for a linear system with periodic orbits; hence we are not able to use the Bendixson criterion. We will see some more examples, where the Bendixson criterion is applicable, in the next lectures. Thank you.