Welcome everyone. Today is the third lecture on nonlinear dynamical systems. This course is taught jointly by Madhu Belur, that is me, and Harish K. Pillai. Last week we had just begun with phase portraits of second order systems. So, consider the differential equation ẋ = Ax, in which A is a 2 by 2 matrix, and let us see the various situations that arise depending on whether the eigenvalues of A are real or complex, whether they are repeated or distinct, and whether A is singular or non-singular. The eigenvalues of A, let me recap, are the roots of det(sI − A), and these eigenvalues decide the key features of the phase portrait.

We will begin by assuming that A has no eigenvalue at 0, which means A is non-singular. In such a situation, the origin of the plane is the only equilibrium point. The different types of equilibrium points for this situation are: a centre, which we had just begun seeing; a node, which can be a stable or unstable node; a focus, again stable or unstable; and a saddle point. Some other situations, for example when there are repeated eigenvalues, or when there are one or more eigenvalues at 0, we will see separately.

So, what is a node? It is the situation when A has two distinct real eigenvalues of the same sign. When both are negative, it is called a stable node; when both real eigenvalues are positive, it is called an unstable node. To analyze this, we will quickly see how the vector field looks for a diagonal A with negative diagonal entries. Look at this figure: this is the x1 axis, and this is the x2 axis. (This is not the same example that is on the slide, but it explains what a stable node is.) If you are on the x1 axis, then the x2 component is 0, and when A acts on such a vector, the x2 component of the result is again 0; that is the significance of A being diagonal. Similarly, along the x2 axis the x1 component is 0, so the arrows there lie along the x2 axis. The diagonal entries being negative implies that the arrows point towards the origin. The relative lengths of the arrows certainly depend on the x1 and x2 components, but as far as this picture is concerned, as far as a qualitative study is concerned, this explains how the various arrows look.

So, the origin, the unique equilibrium point, is a stable node: all arrows are directed towards it, and there is no rotation involved because the off-diagonal elements are 0. We will later see that it is asymptotically stable in the sense of Lyapunov.

Let us quickly see what an unstable node is. Take the same A, except that the diagonal elements have the opposite sign, that is, both positive. Again, because of the diagonal nature of A, the arrows along the axes are parallel to the axes themselves, with careful attention to whether they point in the positive or negative direction of each axis; here they point away from the origin because of the positive sign of the diagonal elements. For points which are not on either axis, the arrow is obtained by superposition, because this is a linear vector field. For example, at this point, the x1 component of the arrow can be read off from the arrow on the x1 axis, and the x2 component from the arrow on the x2 axis; the net arrow is their sum.
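As a quick companion to this picture, here is a minimal Python sketch (with an assumed example A = diag(−1, −2) for the stable node; the lecture's slide may use different numbers) that draws the vector field with a quiver plot. Flipping the signs of the diagonal entries gives the unstable node.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed example: a diagonal A with negative entries gives a stable node.
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])

# Grid of points (x1, x2) in the plane.
x1, x2 = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))

# The arrow at each point x is A @ x (linear vector field).
u = A[0, 0] * x1 + A[0, 1] * x2
v = A[1, 0] * x1 + A[1, 1] * x2

plt.quiver(x1, x2, u, v)
plt.xlabel("x1"); plt.ylabel("x2")
plt.title("Stable node: all arrows point towards the origin")
plt.show()
```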
So, this is what we obtain by superposition: because A is a linear map, the matrix decides the vector field at every point. Before we go to stable and unstable focus, we will quickly see what being diagonal has got to do with what we are studying. Suppose we are given a general A with entries A11, A12, A21, A22 (I am just guessing the various elements, say 4, 5, 6, 7), and suppose the entries are such that this matrix is diagonalizable. It may not be diagonal itself; in other words, A12 and A21 might not be 0. But if there exists a non-singular matrix T such that T⁻¹AT is a diagonal matrix, then by choosing the columns of T as a basis, we still have a decoupled vector field. The decoupling is just no longer along the x1 and x2 axes.

Suppose one column of T is the eigenvector v1 and the other column is the eigenvector v2; in general the two columns need not be perpendicular to each other. Suppose v1 corresponds to the eigenvalue λ1, and v2 to the eigenvalue λ2. The x1, x2 axes are no longer the eigenvectors: more generally, the eigenvectors are vectors v1 and v2 which may or may not be perpendicular to each other, corresponding to the eigenvalues λ1 and λ2. If λ1 is negative, then along v1 we can draw the arrows just like we drew for a stable node, and if λ2 is also negative, those arrows can also be drawn towards the origin; at all other points the arrows can be filled in, again, as I said, by superposition. So, more generally, if A is diagonalizable, we have two directions, the eigenvectors, along which we can draw the arrows either towards or away from the origin depending on whether the corresponding eigenvalue is negative or positive. In this way we can again decide whether the node is a stable node or an unstable node.

Our assumption till now has been that both eigenvalues have the same sign. The case when they have different signs is the next thing we will see. Actually, before we see the situation when the eigenvalues have opposite signs, we will start with what a centre is. This is the situation when A has purely imaginary eigenvalues. Take for example A = [0, 3; −3, 0]: because of this particular form, in which the diagonal elements are 0, the eigenvalues are ±3j. This corresponds, as I said, to rotation about the origin, either clockwise or anti-clockwise, which we will decide very quickly. Take a point on the x1 axis, say the point (4, 0); a point on the x1 axis has x2 component equal to 0. When the matrix A acts on this, we get (0, −12): a vector parallel to the x2 axis, pointing in the negative direction. So this is the arrow at the point (4, 0). Similarly, when we draw the arrows at different points, we see that we have a rotation in the clockwise direction. If we start at any point other than the origin, we keep rotating, and it turns out that the vector Av is perpendicular to the vector v itself. So if we are at any point v, then the arrow Av at that point is perpendicular to v, and this is nothing but pure rotation, in which the velocity is perpendicular to the radius vector.
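As a small numerical check, here is a Python sketch (using the centre example A = [[0, 3], [−3, 0]], reconstructed from the computation A·(4, 0) = (0, −12) above) that verifies the eigenvalues are ±3j and that the arrow Av is always perpendicular to v.

```python
import numpy as np

# Centre example, consistent with A @ (4, 0) = (0, -12).
A = np.array([[0.0, 3.0],
              [-3.0, 0.0]])

# Eigenvalues should be purely imaginary: +3j and -3j.
print(np.linalg.eigvals(A))      # [0.+3.j  0.-3.j]

# At the point (4, 0) the arrow is (0, -12): clockwise rotation.
v = np.array([4.0, 0.0])
print(A @ v)                     # [  0. -12.]

# For any point v, the arrow A v is perpendicular to v: v . (A v) = 0.
rng = np.random.default_rng(0)
for _ in range(3):
    v = rng.standard_normal(2)
    print(np.dot(v, A @ v))      # exactly 0 for this skew-symmetric A
```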
Whether the rotation is clockwise or anti-clockwise just depends on whether the plus sign is in the top-right or the bottom-left entry. The other example that is on the computer corresponds to an anti-clockwise rotation, because it has a negative sign in the top-right and a plus sign in the bottom-left. Both A's correspond to periodic orbits, with the number 3 indicating the frequency; but since we are interested only in a qualitative study, the precise value of the frequency is not significant.

Another important point to note here is that we have a collection of periodic orbits. For each initial condition, the radius, that is, the distance from the origin, decides which periodic orbit we are on. The x1, x2 plane itself is made up of periodic orbits which are all very close to each other and form a continuum. Through each initial condition (x1, x2) there is a unique periodic orbit, and if we go a little further away from, or a little closer to, the origin, we are on another periodic orbit. So, for the situation that A has imaginary-axis eigenvalues, we have a continuum of periodic orbits, and for a linear system it is not possible to have isolated periodic orbits. As we saw in one of our introductory lectures, we can have isolated periodic orbits for a nonlinear system; but for a linear system, when we have periodic orbits, we have a continuum of them. In other words, if we start from a slightly different initial condition, it is very unlikely to be on the same periodic orbit: unless we perturb the initial condition to another point on the same orbit, the perturbed trajectory lies on a different periodic orbit, which means a different amplitude even though the frequency is the same. This is the inevitable situation with linear systems when we have periodic orbits.

The next type of equilibrium point we will see is when A has complex eigenvalues which are not purely imaginary. Take for example A = [−1, −2; 2, −1], in which the diagonal elements are both −1 and the off-diagonal elements have opposite signs, one +2 and one −2. We will call this a stable focus. As I said, the off-diagonal elements cause rotation about the origin. In each of these cases A is non-singular, hence the origin is the unique equilibrium point. So let us take a particular point and decide what the arrow is there. Take for example the point (4, 0). When A acts on this, we get (−4, 8): a component of −4 towards the origin and a component of 8 along the positive x2 direction. When we take different points, we see that the arrow is no longer perpendicular to the radius vector, but is directed inwards. So at every point, it turns out, we have some rotation, and eventually the trajectories come to the origin. If we draw the arrows at different points, all trajectories approach the origin, even though they do not reach it in finite time. These trajectories do not intersect; they all approach the origin, and they reach it only asymptotically.
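To visualize this spiraling, here is a minimal Python sketch (using the stable-focus example A = [[−1, −2], [2, −1]] from above) that computes the exact trajectory x(t) = e^(At) x(0) and plots it approaching the origin.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import expm

# Stable-focus example: eigenvalues -1 +/- 2j.
A = np.array([[-1.0, -2.0],
              [2.0, -1.0]])
print(np.linalg.eigvals(A))    # [-1.+2.j -1.-2.j]

x0 = np.array([4.0, 0.0])
ts = np.linspace(0.0, 5.0, 500)

# Exact solution of xdot = A x is x(t) = expm(A t) @ x0.
traj = np.array([expm(A * t) @ x0 for t in ts])

plt.plot(traj[:, 0], traj[:, 1])
plt.plot(0, 0, "k.")           # the unique equilibrium point
plt.xlabel("x1"); plt.ylabel("x2")
plt.title("Stable focus: the trajectory spirals into the origin")
plt.show()
```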
So, this is a stable focus, and an unstable focus is also very easy to see: the diagonal elements now have positive sign. All the arrows are directed away from the origin. Also, the rotation has been reversed, because the off-diagonal signs of this and the previous example have been interchanged. So here is an example where the arrows all point outwards: at any point, the trajectory is going away from the origin. This is what we will call an unstable focus.

Finally, we will see what a saddle point is: the situation when A has real eigenvalues, one positive and one negative. Again, for simplicity, we will start with the diagonal case, A = [−1, 0; 0, 2]. Because A is diagonal, the phase portrait again has a decoupled nature: along the x1 direction trajectories approach the origin, while along the x2 direction they go away from the origin, and any other point is a superposition of these two features. So unless the x2 component is 0, which means we are on the x1 axis, trajectories do not come towards the origin: at any point where the x2 component is non-zero, the x1 component is still decreasing, but the x2 component is going to blow up. Why? Because the solution to this differential equation, since A is diagonal, can easily be written explicitly as x1(t) = e^(−t) x1(0) and x2(t) = e^(2t) x2(0). So unless the initial condition has x2 component equal to 0, x2 as a function of time grows exponentially. On the other hand, if the x1 component is non-zero, it decreases and comes close to 0 asymptotically. This is what we will call a saddle point.

The question arises: is the saddle point a stable or an unstable equilibrium point? We see that while the origin is an equilibrium point, for very small perturbations about the origin, trajectories either come to 0, if they are along the x1 axis, or they do not come to 0, if they are not along the x1 axis. In any case, there are certain arbitrarily small perturbations such that the trajectories beginning from the perturbed initial condition do not approach the equilibrium point. In other words, there exist (this is the symbol ∃ for "there exists") initial conditions close to the origin, the origin being significant because it is the equilibrium point, such that the trajectories do not come back to the origin; in fact, the trajectories grow and become unbounded. This is precisely the property that decides that the origin is an unstable equilibrium point. So the saddle point is an unstable equilibrium point. It is neither an unstable focus nor an unstable node; that equilibrium point is just an unstable equilibrium point. And what is "saddle" about it? The graph of a certain function (we will come back to this later, when we study Lyapunov functions) looks, in a 3D plot, like the saddle of a horse. That is the reason this equilibrium point is called a saddle point.
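Here is a small Python check (using the saddle example A = diag(−1, 2) above) that the closed-form solution x1(t) = e^(−t) x1(0), x2(t) = e^(2t) x2(0) matches a numerical integration, and that x2 blows up whenever x2(0) ≠ 0.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Saddle example: eigenvalues -1 and +2.
A = np.array([[-1.0, 0.0],
              [0.0, 2.0]])

x0 = np.array([1.0, 0.5])   # x2(0) != 0, so x2(t) will blow up
T = 3.0

# Numerical solution of xdot = A x.
sol = solve_ivp(lambda t, x: A @ x, (0.0, T), x0, rtol=1e-9, atol=1e-12)
x_num = sol.y[:, -1]

# Closed-form solution, valid because A is diagonal.
x_exact = np.array([np.exp(-T) * x0[0], np.exp(2 * T) * x0[1]])

print(x_num)    # approximately [0.0498, 201.71]
print(x_exact)  # [e^-3 * 1.0, e^6 * 0.5] = [0.0498..., 201.71...]
```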
Before we go to the other situations, where there are repeated eigenvalues or one or more eigenvalues at the origin, let us quickly recapitulate what was done. We have seen the situation when there are distinct real eigenvalues: when both are positive or both are negative, and when they have opposite signs. Then we saw the situation when the eigenvalues are complex. If they are on the imaginary axis, we call it a centre; this is the case that corresponds to periodic orbits, and we saw that we get a continuum of periodic orbits. When the real part is negative, we call it a stable focus, and when the real part is positive, an unstable focus. So whether it is stable or unstable depends on the real part of the complex eigenvalues.

Now, the last situations remaining to be seen are when there are repeated eigenvalues, and when one or more eigenvalues are at the origin. Coming back to the matrix A: when there are repeated eigenvalues, the matrix A may or may not be diagonalizable. Suppose we have a repeated eigenvalue λ1. If A is diagonalizable, we put a 0 in the top-right entry; if A is not diagonalizable, we put a 1 there. This is called the Jordan canonical form; we are restricting ourselves to the 2 by 2 case.

Consider first the case when the eigenvalues of A are repeated but A is diagonalizable. What is the significance of a diagonalizable matrix? We saw that the eigenvectors are the so-called invariant directions. In this particular example, the x1 and x2 directions are themselves eigenvectors. If we are along an eigenvector, then depending on λ1 being positive or negative, the arrows are directed either away from or towards the origin. This is the case when λ1 > 0; let us restrict our study to that situation. The x2 direction is also an eigendirection, and because λ1 is positive, the arrows there are again directed away from the origin.

So we see that the eigenvectors are the invariant directions. What is invariant about them? If the point starts along an eigenvector, then because the arrow is also directed along the eigenvector, we continue in that direction; there is no tendency to move out of the eigenvector. Let me repeat: an eigenvector v is a non-zero vector such that Av is just a scaling of the vector v. Eigenvectors are not unique in magnitude: we can scale an eigenvector by any non-zero number and again get an eigenvector. So consider the first eigenvector v1: if we are at a point on the v1 direction, the arrow there is parallel to v1 because of the equation Av1 = λ1 v1, and hence the trajectory will remain along that particular direction. If we start on the x1 axis, there is no reason to come out of the x1 axis; similarly, if we are on the x2 axis, it also being an invariant direction, being an eigenvector, the trajectory continues along the x2 axis.
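A quick way to check diagonalizability numerically is to count the independent eigenvectors of the repeated eigenvalue. Below is a Python sketch (with assumed examples 2I versus a Jordan block, matching the two cases just described) comparing them.

```python
import numpy as np

def num_independent_eigenvectors(A, lam, tol=1e-9):
    # Dimension of the eigenspace of lam = n - rank(A - lam I).
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n), tol)

# Repeated eigenvalue 2, diagonalizable: A = 2 I.
A_diag = np.array([[2.0, 0.0],
                   [0.0, 2.0]])
print(num_independent_eigenvectors(A_diag, 2.0))    # 2: every direction is invariant

# Repeated eigenvalue 2, Jordan block: only one eigenvector.
A_jordan = np.array([[2.0, 1.0],
                     [0.0, 2.0]])
print(num_independent_eigenvectors(A_jordan, 2.0))  # 1: only the x1 axis is invariant
```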
So we see that the plane contains certain invariant sets. What is an invariant set? A set S is called invariant, in this case invariant under the dynamics of the differential equation ẋ = f(x), if whenever we start inside the set S, we remain inside S for all future time: if we start inside S, then x(t) is also inside S for all t ≥ 0. That is the significance of an invariant set: a set S, which could be a subset of the plane R2 or the plane R2 itself, is said to be invariant if, whenever the initial condition is inside S, the entire trajectory is inside S for all future time. Hence this is also called a positively invariant set; what is positive about it is that we require x(t) ∈ S only for positive values of time t.

So what are the invariant sets inside R2? Take the special case of our differential equation where the matrix A acts on the vector x, with A of size 2 by 2, so that x has 2 components. Of course, the plane R2 itself is an invariant set: if a trajectory begins inside R2, there is no reason for it to leave R2. The origin is an equilibrium point, so the set S consisting of just the origin is also an invariant set: if a trajectory begins at an equilibrium point, it remains at the equilibrium point for all future time. Indeed, every equilibrium point gives an invariant set.

For this particular case, when A is a diagonal matrix, there are some more invariant sets. Take the set S1, defined as the set of all points (x1, x2) such that x2 = 0, that is, the x1 axis. This is also an invariant set. Why? Because A is a diagonal matrix, the x1 axis is spanned by an eigenvector, so the set S1 is invariant. Of course, the x1 axis itself contains the origin, which is also an invariant set: the set of all points (x1, x2) with x2 = 0 and x1 = 0 is nothing but the equilibrium point. But we are interested in non-trivial invariant sets. For example, consider S2, the set of all points (x1, x2) with x2 = 0 and x1 > 0: the positive x1 semi-axis, excluding the origin. This is also an invariant set: once inside the set S2, a trajectory remains inside S2. Consider S3, which is defined the same way except that now x1 < 0. This is another invariant set, corresponding to the negative x1 semi-axis: if the point starts there, it always remains on the negative x1 semi-axis.

These are different invariant sets. We are usually not interested in the equilibrium point as an invariant set, nor in the plane R2 itself, because these are the trivial invariant sets. We are interested in sets which are larger than the equilibrium point and smaller than the set R2, and which are invariant under the dynamics of f. The eigenvectors are examples of such invariant sets: the entire eigendirection, with or without the origin, is an invariant set, and the two sides of the eigendirection, one on the positive side of the origin and one on the negative side, also form invariant sets. Coming back to the case when A is diagonalizable: for that situation, as we saw, in some basis A already looks diagonal.
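The following Python sketch (an assumed illustration using a diagonal A) checks positive invariance of the semi-axis S2 = {(x1, x2) : x2 = 0, x1 > 0} by integrating from an initial condition inside S2 and confirming the trajectory never leaves it.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Diagonal A, so the x1 axis is an eigendirection (hence invariant).
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])

# Initial condition in S2: x2 = 0, x1 > 0.
x0 = np.array([1.5, 0.0])
sol = solve_ivp(lambda t, x: A @ x, (0.0, 10.0), x0,
                dense_output=True, rtol=1e-9, atol=1e-12)

ts = np.linspace(0.0, 10.0, 200)
xs = sol.sol(ts)

# The trajectory stays in S2: x2 stays (numerically) 0 and x1 stays > 0.
print(np.max(np.abs(xs[1])))   # ~0: never leaves the x1 axis
print(np.min(xs[0]) > 0)       # True: never reaches or crosses the origin
```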
We have the x1 axis, which is an invariant direction, and the x2 axis, which is also spanned by an eigenvector and hence is an invariant direction too. And it turns out that these two directions being invariant is not particularly related to the eigenvalues being distinct. For the case when A has repeated eigenvalues but is diagonalizable, we still get an unstable node (for a positive eigenvalue). Of course, in this case every direction is an invariant direction: every line through the origin is an invariant set, because the two eigenvalues are repeated.

But consider the situation when A is not diagonalizable, say A = [2, 1; 0, 2], that is, with a 1 in the top-right entry. This A has only one eigenvector; the other independent direction is what we call a generalized eigenvector. If we take the vector v = (1, 0), then Av = (2, 0) = 2v, so the x1 axis is an invariant direction, and all arrows along it are directed away from the origin. But there is no other invariant direction: there is only one independent eigenvector. For example, take v = (1, 1); when A acts on v, we get (3, 2), so at the point (1, 1) both the x1 and x2 components of the arrow are non-zero and the arrow is not parallel to v. So because there is only one independent eigendirection, the x2 axis is no longer an eigenvector, and the other arrows cut across. How exactly they cut across depends on the particular form of the Jordan canonical form, but there is only one invariant axis, the x1 direction. This is the significance of a non-diagonalizable A: there is only one eigenvector direction, and everything else appears to emanate out of the x1 direction when close to it. If we are on the x1 axis itself, then the x1 axis, being an eigendirection, is an invariant set under the dynamics of f, and the trajectory does not leave the x1 axis.

This brings us to the final case: when A is singular, that is, when there are one or more eigenvalues at the origin. Let us first see the case when there is exactly one eigenvalue at the origin. When A is singular, there exists a non-zero vector x0 such that Ax0 = 0; such an x0 is said to be in the null space of the matrix A. The origin is always in the null space, but when A is singular there are also non-zero vectors in the null space, and such a non-zero vector x0 is non-unique. Why? Because given x0, we can multiply it by any real number b, and bx0 is also in the null space of A. So all these points, x0, bx0, any scaling of the vector x0, are equilibrium points. Why? Because the derivative of x evaluated at such a point is A acting on it, which equals 0. So we see that in this case all the equilibrium points are connected: they form a line. The null space is a linear subspace; in general the equilibria form a subspace, and in our case, because A has exactly one eigenvalue at the origin, they form a line. As we saw at the beginning of this series of lectures, multiple isolated equilibrium points are not possible for a linear system: the equilibrium points are exactly the null space of the matrix A, so if there are some non-zero vectors in the null space, the equilibria are all connected and form a line.
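Here is a short Python sketch (using the singular example A = [[2, 0], [0, 0]], the matrix described next in the lecture) that computes the null space and confirms that every vector in it is an equilibrium point.

```python
import numpy as np
from scipy.linalg import null_space

# Singular example: eigenvalues 2 and 0.
A = np.array([[2.0, 0.0],
              [0.0, 0.0]])

N = null_space(A)          # orthonormal basis of the null space (kernel)
print(N)                   # one column, parallel to (0, 1): the x2 axis

# Every scaling b * x0 of a null-space vector is an equilibrium point.
x0 = N[:, 0]
for b in (1.0, -5.0, 0.3):
    print(A @ (b * x0))    # [0. 0.] each time: xdot = 0, so it never moves
```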
So, multiple isolated equilibrium points are possible only when we have a nonlinear dynamical system. Now, we have just begun seeing the repeated eigenvalue case: when A has repeated eigenvalues, A may or may not be diagonalizable. We will quickly review this part. When the eigenvalues are repeated, they have to be real, for the case that A is a real 2 by 2 matrix. If the matrix is diagonalizable, then we have two independent eigenvectors, and each eigenspace is an invariant subspace, invariant meaning invariant under the dynamics of the system. But it is also possible that we have only one independent eigenvector, in which case the other directions turn towards or away from this eigendirection, depending on whether the eigenvalue is positive or negative. One can have a look at how the arrows appear using the champ command in Scilab or the quiver command in MATLAB.

Now consider the case when A is singular. Take for example A = [2, 0; 0, 0]. Here x0 = (0, 1) satisfies Ax0 = 0. This is of course not the only vector satisfying Ax0 = 0, because any constant multiple, for example −5 times x0, is also in the null space. The null space is also called the kernel of the matrix A. So what is the significance of this? We see that the x1 axis is an eigendirection corresponding to eigenvalue 2, and hence we draw those arrows away from the origin. But the points of the x2 axis are all equilibrium points, so each of the arrows there has length 0. If the x1 component is non-zero, then the trajectories have their x1 component increasing with time, growing like e^(2t). But the x2 component of Ax is always 0, hence, first of all, all these arrows are parallel to the x1 direction. And secondly, along the x2 axis, where x1 = 0, all the points are equilibrium points, and they form a connected set. The origin is not the only equilibrium point for this example; each of these points is an equilibrium point. This is what we see for the case when A has one eigenvalue at 0.

The next example is when A has both eigenvalues at 0; this is again an example of repeated eigenvalues. So let us first take the case when A has two eigenvalues at 0 and A is diagonalizable, that is, when we have a 0 in the top-right entry of the Jordan form. Then A is the zero matrix, and the entire R2 plane is made up of equilibrium points: any point (x1, x2) is an equilibrium point. Why? Because the equation says ẋ = 0·x = 0 at every point (x1, x2). This is a less interesting case, but the situation is still possible.

The other situation is when A has repeated eigenvalues at 0 but A is not diagonalizable, for example A = [0, 1; 0, 0]. In this case we see that if we take the vector x = (1, 0), then Ax = 0. So the vector (1, 0) and all its scalar multiples are in the kernel of the matrix A, they are in the null space, and hence the x1 axis is a set of equilibrium points. What is important about the x1 axis is that all its points have x2 component equal to 0. But if we take a vector v = (2, 3), say, whose second component is not 0, then when A acts on v we get a vector parallel to the x1 axis. So the arrows look like this: they point in the increasing direction of x1 when the x2 component is positive, and in the decreasing direction of x1 when the x2 component is negative. Why? Because when A acts on a vector v, the first component of Av is the second component of v, and the second component of Av is 0.
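As with the earlier portraits, one can plot this field; here is a Python analogue of the champ/quiver commands mentioned above, for the nilpotent example A = [[0, 1], [0, 0]].

```python
import numpy as np
import matplotlib.pyplot as plt

# Nilpotent example: both eigenvalues at 0, not diagonalizable.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

x1, x2 = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
u = A[0, 0] * x1 + A[0, 1] * x2   # = x2: arrows are horizontal
v = A[1, 0] * x1 + A[1, 1] * x2   # = 0: no vertical motion

plt.quiver(x1, x2, u, v)
plt.axhline(0, color="k", linewidth=1)   # the x1 axis: a line of equilibria
plt.xlabel("x1"); plt.ylabel("x2")
plt.title("A = [[0,1],[0,0]]: arrows parallel to the x1 axis")
plt.show()
```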
So this is an example where the x1 axis alone is the set of equilibrium points, and every other vector is turned towards the positive or negative direction of x1, depending on whether its x2 component is positive or negative. This completes our study of equilibrium points for second order systems. We have seen the cases when A has repeated or distinct eigenvalues, and when A has real or complex eigenvalues.

The next important question we will start studying now is: when does there exist a solution to the differential equation? Suppose we are given a differential equation ẋ(t) = f(x), in which x now has n components at each time instant t, so f is a map from Rn to Rn, and suppose we are given the initial condition x(0) = x0, a vector in Rn. Picture the space Rn with the point x0 in it; the direction at x0 is given by f evaluated at the point x0. We are interested in answering the question: when does there exist a trajectory that starts from the point x0 at t = 0, and when is there a unique trajectory for some time duration [0, δ], in which δ is some positive number, possibly very small, such that for this duration of time we have a unique solution to the differential equation ẋ = f(x)? This is the question we will answer in the next few lectures, starting now.

So, given dx/dt = f(x) and the initial condition x(0) = x0 ∈ Rn: when does a solution exist? Then we will ask, if a solution exists, when is it unique? Under what conditions on f at the point x0 do we have a solution, and when is it unique? Please note that we are interested in a solution possibly only for a very small interval of time. It might be difficult to guarantee existence and uniqueness of solutions for a large time duration, but we are interested only in an interval [0, δ] with δ > 0, possibly quite small.

So we ask: is continuity of f the important property here, or is it differentiability of the function f at the point x0 that is required? It is important to note here that while continuity of the function f is sufficient for existence of solutions, uniqueness of the solution is not guaranteed by continuity alone. On the other hand, while differentiability of the function f guarantees both existence and uniqueness of the solution to the differential equation ẋ = f(x), differentiability of f is not essential for guaranteeing existence and uniqueness of the solution. Keeping note of this, we can ask what the important property required for existence and uniqueness of solutions is: it appears to be a condition slightly stronger than continuity, but perhaps not as strong as differentiability of the function f at the specified initial condition x0. It turns out that this is an important property called the Lipschitz condition on the function f.
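Before the definition, a standard textbook illustration (not from the lecture) of continuity without uniqueness is ẋ = x^(2/3) with x(0) = 0: both x(t) ≡ 0 and x(t) = (t/3)³ solve it, since f is continuous at 0 but not Lipschitz there. A minimal Python check:

```python
import numpy as np

# f(x) = x^(2/3) is continuous at 0 but not Lipschitz there.
def f(x):
    return np.cbrt(x) ** 2

# Candidate solution 1: x(t) = 0 for all t. Then xdot = 0 = f(0). OK.
# Candidate solution 2: x(t) = (t/3)^3. Check xdot = f(x) numerically.
ts = np.linspace(0.1, 5.0, 50)
x = (ts / 3.0) ** 3
xdot = (ts / 3.0) ** 2              # exact derivative of (t/3)^3
print(np.max(np.abs(xdot - f(x))))  # ~0: this is a genuine solution

# Both solutions pass through x(0) = 0, so the solution is not unique.
```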
So what is the definition? The definition is valid for a function f from Rn to Rm, even though in our case f is always from Rn to Rn; we will state the definition of Lipschitz for the case when f is a map from Rn to Rm. The function f is said to be locally Lipschitz at a point x0 if there exist a neighborhood B(x0, ε) of x0, with ε > 0 (we will see a precise definition of a neighborhood very soon), and a constant L > 0 such that the following inequality is satisfied:

||f(x1) − f(x2)|| ≤ L ||x1 − x2||

and this inequality is required to hold for all x1 and x2 in that neighborhood of the point x0. This neighborhood is called a ball B centered at x0 and of radius ε. Here is the precise definition of the ball: B(x0, ε) is defined to be the set of all points x such that the distance ||x − x0|| is strictly less than ε. Points exactly ε away from x0 are not included in the ball B(x0, ε), and hence this is called an open ball around x0 of radius ε: centered at the point x0, of radius ε, and open because the distance is strictly less than ε. This ball is contained in Rn, because we are taking all points in Rn that satisfy this condition. So for some ball around the point x0 with a radius strictly greater than 0, we should be able to guarantee that the inequality is satisfied for all x1, x2 inside this ball.

The positive number L is said to be a Lipschitz constant. It is not unique: if you have found a constant L such that the inequality is satisfied for all x1, x2 in the ball B(x0, ε), then you can take any number larger than L, and for that larger L the inequality is also satisfied. But in general the Lipschitz constant L will depend on the point x0 itself and also on the radius ε of the open ball around x0. Using this definition of a Lipschitz function, it is possible to specify under what conditions the solution to a differential equation exists and when it is unique.

So we will see some examples of Lipschitz functions and of some non-Lipschitz functions. The line f(x) = −4x is locally Lipschitz at the point x = 3: if it is Lipschitz, then we should be able to give a number L such that the inequality is satisfied, and here we can take L = 4. Notice that we can take L to be the absolute value of the slope of f, or anything larger.
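The inequality can be explored numerically. The sketch below (a hypothetical check, using the example f(x) = −4x at x0 = 3 and the non-Lipschitz f(x) = √|x| at x0 = 0) samples chord slopes |f(x1) − f(x2)| / |x1 − x2| inside a small ball.

```python
import numpy as np

def max_chord_slope(f, x0, eps, n=400):
    # Sample pairs x1, x2 in the open ball (x0 - eps, x0 + eps) and
    # return the largest ratio |f(x1) - f(x2)| / |x1 - x2|.
    xs = np.linspace(x0 - eps, x0 + eps, n)[1:-1]   # open interval
    x1, x2 = np.meshgrid(xs, xs)
    mask = x1 != x2
    return np.max(np.abs(f(x1[mask]) - f(x2[mask]))
                  / np.abs(x1[mask] - x2[mask]))

# f(x) = -4x at x0 = 3: every chord slope is exactly 4, so L = 4 works.
print(max_chord_slope(lambda x: -4 * x, x0=3.0, eps=0.5))   # 4.0

# f(x) = sqrt(|x|) at x0 = 0: chord slopes near 0 grow without bound
# as the grid is refined, so no finite L exists: not locally Lipschitz.
print(max_chord_slope(lambda x: np.sqrt(np.abs(x)), x0=0.0, eps=0.5))
```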
To understand the Lipschitz condition, let us take the graph of a function f for the situation that f is a map from R to R. Suppose this is our point x0 and this is the graph of the function. What does the definition say? f is Lipschitz at the point x0 if there exists a ball of radius ε, which here means the open interval from x0 − ε to x0 + ε, with both endpoints not included because it is an open ball. For this particular ball, we require the inequality to be satisfied. So take any two points x1 and x2 in this ball and look at the corresponding values of f. It helps to draw a larger figure to see what the Lipschitz condition specifies about the function f: this is the point x0, and this is the ball, in this case an open interval of width 2ε in which the center is x0. Suppose we take x1 here and x2 here; they do not have to be on opposite sides of the point x0. What is being specified is the following: this is the value f(x1), this is the value f(x2), and the vertical gap |f(x1) − f(x2)|, divided by the horizontal gap |x1 − x2|, should not exceed L. In this case it is just an absolute value; more generally it is a norm. There should exist a number L such that this inequality is satisfied for all x1, x2 in the ball around x0 of radius ε, an open ball, which in this case is just an open interval.

This ratio is nothing but the absolute value of the slope of the line that connects the point (x1, f(x1)) to the point (x2, f(x2)). When we connect these two points by a line, the slope of that line, without the absolute values, is precisely this ratio; once we take the absolute values, it is the absolute value of the slope. So the Lipschitz condition on f at the point x0 says that there should exist a ball around the point x0 of radius ε and a single number L such that every such line has slope of absolute value at most L: the absolute values of the slopes are bounded from above by L.

This Lipschitz condition is a key property. We will see examples of Lipschitz and non-Lipschitz functions, and it will play a key role in the existence and uniqueness of solutions to a differential equation. This is what we will see in detail from the next lecture. Thank you.