Welcome everyone to the fifth lecture on non-linear dynamical systems. In the previous lecture, we saw the existence and uniqueness theorem for the solution to an ordinary differential equation with a given initial condition. Let us quickly see that theorem again. Consider the differential equation x dot is equal to f of x, where f is a map from Rn to Rn and x naught is a vector in Rn specified as the initial condition. Suppose f is locally Lipschitz at the point x naught. The statement of the theorem is: then there is a delta greater than 0 such that there is a unique solution x of t to the differential equation with x of 0 equal to x naught, for the interval 0 to delta. So, there are two important statements in this theorem. First, there is a solution x of t to the differential equation; this is the existence part. Second, there is a unique solution: the solution which is guaranteed to exist is also guaranteed to be unique for the interval 0 to delta. Beyond time delta, there is no guarantee of either existence or uniqueness; that requires separate conditions on f. So, we had just begun seeing the proof. What is the outline of the proof? We define an operator P that takes one estimate of the solution trajectory and gives a better estimate of the solution. This is what we call Picard's iteration; applying P n times to an initial guess gives the estimate of the solution trajectory at the nth iteration. We will construct the operator P such that the desired solution satisfies P of x equal to x. This particular trajectory x which satisfies P of x equal to x, we call the fixed point. Why do we call it the fixed point? Because P takes x and gives the same x back. The operator P will be constructed so that the fixed point is precisely the solution to the differential equation. The Lipschitz condition on f will help to prove convergence of this operator P that takes one estimate and gives a better estimate upon each iteration.
So, the Lipschitz condition will help to prove convergence to a unique fixed point, in a suitably complete space. For that purpose, we saw the statement of the Banach fixed point theorem, which is also called the contraction mapping theorem. The fixed point in our situation is a trajectory x of t over the interval 0 to delta, for some delta greater than 0. So, what are the Picard iterates? The operator P takes a continuous function x of t and gives another continuous function y equal to P of x. We define this operator P as follows: the value of P of x at time t is equal to x naught plus the integral from 0 to t of f of x of tau d tau, where t varies over the interval 0 to delta. We already saw that x of t is a solution to the differential equation x dot is equal to f of x with the initial condition x of 0 equal to x naught if and only if x is a solution to the integral equation x of t equal to x naught plus the integral from 0 to t of f of x of tau d tau. Notice that x occurs on both sides of this equation; this is an integral equation. Similarly, in the differential equation too, x occurs on both sides. So, the solutions to the differential equation and the solutions to the integral equation are the same. Moreover, this integral-equation way of seeing things also allows us to say that x is a fixed point of the operator P. In this context, we began seeing the Banach fixed point theorem. For that purpose, we saw the definition of a normed vector space and the notion of completeness; a complete normed vector space we call a Banach space. A subset S of a set X is said to be a closed subset if what we call the boundary of the set S is within S — sorry, there was a mistake on the slide: it is required to have S in place of X there.
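As a small numerical sketch (not part of the lecture itself), the Picard operator can be approximated on a sampled time grid. The example ODE x dot equal to x with x of 0 equal to 1, the grid, the number of iterations, and the trapezoidal quadrature are all choices made here for illustration; the fixed point of P is then the familiar exponential.

```python
import numpy as np

def picard_step(f, x0, xs, ts):
    """One application of the Picard operator
    (P x)(t) = x0 + integral_0^t f(x(tau)) d tau,
    approximated on the grid ts by a cumulative trapezoidal rule."""
    vals = f(xs)
    integral = np.concatenate(
        ([0.0], np.cumsum((vals[1:] + vals[:-1]) / 2 * np.diff(ts))))
    return x0 + integral

# Example: x' = x, x(0) = 1, whose exact solution is exp(t).
ts = np.linspace(0.0, 0.5, 501)    # a short interval [0, delta]
x = np.full_like(ts, 1.0)          # initial guess: the constant function x0
for _ in range(20):                # iterate P; each pass improves the estimate
    x = picard_step(lambda v: v, 1.0, x, ts)

err = np.max(np.abs(x - np.exp(ts)))   # sup-norm distance to the true solution
```

After a handful of iterations the remaining error is dominated by the quadrature, not by the iteration, which is exactly the geometric convergence the contraction argument promises.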
But a more precise and correct definition is: a subset S is said to be closed in X if the complement of S in X is open in X. This brings us to the definition of open. When do we call a set open? A subset Q of X is called an open set if for every point x naught in Q there exists some neighborhood of x naught which is also contained in Q. Now, let X be a normed vector space. A map P from X to X is said to be contractive if there exists some positive real number rho, strictly less than 1, such that the norm of P of x 1 minus P of x 2 is at most rho times the norm of x 1 minus x 2, for all x 1, x 2 in X. We require the notion of contractive in the statement of the Banach fixed point theorem, and for that purpose we are reviewing this definition. Even though an operator P might be defined from X to X, it might turn out to be contractive over only a subset S, and that is the situation in which the Banach fixed point theorem is able to conclude about a fixed point. So, what is the contraction mapping theorem, the Banach fixed point theorem? Let X be a Banach space and let T be a mapping from X to X. Suppose S is a closed subset of X, and suppose T also takes S into S; this T, which takes X into X, need not take the subset S into S, but suppose it does. Further, suppose that on S, T is contractive, which means there exists a number rho strictly less than 1 such that the inequality we saw on the previous slide holds for all x 1, x 2 in S. If these three properties are satisfied, then the contraction mapping theorem says there exists a unique fixed point x star in S. This statement has two important claims. First, it claims that there exists a fixed point. Second, it claims that this fixed point is unique in S. The next important statement of the theorem is that this x star can be found by successively applying T to any initial x 1 in S.
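The successive-approximation part of the theorem can be sketched numerically. The map T of x equal to cos of x, the closed set S equal to the interval 0 to 1, and the contraction factor rho equal to sin of 1, strictly less than 1, are assumptions chosen for this illustration; they are not from the lecture.

```python
import math

def iterate_to_fixed_point(T, x, tol=1e-12, max_iter=1000):
    """Successively apply a map T; for a contraction, the Banach fixed
    point theorem guarantees convergence to the unique fixed point."""
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# T(x) = cos(x) maps the closed set S = [0, 1] into itself and, by the
# mean value theorem, is contractive there with rho = sin(1) < 1.
x_star = iterate_to_fixed_point(math.cos, 0.5)
```

Any starting point in S gives the same limit, which is the uniqueness half of the theorem in action.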
We take any initial point x 1 in S and make T act on x 1, then make T act on T of x 1, and when we do this successively, the iterates converge to x star, the unique fixed point in S. So, how do we use the contraction mapping theorem for the proof of existence and uniqueness? We have already defined the operator P. We will now define a suitable space X and subset S, show that this operator P is a contraction on S, and then use the contraction mapping theorem. So, what is this X? We define X to be the set of all continuous functions from the interval 0 to delta to Rn. The notation for X is C 0 from this domain to the codomain Rn. The 0 means the functions are required to be just continuous; they could be differentiable or twice differentiable, which is an extra property, but we are asking for all functions that are at least continuous, and hence the 0 appears here. Over what interval are they defined? From 0 to delta; the time duration delta is yet to be carefully chosen. Now, we can ask: is X complete with respect to some norm? After all, for the contraction mapping theorem we require a Banach space X. So, with which norm is X complete? For a point x in capital X, we defined the sup norm for this space of continuous functions as the maximum, as t varies in the interval 0 to delta, of the Euclidean norm of x of t. At any time t, x of t is a vector in Rn and we can take the conventional 2-norm, the Euclidean norm; this Euclidean norm is itself a function of time, and we take the maximum of that norm function as t varies from 0 to delta. That is called the sup norm; it is also called the max norm. We already saw that with respect to this sup norm, this space of continuous functions from the interval to Rn is a complete normed space. The next important requirement was to define the closed subset S.
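A minimal sketch of the sup norm on a sampled trajectory; the example trajectory in R2 and the sampling grid are illustrative choices, not from the lecture.

```python
import numpy as np

# A sampled trajectory x(t) in R^2 on [0, delta]: each row is x(t) at one
# sample time. Here x(t) = (cos t, sin t), so ||x(t)||_2 = 1 for every t.
ts = np.linspace(0.0, 1.0, 201)
xs = np.stack([np.cos(ts), np.sin(ts)], axis=1)

# Euclidean (2-)norm at each time, then the sup (max) over t: this is the
# sup norm on C^0([0, delta], R^n) evaluated on the sampled trajectory.
pointwise = np.linalg.norm(xs, axis=1)
sup_norm = pointwise.max()
```

The two norms in play are exactly the ones the lecture warns about: the Euclidean norm acts on each vector x of t, and the sup norm acts on the whole function.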
So, we take some r greater than 0 and define S as the set of all those continuous functions x in X which satisfy the property that x minus x naught — x naught here is actually just a vector, but we also think of it as a constant function; we will see this in more detail — has sup norm at most r; that is, the supremum of the distance of x of t from x naught, as t varies over the interval 0 to delta, is at most r. So, we take all those continuous functions which satisfy this sup norm condition and put them into the set S. How do we choose the value of r for this definition of S? We have the function f in the differential equation d by dt of x equal to f of x. We are given that f is locally Lipschitz at the point x naught, and the significance of x naught is that x at time t equal to 0 is x naught. Because f is locally Lipschitz, we know there exists a ball B centered at x naught with radius r — this ball we will very soon define to be a closed ball — such that the Lipschitz condition holds inside it. To say that the Lipschitz condition holds means that for all x 1 and x 2 inside this ball, the norm of f of x 1 minus f of x 2 is at most L times the norm of x 1 minus x 2. So, we pick this r from the locally Lipschitz property of the function f. Also, this S, the way we have defined it, is a closed subset of X. It is closed because the inequality is a non-strict inequality; notice the less-than-or-equal-to r above. For the same reason, we conveniently choose the ball B of x naught, r to be the closed ball: B of x naught, r is defined to be the set of all x such that the distance from x naught is at most r. Before we go further in the proof, let us quickly note that B of x naught, r is a closed subset of Rn with the Euclidean norm, and the point x naught is an element of this closed ball; it is in fact the center of the ball.
On the other hand, S is a closed subset of X, the space of continuous functions over the interval 0 to delta, and this space X carries the sup norm. We are dealing with two types of norms here: a norm over Rn, the Euclidean norm, and a norm over X, the sup norm. Because we are dealing with these two norms, it is very important to be careful about which norm is being used at each place. Now, consider the particular function x of t which is always equal to x naught — always equal meaning that as time t varies from 0 to delta, x of t is the constant function equal to x naught. This constant function is also an element of the set S. What is its meaning? The trajectory x of t remains at x naught for all time t. So, what is the set S? S is the set of trajectories that remain within distance r from the point x naught for the time duration 0 to delta. So, what does the operator P do? It takes small x in capital X and gives another function, again on the interval 0 to delta; hence we see that P is a map from X to X. We now show that, in fact, P maps S into S for some suitably small delta — notice delta was a number yet to be chosen. For delta suitably small, we will show that P maps not just X into X, but in fact S into S; this delta we will call delta 1 greater than 0. Then we will use the locally Lipschitz property of the function f to show that P is in fact a contraction on S, again for a sufficiently small delta; this we will call delta 2 greater than 0. Once we have these two conditions, delta 1 which ensures that P maps S into S and delta 2 which ensures that P is a contraction on S, we define delta to be the minimum of the two, of delta 1 and delta 2. Since delta 1 and delta 2 are both positive, the minimum of the two is a delta that meets the conditions in the theorem. For this particular delta, we will use the contraction mapping theorem.
So, the first part was to show that for delta 1 suitably small, P of x minus x naught in the sup norm is at most r whenever x is in S. Showing this inequality will imply that P takes an element x in capital S and gives a function which is also in capital S. Why does it give a function again in capital S? Because the distance of P of x from the constant function x naught, in the sup norm, is at most r. If we show this, it ensures that P is a map from S to S. In order to show this, notice that P of x at time t minus x naught, in the 2-norm, is equal to the 2-norm of the integral from 0 to t of f of x of tau d tau. Once we take the norm function inside the integral sign, the right hand side can only become larger: the norm of the integral from 0 to t of f of x of tau d tau is at most the integral from 0 to t of the norm of the integrand. Notice that we then subtract and add f of x naught inside the norm; while doing this, the quantity can increase because of the triangle inequality. Now, what do we do? We integrate not just from 0 to t, but from 0 to delta 1. After all, t is some number at most delta 1, so if we integrate this non-negative quantity up to delta 1, it only becomes larger. Once we do this, we also use the Lipschitz property of the function f and replace the first term in the norm, f of x of tau minus f of x naught, by L times the norm of x of tau minus x naught, because f is locally Lipschitz at the point x naught; this is satisfied for all points inside that ball. The other quantity, the norm of f of x naught, we leave as it is. So, we have used the locally Lipschitz property of the function f. Since the norm of x of tau minus x naught is at most r — why? because the function x is inside S — we replace x of tau minus x naught in the 2-norm by r, the maximum distance it can be away from x naught.
And for the second quantity, because it is the integral of a constant, we take the norm of f of x naught out and replace the integral of d tau by delta 1. Finally, we see that the quantity we are integrating is itself constant — it does not vary as a function of tau — and hence it equals delta 1 times L times r; after taking delta 1 common, we obtain delta 1 times the quantity L r plus the 2-norm of f of x naught. So, what have we shown? We have shown that the 2-norm of P of x at time t minus x naught is bounded from above by this quantity, and notice that on the right hand side there is no t. So, for all time t, the left hand side, which depends on t, is bounded from above by this number which does not depend on t. In fact, if we take the supremum of the left hand side, even the supremum will be bounded from above by the same quantity. So, what does this show? That P of x minus x naught in the sup norm is at most delta 1 times the quantity L r plus the 2-norm of f of x naught. Now we choose delta 1 such that P of x belongs to S. If P of x should belong to S, then choose delta 1 to satisfy: delta 1 times the quantity L r plus the 2-norm of f of x naught is at most r. If we choose delta 1 such that this is satisfied, then P of x minus x naught in the sup norm is bounded from above by r and hence P of x lands in S. So, we can take any positive delta 1 that is at most r divided by L r plus the 2-norm of f of x naught, and we then get that P maps S into S. So, we take delta 1 equal to this. The next important step is to show that P is a contraction on S, for some delta 2 greater than 0 which we will carefully choose now. For this purpose, notice that P of x at time t minus P of y at time t, in the 2-norm, is, from the definition of the operator P, the norm of the integral from 0 to t of f of x of tau minus f of y of tau d tau, and when we take the norm inside the integral sign, this is at most the integral from 0 to t of the norm of f of x of tau minus f of y of tau.
By using the locally Lipschitz property of the function f inside the ball B of x naught, r, this quantity is bounded from above by L times the integral from 0 to t of the 2-norm of x of tau minus y of tau, after taking the L outside the integral sign. Moreover, this quantity is at most L times the integral from 0 to t of the sup norm of x minus y. Why? Because at each time tau we are integrating the 2-norm of the difference between x and y; instead of the value at each time tau, we could use the maximum difference between them, and this maximum difference can only be larger, hence the inequality. By replacing the 2-norm at each time with the sup norm, the quantity can only become larger. Finally, the quantity we are integrating is over the interval 0 to t, but we can go ahead and integrate up to delta 2: because the integrand is a norm, it cannot be negative, and when we integrate up to delta 2 instead of only up to time t, we get L times delta 2 times the sup norm of x minus y. Here the supremum is being taken as t varies from 0 to delta 2. So, what have we obtained? We have obtained that the 2-norm of P of x at time t minus P of y at time t is bounded from above by a quantity that is independent of time t, and this is true for each time t in the interval 0 to delta 2; hence we can take the supremum, and even the supremum will be bounded from above by the same number, L times delta 2 times the sup norm of x minus y. So, finally, we have obtained the inequality: the sup norm of P x minus P y is at most L times delta 2 times the sup norm of x minus y.
So, this gives us a hint as to how to choose delta 2 so that P is a contraction on S; that was the objective of deriving this inequality. When would this be a contraction? It would be a contraction if the quantity L times delta 2 is strictly less than 1. So, if we set delta 2 times L equal to some number rho, and that number rho is strictly less than 1, then we obtain that P is a contraction on S. Finally, we do as follows: choose any rho strictly less than 1 and define delta to be the minimum of the two quantities — r divided by L r plus the 2-norm of f of x naught, which we had called delta 1, and rho divided by L, which we had called delta 2. When we take the minimum of these two as delta, it ensures both: that P is a map from S to S, and that P is a contraction on S. Once these two are guaranteed, by the contraction mapping theorem we know that there exists a fixed point in S for the operator P, and moreover that this fixed point is unique inside the subset S. So, this completes the proof. Just a small discussion about the proof: it has two parts, one about existence and one about uniqueness. Notice that both come together with the contraction mapping theorem; the contraction mapping principle assures us of both existence and uniqueness. But of course, in general, the conditions on f for existence of a solution to the differential equation are different from the conditions on f for uniqueness of the solution. These conditions are usually different. Suppose existence is given — suppose, due to some particular property of f, it turns out that we have a solution to the differential equation. For example, for a particular differential equation a solution may exist, and yet, for the claim that a solution exists, we are not able to use the theorem of existence and uniqueness. Why are we not able to use it?
Because the theorem for existence and uniqueness requires the function f to be locally Lipschitz at the point of the initial condition. Suppose the initial condition is equal to 0; we already saw that this particular function is not locally Lipschitz at x equal to 0, and hence we are not able to utilize that theorem. However, we know that the solution exists. Why are we able to conclude that the solution exists? Because of certain other properties of the function f. For example, there is the Cauchy-Peano theorem, which says that if f is continuous, then a solution exists. For certain situations it is possible that we are not interested in uniqueness but only in existence of a solution, for which the locally Lipschitz property of the function f might be too severe, too harsh, and the function f may not be locally Lipschitz. Because of that, we are not able to utilize that particular theorem to claim existence and uniqueness, but existence alone might come under milder conditions on f. The Cauchy-Peano theorem is one of various statements that relax the conditions on the function f and at least give us existence: it says that if the function is continuous at the point x naught, then a solution exists over a small interval 0 to delta. There is also another result, by Carathéodory, which holds under even milder conditions on the function f — not even continuity. It turns out that even if f is not continuous, it is still possible that a solution exists; such a solution is said to be a solution in the sense of Carathéodory, but we do not go into that here. In general, it is important to keep in mind that the conditions for existence and the conditions for uniqueness are not usually the same. Hence, if existence is given by some other property, then it might be easier to show that the locally Lipschitz property ensures uniqueness of the solution.
So, one of the ways to prove uniqueness of the solution, under the assumption that solutions to the differential equation exist, is by using the Grönwall-Bellman inequality. This is the result we will see now. What is the Grönwall-Bellman inequality? We will see only a simplified version. Suppose a non-negative continuous function r, which takes real values to real values, satisfies: r of t is at most r naught plus L times the integral from 0 to t of r of tau d tau. This is an integral inequality: r appears on both sides. r at any time t is at most some constant r naught plus another constant L times the integral from 0 to t of the same function r. If r is a continuous non-negative function that satisfies this property, then r of t is at most r naught times e to the power L t. Notice that if we have a function r satisfying this integral inequality, in which r appears on both sides, then we obtain a bound on r by a function that does not depend on r on the right hand side; r now appears only on the left hand side. So, what does the Grönwall-Bellman inequality say? It says that if a non-negative function r is bounded from above by a constant plus another constant times the area covered so far — 'so far' indicating the integral from 0 to t of that same function r of tau — then r can have at most exponential growth. This is a simplified version of the Grönwall-Bellman inequality; the general version is much more powerful and harder both to understand and to prove.
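The simplified statement can be checked numerically on a concrete example. The function r of t equal to r naught times 1 plus L t, the constants r naught equal to 2 and L equal to 0.8, the sample times, and the trapezoidal quadrature below are all assumptions for this sketch, not from the lecture.

```python
import math

r0, L = 2.0, 0.8

def r(t):
    """A non-negative continuous function chosen to satisfy the
    hypothesis r(t) <= r0 + L * integral_0^t r(tau) d tau."""
    return r0 * (1.0 + L * t)

def integral_r(t, n=10_000):
    # Trapezoidal approximation of integral_0^t r(tau) d tau
    # (exact here, since r is linear in t).
    h = t / n
    return h * (r(0) / 2 + sum(r(k * h) for k in range(1, n)) + r(t) / 2)

# Check the hypothesis and the Gronwall conclusion at a few times.
times = [0.5, 1.0, 2.0]
ok_hypothesis = all(r(t) <= r0 + L * integral_r(t) + 1e-9 for t in times)
ok_conclusion = all(r(t) <= r0 * math.exp(L * t) for t in times)
```

The point of the inequality is visible here: the hypothesis mentions r on both sides, while the conclusion bounds r by the explicit function r naught times e to the power L t.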
So, we will use the Grönwall-Bellman inequality to prove that, if we assume existence of a solution to the differential equation, then the locally Lipschitz property of the function f in fact gives uniqueness of the solution. This also helps us look into sensitivity of the solution to the initial condition. Consider a solution: if a solution to the differential equation is guaranteed to exist, then we know it satisfies the integral equation x of t equal to x naught plus the integral from 0 to t of f of x of tau d tau. Suppose another function y of t is a solution to the integral equation with the initial condition y naught. Now, we can ask: if x naught and y naught are close by, does it mean that x of t and y of t are also close by? Let us take the difference and see what happens to the distance: x of t minus y of t in the 2-norm is, by the triangle inequality, at most x naught minus y naught in the 2-norm plus the integral from 0 to t of the norm of f of x of tau minus f of y of tau d tau. Using the locally Lipschitz property of the function f, we can simplify the term inside this integral: this is at most x naught minus y naught plus L times the integral from 0 to t of x of tau minus y of tau. All the norms appearing on this page are 2-norms. So, we see that the norm of x of t minus y of t satisfies the kind of inequality that appears in the Grönwall-Bellman result. This function is non-negative — a norm is non-negative — and it is also continuous: both x and y are continuous, because they are integrals of a continuous function (if f is locally Lipschitz, it is also continuous, and the integral of this continuous function is in fact differentiable), and the difference of continuous functions is continuous. So, this non-negative continuous function is bounded from above by a constant plus L times its own integral so far.
So, now we use the Grönwall-Bellman inequality to conclude that this 2-norm is bounded from above by the 2-norm of x naught minus y naught times e to the power L times t. Let us compare the two inequalities. Looking at the statement of the Grönwall-Bellman inequality, on the left hand side r of t, a non-negative continuous function, is bounded from above by r naught plus L times the integral from 0 to t of r of tau; if this is what the non-negative continuous function r satisfies, then r of t is bounded from above by the constant r naught times e to the power L times t. Applying this to the difference of two solutions to the differential equation: the norm of x of t minus y of t — this is our r of t — is bounded from above by the constant x naught minus y naught in the 2-norm plus L times the integral of r of tau d tau, where the quantity we are integrating is the same function r. If that is the case, then by the Grönwall-Bellman inequality we conclude that x of t minus y of t in the 2-norm cannot be larger than x naught minus y naught in the 2-norm times e to the power L times t. This helps us conclude uniqueness. How do we show uniqueness? If x naught is equal to y naught, then x of t minus y of t in the 2-norm is equal to 0 — for t in what interval? For t in the interval guaranteeing existence. Also, t should be restricted to an interval over which the solution remains inside the neighborhood of x naught where f is locally Lipschitz. So, t should be restricted to a small enough interval where both of these are satisfied. If t is sufficiently small, then this 2-norm is equal to 0 whenever the initial conditions are the same. In other words, if the initial condition is the same, then the difference in the trajectories is forced to be equal to 0. Why?
Because this difference in the trajectories is at most 0 times e to the power L t, which is 0. At the same time, this inequality also tells us to what extent the solutions are sensitive to the initial condition. Suppose the two initial conditions were not the same, but they were close by; in other words, the distance between them was 0.01: suppose x naught minus y naught in the 2-norm is equal to 0.01. Does that imply — this is the question we asked in the context of sensitivity — that x of t minus y of t is similarly small? What is similarly small? As small as this particular amount, or at least of the order of this. So, we now know that if the initial conditions are this much apart, then x of t minus y of t in the 2-norm is at most 0.01 times e to the power L times t. For that duration, we can compute e to the power L t. This is some number that of course grows as t increases, because L is a positive number — it is the L that appears in the locally Lipschitz property of f, a positive number — so it is indeed a growing function; but we are now asking about sensitivity to the initial condition. If the initial conditions are close, then the solutions at any time t are apart from each other by at most the initial distance times this number. The fact that this number becomes large is not the topic of discussion now; the topic is how sensitive the solution is to the initial condition. If the initial conditions are of order 0.01 apart, then at any fixed time t, x of t minus y of t is also of order 0.01 apart. This explains sensitivity to the initial condition. If L is small, in fact, the two solutions do not grow too far apart at all. So, using this Grönwall-Bellman inequality, if under some other theorem we already have existence, then we can see that the locally Lipschitz property of f guarantees uniqueness also. Why does it guarantee uniqueness?
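The sensitivity bound can be illustrated numerically. The choice f of x equal to sin of x, which is globally Lipschitz with constant L equal to 1, the forward-Euler integrator, the step size, and the initial gap of 0.01 are all assumptions for this sketch, not from the lecture.

```python
import math

def euler(f, x0, dt, steps):
    """Forward-Euler integration of x' = f(x); a simple stand-in for
    the exact flow, fine enough to illustrate the sensitivity bound."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return xs

f = math.sin                     # |sin(a) - sin(b)| <= |a - b|, so L = 1
L, dt, steps = 1.0, 1e-3, 2000   # integrate on [0, 2]
xs = euler(f, 1.00, dt, steps)
ys = euler(f, 1.01, dt, steps)   # initial conditions 0.01 apart

# The Gronwall bound: |x(t) - y(t)| <= |x(0) - y(0)| * exp(L * t).
bound_holds = all(
    abs(x - y) <= 0.01 * math.exp(L * k * dt) + 1e-12
    for k, (x, y) in enumerate(zip(xs, ys)))
```

As the lecture says, the exponential factor grows with t, but at any fixed time the gap between the two trajectories stays of the same order as the 0.01 gap in the initial conditions.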
Because if the initial conditions are close to each other, then the solutions are also close; in fact, if the initial conditions are equal to each other, then the solutions are equal. So, this completes the proof of existence and uniqueness of the solution to the differential equation, and this also completes the study of sensitivity of the solution with respect to the initial condition. Having finished the proof, this is a good moment to see a closely related topic. We saw the existence and uniqueness of solutions theorem; let us have a quick relook. Consider the differential equation x dot is equal to f of x, where f is a map from Rn to Rn, and suppose f is locally Lipschitz at the initial condition x naught. Then there is a delta greater than 0 such that there is a solution — in fact, a unique solution — x of t to the differential equation with x of 0 equal to x naught, for the interval 0 to delta. Please note that we are starting from t equal to 0 and going to some delta greater than 0; this is an interval in positive time, for the future. So, there is a solution, a unique solution, for some time into the future. An important question is: is there a unique trajectory in the past? What about existence and uniqueness of a solution in the past? For this particular issue, we can easily modify our theorem: replace t with tau by defining tau equal to minus t. As t evolves into the future, tau evolves into the past. The differential equation x dot is equal to f of x becomes d by d tau of x equal to minus f of x; in other words, d by d tau of x of tau — x is a function of tau now — is equal to minus f at x of tau. How does one obtain the vector field for this dynamical system? We just reverse the direction of all the arrows in the vector field of the differential equation x dot is equal to f of x. Why? Because each arrow is now not f of x, but minus f of x.
So, if f is Lipschitz, notice that minus f is also Lipschitz. Hence, the Lipschitz condition on f guarantees existence and uniqueness of a solution in the past also. What are the implications of this observation? Two solutions x of t and y of t cannot meet at a point x final: if f is locally Lipschitz at x final, then it is not possible that there are two past trajectories x of t and y of t which have the same final condition x final. Similarly — everything that we have been doing so far is for autonomous systems — one property we can claim is that an autonomous system cannot reach an equilibrium state in finite time. Why? Because an equilibrium state already has one past trajectory, namely the one that sat at the same point for all time, and there cannot be another trajectory that comes and meets this equilibrium state if f is locally Lipschitz there. So, if you want to design a controller with which you reach the steady state in finite time and remain there — here we interpret the equilibrium as a steady state — then one would require a non-Lipschitz controller or a non-Lipschitz plant to reach the equilibrium. Why is that? Because with Lipschitz dynamics we can reach the steady state only asymptotically; it is not possible to reach it in finite time. Another important topic: we have been seeing only local existence and uniqueness conditions. What is local about it? We saw that there exists a solution and it is unique only for an interval 0 to delta. Even existence could not be guaranteed for a large enough time; it could be guaranteed only for the time interval 0 to delta.
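A standard example (not worked in the lecture) of finite-time convergence under a non-Lipschitz vector field is x dot equal to minus the square root of x, for x at least 0: the right hand side is continuous but not locally Lipschitz at the equilibrium 0, and the closed-form solution reaches 0 exactly, in finite time.

```python
import math

def x(t, x0=1.0):
    """Closed-form solution of x' = -sqrt(x), x(0) = x0 >= 0:
    x(t) = (sqrt(x0) - t/2)^2 until t = 2*sqrt(x0), then 0.
    f(x) = -sqrt(x) is continuous but NOT locally Lipschitz at 0,
    which is what permits reaching the equilibrium in finite time."""
    t_hit = 2.0 * math.sqrt(x0)   # time at which the equilibrium is reached
    if t >= t_hit:
        return 0.0
    return (math.sqrt(x0) - t / 2.0) ** 2
```

With x naught equal to 1 the trajectory sits exactly at the equilibrium from t equal to 2 onwards — something a locally Lipschitz f could achieve only asymptotically, never in finite time.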
All that was guaranteed was that delta is greater than 0, but it is possible that this delta is a very small value, and we may be unhappy with a result about existence and uniqueness over so small an interval. So, a natural question is: can solutions exist over the interval 0 to infinity? Is it that the solutions indeed exist and are unique, but our theorem is not able to guarantee it? Is the theorem too harsh? Is it that by assuming the locally Lipschitz property on f we are able to guarantee existence and uniqueness only for a small interval 0 to delta, while there might be some other result, some other way of proving that the solutions exist from 0 to infinity? So, are the conditions assumed in our theorem too harsh, because of which we are able to prove only local existence and uniqueness? For this we will see one small example. It is indeed true that sometimes solutions exist for only a finite time, so our theorem can also accordingly claim existence and uniqueness only for a short interval. Why would they exist for only a finite time? Because it is possible that the solution becomes unbounded in finite time. Consider the differential equation x dot is equal to x square; that is, f of x is equal to x square. Notice that this f is locally Lipschitz, in fact locally Lipschitz at every x naught in R. (Please note that the dot appearing here is a full stop ending the sentence, not a multiplication of x square with something: "x dot is equal to x square" is the differential equation, and for this differential equation f of x equals x square.) But notice that no single Lipschitz constant works for the full R. 
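The remark that no single Lipschitz constant works on all of R can be made concrete: on an interval [-r, r] we have |x^2 - y^2| = |x + y| |x - y| <= 2r |x - y|, so 2r is a valid local Lipschitz constant, but the best constant grows without bound as r grows. A small numerical sketch of this estimate:

```python
# Sketch: f(x) = x^2 is locally Lipschitz with constant 2r on [-r, r],
# but the best constant grows like 2r, so no global constant exists on R.

f = lambda x: x * x

def lipschitz_estimate(r, n=2000):
    """Estimate sup |f(x)-f(y)| / |x-y| over adjacent grid points of [-r, r]."""
    pts = [-r + 2.0 * r * i / n for i in range(n + 1)]
    best = 0.0
    for i in range(n):
        x, y = pts[i], pts[i + 1]
        best = max(best, abs(f(x) - f(y)) / abs(x - y))
    return best

for r in (1.0, 10.0, 100.0):
    # the estimate approaches 2r from below as the grid is refined
    print(r, lipschitz_estimate(r))
```

The printed estimates track 2r (roughly 2, 20, 200), confirming that enlarging the interval forces the Lipschitz constant to grow.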
So, we can explicitly solve this differential equation x dot is equal to x square: separating variables gives dx by x square equal to dt, and upon integrating both sides we get minus 1 over x equal to t plus some constant c 1. Rearranging, and writing c 2 equal to minus c 1, we get x of t equal to 1 over c 2 minus t. When we put the initial condition x of 0 equal to x naught, we get c 2 equal to 1 over x naught, and hence x of t equal to 1 over, 1 over x naught minus t. This is how the solution to our differential equation looks. So, let us see what this means. If x of 0 is equal to some number, let us say 4, then x of t equal to 1 over, quarter minus t. We see that for t equal to 0 it is of course equal to 4, and as t tends to 1 by 4, this quantity becomes unbounded. So, in a graph of x versus t, x starts from 4 and becomes unbounded: within the small interval up to 1 by 4, it is already unbounded. So, for this particular initial condition, we are able to define existence of a solution only from 0 to 1 by 4, and while it is a closed interval at 0, it is an open interval at 1 by 4. For t equal to 1 by 4, the solution does not exist. So, what we have seen is that when the initial condition equals 4, we have a solution only up to 1 by 4. Suppose the initial condition equals 1 instead. Then we have a unique solution for some delta, and when we try to increase this delta, we ask how long we can extend the solution. By explicitly solving this differential equation, we get x of t equal to 1 over, 1 minus t, so x of t exists only for t in the interval 0 to 1 for this particular initial condition. So, for each positive initial condition, the solution becomes unbounded in a finite time. 
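The finite escape time can be checked directly from the closed form derived above: with x(0) = 4 the solution x(t) = 1/(1/4 - t) blows up as t tends to 1/4, while with x(0) = 1 it blows up as t tends to 1. A minimal sketch:

```python
# Sketch: closed-form solution of x' = x^2 with x(0) = x0 > 0,
# namely x(t) = 1 / (1/x0 - t), which exists only for t < 1/x0.

def x_of_t(x0, t):
    escape_time = 1.0 / x0      # blow-up instant for a positive x0
    assert t < escape_time, "solution does not exist at or beyond t = 1/x0"
    return 1.0 / (1.0 / x0 - t)

print(x_of_t(4.0, 0.0))      # the initial condition, 4
print(x_of_t(4.0, 0.2))      # already 20, growing toward blow-up at t = 1/4
print(x_of_t(4.0, 0.2499))   # huge as t approaches 1/4
print(x_of_t(1.0, 0.999))    # with x0 = 1 the escape time is t = 1 instead
```

Evaluating near the escape time shows the value growing without bound, matching the picture of the graph shooting up just before t = 1/x0.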
How much time it takes to become unbounded depends on the initial condition. Looking at the set of solutions to this differential equation: if a solution starts lower, at a smaller positive value, then it exists for some more time. If it starts at 0, of course, it remains at 0 for all future time, because f is locally Lipschitz at 0: no solution can emanate out of the equilibrium point, there is a unique trajectory, and hence it remains always at 0. But if x of 0 is negative, then what happens? x of t is equal to 1 over, 1 over x naught minus t, where 1 over x naught is negative, so the denominator never vanishes and the solution always exists. When the initial condition is negative, we see that the solutions exist for all future time; they do not become unbounded in finite time, and they all approach 0. But if x of 0 is positive, then the solution grows and becomes unbounded in a very short time, in a finite amount of time, and hence we cannot have global existence of solutions when the initial condition is positive, while it appears we can have global existence of a solution when x of 0 is negative. So, it appears that for certain situations there do exist solutions from t equal to 0 to infinity, while for the same differential equation there are other initial conditions for which the solutions exist for only a finite amount of time, in which case we cannot have global existence of the solution, let alone global uniqueness. So, one might hope for some additional assumptions under which we have a unique solution from 0 to plus infinity, and it is possible that for certain initial conditions those conditions of the theorem do not hold, in which case we do not have global existence. How to formulate those additional conditions is the topic we will see in the following lecture. Thank you.
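For a negative initial condition, the same formula makes the global existence visible: the denominator 1/x0 - t stays strictly negative for all t greater than or equal to 0, so the solution is defined forever and creeps up toward the equilibrium 0 without ever reaching it. A quick check:

```python
# Sketch: for x0 < 0 the solution x(t) = 1 / (1/x0 - t) of x' = x^2 is
# defined for every t >= 0 (the denominator 1/x0 - t stays negative) and
# approaches the equilibrium 0 from below as t -> infinity.

def x_of_t(x0, t):
    return 1.0 / (1.0 / x0 - t)

x0 = -1.0
values = [x_of_t(x0, t) for t in (0.0, 1.0, 10.0, 1000.0)]
print(values)   # -1.0, -0.5, ... monotonically approaching 0 from below
```

The values increase monotonically toward 0 but stay negative, consistent with asymptotic (never finite-time) convergence to the equilibrium for a locally Lipschitz f.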