Today, we are going to discuss some topics in preliminaries. The topics include some material from analysis and some from linear algebra. Specifically, let me mention what I am going to do in this one hour. First, we discuss linear dependence and independence of functions; that is very much used in the study of linear homogeneous equations later. Then we discuss a calculus lemma, which is going to be used in the qualitative theory of differential equations; there are many things in calculus, but this is one specific lemma I am going to state. Then comes an important formula, referred to as differentiation under the integral sign; this is going to be used, for example, in the study of boundary value problems using Green's functions. We will also learn something about Taylor's formula, which will be used in the linearization theory of non-linear equations. Finally, we will discuss a fixed point theorem and some related issues; this is the Banach fixed point theorem.

So, let me start with linear dependence and independence of functions. Let $I$ be a non-empty interval in $\mathbb{R}$; it can be finite or infinite, it does not matter. Put $X$ equal to the collection of all real-valued functions defined on the interval $I$. As of now, I am not putting any structure on these functions, no continuity or differentiability; they are just functions. It is easy to check that $X$ is a real vector space: given two functions $u_1, u_2$ in $X$, we can define their addition and multiplication by a real number; if $u$ belongs to $X$, then $-u$ also belongs to $X$ and is the additive inverse of $u$; and the identically zero function is the zero of this vector space. So, it is not difficult to check that $X$ is a real vector space.

Now we discuss what is meant by linear dependence or independence of two elements of $X$. Let $u_1, u_2 \in X$. This is the general definition in any vector space: $u_1, u_2$ are said to be linearly independent if $a_1 u_1 + a_2 u_2 = 0$ implies $a_1 = a_2 = 0$. The only thing we have to remember here is what this equation means: the zero on the right is the zero of $X$, and $a_1 u_1 + a_2 u_2$ is again a function from the interval $I$ into the real numbers. So the left-hand side is a function and the right-hand side is the zero function, which means $a_1 u_1(t) + a_2 u_2(t) = 0$, now an equation between real numbers, for all $t$ in the interval $I$. Whenever this happens, if it implies $a_1 = a_2 = 0$, then $u_1$ and $u_2$ are said to be linearly independent; otherwise $u_1, u_2$ are linearly dependent.

Now we look for some sufficient conditions on $u_1$ and $u_2$ for them to be linearly independent. So, suppose $a_1 u_1(t) + a_2 u_2(t) = 0$ for all $t \in I$; when can I conclude $a_1 = 0$, $a_2 = 0$? That is my question. For this purpose pick $t_1, t_2 \in I$ with $t_1 \neq t_2$, any two distinct points, and substitute $t = t_1$ and $t = t_2$, which is valid because the relation holds for all $t$ in $I$.
So, therefore, I have $a_1 u_1(t_1) + a_2 u_2(t_1) = 0$ and $a_1 u_1(t_2) + a_2 u_2(t_2) = 0$. Now look at these two linear equations in $a_1, a_2$; they are homogeneous because the right-hand side is zero. Consider the coefficient matrix
$$\begin{pmatrix} u_1(t_1) & u_2(t_1) \\ u_1(t_2) & u_2(t_2) \end{pmatrix}.$$
Here is a sufficient condition: if this coefficient matrix is non-singular, then, because these are two homogeneous equations and the determinant is non-zero, it follows that the system has only the trivial solution $a_1 = a_2 = 0$. So, if we are able to pick two distinct points in the interval $I$ such that this matrix is non-singular, then we get $a_1 = a_2 = 0$, and that implies $u_1, u_2$ are linearly independent.

That is only a sufficient condition; it does not appear to be necessary. But this class $X$ is a huge class of functions, so it is difficult to settle the converse; you can try it. So let me just make a remark: you may check whether the converse is true or not; I am not sure about that, because the class is too big. What I am trying to say is: suppose for every choice of two distinct points in the interval $I$ this matrix is singular; can the functions $u_1$ and $u_2$ still be linearly independent? On this question I put a question mark.

But when you take special functions, that is, when we take a suitable subset of $X$, then we can say much more. That is the next thing. Consider $C^1(I)$, the once continuously differentiable functions on $I$; this is a subset of $X$, and this class is much smaller than $X$. Now let us again study when two such functions are linearly dependent or independent. The previous discussion, which took just two functions, can also be carried out for any finite number of functions; the algebra is more complicated, but the idea is the same: instead of two distinct points, you have to consider $n$ distinct points. So, again let me restrict the discussion to two functions.

Let $u_1, u_2 \in C^1(I)$ and suppose $a_1 u_1 + a_2 u_2 = 0$. I would like to see, imposing some conditions on $u_1, u_2$ if necessary, when it follows that $a_1 = a_2 = 0$, that is, when $u_1$ and $u_2$ are linearly independent. In this situation, since both functions are differentiable, we get the second equation automatically: just differentiate. My notation for differentiation is a dot, so the dot stands for $d/dt$; $a_1, a_2$ are just constants, so differentiating the first equation gives $a_1 \dot{u}_1 + a_2 \dot{u}_2 = 0$. Now fix any one point $t_0 \in I$; then we have $a_1 u_1(t_0) + a_2 u_2(t_0) = 0$ and $a_1 \dot{u}_1(t_0) + a_2 \dot{u}_2(t_0) = 0$. So, unlike in the previous situation, just one point will do, and we again get two equations. From these we would like to conclude $a_1 = a_2 = 0$, and for that the coefficient matrix should be non-singular: if
$$\begin{pmatrix} u_1(t_0) & u_2(t_0) \\ \dot{u}_1(t_0) & \dot{u}_2(t_0) \end{pmatrix}$$
is non-singular, it follows that $a_1 = a_2 = 0$. This matrix is called the Wronski matrix of $u_1, u_2$ at the point $t_0$, and its determinant is called the Wronskian of $u_1$ and $u_2$ at $t_0$; we will introduce some notation for it.
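To make the two tests just described concrete, here is a minimal numerical sketch in Python. The functions $\sin t$, $\cos t$ and the sample points are my own illustrative choices, not from the lecture; the script checks both the two-distinct-points determinant and the one-point Wronskian.

```python
import numpy as np

# Illustrative choices (not from the lecture): u1(t) = sin t, u2(t) = cos t on I = R.
u1, u2 = np.sin, np.cos
du1, du2 = np.cos, lambda t: -np.sin(t)          # derivatives of u1, u2

# Test 1: two distinct points t1 != t2 -- coefficient matrix of the homogeneous system.
t1, t2 = 0.0, 1.0
A = np.array([[u1(t1), u2(t1)],
              [u1(t2), u2(t2)]])
print("two-point determinant:", np.linalg.det(A))   # = -sin(1), non-zero => independent

# Test 2: Wronski matrix at a single point t0.
t0 = 0.3
W = np.array([[u1(t0),  u2(t0)],
              [du1(t0), du2(t0)]])
print("Wronskian at t0:", np.linalg.det(W))          # = -(sin^2 + cos^2) = -1, non-zero
```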
So, let me again write that down. Put
$$W(u_1, u_2)(t_0) = \det \begin{pmatrix} u_1(t_0) & u_2(t_0) \\ \dot{u}_1(t_0) & \dot{u}_2(t_0) \end{pmatrix}.$$
What we saw in the previous slide is this: if $W(u_1, u_2)(t_0) \neq 0$, then $u_1, u_2$ are linearly independent. So, when you put more structure on the functions, here differentiability, we get a simple condition, and that too at just one point: even if the Wronskian is non-zero at one single point, the two functions $u_1, u_2$ are linearly independent.

However, the converse may be false. "May be false" means: there exist functions $u_1, u_2$ such that $W(u_1, u_2)(t) = 0$ for all $t$ in whichever interval you are considering, and yet $u_1, u_2$ are linearly independent. Let me sketch the example; you work out the details. Example: let $I$ be some interval containing both positive and negative real numbers, say containing $-1$ and $1$, and let $u_1(t) = t^3$ and $u_2(t) = |t|^3$ for $t \in I$. You readily see that $u_1$ is differentiable; you might have some trouble seeing that $u_2$ is also differentiable, so let me mention that here. $\dot{u}_1(t)$ you easily compute: it is just $3t^2$. For $\dot{u}_2$, a simple exercise: break the interval into regions and you will see that $\dot{u}_2(t) = 3t^2$ if $t > 0$, $\dot{u}_2(t) = 0$ if $t = 0$, and $\dot{u}_2(t) = -3t^2$ if $t < 0$. So it is essentially $3t^2$, but the sign changes; that is the important point. Using the definition and this computation, you see that $W(u_1, u_2)(t) = 0$ for all $t \in I$. However, we show that $u_1, u_2$ are linearly independent. For that, let $a_1 u_1(t) + a_2 u_2(t) = 0$ for all $t \in I$. In particular, if you pick $t = 1$ you get $a_1 + a_2 = 0$, and if you pick $t = -1$ you get $-a_1 + a_2 = 0$; together these imply $a_1 = a_2 = 0$.
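Here is a small numerical check of this example in Python; the functions are the ones above, while the grid of sample points is my own choice.

```python
import numpy as np

# The example from the lecture: u1(t) = t^3, u2(t) = |t|^3 on an interval containing -1 and 1.
u1  = lambda t: t**3
u2  = lambda t: np.abs(t)**3
du1 = lambda t: 3*t**2
du2 = lambda t: 3*t**2*np.sign(t)        # derivative of |t|^3 (equals 0 at t = 0)

# The Wronskian u1*u2' - u2*u1' vanishes at every point of the interval:
ts = np.linspace(-1, 1, 11)
W = u1(ts)*du2(ts) - u2(ts)*du1(ts)
print("max |W| on the grid:", np.max(np.abs(W)))     # 0.0

# Yet u1, u2 are linearly independent: evaluating a1*u1 + a2*u2 = 0 at t = 1 and t = -1
# gives a 2x2 homogeneous system whose coefficient matrix is non-singular.
A = np.array([[u1(1.0),  u2(1.0)],
              [u1(-1.0), u2(-1.0)]])
print("determinant:", np.linalg.det(A))              # = 2, non-zero => a1 = a2 = 0 only
```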
Such examples do not occur for certain special functions. This remark you are going to see in the study of linear second order equations. Consider the second order linear homogeneous equation $\ddot{u} + p(t)\dot{u} + q(t)u = 0$ on some interval. If $u_1, u_2$ are solutions of this equation, then the Wronskian $W(u_1, u_2)(t)$, as a function of $t$, is either identically zero or never zero: if it is zero at one point, it has to be zero everywhere, and if it is non-zero at one point, it remains non-zero. This happens precisely because $u_1, u_2$ are solutions of this homogeneous equation; they are special functions. Further, $u_1, u_2$ are linearly independent if and only if the Wronskian is non-zero. Earlier we proved only one direction: if the Wronskian is non-zero even at one point, then $u_1$ and $u_2$ are linearly independent, and we saw through the example that the converse may fail. But when $u_1$ and $u_2$ are solutions of this homogeneous second order equation, the converse is also true. That is an important thing you are going to learn in the study of second order linear equations.

So, now we move on to the calculus lemma; this we are going to need in the study of the qualitative analysis of non-linear systems. Let me state it. Let $\chi$ be a real-valued function defined on an interval $(a, b)$; again, this interval may be finite or infinite, it does not matter. Assume $\chi$ satisfies either one of the following two conditions: (1) $\chi$ is bounded above on that interval and is non-decreasing, or (2) $\chi$ is bounded below and is non-increasing. Then $\lim_{t \to b} \chi(t)$ exists; since $b$ could be infinity, "exists" here means the limit is a finite real number.

Quick proof. Assume (1); case (2) is similar, and in fact if $\chi$ satisfies (2) then $-\chi$ satisfies (1), and if $\lim (-\chi(t))$ exists then $\lim \chi(t)$ also exists, so it is sufficient to prove the lemma in one case. Since $\chi$ is bounded above, put $\alpha = \sup\{\chi(t) : t \in (a, b)\}$; because $\chi$ is bounded above, $\alpha$ is finite. Now we show that the limit exists. For any $\varepsilon > 0$, the number $\alpha - \varepsilon$ is no longer an upper bound of $\chi$, so there is at least one point $t_0 \in (a, b)$ such that $\alpha - \varepsilon < \chi(t_0)$. Now we use the monotonicity: if $t \in (a, b)$ and $t \geq t_0$, then since $\chi$ is non-decreasing, $\chi(t_0) \leq \chi(t)$. Putting this together with the previous inequality, and noting that $\alpha$ is the supremum of all the values $\chi(t)$ for $t \in (a, b)$, we have $\alpha - \varepsilon < \chi(t_0) \leq \chi(t) \leq \alpha$ for all $t \geq t_0$. If you rewrite this, removing the middle term $\chi(t_0)$, it implies $0 \leq \alpha - \chi(t) < \varepsilon$ for all $t \geq t_0$. This is the same as saying that $\lim_{t \to b} \chi(t)$ exists and equals $\alpha$. So, it is a very simple thing, but a very useful one; we will see it in many situations.
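As a concrete instance of hypothesis (1), here is my own illustrative example, not from the lecture: take $(a, b) = (0, \infty)$ and $\chi(t) = 1 - e^{-t}$, which is non-decreasing (its derivative is $e^{-t} > 0$) and bounded above by 1. Then
$$\alpha = \sup_{t \in (0,\infty)} \bigl(1 - e^{-t}\bigr) = 1, \qquad \alpha - \varepsilon < \chi(t_0) \leq \chi(t) \leq \alpha \quad \text{for all } t \geq t_0 > \ln\tfrac{1}{\varepsilon},$$
so indeed $\lim_{t \to \infty} \chi(t) = 1 = \alpha$, exactly as the lemma asserts.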
Next, we are going to discuss differentiation under the integral sign. For this, recall from the fundamental theorem of calculus: if $f : [a, b] \to \mathbb{R}$ is continuous and you define $F(t) = \int_a^t f(s)\,ds$, then $F$ is differentiable and $F'(t) = f(t)$. What I am going to state is a generalization of this; keep this formula in mind.

Now, let $\alpha, \beta$ be differentiable functions on the interval $(a, b)$, and let $f$ be a function of two variables $(t, s)$, with $t$ in $(a, b)$ and $s$ ranging between $\alpha(t)$ and $\beta(t)$. In this notation one usually assumes that $\alpha(t)$ is less than $\beta(t)$; if $\beta(t)$ happens to be less than $\alpha(t)$, you just interchange the limits, so there is really no restriction on the values of $\alpha$ and $\beta$: at some $t$ we may have $\alpha(t) < \beta(t)$ and at some other $t$ we may have $\beta(t) < \alpha(t)$, it does not matter. Assume $f$ is a continuous function of both variables and that the partial derivative of $f$ with respect to the first variable, $\partial f/\partial t$, is also continuous. Since we are essentially using Riemann integration, these hypotheses are needed. These are the hypotheses on the functions $\alpha$, $\beta$ and $f$.

Now define a function of $t$ alone, just like in the previous case, but with the integration limits themselves variable: $F(t) = \int_{\alpha(t)}^{\beta(t)} f(t, s)\,ds$, where we integrate with respect to $s$; remember, inside the integral $t$ is just fixed. This is defined for $t \in (a, b)$. Then $F$ is differentiable, and the formula is
$$\frac{dF}{dt}(t) = \int_{\alpha(t)}^{\beta(t)} \frac{\partial f}{\partial t}(t, s)\,ds + f\bigl(t, \beta(t)\bigr)\,\frac{d\beta}{dt}(t) - f\bigl(t, \alpha(t)\bigr)\,\frac{d\alpha}{dt}(t).$$
It is not really difficult to prove: just as in the previous case, you write $(F(t+h) - F(t))/h$ and manipulate the integration limits; it is not at all hard. Since we assumed $\partial f/\partial t$ is continuous, the first integral makes sense, and since the limits are themselves variables, you get the additional terms: the term with $\beta$ comes from the upper limit and carries a plus sign, and the term with $\alpha$ comes from the lower limit and carries a negative sign. This formula is very useful when you study boundary value problems.
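Here is a quick symbolic sanity check of this formula, a sketch in Python with SymPy; the particular $f$, $\alpha$, $\beta$ are my own illustrative choices, not from the lecture.

```python
import sympy as sp

t, s = sp.symbols('t s', real=True)

# Illustrative choices (not from the lecture): f(t,s) = t*s, alpha(t) = t, beta(t) = t**2.
f     = t*s
alpha = t
beta  = t**2

# Left-hand side: differentiate F(t) = integral of f(t,s) ds from alpha(t) to beta(t).
F   = sp.integrate(f, (s, alpha, beta))
lhs = sp.diff(F, t)

# Right-hand side of the differentiation-under-the-integral-sign formula.
rhs = (sp.integrate(sp.diff(f, t), (s, alpha, beta))
       + f.subs(s, beta)*sp.diff(beta, t)
       - f.subs(s, alpha)*sp.diff(alpha, t))

print(sp.simplify(lhs - rhs))   # 0 -- both sides equal (5*t**4 - 3*t**2)/2
```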
The next topic of discussion is Taylor's formula. Let me again start with one variable, which is very familiar to all of you. Let $f : (a, b) \to \mathbb{R}$ be a $C^2$ function; I am assuming $f$ is twice differentiable and the second derivative is also a continuous function. Fix some $x_0 \in (a, b)$. Using the mean value theorem and related results, it is not difficult to see that
$$f(x_0 + y) = f(x_0) + f'(x_0)\,y + \tfrac{1}{2} f''(\xi)\, y^2,$$
where $\xi$ is a point between $x_0$ and $x_0 + y$. Because $f$ is a $C^2$ function, when $y$ is small the last term is bounded, and usually we write this as
$$f(x_0 + y) = f(x_0) + f'(x_0)\,y + O(y^2) \quad \text{for } y \text{ small}.$$
This is very much used in the linearization of non-linear equations.

Now that we have this for the one-variable case, let us see how to extend it to the multivariable case; in fact, we reduce it to the one-variable case. Again let me start with a real-valued function. Let $D \subseteq \mathbb{R}^n$ be an open set and $f : D \to \mathbb{R}$ a $C^2$ function; in this case $C^2$ means that all the first order and second order partial derivatives with respect to all the $n$ variables exist and are continuous. Let $x_0 \in D$. Since $D$ is open, choose $r > 0$ such that the ball $B(x_0, r)$ is contained in $D$; so $D$ is some open set, $x_0$ is a point in it, and I am taking a small ball of radius $r$ around it, that is fine. Now pick any $y$ such that $x_0 + y$ lies in $B(x_0, r)$, and define the one-variable function $F(t) = f(x_0 + t\,y)$ for $t$ between 0 and 1. If you apply the one-variable case to $F$, with the base point equal to 0 and the increment equal to 1, we have
$$F(1) = F(0) + F'(0)\cdot 1 + \tfrac{1}{2} F''(\xi), \qquad \xi \in (0, 1).$$
Ignore the second order derivative term; we are not interested in it, and again it is bounded; we are just interested in the linear terms. So let us calculate what $F(0)$, $F(1)$ and $F'(0)$ are. From the definition of the function $F$, putting $t = 0$ gives $F(0) = f(x_0)$, and $F(1) = f(x_0 + y)$. By the chain rule,
$$F'(t) = \sum_{j=1}^{n} \frac{\partial f}{\partial x_j}(x_0 + t\,y)\; y_j,$$
since $f$ is a function of $n$ variables and the only dependence on $t$ is through the argument $x_0 + t y$; differentiating that argument with respect to $t$ produces $y_j$ in the $j$-th slot (remember $y$ is a vector). We are interested in $F'(0)$: putting $t = 0$, the partial derivatives are evaluated at $x_0$, and in the notation of the gradient,
$$F'(0) = \sum_{j=1}^{n} \frac{\partial f}{\partial x_j}(x_0)\, y_j = \nabla f(x_0) \cdot y,$$
a scalar product of two vectors. Therefore, putting everything back,
$$f(x_0 + y) = f(x_0) + \nabla f(x_0) \cdot y + O(|y|^2),$$
where again we are interested only in the linear terms.

Now we can extend this to the vector-valued case. Let $D$ be as before and let $f : D \to \mathbb{R}^n$; you can also do it for $\mathbb{R}^m$, but for simplicity let me take the target to be $\mathbb{R}^n$. Then $f$ has $n$ components $f_1, f_2, \dots, f_n$, and each $f_j$ is a function from $D$ to $\mathbb{R}$, so we can apply the previous formula to each $f_j$. Putting these together, we get
$$f(x_0 + y) = f(x_0) + Df(x_0)\, y + O(|y|^2).$$
Here $f(x_0 + y)$ is a vector, so the right-hand side should also be a vector; but $y$ alone is only a vector, and to get a vector out of it you should multiply by a matrix. That matrix $Df(x_0)$ is an $n \times n$ matrix called the Jacobian of $f$ at $x_0$. If you look at the previous formula applied to each $f_j$, the matrix $Df(x_0)$ is nothing but the matrix whose rows are the gradients: the first row is $\nabla f_1(x_0)$ written as a row vector, the second row comes from $f_2$, and so on.
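Here is a small numerical sketch of this linearization; the map $f$ and the point $x_0$ are my own illustrative choices, not from the lecture. The remainder $f(x_0 + y) - f(x_0) - Df(x_0)y$ shrinks roughly like $|y|^2$ as the increment shrinks.

```python
import numpy as np

# Illustrative map f : R^2 -> R^2 (not from the lecture): f(x) = (x1^2 + x2, x1*x2).
def f(x):
    return np.array([x[0]**2 + x[1], x[0]*x[1]])

def Df(x):
    # Jacobian: rows are the gradients of the components f1 and f2.
    return np.array([[2*x[0], 1.0],
                     [x[1],   x[0]]])

x0 = np.array([1.0, 2.0])
y  = np.array([0.3, -0.2])

for k in range(4):
    h = y / 2**k                                     # shrink the increment
    err = f(x0 + h) - (f(x0) + Df(x0) @ h)           # remainder of the linear approximation
    print(np.linalg.norm(h), np.linalg.norm(err))    # error drops by a factor ~4 per halving
```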
Finally, let me quickly discuss a fixed point theorem; for this we need the notion of a metric space. Let $X$ be a non-empty set. A function $d$ from the Cartesian product $X \times X$ to $\mathbb{R}$ is called a metric, or distance function, if it satisfies the following three properties. First, if you take any two points $x, y$ in $X$, their distance is always non-negative, $d(x, y) \geq 0$, and it equals 0 if and only if $x = y$; this property is referred to as positive definiteness. The second one is symmetry: the distance from $x$ to $y$ is the same as the distance from $y$ to $x$, that is, $d(x, y) = d(y, x)$. The third one is called the triangle inequality: $d(x, y) \leq d(x, z) + d(z, y)$. These three properties should hold for all $x, y, z$ in $X$. Again, this notion comes from the Euclidean distance: for example, if you take $X = \mathbb{R}$ with the usual Euclidean distance, that is a metric, and in $\mathbb{R}^n$ again the standard Euclidean distance is a metric; these are examples of metric spaces. So $(X, d)$, where $X$ is a non-empty set and $d$ is a metric on $X$, is referred to as a metric space.

We have another few minutes, so let me continue. A sequence $(x_n)$ in $X$ is said to be a Cauchy sequence if $d(x_m, x_n) \to 0$ as $m, n \to \infty$; remember that the values $d(x_m, x_n)$ are all real numbers, and you can also write this in terms of $\varepsilon$. A sequence $(x_n)$ in $X$ converges to some $x \in X$ if $d(x_n, x) \to 0$ as $n \to \infty$. A metric space is said to be a complete metric space if every Cauchy sequence in $X$ converges to some point of $X$. It is easy to check that if a sequence converges to some $x$, then that $x$ is unique; a sequence cannot converge to two different elements of $X$, so the limit is unique.

Now we are going to state the Banach fixed point theorem. Let $(X, d)$ be a complete metric space; completeness plays an important role, and without completeness the conclusion of this theorem may be false. Let $T$ be a mapping from $X$ into itself which is a contraction. Let me define what a contraction is: if you take any two points $x$ and $y$ in $X$, take their images $Tx$ and $Ty$ under $T$ and compute the distance between them, then you should have $d(Tx, Ty) \leq \alpha\, d(x, y)$ for some fixed $0 < \alpha < 1$ and for all $x, y \in X$. The conclusion is that $T$ has a unique fixed point: there exists $x^* \in X$ such that $T x^* = x^*$, and $x^*$ is the unique point with this property; that is what is meant by a unique fixed point.

As a corollary, in the same setting, let $T : X \to X$ be such that $T^n$ is a contraction for some $n \geq 1$; here $T$ itself need not be a contraction, and $T^n$ means $T$ composed with itself $n$ times, so $T^2 = T \circ T$ and so on. Then $T$ has the same conclusion: a unique fixed point. Let me indicate how this corollary follows. Proof of the corollary: since $T^n$ is a contraction, we apply Banach's theorem, so there exists a unique $x^*$ such that $T^n x^* = x^*$. If we compose with $T$ once more, then $T^n(T x^*) = T(T^n x^*) = T x^*$; so $T x^*$ is also a fixed point of $T^n$, and by uniqueness we must have $T x^* = x^*$, because the fixed point of $T^n$ is unique. Further, if $T x = x$ for some $x$, that is, if $T$ has another fixed point, then $T^n x = x$, and again by uniqueness $x = x^*$. This corollary, with its weaker hypothesis, is especially useful; you are going to see it in the proof of existence of solutions of differential equations. With this we will conclude this lecture on preliminaries. Thank you.
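As a closing illustration, not part of the lecture itself, here is a minimal sketch of how a contraction produces its unique fixed point by simple iteration. The map $T(x) = \cos x$ on the complete metric space $X = [0, 1]$ with the usual distance is my own example: $\cos$ maps $[0, 1]$ into itself and, by the mean value theorem, $|\cos x - \cos y| \leq (\sin 1)\,|x - y|$ with $\sin 1 < 1$, so it is a contraction.

```python
import math

# Illustrative contraction (not from the lecture): T(x) = cos(x) on X = [0, 1],
# which maps [0, 1] into itself with contraction constant alpha = sin(1) < 1.
T = math.cos

x = 0.0                      # any starting point in X
for _ in range(100):         # iterate x, T(x), T(T(x)), ...
    x = T(x)

print(x, abs(T(x) - x))      # the unique fixed point x* ~ 0.739085..., residual ~ machine precision
```

Roughly speaking, the same iteration idea, carried out on a space of functions rather than numbers, is what lies behind the use of the corollary in the existence theory for differential equations mentioned above.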