In this lecture, we will study the notions of big O and little o for sequences and for continuous functions. We will also see the definition of order of convergence, which is very important in numerical analysis.

Let us start with an example. Consider the two sequences n and n². Both are unbounded, and both tend to infinity as n tends to infinity. Now, if you ask which of these two sequences tends to infinity faster, you can immediately tell that n² goes to infinity faster than n. Let us take another example: the two sequences 1/n and 1/n². Both are bounded and both tend to 0 as n tends to infinity. Again we ask: which tends to 0 faster? Clearly, 1/n² goes to 0 faster than 1/n.

So we have now developed the intuition that if two sequences converge to the same limit, one may get there faster than the other. This is the basic idea behind little o and big O: we are interested in measuring which sequence approaches the limit faster. That is the question of order of convergence, and it is all about comparing two sequences that converge to the same limit in terms of the speed at which they converge. This is the idea behind the big O and little o notation introduced by Edmund Landau and Paul Bachmann.

Let us see the definition of big O in the context of sequences. Assume we have two sequences of real numbers, a_n and b_n. We say that a_n is O(b_n) if there exist a constant c and a natural number N such that |a_n| ≤ c|b_n| for all n ≥ N.
The condition need only hold for sufficiently large n; that is what we mean by saying there exists a natural number N such that the inequality holds for all n ≥ N. What does this definition say? Let us put it in a different form. If the b_n are nonzero, at least for sufficiently large n, then |a_n| / |b_n| ≤ c, which is equivalent to saying that the sequence a_n/b_n is bounded.

Let us take a simple example: a_n = 1/n and b_n = 10/n. We know both sequences go to 0, so you might think that a_n/b_n could blow up because b_n is going to 0. But that does not happen: while b_n goes to 0, a_n goes to 0 simultaneously, and at the same speed. A simple calculation gives a_n/b_n = (1/n)/(10/n) = 1/10, and that is the constant c appearing in the definition. In this particular case you can in fact write |a_n| = (1/10)|b_n|, and there is no need for the modulus in this example. Therefore, a_n is O(b_n) here.

Similarly, take another example with a_n = 1/n² and b_n = 10/n. Then a_n/b_n = (1/n²)/(10/n) = 1/(10n), which in fact goes to 0. Therefore you can bound it by any number; for instance, take C = 1 and say that for sufficiently large n, |a_n| / |b_n| ≤ 1. So in this case too, a_n is O(b_n). In fact, in the second example you can say something more than what happened in the first example.
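The two worked examples above can be checked numerically. This is a small illustrative sketch of my own, not part of the lecture; the helper name `ratio` and the sample indices are arbitrary choices.

```python
# Numerical check of the two examples above (illustrative sketch).
# Big O: a_n = 1/n versus b_n = 10/n -- the ratio a_n/b_n is the constant 1/10.
def ratio(a, b, n_values):
    """Return the list of a(n)/b(n) for the given indices n."""
    return [a(n) / b(n) for n in n_values]

a = lambda n: 1.0 / n
b = lambda n: 10.0 / n
print(ratio(a, b, [1, 10, 100, 1000]))    # each entry is (essentially) 0.1

# Second example: a_n = 1/n**2 against the same b_n.
# Here a_n/b_n = 1/(10n) -> 0, so a_n is O(b_n) with room to spare.
a2 = lambda n: 1.0 / n**2
print(ratio(a2, b, [1, 10, 100, 1000]))   # entries shrink toward 0
```

The first ratio stays at the constant 1/10 (bounded, hence big O); the second tends to 0, which foreshadows the stronger little-o relation discussed next.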
In the second example, a_n is in fact going much faster than b_n, and that is the basic idea of little o. Let us define it. Again you have two sequences a_n and b_n. We say that a_n is o(b_n) if for every ε > 0 there exists a natural number N such that |a_n| ≤ ε|b_n| for all n ≥ N.

What are we doing here? You give me any ε — that is what is important — and I guarantee that |a_n| becomes less than that small number times |b_n|, at least for sufficiently large n. It may not happen from the first term; but go past 10 or 15 terms, or some sufficiently large index, and it will surely happen, and you are confident about this for whatever ε anyone gives you. You can have that confidence only when a_n/b_n converges to 0. Indeed, in the particular case where the b_n are nonzero, at least for sufficiently large n, the definition is equivalent to saying that a_n/b_n converges to 0.

Recall the example from the last slide with a_n = 1/n² and b_n = 10/n: a_n goes to 0 much faster than b_n. If you take a_n/b_n you get 1/(10n). Now give me any ε > 0; I can always find a sufficiently large N such that 1/(10n) ≤ ε for all n ≥ N.
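The ε–N game in this definition can be played out explicitly for a_n = 1/n² and b_n = 10/n. A small sketch of my own; the function name `smallest_N` is an illustrative choice.

```python
# For a_n = 1/n**2 and b_n = 10/n the condition |a_n| <= eps*|b_n| reads
# 1/(10n) <= eps, so mathematically the smallest valid N is ceil(1/(10*eps)).
# We find it by direct search, mirroring the definition.
def smallest_N(eps):
    n = 1
    while (1.0 / n**2) > eps * (10.0 / n):   # inequality not yet satisfied
        n += 1
    return n   # from this index on, |a_n| <= eps*|b_n|, since 1/(10n) decreases

for eps in [0.1, 0.01, 0.001]:
    print(eps, smallest_N(eps))   # smaller eps -> larger N, but N always exists
```

No matter how small the given ε, a valid N exists — exactly the confidence the definition demands.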
Therefore, in this case a_n = o(b_n). So the notation little o means a_n must go to the limit strictly faster than b_n, whereas for big O it is enough that a_n goes either faster than or just as fast as b_n. There is a subtle difference between big O and little o.

This leads to a remark. For two sequences a_n and b_n, a_n = o(b_n) means a_n is definitely going faster than b_n; in particular it is going at least as fast as b_n, so a_n is also O(b_n). Little o demands a stronger condition than big O, which is why a_n = o(b_n) always implies a_n = O(b_n). But the converse is not true, and there are many examples. The first example from the previous slide is one: a_n = 1/n and b_n = 10/n go to 0 at equal speed. Note that in this example b_n always stays ahead of a_n, but it is not the position that matters; it is the speed at which they tend to 0. Here a_n/b_n = 1/10, a fixed number, which cannot go to 0. Therefore, in this example a_n is O(b_n) but definitely not o(b_n). The same idea appears in the example a_n = n and b_n = 2n + 3.

Now let me summarize what we learn from the big O and little o notation. Take two sequences, and in particular suppose both converge to 0.
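The last example (a_n = n, b_n = 2n + 3) can also be checked numerically; a quick illustrative sketch of my own:

```python
# a_n = n, b_n = 2n + 3: the ratio a_n/b_n climbs toward 1/2 but never
# reaches it, so it is bounded (a_n = O(b_n)) yet does not tend to 0
# (a_n is not o(b_n)) -- big O without little o.
ratios = [n / (2 * n + 3) for n in [1, 10, 100, 10000]]
print(ratios)   # approaches 0.5 from below
```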
Then a_n = O(b_n) means the sequence a_n tends to 0 at least as fast as the sequence b_n, and a_n = o(b_n) means a_n tends to 0 definitely faster than b_n. I hope you have understood the big O and little o notation and the subtle difference between them.

Once you understand this, you can extend the idea to continuous functions. Let us give the two definitions for functions. Consider a point x₀ in R, and let f and g be continuous functions defined in a small neighborhood of x₀. We say that f is big O of g as x tends to x₀ — and this is very important: all these O notations are defined as x tends to something, so you should always state what x is tending to when comparing one function with another. Notationally we write f(x) = O(g(x)) as x → x₀; that too should always be written. The definition is almost the same as for sequences: this holds when there exist a constant c and a real number δ > 0 such that |f(x)| ≤ c|g(x)| whenever |x − x₀| ≤ δ.

What does this mean? We do not want the condition to hold everywhere on the real line; it is enough if it holds in a small neighborhood of x₀, because we are only concerned with x tending to x₀. As you go closer and closer to x₀, the inequality should hold. If that is the case, we say f(x) = O(g(x)).

Let us similarly define little o. Again let x₀ be some real number, and let f and g be continuous functions defined in a small neighborhood of x₀. We say that f is little o of g as x tends to x₀ as follows.
Notationally we write f(x) = o(g(x)) as x → x₀, and this holds if for every ε > 0 there exists a real number δ > 0 such that |f(x)| ≤ ε|g(x)| whenever |x − x₀| ≤ δ. Again, the condition need only hold in a small neighborhood of x₀; that is what is important here. And again you can see the role of ε: it says that f(x)/g(x) should tend to 0 as x tends to x₀, provided g(x) is nonzero; that is what the inequality means.

The same remark we made for sequences holds here. For two continuous functions f and g defined in a small neighborhood of x₀, you can see from the definitions of big O and little o that f(x) = o(g(x)) always implies f(x) = O(g(x)). But again the converse is not true; you can build examples in the same way as with sequences, and I leave it to you to think about why the converse fails.

Let us summarize. Suppose you have two functions f and g, and both f(x) and g(x) tend to 0 as x tends to some point, say a. Then f(x) = O(g(x)) means f(x) tends to 0 at least as fast as g(x) as x → a; that is what is meant by big O. And f(x) = o(g(x)) means f should go to 0 definitely faster than g(x) as x tends to a (or x₀, as in the definitions above).

These notations are very important in numerical analysis. Why are we particularly interested in whether a sequence or a function tends to 0? Because we will be studying the errors of various methods, and we want those errors to go to 0.
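As a concrete instance of the function definitions (my own example, not one from the lecture), take f(x) = x² and g(x) = x as x → 0:

```python
# f(x) = x**2 and g(x) = x both tend to 0 as x -> 0, and the ratio
# f(x)/g(x) = x also tends to 0, so f(x) = o(g(x)) as x -> 0 (hence
# also O(g(x))). The converse fails: g(x)/f(x) = 1/x blows up near 0,
# so g is not O(f) as x -> 0.
def f(x):
    return x * x

def g(x):
    return x

for x in [0.1, 0.01, 0.001]:
    print(x, f(x) / g(x))   # the ratio shrinks along with x
```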
Now our interest is to see how fast this error goes to 0 as some parameter tends to something. This is what we will be interested in throughout the subject, so these notations will come up quite often, and it is much more convenient to use them than to spell out each time what they actually mean.

Let us take an example — a familiar one, because we discussed it in our previous lecture. Take the function f(x) = cos x and write the Taylor formula for cos x around the point a = 0: the Taylor polynomial plus the remainder term. If I use the Taylor polynomial instead of cos x, the error I commit in that representation is precisely the remainder term, or truncation error. This is what we saw in the last class. Unfortunately, it is an unknown expression because of the quantity ξ, which in general we do not know; we only know that ξ lies between x and 0.

Our interest is to understand how fast this error goes to 0. For that, let us regard the remainder term, the truncation error, as a function of x and call it r(x). In fact, ξ itself depends on x — as x changes, ξ also changes — so r is really a complicated function. Now, what happens to r(x) as x tends to 0? It is clear that r(x) also goes to 0: although the term cos ξ is unknown, we definitely know it is bounded, while the power of x in the remainder goes to 0. The first factor is bounded and the second goes to 0, so the whole expression goes to 0 as x goes to 0.
That is quite clear. Now our interest is how fast it goes to 0, and that is not a difficult question, because we understand how the power of x behaves. You can immediately say that r(x) goes to 0 as fast as x^(2(n+1)) tends to 0 as x tends to 0. Compare this with the definition: there we had two functions f and g and said f(x) = O(g(x)); here, in place of f we have r, and in place of g we take g(x) = x^(2(n+1)). So r(x) = O(x^(2(n+1))) as x → 0.

This is like comparing your error with something whose behavior you know very well: you can imagine how fast x^(2(n+1)) goes to 0, and you are saying that the truncation error in the Taylor series goes to 0 as fast as this function does as x → 0. Often we say that the function r is of order 2(n+1). Such language is used quite often in numerical analysis to describe how fast an error goes to 0: if you say your error goes to 0 with order 5, it means the error is O(x⁵). You can interpret things in terms of little o similarly.

Finally, let us quickly define the notion of order of convergence. You have a sequence a_n, and you know that it converges to some limit, say a. The order of convergence is again a way to understand how fast the sequence goes to a. We say the sequence converges to a at least linearly if we can find a constant c < 1 and a natural number N such that |a_{n+1} − a| ≤ c|a_n − a| for sufficiently large n, that is, for all n ≥ N.
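We can sanity-check this order claim numerically. Below is a sketch of my own: it builds the Taylor polynomial of cos with terms up to x^(2n) and watches how the error scales when x is halved. If the error is O(x^(2(n+1))), halving x should divide it by roughly 2^(2(n+1)).

```python
import math

def taylor_cos(x, n):
    """Taylor polynomial of cos x about 0, keeping terms up to x**(2n)."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n + 1))

n = 2                                        # keep 1 - x**2/2 + x**4/24
e1 = abs(math.cos(0.1) - taylor_cos(0.1, n))
e2 = abs(math.cos(0.05) - taylor_cos(0.05, n))
print(e1 / e2)   # close to 2**(2*(n+1)) = 64, consistent with order 2(n+1) = 6
```

The observed ratio of errors is close to 64 = 2⁶, matching the claimed order 2(n+1) = 6 for n = 2.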
So this is what is called at least linear convergence; it means the sequence converges with order at least 1. Similarly, the order of convergence of the sequence is at least superlinear if there exist a sequence ε_n converging to 0 and a natural number N such that |a_{n+1} − a| ≤ ε_n |a_n − a| for all n ≥ N, that is, from some sufficiently large index onward.

Similarly, you can define at least quadratic convergence, and this can be generalized further to any α. We say the order of convergence of the sequence is at least α — the "at least" is very important: it converges at least at this speed — if you can find a constant c and a natural number N such that |a_{n+1} − a| ≤ c|a_n − a|^α for all n ≥ N.

Often we will also use the limit formulation: lim_{n→∞} |a_{n+1} − a| / |a_n − a|^α = λ for some constant λ. When this limit exists, it gives an equivalent way to express the same condition. In some books λ is called the rate of convergence, and in some books even the order of convergence itself is called the rate of convergence; there is no standard usage of these words. We will come across such expressions when we study nonlinear equations, and with this I thank you for your attention.
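These definitions can be seen in action with two classic examples — a sketch of my own, not from the lecture: a_n = 1/2ⁿ converges to 0 linearly (the successive error ratio is the constant 1/2), while Newton's iteration for √2 converges quadratically (the ratio |a_{n+1} − a| / |a_n − a|² settles near a constant).

```python
import math

# Linear: a_n = 1/2**n -> 0 with |a_{n+1} - 0| / |a_n - 0| = 1/2 (c = 1/2 < 1).
lin = [1.0 / 2 ** n for n in range(10)]
lin_ratios = [lin[k + 1] / lin[k] for k in range(9)]
print(lin_ratios[0])                       # 0.5 at every step

# Quadratic: Newton's method for sqrt(2); the error is roughly squared each step.
x = 1.0
errs = []
for _ in range(5):
    x = 0.5 * (x + 2.0 / x)               # Newton step for x**2 - 2 = 0
    errs.append(abs(x - math.sqrt(2.0)))
quad_ratios = [errs[k + 1] / errs[k] ** 2 for k in range(3)]
print(quad_ratios)                         # settles near 1/(2*sqrt(2)) ~ 0.35
```

The first sequence satisfies the linear definition with c = 1/2; in the second, the squared-error ratio approaching a constant is exactly the α = 2 condition from the limit formulation above.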