Good morning everybody. An example in today's lecture is based on important feedback given by a participant from NIT Surat. Unfortunately I don't remember the name, but he had tried to check the sizes of different types such as integer, float and so on. He also tried to get the size of a pointer, and that is where he ran into an error. I will describe the nature of that error separately after we discuss pointers tomorrow, but it led me to think about the possibilities where you lose computational precision not merely because of the round-off errors that we saw yesterday, but also because there might be an overflow: you may not be able to accommodate the result values in the limited sizes of variables that you have, due to the limitations imposed by the implementation of the C compiler. For example, integers are normally supposed to be 4 bytes and long is supposed to be 8 bytes, but there are many implementations which have long also as only 4 bytes. While 2 to the power 31 minus 1 might appear to be a very large value, there are problems which require much larger values to be handled. I have tried to construct an example here to illustrate what happens in such a situation and how we deal with it. So in today's lecture we are going to look at the limitations on representation and how to handle them. An equally important point, which is often not made in introductory teaching, is the notion of computational time and the time complexity of an algorithm. Complexity is sadly considered to be a topic to be discussed in advanced courses, when we talk about algorithms and complex data structures and so on. However, we believe it is important that our students understand the value of writing efficient algorithms, and therefore they must have some understanding of the computational time involved.
It is possible to introduce the notion of algorithmic complexity to them in very simple terms, which will go a long way in encouraging our students to write efficient programs. We will of course discuss functions and arrays in today's sessions. I will utilize the third lecture hour, after the tea break, to illustrate one more example, and of course we will have the interaction exclusively in that session. So I begin with some comments on computational time. We know that our computers work very fast, but any computer will take a finite amount of time to carry out any computation. So if we are not careful about designing our algorithm and writing our program, then we may force our Dumbo computer to do many more computations than are necessary. The order of magnitude of time required to execute a program is called the time complexity of the algorithm, and we must therefore design the steps in our program such that the execution time is minimal. After all, modern digital computers are about doing computations speedily, and therefore we must be conscious of any badly designed program which will cause the execution time to increase. How do we find the execution time of a program? Look at the programs that we have written, or at any larger programs that you would have written earlier. One mundane way of carrying out this measurement is just to use a stopwatch, or even simply the wristwatch that each one of us has. So basically, when I type ./a.out and press return I look at the starting time, and when I get the command prompt back after completion of the execution of that program I look at the final time. The difference would tell me how much time my program has taken to execute. Of course this execution time will vary when I run the same program on different computers, because different computers will have different speeds. But the point to be made is: can I make the same program run faster on the same computer?
And it is in that context that we look at the time complexity of an algorithm, or the execution time of a program. Now this simplistic time measurement is not a proper reflection of the actual execution time of the program. Why is that? Because typically, most programs will have some input statements which require us to type in some values. Suppose we have to give a large number of values using the keyboard; then we may lose not just several seconds but maybe minutes in just typing these values. Obviously this time, which is attributable to my slowness, my human speed, cannot be considered part of the execution time of the program. In technical terms this time is called think time. Later on, when I comment on performance benchmarking and simulation, you will notice that think time is an important parameter to be considered in all such simulations. However, for the purposes of our understanding it is important to note that if we use a stopwatch, or any kind of real-life watch, then we must also monitor, either through a separate watch or using the same watch, the time that we spend in input-output operations. Typically the output operations too, if they are large, need to be excluded from the computational time that the algorithm takes. Because this need for measuring time has been recognized right from the early days of computing, most operating environments provide special commands to measure the program execution time. In Unix, or any variant of Unix or Linux such as the Ubuntu that we are using, we can simply say time followed by ./a.out. So ordinarily we would be giving this command to execute our program, or the name of an executable file if we have created one using the compiler. If we just give this command, it will produce the normal output and perform the normal computations that are contained in our program. If we say time followed by this command, then we get some special output.
So if we give this command in Ubuntu, it will first produce the normal output of the program; that means the program will be executed. But what the computing environment, the operating system, does is that whenever the program starts execution it sets up its internal stopwatch, and that stopwatch keeps ticking and measuring the time. In fact the operating environment does not have the equivalent of just a single stopwatch: it can actually measure separately the time spent by the instructions of my program, as well as the time spent by the operating system itself in the management of input-output, in the overhead of executing my program, and so on. Typically the time statistics are displayed in this format. It will show three entries: real, user and sys. The real time means the actual total clock time which the program took to execute, from the moment I press return until I get the next command prompt. So this is a sample entry: it says 0 minutes 7.181 seconds. The entry which is most important for our purposes is the entry against user. This user does not mean the time spent by the actual human user; user, in the context of the operating system, means the processing time taken on the computer by the user program, as opposed to the system program, which is the operating system itself. So to recapitulate: the real time is the total clock time which the program took to execute from start to finish, which includes time spent in giving input values or producing output values. The user and sys entries are two independent time counts, where user is the actual computing time taken by the computer to execute our program, and sys is the overhead time, as I explained, taken by the operating system for controlling the execution of our program. In a nutshell, then, it is the user time which represents our algorithmic time complexity.
Consequently, whenever we wish to measure the performance of a program, we execute it under the time command and we get the time statistics in the format that I just explained. We can then execute different versions of the program to see if one version performs faster. To illustrate this point, and also the point made about the limitation of representation in terms of the precision that is available on the computer, I have chosen a simple problem for which we will write not just one program but several versions, and examine the execution of those versions using the time command. The example is about estimating the value of pi. It is a computationally meaningful problem, and the way the estimation is done in this example is to use the area of a circle. I have drawn a circle here on the right-hand side: if the radius of this circle is r, the area of the circle is clearly pi r square. However, if the radius of this circle is one unit, then the area is simply pi, because r square will be equal to 1. Now, in order to estimate the value of pi, I should somehow be able to estimate the area of this circle. What I do is first consider a quarter of the circle, let us say the top right-hand quarter. The area of this quarter circle will be pi by 4. So I can estimate the value of pi by estimating the area of this quarter circle. Observe that this quarter circle is contained in a square formed on the two perpendicular radii of this quadrant. The area of this square is very clearly known: it is one unit, because it is one into one. I have to estimate the area of the portion within this square which falls inside the circle. How do I do that? One mechanism is to use discretization. For example, if you look at the x axis and y axis, in actual practice the x axis is made up of several points, and so is the y axis. So imagine that I have n points on the x axis and n points on the y axis.
Then the number of points covering this square would be n into n, and if for every point in this area I can determine which points fall within the circle and which points do not, I should be able to get a count of the points which fall within the circle. The ratio of that count to n square, multiplied by 4, should effectively give me an estimate of the value of pi. This is explained in the next slide through an additional component of the diagram, where I have shown that if we discretize the space in this quadrant, let us say with n points in the x direction and n points in the y direction, then if n is large enough, the count of points, as I mentioned, can represent the area of any region. In fact, compare this with the example that we discussed earlier about digital images. We said that an image can be seen as consisting of various points called picture elements, or pixels, and you will have an n by n image where you have so many discrete points. If the points are close enough, that is, if the density of points is sufficient, we can consider all these adjacent points as actually making up the area. So the point is that if we consider a large enough number of points, this argument will hold. How do we proceed with this kind of construction? Well, we look at this square again. In this square every point has an i coordinate and a j coordinate: the i coordinate is the x axis coordinate, the j coordinate is the y axis coordinate. Now for every point (i, j) we determine whether the point is within the circle or outside the circle; obviously i is limited to values between 1 and n, and j is also limited to values between 1 and n. Actually (0, 0) is also a point; I just ignore it for the time being. Then I count the points which lie inside the circle: for example, this point, this point and this point do not lie inside the circle, but these points do.
I have to figure out a way to determine whether a point lies inside the circle or not, but if I do, and count the number of such points, then by simply calculating n square, taking the ratio of the count to n square and multiplying it by 4, I will get the estimate of pi. Where does this 4 come from? It comes from the fact that this quadrant constitutes one fourth of the area of the full circle, which is pi, as we had determined earlier. So 4 times this area will be the value of pi. Any point (i, j) in this space is within the circle only if i square plus j square is less than n into n. How do we get this? It is actually a simple exercise in geometry, the Pythagorean theorem. If I draw any radius from the origin extending to the edge of the circle, here, here, here, anywhere, that line's length will always be one unit, because that is the radius. Now the square of that line's length is equal to the square of this height plus the square of this base: the base is represented by i, the height by j. Consequently i square plus j square will be equal to the square of the length of this line. This line may stop at the circle or it may extend beyond, depending upon which point I am considering. Now obviously, if this length squared is less than n square, which is the radius in our discretized units, then the point is inside the circle; otherwise it is outside. As I mentioned earlier, we should start with count equal to 0, and then for all values of i ranging from 1 to n, and for each i, for all values of j ranging from 1 to n, we can determine whether the point is within the circle by checking this condition. If the condition is met we simply increment the count by 1; otherwise we consider the point to be outside the circle and ignore it. At the end of this simple iterative process we will have determined the value of pi, which will be given by 4 times the count divided by n into n.
So you can see that the estimation of pi, which could otherwise appear to be a fairly complex problem, can be done rather simply by using iterative computations. Here is a program to compute the value of pi. I am defining the standard macros, getnum and putnum for integers, for floating point values, for strings and so on; this is standard stuff, and then comes the program. So let's see what this program does. It defines pi to be a floating point number; of course pi is going to be a fractional number, so we expect pi to be floating point. We define i, j, n and count as integers, because these are all integer values. So if you look at this program, it defines all the variables i, j, n, count and so on, and the floating point variable pi. It starts with count equal to 0, and it sets up a nested iteration: for i equal to 1 to n in steps of 1, and for each value of i, for j equal to 1 to n in steps of 1. So this will cover all n square points. Now, if i star i plus j star j is less than n star n: this is the condition which tells me whether the point (i, j) is within the circle or outside it. If it is within the circle, I increment count by 1; otherwise I simply ignore it. Observe how simple and cute the algorithm is. When I come out at the end of these nested loops, I have got a value of count which will be equal to the number of points inside the circle, for the n that I have decided to use. At the end, I simply calculate the value of pi as 4 times the count divided by n into n, and output that value. Now we would like to do two things. First, we would like to execute this algorithm to ensure that we get the correct value of pi. Second, we would like to examine the execution time that this program takes when we execute it for different values of n.
Observe that right now we are not talking about making the program more efficient; we are talking about just determining the execution time for different values of n. Obviously we will relate this notion of n to the time complexity of the algorithm later on, as most of you would have automatically guessed. So here is the compilation. I use the cc compiler; I could also use gcc. Estimate value of pi.c is the program, and after it is compiled I execute it using ./a.out, but under the time command. It asks me for a value of n; I give a value of 1000. This value is used by the program to determine the value of pi, which happens to be 3.137548 on the particular computer on which I executed this program. Because I have used the time command, I also get the execution time statistics. The real time is 4.130 seconds. The user time, which, by the way, is the time that my program took to execute, is 0.016 seconds, and the system time was 0.004 seconds. Notice that the real time is 4 seconds, most of which was used by me to type in this value: when it said n question mark, I was required to type in 1000 and press return, and that is what took 4 seconds; otherwise the execution time is very small. Obviously this measurement itself will not be very precise, because if the computer gives me the time with only 3 decimal digits and the first digit is 0, then I am not sure of the accuracy. However, by increasing n I can expect that the execution time will increase, and I will have perhaps a better idea of the execution time. So the same algorithm is executed again and again, once for n equal to 10000 and next for n equal to 20000. From now on we will concentrate only on the user time because, as I said, that is the time which really represents the execution time of my program. Here I observe that it is 0.996 seconds, or almost 1 second, for n equal to 10000.
When I double the value, n is 20000, the user time becomes 3.916 seconds, or almost 4 seconds. So if I approximate this to be 1 second and this to be 4 seconds, then I notice that for n equal to 20000 the algorithm takes 4 times as much time as it took for 10000. So when n is doubled, the execution time has quadrupled. I can therefore guess that if my algorithm runs on a problem of size n, that is, it handles n points in each direction, then the time complexity increases in proportion to n square. This is also obvious from the fact that inside my program I have a nested loop and each loop runs n times, so the inner body runs n square times. Consequently, as n increases, the time taken to execute the program is likely to be proportional to n square, a fact which is borne out by the observation of the time command here. Then an interesting thing happens. First we notice that the time taken for n equal to 10000 is this much, and for n equal to 20000 it is 4 times that much; it therefore appears to increase in proportion to n square. Now we execute this program for a still larger value of n. If we execute it for 50000, we get these time statistics: the user time is 21.4 seconds. So this has increased very rapidly, and indeed roughly in proportion to n square: n equal to 10000 executed in 0.996 seconds, whereas this has taken 21.4 seconds. This value of n is 5 times 10000, and therefore we expect the algorithm to take 25 times as long, since we expect it to take time proportional to n square; 21.4 seconds is reasonably close to 25 times 0.996 seconds. Our suspicion is thus corroborated by the behavior of the program. However, there is an extremely funny thing that we notice here: the value of pi has suddenly become minus 0.61674. Let us go back a couple of slides to check what values of pi were predicted, or estimated, by executing our program for n equal to 1000, 10000 and 20000.
So here is the first execution, where n was 1000. The value of pi was estimated to be 3.137548: not accurate, but fairly close to pi, given that we had considered only 1000 points in each direction. When we considered 10000 points, the value became 3.1411191, closer to pi. When we executed the algorithm with n equal to 20000, it became 3.141392, closer still. We would naturally have expected that when we give n equal to 50000 we should get an even more accurate estimate of pi. Instead we are getting a negative value here. So why do we get a negative value for pi? Let us see the program again. Have I made any mistake in this program? Well, pi is going to be a fractional number, so I have declared it as float. i and j are counting variables; since n itself is fairly small, that is, 1000, 10000, 50000, I have declared it as int, and I have declared i and j as int. I have also declared count as int. So where is the mistake? I do not see any problem here. As a matter of fact the algorithm is correct, but the problem is occurring because of either computational inaccuracies or representation inadequacies. Let me put this question to anyone who would care to answer first: can anybody pinpoint what could be the possible sources of error? Why is it that when I give n equal to 50000 I suddenly get a negative value for pi rather than the right value? If any centre is answering, I would like to go over to that centre. Do we have any interrupt from a centre? Good morning sir, Riaz from COEP. Sir, I think it is due to the overflow. int here is 4 bytes, and it is 50000 into 50000; that value overflows, and due to that the trouble comes. Thank you, a very good point. So let us look at this program once again. What he is correctly pointing out is that i, j and n may have values which can easily be represented as integer numbers, assuming an integer to be 4 bytes.
However, when we look at n square, we may not be able to represent it appropriately as a single integer value. He is right: when you multiply 50000 by 50000, the result suddenly goes outside the representation range of int. What could we do? Well, we could perhaps use floating point values, or we could use long. If we redefine n and count as long, long would ordinarily be expected to mean 8 bytes. If we indeed have an 8-byte integer, then we should have no problem, because n square, the values of count, and the values of i square and j square would all be within the representation limits of 8 bytes. Unfortunately, on most of the computers that we use in our labs for our students, such as the desktops that you would have in your labs, int and long both have a size of only 4 bytes. This is very unfortunate. It has happened because the original int on small computers, as I mentioned, was 2 bytes and long was 4 bytes. With 32-bit computing, on most machines int has become 4 bytes, but long has remained 4 bytes because of the implementation choices of the C compilers. On larger machines long is indeed 8 bytes. In fact, on machines which are used for scientific computations it is not uncommon to find that long is even larger than 8 bytes. However, given the situation, let us say I modify my program. The hash defines, let us not bother about them; they are just there to handle input and output properly. But look at the definitions: I am now defining pi to be float, i, j and n to be int, and count to be long. As I mentioned, when I executed this program on my computer it still did not give any different result, and that is because long turned out to be only 4 bytes; therefore the value of count, which was actually increasing beyond the capacity of 4 bytes, was not accommodated. Similarly the values of n square, i square and j square could not be accommodated as they became larger. Suppose I define all of these quantities as float: what would happen?
Here is the execution result for n equal to 50000. This example is illustrative because, if you look at count, n and so on, what I am doing here is, by the way, a good example of how to teach students to debug a program. They have not got the correct result; they got a negative value for pi, so they want to find out what is wrong. They may not be able to guess the exact defect, as was pointed out by our friend from COEP Pune, but they would like to determine it by putting in a few more output statements. This is what I have done. I am not only printing out the n which I read, suspecting that n itself may not have been properly represented inside; I am also printing out n square, to find out whether n square is representable; and finally, when I calculate the count, I print the count as well, suspecting that count itself may not be represented properly. So observe that in this version of the program I am printing n and n square at the beginning, and then I am printing count followed by pi. Now when I execute this program for 50000, I get n as 50000 but n square as minus some rubbish number; obviously this is not n square, and this is happening because of the overflow, as was correctly pointed out. Observe that this value of count appears to be positive, 276862174, but is this the correct value of count? How do we determine whether it is correct or not? Because it is a large positive value, we may think it is correct. However, we can do some simple mental arithmetic. We know that 50000 into 50000 is 2500000000; we can actually write down all those zeros. That is the value of n square. Since the value of pi is given by dividing count by n square and then multiplying by 4, this count multiplied by 4 should be greater than 50000 multiplied by 50000. It is not, and therefore we should suspect that there is something wrong even with this value.
Some of you will recognize this value as being close to the representation limit of an int, or even a long, of 4 bytes. So what is happening is that as you keep adding 1 to the count, after some time nothing gets added. I therefore modify the program further and declare everything to be double: I say double pi, i, j, n and count, which I initialize to 0. Why double, and why not float? After all, let us go back to the previous slide: whatever the large values of n square and count, the floating point representation is capable of representing that range; in fact even a float can represent numbers up to about 10 to the power 38. So where is the worry? The worry is not in the range of numbers that can be represented by float, but in the precision that I get. So let us examine the variables and the various terms involved here. I have things like n square, i square and j square; these squares will be computed within range even if the quantities are defined as float, although the precision will be limited. But consider what happens to the count, which is being incremented by 1. After some time, if count is declared as float, it will reach its limit of, let us say, 7 digits of precision in the mantissa of that floating point representation. Beyond that, if I keep adding 1 to it, it will make no difference. Recall what happened to our Avogadro number when we tried to add a small value to it: Avogadro remained Avogadro all across. So any count beyond a certain point, if simply declared as float, will not be able to represent the correct count. So we do a smart trick here: we use the double precision floating point representation. Observe that this usage is very artificial. Why? Because none of the quantities concerned are truly fractional numbers; they are all integers, and the only place we get a fraction is in the final computation of pi. But the use of double for count ensures that count gets represented by an 8-byte floating point number, and the precision of the mantissa part of that
number will be adequate to cover the correct range of values for count. Therefore count will never saturate or become negative, and I will get a more accurate value. Here are the execution statistics for this program version. For n equal to 10000 the user time taken is 1.06 seconds; for n equal to 20000 I get 3.141392; and for 50000, well, thankfully I now get 3.141512, which is more accurate than what I had earlier. Observe that in this case n square is being printed correctly and count is being printed accurately; there is no overflow, because of the larger precision that I have in the double precision representation of floating point numbers. Observe also that the user time is 24.902 seconds, roughly similar to the execution time that we had observed earlier. So there is no difference so far as execution time is concerned, but we have finally ensured that we get an accurate estimate of pi. Now we concentrate on further reduction of the execution time. We observe that the major computation is happening during the evaluation of the if condition. What is the if condition? If i square plus j square is less than n square, then count is incremented by 1. We notice that this computation is done within a nested iteration, each level of which runs n times, so this comparison, and therefore this computation, is done n square times. We further notice that in each iteration the value of i and the value of j will be different, because that is what is being varied by the control structure of the for command. However, the value of n is fixed throughout. So why are we calculating n square again and again, n square times? Can we not compute n square once, outside both the iterations, keep that value assigned to a simple variable, and then use that variable for the comparison within the iteration? This should substantially reduce the execution time. In fact, we notice that there are three multiplications which are
being made: i into i, j into j and n into n. Out of these three, one will be eliminated, because it would have been done only once, earlier, and the final value would be reused; therefore I should expect roughly a one-third reduction in the computation time. Consequently I change my program accordingly; I have written only a part of the program here. I define a new variable n2, obviously as double, in consequence of my earlier observations, and I say n2 equal to n into n at the beginning of my program. Now I estimate pi using the same algorithm: count is zero, I set up a loop for i to vary from 1 to n, I set up a nested loop for j to vary from 1 to n, but inside the nested block I make a simpler comparison: if i square plus j square is less than n2, I increment the count. Observe that this particular multiplication is eliminated from inside the loop body, which executes n square times. So, as I said, should I expect a one-third reduction in computation time? Well, when I run this modified program, version 2, I actually do not get any improvement in the time. In fact I think that slide is missing, but let me go back two slides: if the earlier version was executing in 24.9 seconds, then I would expect this version to execute in at least 8 seconds less, because that is one third of the total time spent in computation. But when I execute that algorithm, it takes about the same time, 24 or 23 seconds or something like that, which means there has been no improvement. This is where I start wondering: am I missing something? I have removed the n square computation from within the loop, I have calculated that value and put it in a variable, and I am using that variable's value here. So why should this program version take roughly the same time as my earlier version? Here is a quiz on that. The execution time of the two versions is not appreciably different because: (a) a multiplication does not take a very long time; it is the division and addition operations which are time consuming; (b)
since i and j are varying, i star i and j star j each takes much longer than n star n, so removing n star n does not make much difference; (c) our Dumbo machine somehow figures out that n is not changing, so it calculates n star n only once on its own and uses that value repeatedly; and (d) I do not know, and also I cannot guess. So let us have answers to this quiz; think for a minute. I will repeat the question: the execution time of the two versions is not appreciably different because of which one of the following reasons? The first reason says that multiplication is not costly. The second one says that since i and j are varying, the multiplication of varying numbers takes longer. The third one says that what we were trying to do, the machine had already done even with the earlier version. And the fourth one, of course, admits that I do not know and I cannot guess. I have three answers, from Sinhgad Technology, ASC Amritapuri and NIT Surathkal; we will quickly go over to each. Oh, there are many more answers coming up; we will go by first come first served. So we will go to Sinhgad, then Amritapuri, then NIT Surathkal and Nirma Ahmedabad. Oh, everybody is giving C as the answer. Okay, now here is a question. I should have suspected that, because the participants in this workshop are teachers, and therefore the correct answer C is given. By the way, I got the answer, so you need not speak, but I will ask one question which you must answer truthfully, to Sinhgad Technology Pune, whom I am seeing here, and that question is: if you pose this quiz to your students, who are attending the computer programming course for the first time, do you expect them to give the correct answer C? Anybody from the Sinhgad institute? Sir, I don't think the students will be able to tell the right answer at this juncture, but once they get to know the concept they will be able to tell the right answer. Thank you, Sonal; in fact that is the right observation. We are not going anywhere, because
everybody is saying the same thing, except PSG and HGS ITS, where some people think B is the right answer. In fact, when I conducted this quiz in my class, most people said D, "I do not know"; that was the truthful answer. Some people actually said B, suspecting that if some variables' values are changing inside a loop, the operations on them will take a longer time. For those individuals from these two centres who think B is the right answer, I must take some time to explain why it is wrong.

Please note what happens when an expression is evaluated. Let us go back to the previous slide of the program. Consider i star i and j star j. Here is the point: when this expression is evaluated, at that point in time there is a value in location i and there is a value in location j. It does not matter whether that value has changed from the previous iteration or stayed the same. As far as Dumbo is concerned, it will take exactly the same amount of time to take the value from location i and multiply it by itself, take the value from location j and multiply it by itself, and add the two values. So it does not matter whether the values of i and j have changed recently because of the double iteration or were fixed earlier; the time taken to execute a computational expression is exactly the same, because all that Dumbo, our C compiler or C computer, is concerned with is the current value of the variable in a location. It does not remember the past state; it does not remember whether the value changed now or recently. This is an important point, and all of us should remember to emphasize it to our students. Like some of you, many of my students guessed that this might be the reason, but that is black magic; the computer follows only the rules of expression evaluation, so there cannot be any such change. Okay, so B is obviously not the correct answer, and C in fact is the right answer.

Here is, however, an interesting technique to improve the performance. Suppose instead of using the entire
square region, I use only a triangular region. Notice that the picture is symmetric, so the number of points in the upper triangle is exactly the same as the number of points in the lower triangle. If I do that, then the counting of points is suddenly halved: I vary i from 1 to n, but I do not vary j from 1 to n; I vary it from i to n. If I do that, observe that i will take values 1, 2, 3, 4, 5, 6, up to n, but j will take values only from i to n, and therefore I will be limited to considering points in this triangular portion. This should speed up my algorithm by a factor of 2, because the area of the encompassing region will now be n square by 2. Consequently, the value of pi that I estimate will be given by 8 times the count divided by n square. This is a simple extension, and I can write a modified version 3. The execution timings with 20,000 and 50,000 turn out to be 1.912 seconds and 11.633 seconds; compare them against the 4-second and 24-second timings earlier, and you observe that they have essentially halved.

In conclusion of this discussion on execution timings, I will first mention this: if the problem size is n, then the execution time of a program is often a function of n, and the time is therefore typically represented by some expression like k1 times n square plus k2, where k1 and k2 are constants. It is useful to tell our students, right in the first-year introductory course, about this notion of representing time complexity, typically by a polynomial. Note the word "size", which is being used here for the first time: in general, in any computational problem there is some large quantity which determines the computational effort. Clearly, in this case of estimating the value of pi, n, which fixes the total number of points we consider, is the representative of the size. The factors k1 and k2 will depend upon the actual computational time required for addition, subtraction, comparison, loop control and so on, but this will essentially be the expression for
this problem. There could be a problem where you have three nested iterations, each going from 1 to n, in which case the time taken will be something like k1 times n cube plus k2 times n square plus k3. In general, it is not uncommon to observe a polynomial expression for the execution time.

We may consider k1 and k2 to be important. For example, take the third version of the program that we wrote: it reduced the number of points to be considered by half, and consequently the computational effort was reduced by half. What it means, effectively, is that the factor k1 became half of what it was earlier. Now, that does improve the execution time substantially, but not to the extent it would have if I were able to remove the dependency of time on n square and make the time depend only on n. Actually, the formula I wrote is not complete; the formula should be time equal to k1 times n square plus k2 times n plus k3. That is the proper polynomial representation. The point to be noted here is that as n becomes larger, or tends to infinity, the individual factors such as k1 and k2, and the lower-order terms in powers of n, such as n raised to 1 or n raised to 0, do not contribute much; in the limit, the execution time is proportional to n square, and that is why such algorithms are said to have a time complexity of order n square. At this stage we will only note that the time complexity of algorithms is an important area of study, and our algorithms should be as optimal as possible.

With this we conclude the first lecture.