Now let us discuss the minimization of errors. To consider how errors can be minimized, we should first know how they are estimated; here is one example, shown on the slide. Suppose our goal was to measure the resistance of a particular resistor. We have taken 8 readings, that is, 8 different trials of the experiment were done, and the measurements are reported: 4.615 ohms, 4.638 ohms, 4.597 ohms, and so on. Incidentally, notice how the table is written: the first row reads R/ohms, which saves you from writing "ohms" against each number in the table. Another way this could have been done is to write R with the symbol for ohms in brackets against it. Now you take all these readings, and when you report the result you report the average value; the average of all these readings is 4.625 ohms. Some other parameters are associated with this set of readings. One is the standard deviation, s = 0.017 ohm. You know how the standard deviation is calculated: you subtract each measurement from the mean, square the differences, sum them, divide by the total number of readings, and take the square root.
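The recipe just described (mean, divide-by-n standard deviation, and range) can be sketched in a few lines. Only some of the 8 readings are quoted in the lecture, so the list below is an illustrative stand-in: the two unquoted values are invented, chosen only so that the summary numbers come out close to those on the slide.

```python
import math

# Illustrative stand-in for the 8 readings: 4.615, 4.638, 4.597,
# 4.613, 4.623 and the extreme 4.659 are quoted in the lecture; the
# remaining two values (4.634, 4.620) are invented for this sketch.
readings = [4.615, 4.638, 4.597, 4.634, 4.613, 4.623, 4.659, 4.620]

n = len(readings)
mean = sum(readings) / n                       # about 4.625 ohms

# Standard deviation as described: subtract the mean, square, sum,
# divide by the number of readings, take the square root.
s = math.sqrt(sum((r - mean) ** 2 for r in readings) / n)

# Range: difference between the maximum and minimum reading.
spread = max(readings) - min(readings)         # 0.062 ohms
```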
Another parameter associated with this set of readings is the range, the difference between the maximum and the minimum value; in this case it is 4.659 − 4.597 = 0.062 ohms. Now let us see how the standard deviation and the range can be used to specify the error in the average value. When you report your result after these 8 trials, you will report it as 4.625 ohms plus or minus an error. What we are going to discuss is what that error is. The plus-or-minus quantity depends on the standard deviation, but it is not exactly equal to the standard deviation; please understand this. This is the point we are going to discuss, and it will help you understand how averaging reduces the error: if you average more readings, the error is reduced. Otherwise there would be no point in taking more readings and averaging, and this is a crucial thing that students miss.

I am not going to discuss the theory behind this; I am going to give you the result. The theory is related to the distribution of the errors. For example, on this slide you see a histogram showing the number of readings in each interval of width Δr = 0.02 ohm: on the x axis is the reading r, and on the y axis is the number of readings lying within an interval of 0.02 ohm. There is nothing special about choosing Δr = 0.02 ohm; you can choose a smaller interval if you want, though if you choose too large an interval you will obviously not get a histogram showing any variation. Suppose you start with the interval from 4.6 − 0.01 to 4.6 + 0.01, that is, 4.59 to 4.61, and count how many readings lie in that range; that count is the height of the bar. Taking 4.59 to 4.61, only 4.597 falls inside, so the count would be 1, yet the histogram shows 2. But 4.613 lies just above 4.61, so the intervals probably start at 4.595 instead, in which case this one runs to 4.615, the reading 4.613 is included, and the count is 2; it depends on how the histogram is drawn. The next interval, 4.615 to 4.635, then contains 4 readings (4.615, 4.623, and two others), which matches the bar shown, so that appears to be how this histogram is organized.

Now, the theory assumes an ideal condition: if you took a very large number of such readings and drew the histogram with ever smaller intervals Δr, you would get a shape very close to a Gaussian; that is how your readings would be distributed. It is under this assumption that the theory of errors is developed. We will not derive it; these are just the elements of the theory. What the theory says is that the error in the mean of observations is less than the error in a single observation, and in the limiting case of a very large number of readings the relation between the two is σ_m = σ/√n, where σ (the symbol used to indicate the error) is the error in a single observation and σ_m is the error in the mean of n readings.

What are the estimates for σ? One estimate is σ = s √(n/(n−1)), where n is the number of readings. Note that σ is the error in a single observation: you are estimating the error in a single observation from the standard deviation calculated over a number of readings, which is what is interesting here. Because calculating the standard deviation, and hence σ from this formula, is a little involved, one can use an approximation, which also has a theory behind it: the error in a single observation is approximately σ ≈ R/√n, where R is the range, the difference between the maximum and the minimum reading. This formula is fairly accurate for n between 2 and 12. In fact, since the error in the mean goes down as 1/√n and not 1/n, there is not much point in increasing the number of readings too far; in practice about 9 to 12 readings are sufficient if you want to reduce the random error by averaging.

Let us do some calculations and see how these formulas apply in our case. We have 8 readings with standard deviation 0.017 ohm, so the estimated error in a single observation is σ = s √(8/7) = 0.019 ohm. This means, for example, that if you wanted to report the single reading 4.615 ohms from our table together with its error, it should be shown as 4.615 ± 0.019 ohm. After averaging, however, we obtain the mean value 4.625 ohms, and it is important to note that the error in this mean will not be 0.019 ohm; that is the crucial point, and it is the advantage you gain by taking the average. The error in the mean is σ_m = σ/√n = 0.019/√8 ≈ 0.007 ohm. So if you had taken only one reading, you would have reported 4.615 ± 0.019 ohms; after averaging 8 readings, the result is 4.625 ± 0.007 ohms. The second uncertainty is smaller than the first: that is the gain from averaging, and this is how averaging reduces the random error.

Instead of calculating σ from the standard deviation, which is an involved calculation, one could have made a rougher estimate using the range. In our readings the range was 0.062 ohms, so σ ≈ R/√n = 0.062/√8 ≈ 0.022 ohm. Instead of 0.019 you would have got 0.022; that is the kind of deviation that occurs when you use the simpler, more approximate method. You then carry this 0.022 through in place of 0.019, estimate σ_m, and report accordingly. This is how random errors are minimized by averaging a number of readings.

Finally, your experiment may involve calculating a quantity that is a function of several variables, each measured independently. For example, the parameter z you want to determine is a function of a, b, c, and so on. Each of a, b, and c is measured in several trials; you average and obtain the error in each. How will you obtain the error in z from all these errors?
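The numerical steps just described (σ from s, the range shortcut, and σ_m = σ/√n) can be checked with a short sketch using the summary values quoted in the lecture:

```python
import math

# Summary values quoted in the lecture for the 8 resistance readings.
n = 8          # number of readings
s = 0.017      # standard deviation, ohms
R = 0.062      # range = 4.659 - 4.597, ohms

# Error in a single observation, estimated from the standard deviation.
sigma = s * math.sqrt(n / (n - 1))       # ~0.0182 ohm; the lecture's
                                         # 0.019 comes from an unrounded s
# The rough range-based estimate (reasonable for n between 2 and 12).
sigma_range = R / math.sqrt(n)           # ~0.022 ohm

# Error in the mean of the n readings: smaller by a factor sqrt(n).
sigma_m = sigma / math.sqrt(n)           # ~0.006-0.007 ohm
```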
Here is the formula for the error in a function of several variables. If z = f(a, b, c, …), then Δz = (∂z/∂a)₀ Δa + (∂z/∂b)₀ Δb + …, where the subscript 0 indicates evaluation at the mean point: each partial derivative is evaluated with a, b, c, … replaced by their mean values. So a is replaced by the average of a, b by the average of b, and so on.

Here are some examples, which you can take as an assignment because I am not going to work out all the cases. Some straightforward ones we can see immediately. If z = a + b, then ∂z/∂a = 1 and ∂z/∂b = 1, so the formula gives simply Δz = Δa + Δb. If z = a·b, then ∂z/∂a = b and ∂z/∂b = a, so Δz = b Δa + a Δb. You will often find this written in terms of relative errors: dividing by ab, which is nothing but z, gives Δz/z = Δa/a + Δb/b. So for a product, the relative (or percentage) error in z is the sum of the relative errors in each factor, whereas for a sum z = a + b the absolute error Δz is the sum of the absolute errors. That is the difference between a·b and a + b.
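The two worked cases (z = a + b and z = a·b) can be verified directly against the general rule Δz = (∂z/∂a)₀ Δa + (∂z/∂b)₀ Δb; the numbers below are made up for illustration:

```python
# Made-up values and errors for two measured quantities.
a, da = 2.0, 0.1
b, db = 3.0, 0.2

# z = a + b: both partial derivatives are 1, so absolute errors add.
dz_sum = da + db                          # 0.3

# z = a * b: the partials are b and a, so dz = b*da + a*db.
dz_prod = b * da + a * db                 # 0.7

# Dividing by z = a*b shows that relative errors add for a product.
rel = da / a + db / b                     # equals dz_prod / (a * b)
```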
One can work out a/b and the more complicated functions shown on this slide similarly.

We have now discussed the reduction of random errors by repeating the readings and averaging. How do you reduce systematic errors? This is a much more involved topic, because it all depends on what kind of error is present. There is a question here.

Question: Regarding the number of readings, we said that n can lie between 2 and 12. But suppose the resulting percentage error comes out to, say, 0.01%, and even 0.01% can amount to millions of rupees or dollars, as in finance or the stock market. There we cannot rely on a small number of readings.

Let me clarify one thing. We have not said that n should be between 2 and 12. All we have said is that if you use the formula σ = R/√n, which is a simple way of estimating σ, the error in a single observation, then that formula is reasonably accurate for n between 2 and 12. That is all it means. It is not the reason we suggested restricting the number of readings to about 9 or 12; that suggestion follows from the other formula, σ_m = σ/√n, and has nothing to do with the simplified formula for estimating σ. Look at σ/√n: suppose I take 9 readings; my error in the mean is σ/3. Even if I take 16 readings it is only σ/4, so going from 9 to 16 readings reduces the error merely from σ/3 to σ/4. With 25 readings it becomes σ/5, with 36 readings σ/6.
So you see, as you increase the number of readings you initially get a rapid improvement, but thereafter the improvement tapers off. That is all we are saying; it is from this point of view that we suggested taking about 9 to 12 readings.

Now let us come back to the reduction of systematic error. Here the methods used are not so general. The method for reducing random error is completely general: in all cases the random error can be reduced by simply repeating the measurement, taking the average, and using the formula above to report the error. But the reduction of systematic error is specific to the experiment. Still, some guidelines can be given by considering examples, so we will look at two methods which are important and which work in several instances.

The first method is the choice of the sequence of measurements. When you are taking a number of readings, you should consider how your result will be affected by the sequence in which you take them. Let us consider an example: measuring terminal velocity as a function of diameter. We want to measure the terminal velocity of a ball as a function of its diameter, so suppose I choose four balls of four different diameters. I also want to reduce the random error, which means that for, say, the ball with the biggest diameter, ball A, I choose to make nine trials. I drop ball A, use my stopwatch and the measured distance, take distance divided by time as the terminal velocity, and record one reading. I drop the same ball again and take another reading, and in this way I take nine readings. Again, there is nothing special about the number nine; you can take twelve readings if you want. So I take nine readings and then average.
In this way I obtain the terminal velocity for ball A with its particular diameter. I then do the same with balls B, C, and D, make a table of terminal velocity against ball, and look for any change in terminal velocity as a function of diameter; that is the experiment. How many readings must I take with this approach? Nine trials for each of four diameters, so 9 × 4 = 36 readings, which may take an hour or more.

During that time the temperature of the viscous medium can increase, for several reasons. Suppose I start the experiment at about ten thirty in the morning: that is when the temperature of the environment is rising fast, and imagine a situation where I have no air conditioning. The glycerine heats up because the environment is heating up, and also because of friction. As we pointed out earlier, this variation in temperature is one source of systematic error in this particular experiment. How do I remove the effect of the temperature variation? One way would be to use a temperature-controlled bath, but there is an inexpensive alternative, and this is where the sequence of measurements comes in. Our natural impulse would be to take the measurements as follows: first take all nine readings for ball A, then all nine for ball B, then C, and then D.
What happens then is that the ball with the largest diameter is measured at the lowest temperature and the ball with the smallest diameter at the highest temperature: as you change the diameter in one direction, the temperature changes along with it in a systematic way. Instead, suppose I take the readings in the following sequence. I do the first trial with A, then the first trial with B, then C, then D. Then I do the second trial in reverse order: D first, then C, then B, then A. Then I reverse again, doing the third trial with A, then B, then C, then D, and so on; finally I average the nine readings obtained for each ball in this sequence. One can show that, approximately, the average of the readings will then correspond to the same temperature for all the balls. Why? Suppose the temperature is T1 when A is measured in the first pass and T2 when D is measured at the end of it. Since the second pass follows up with D immediately, both readings for D are likely to be at about T2, while A's second reading, at the end of that pass, is at some temperature T3. The average of T1 and T3 will be very close to T2, and in this way the effect of the temperature drift cancels out. So the choice of the sequence in which you take the readings is very important in all measurements: by a proper choice of sequence you can reduce the systematic error.

Let us take one more method of reducing systematic error, with another example. Suppose you are measuring the thermal conductivity of a sample. How do you measure thermal conductivity? You take the sample, maintain one end at one temperature and the other end at another temperature, and measure the temperatures at the two ends using two thermometers.
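The alternating sequence described above can be sketched as follows. Modelling the temperature drift as linear in time (an assumption made purely for illustration) shows that each ball then sees nearly the same average temperature:

```python
# Serpentine measurement order: each pass over the four balls reverses
# the previous one, giving A B C D, then D C B A, then A B C D, ...
balls = ["A", "B", "C", "D"]
passes = 9                                # readings per ball

sequence = []
for p in range(passes):
    order = balls if p % 2 == 0 else balls[::-1]
    sequence.extend(order)

# Assume, for illustration, that temperature drifts linearly with time,
# so the "temperature" of a reading is just its position in the sequence.
temps = {b: [] for b in balls}
for time, b in enumerate(sequence):
    temps[b].append(time)

avg_temp = {b: sum(t) / len(t) for b, t in temps.items()}
# The four averages come out nearly equal, so the drift largely cancels;
# taking all nine readings of one ball before the next would not cancel it.
```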
So I put one thermometer at one end of the sample and another thermometer at the other end; the first measures T1 and the second measures T2. The thermal conductance of the sample, if I use σ to represent conductance, is the heat flux divided by the temperature difference T1 − T2. You measure the heat flux, measure the temperatures T1 and T2, put them into this formula, and calculate the conductance.

Now, a systematic error can arise if, say, thermometer 2 has a tendency to read slightly higher than the actual temperature. How can you cancel the effect of such a systematic error? You should interchange the thermometers, so that T1 is now measured with the second thermometer. In this approach you first take one pair of readings, T1 and T2, then interchange the thermometers and take the readings again, obtaining possibly different values T1′ and T2′. You then take ΔT = [(T2 − T1) + (T2′ − T1′)]/2. With this approach, if either thermometer tends to read higher or lower than the actual temperature, the effect cancels out. This can be termed the use of symmetry in the apparatus: there is some sort of symmetry in the apparatus you are using, and by interchanging elements you cancel the effect.

Here is another example of using symmetry in the apparatus. Suppose you are measuring an unknown resistance using the potential-balance approach.
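The thermometer-interchange trick can be sketched numerically. Suppose, purely for illustration, that thermometer 2 always reads 2 degrees high; averaging the temperature difference before and after the swap cancels the bias:

```python
# Made-up true end temperatures and a fixed bias on thermometer 2.
T_hot_true, T_cold_true = 350.0, 300.0
bias = 2.0                                 # thermometer 2 reads high

# First arrangement: thermometer 1 on the hot end, 2 on the cold end.
dT_first = T_hot_true - (T_cold_true + bias)       # 48.0, biased low

# After interchanging: thermometer 2 now sits on the hot end.
dT_swapped = (T_hot_true + bias) - T_cold_true     # 52.0, biased high

# Average of the two differences: the bias cancels exactly.
dT = (dT_first + dT_swapped) / 2                   # 50.0
```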
You have a standard resistance and the unknown resistance, a uniform wire with a moving tap (a sliding pointer), and a galvanometer. This is the approach used to measure resistances precisely: when the ratio of the resistances of the two portions of the wire becomes equal to the ratio of the unknown resistance to the standard, the galvanometer shows zero; this is the so-called potential-balance method. How do you obtain the resistance from this? Since the wire is uniform, its resistance is proportional to its length. Suppose the total length is L, and the scale lets you measure the distance x of the pointer from one end. Then r_x/r_s = x/(L − x), and since you know r_s, x, and L, you get r_x.

It is all right to write the formula like this, but in practice there are contacts, and there are resistances associated with these contacts which also need to be taken into account. How can you cancel their effects? You interchange the resistances: put the standard resistance where the unknown was, and the unknown where the standard was. If these so-called end effects, the effects of the contacts, were really absent, the balance point would move to a point exactly a distance x from the other end. In practice you will find that this is not quite true, and the moment you find it is not true, you realize that you have something like end effects, which are systematic errors. You can then take the two readings and from them cancel out the effect of the systematic error. How to do this in detail I will not go into; I am just pointing out the method.
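The balance formula and the interchange check can be sketched as follows; the wire length, standard resistance, and balance point are made-up values for illustration:

```python
# Made-up values: a 100 cm uniform wire and a 10 ohm standard resistor.
L = 100.0      # total wire length
r_s = 10.0     # standard resistance, ohms
x = 60.0       # balance length measured from one end

# Balance condition: r_x / r_s = x / (L - x).
r_x = r_s * x / (L - x)                    # 15.0 ohms

# With ideal contacts, interchanging r_x and r_s moves the balance
# point to the mirror position L - x, giving the same r_x back.
x_swapped = L - x
r_x_check = r_s * (L - x_swapped) / x_swapped
# In a real setup, any disagreement between r_x and r_x_check signals
# end effects (contact resistances), i.e. a systematic error.
```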
This is all one common technique: using symmetry in the apparatus and interchanging components to cancel the effect of systematic errors.

Now let me come to the last topic in this discussion of experimental skills: fitting a straight line, because this is very important and done very often. There are several ways to fit a straight line; the most commonly used is the method of least squares. Here you have made several measurements: you want to know the behaviour of a dependent variable y against an independent variable x, and you want to approximate this behaviour by a straight line. You have taken several readings y_i of the variable y, and an equal number of readings x_i of the variable x. What is the best straight line describing the relation between y and x? By the method of least squares, which we will explain in a moment, the slope m of that line is

m = [Σ x_i y_i − (Σ x_i)(Σ y_i)/n] / [Σ x_i² − (Σ x_i)²/n]

and the constant c is

c = (Σ y_i)/n − m (Σ x_i)/n.

Let us understand this method of fitting a straight line. Look at this slide, which is a graphical representation of what we are doing; again you will appreciate how graphical representations explain things very nicely. I have taken readings (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4), and so on: 8 readings for y and 8 readings for x, plotted on this graph. Now I want the best straight line through these points, and it all depends on your criterion for the best straight line.
The method of least squares minimizes the sum shown here: Σ (y_i − m x_i − c)². Graphically, y_i − m x_i − c is the vertical distance of a reading from the straight line: the height of the point is y_i, the line gives m x_i + c, and the difference between the two is that distance. So the least-squares line is such that if you take each of these distances, square them, and sum them all up, the line given by the formulas we quoted makes the sum of these squared distances a minimum; for any other line the sum of squares will be larger. That is the method of least squares. With this we come to the end of the experimental skills.
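The least-squares formulas quoted above fit in a few lines of code; as a sanity check, points lying exactly on a straight line should return its slope and intercept:

```python
def fit_line(xs, ys):
    """Least-squares slope m and intercept c for points (xs, ys)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    # m = [Σxy − (Σx)(Σy)/n] / [Σx² − (Σx)²/n],  c = mean(y) − m·mean(x)
    m = (sxy - sx * sy / n) / (sxx - sx * sx / n)
    c = sy / n - m * sx / n
    return m, c

# Points on y = 2x + 1 are recovered exactly (up to rounding).
m, c = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```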