Often when you're trying to find something out, you can't directly measure the exact thing you want. What you can do is make a set of measurements and then combine them into the quantity you're actually interested in. So when the things you measure have uncertainties, how do those uncertainties combine into an uncertainty in the final result? Suppose there's some quantity y you want to know about, but it's a function of another variable x, which is the one you can measure. The thing you can measure is called the raw data, and the thing you're trying to calculate is called the derived quantity. So my derived quantity depends on the variable I'm getting raw data for. Now suppose I measure a particular value for x. Of course there's going to be some sort of uncertainty in that, so really the true value lies within some bound around the measured value, and the half-width of that bound is delta x, our uncertainty in x. If we project that up to the curve, we'd say: that's the value of x we measured, so this must be the value of y we derive. And because of the uncertainty in our raw data for x, we're also going to have some uncertainty in our derived value of y. Just from the picture we can see what the interval for y must be, and you can in fact carry out that whole process graphically if you like. A more sophisticated way of doing exactly the same thing is to use calculus, but here we're just going to work out a few rules of thumb for some very simple formulae that come up all the time.
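The graphical estimate described above is easy to play with numerically. Here is a minimal sketch in Python; the function y = x² and the numbers are illustrative choices of mine, not from the lecture:

```python
def propagate(f, x, dx):
    """Estimate the uncertainty in y = f(x) given a measurement x +/- dx,
    by pushing x to its extreme value and seeing how far y moves."""
    y = f(x)                    # derived value at the measured x
    dy = abs(f(x + dx) - y)     # |f(x + dx) - f(x)|, taken as a positive size
    return y, dy

# Example: y = x**2 with x measured as 2.0 +/- 0.1
y, dy = propagate(lambda x: x**2, 2.0, 0.1)
print(y, dy)    # dy is |2.1**2 - 2.0**2|, roughly 0.41
```

This is exactly the "distance on the graph" estimate; the calculus version replaces the finite difference with the derivative times dx.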
If we were doing this really carefully, we'd worry about the particular distribution of possibilities for our raw data, and we'd convert that into a distribution of possible results for the derived quantity. But we can get a good estimate just by calculating this distance: the difference between the value of y we'd get using the slightly higher version of x, namely x plus delta x, and the value we'd get using the measured value of x itself. So let's look at a few cases. First, a linear multiplier: what if y is just some multiple of x, say y equals lambda times x? If we use that formula and expand the bracket, we get lambda times x plus lambda times delta x; the lambda x cancels with the lambda x in y, and we're left with just the other term, delta y equals lambda times delta x. That makes sense. If I take 10 times a measurement, I should multiply the measurement by 10, but I should also amplify my uncertainty by 10. So if I had an uncertainty of a centimetre across a metre stick, then for 10 of those I'd have 10 metres as my average, but now 10 centimetres as my uncertainty. It's very common that we're actually making more than one measurement, and the simplest such case is addition: two quantities that you measure and just add together, y equals a plus b. In that case we can follow exactly the same process, except the derived quantity now depends on more than one variable. So when estimating the uncertainty, we push both variables to their extreme values: a goes to a plus delta a, b goes to b plus delta b, and we subtract off the average value a plus b.
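The two rules so far can be checked with a couple of lines of code. The metre-stick numbers are the ones from the example above; the variable names are mine:

```python
# Rule 1 (linear multiplier): y = lam * x  =>  delta_y = lam * delta_x
lam = 10.0
x, dx = 1.0, 0.01              # a metre stick, measured to +/- 1 cm
y, dy = lam * x, lam * dx      # 10 m, with the uncertainty amplified to 0.1 m

# Rule 2 (addition): y = a + b  =>  delta_y = delta_a + delta_b
a, da = 1.0, 0.01              # two separate metre-stick measurements
b, db = 1.0, 0.01
s, ds = a + b, da + db         # 2 m, with a combined 2 cm uncertainty
print(y, dy, s, ds)
```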
And if we expand out the brackets, the a cancels with the minus a, the b cancels with the minus b, and we're left with the two uncertainties added together: delta y equals delta a plus delta b. So for addition, you add the uncertainties. Now what happens with subtraction? Well, subtraction is just addition where one of the numbers is negative, but we haven't paid much attention to negative numbers so far. Back with the linear multiplier, what if our multiplier wasn't 10 but minus 10? Would we expect our uncertainty to go negative? What would a negative uncertainty even mean? Ultimately, we're treating delta x as a distance, so we're assuming it's positive: our value could be anywhere between x minus delta x and x plus delta x, or indeed possibly in the tails of some distribution of about that width. So the uncertainty we quote is really a positive number; it's the size of our uncertainty. And if the graph sloped the other way, we still wouldn't expect a negative number to come out. So what we should really do is take the absolute value of whatever comes out of the formula, and that absolute value makes sure our uncertainty comes out positive. Then it doesn't matter whether we pick plus or minus delta a, and it shouldn't; and it doesn't matter whether we pick plus or minus delta b, and it shouldn't. In the end, what pops out is the sum of the uncertainties. So for addition or subtraction alike, you add the uncertainties. In other words, suppose I go forwards a metre, plus or minus a centimetre, and then I go backwards a metre, plus or minus a centimetre. On average, I end up at exactly the same place. But there's an uncertainty on my step forwards, and then a separate uncertainty on my step backwards, and those uncertainties are not going to cancel.
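The forward-and-back walk can be written the same way. The sign only affects how the measured values combine, never the (positive) uncertainties; the helper function here is my own sketch:

```python
def combine(a, da, b, db, sign=+1):
    """Uncertainty of y = a + sign*b: the values combine with the sign,
    but the positive uncertainties always add (the absolute-value rule)."""
    return a + sign * b, da + db

# One metre forward and one metre back, each measured to +/- 1 cm:
net, dnet = combine(1.00, 0.01, 1.00, 0.01, sign=-1)
print(net, dnet)    # back at the start on average, but with a 2 cm uncertainty
```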
So I'm going to end up back where I started, but now with an uncertainty of two centimetres in my position. Even though I'm subtracting my measured values a and b from each other, I still have to add the uncertainties. Okay, now what happens when we multiply values, y equals a times b? We can follow exactly the same process again and expand out the bracket, which gives four terms. The a times b cancels with the minus a times b; then there are the two cross terms, each an error times the other measured value; and finally there's the term where the two errors are multiplied together. We normally ignore that last term, because we're assuming both errors are quite small (or else most of our approximations get a bit loose anyway), and when we multiply two small numbers we get a tiny number. So we end up with just the two cross terms. A good way to remember the result is to divide both sides by y: the error in y divided by y is what we just worked out, divided by the definition of y in the first place. In the first of the two terms the a's cancel, and in the second the b's cancel, leaving delta y over y equals delta a over a plus delta b over b. This quantity, where you divide the uncertainty by the quantity itself, is called the relative uncertainty, whereas the error by itself is called the absolute uncertainty. And the rule is very simple: when you're multiplying things, you add the relative uncertainties. A similar argument shows the same thing works for division. Okay, let's do a couple of examples. Suppose we have a box of 100 apples, and we weigh them and find their mass together is 10.3 plus or minus 0.2 kilograms. How much does one apple weigh? Well, there are 100 apples.
We divide by 100, so we get one hundredth of our average and also one hundredth of our uncertainty: 0.103 plus or minus 0.002 kilograms. That's the mass of one apple. Now suppose you want to know the area of one side of the box, and you've measured its width and its height, each with a particular uncertainty. It's easy enough to get the average area: simply multiply the width by the height. But what do we do for the uncertainty? Remember, because we're multiplying two numbers, we have to add the relative uncertainties. The relative uncertainty in the width is 1 centimetre divided by 120 centimetres, which is about 0.8 percent. The relative uncertainty in the height is 2 centimetres divided by 23 centimetres, which is a lot larger, about 9 percent. Adding 9 percent and 0.8 percent and rounding gives us 10 percent, so our error in the area is going to be about 276 square centimetres. But we can't quote all the extra significant figures in the raw product, and if the error is going to round to 300, there's no point quoting those digits in the area either. So the proper way to quote the result is 2800 plus or minus 300 square centimetres, with the right number of significant figures in both the uncertainty and the final result.
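Both worked examples can be reproduced in a few lines. The numbers are the ones from the lecture; the variable names are mine, and note that keeping the unrounded relative uncertainties gives 263 cm² rather than the 276 cm² you get from the rounded 10 percent; either way the quoted result is the same:

```python
# Example 1: 100 apples weigh 10.3 +/- 0.2 kg. Dividing by a constant
# scales both the value and the absolute uncertainty (linear-multiplier rule).
total, dtotal = 10.3, 0.2
m_apple, dm_apple = total / 100, dtotal / 100    # 0.103 +/- 0.002 kg

# Example 2: one side of the box, width 120 +/- 1 cm, height 23 +/- 2 cm.
# For a product, the relative uncertainties add.
w, dw = 120.0, 1.0
h, dh = 23.0, 2.0
area = w * h                   # 2760 cm^2
rel = dw / w + dh / h          # ~0.8% + ~8.7%, roughly 10%
darea = rel * area             # ~263 cm^2, which rounds to ~300
print(m_apple, dm_apple, area, darea)
# Quoted to sensible significant figures: 2800 +/- 300 cm^2
```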