Significant Figures & Error Analysis

In summary, the thread discusses how to determine significant figures and express results in scientific notation for a first physics lab. The student measured the three sides of a parallelepiped, propagated the uncertainties, and asked how to round and format the final volume and density. Respondents explain the rule of thumb for significant figures, compare ways of combining uncertainties (worst-case bounds, a straight sum of relative uncertainties, and quadrature), and ultimately agree that the student's partial-derivative method is correct and that the final answer should carry four significant figures.
  • #1
Godisnemus
Hi, I've just had my first lab in physics and I'm having a bit of trouble understanding how to determine the significant figures of my final answers and transforming them in scientific notation. For example:

Homework Statement


I had to measure three sides of a parallelepiped with a micrometer in order to determine its volume and density. My measurements were as follows:

A = 19.49 +- 0.01 mm
B = 19.53 +- 0.01 mm
C = 19.48 +- 0.01 mm

Homework Equations



The raw volume is: (7414.861356 +- 6.586123190454989) mm^3

The raw density is: (2521.962192168061 +- 13.67120199772) kg/m^3

The Attempt at a Solution



As stated earlier, I'm uncertain how to determine the significant figures correctly and how to give my final answer in scientific notation, but my attempt at the solution is:

Volume: (7.41 +- 0.00658) x 10^3 mm^3

Density: (2.52 +- 0.013) x10^3 kg/m^3

Thanks!
 
  • #2
The rule of thumb is that the number of significant digits in a product or quotient should be the same as the number of significant digits in your least precise measurement. In this case you have four significant digits in all of your measurements and so your best estimate should have four significant digits (at least for volume - you did not include your measurement for mass, so I can't comment on your density value).

Generally speaking, you should only include one significant digit in your uncertainties so you have written your volume incorrectly - too many digits in the uncertainty, and too few in the best estimate. There are certain occasions where you may want to keep two digits in the uncertainty, but never more than that. The best estimate should be rounded to the place value where the uncertainty starts (or terminates, if you are keeping two digits). Therefore 7415 ± 7 cubic mm is the appropriate way to write it.
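That rounding convention (one or two significant digits in the uncertainty, best estimate cut at the same decimal place) is easy to mechanize. A minimal Python sketch; the helper name is my own:

```python
import math

def round_to_uncertainty(value, uncertainty, sig_digits=1):
    """Keep sig_digits significant digits of the uncertainty, then round
    the best estimate to the same decimal place."""
    exp = math.floor(math.log10(abs(uncertainty)))  # place of the leading digit
    place = exp - (sig_digits - 1)                  # last decimal place to keep
    return round(value, -place), round(uncertainty, -place)

print(round_to_uncertainty(7414.861356, 6.586123))                 # (7415.0, 7.0)
print(round_to_uncertainty(2521.962192, 13.671202, sig_digits=2))  # (2522.0, 14.0)
```

With the OP's raw volume this reproduces the 7415 ± 7 mm³ form.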

However...I am not sure how you calculated your uncertainty in the volume, but it seems like you underestimated it. Depending on how you were told to do it you should get either around 9 cubic millimeters or about 15 cubic millimeters. The general rule is to combine the relative uncertainties of each measurement in sum (yielding the 15 cubic mm) to find the total relative uncertainty (or in 'quadrature' if you are expected to do it that way - yielding the 9 cubic mm).
 
  • #3
Godisnemus said:
I had to measure three sides of a parallelepiped with a micrometer in order to determine its volume and density. My measurements were as follows:

A = 19.49 +- 0.01 mm
B = 19.53 +- 0.01 mm
C = 19.48 +- 0.01 mm

The raw volume is: (7414.861356 +- 6.586123190454989) mm^3

The raw density is: (2521.962192168061 +- 13.67120199772) kg/m^3

Your data has, essentially, 2 significant figures because 19.48 < A < 19.50, so A has 2-digit accuracy; however, B and C have three figures: 19.52 < B < 19.54 and 19.47 < C < 19.49.

Multiply together all the lower bounds and all the upper bounds to get bounds on V = A*B*C:
7403.459712 < V < 7426.274700,
or 7.40e3 < V < 7.43e3. You see that only the leftmost two digits of V are stable.
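The bound computation can be reproduced directly (a quick sketch, with the bounds quoted above):

```python
# Worst-case interval bounds on V = A*B*C from the ±0.01 mm readings
A_lo, B_lo, C_lo = 19.48, 19.52, 19.47   # lower bounds of A, B, C
A_hi, B_hi, C_hi = 19.50, 19.54, 19.49   # upper bounds of A, B, C

V_lo = A_lo * B_lo * C_lo
V_hi = A_hi * B_hi * C_hi
print(f"{V_lo:.6f} < V < {V_hi:.6f}")    # 7403.459712 < V < 7426.274700
```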
 
  • #4
Thanks for your answer.

I calculated the uncertainty in the volume by using partial derivatives as it is shown in my lab tutorial, see the uploaded pdf for the exact formula. I've also checked my answer with the Excel error analysis calculator and the Python Error Propagator Calculator and they both match.

Should I not be calculating my uncertainties this way in physics?
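For reference (this is the standard propagation formula for a product, not necessarily the exact form in the attached PDF), the partial-derivative method for V = ABC gives ##\delta V = \sqrt{(BC\,\delta A)^2 + (AC\,\delta B)^2 + (AB\,\delta C)^2}##; a quick Python check reproduces the reported value:

```python
import math

A, B, C = 19.49, 19.53, 19.48   # side lengths (mm)
dA = dB = dC = 0.01             # micrometer uncertainty (mm)

V = A * B * C
# Partial derivatives: dV/dA = B*C, dV/dB = A*C, dV/dC = A*B; combine in quadrature
dV = math.sqrt((B * C * dA) ** 2 + (A * C * dB) ** 2 + (A * B * dC) ** 2)
print(f"V = {V:.6f} ± {dV:.6f} mm^3")   # V = 7414.861356 ± 6.586123 mm^3
```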
 

Attachments

  • Volume.pdf
  • #5
Sorry, your calculation is spot on - I made a typo when I checked your work.
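For the record, a quick numeric check of both combination rules with the posted side lengths (a sketch):

```python
import math

sides = [19.49, 19.53, 19.48]   # mm
u = 0.01                        # per-side uncertainty (mm)

V = sides[0] * sides[1] * sides[2]
rel = [u / s for s in sides]    # relative uncertainty of each side

dV_sum = V * sum(rel)                             # straight sum of relative uncertainties
dV_quad = V * math.sqrt(sum(r * r for r in rel))  # quadrature

print(f"sum: ±{dV_sum:.2f} mm^3, quadrature: ±{dV_quad:.3f} mm^3")
# The sum (~11.4) equals half the worst-case interval width;
# the quadrature value reproduces the OP's ±6.586 mm^3.
```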
 
  • #6
Ray Vickson said:
Your data has, essentially, 2 significant figures because 19.48 < A < 19.50, so A has 2-digit accuracy; However, B and C have three figures: 19.52 < B < 19.54 and 19.47 < C < 19.49.

Multiply together all the lower bounds and all the upper bounds to get bounds on V = A*B*C:
7403.459712 < V < 7426.274700,
or 7.40e3 < V < 7.43e3. You see that only the leftmost two digits of V are stable.

That's a fine way to get the upper and lower limits, but the OP's method is preferred (for small uncertainties). Also, I disagree that there are only two significant digits based on the 'stable' digits. Indeed, the variation is in the third digit's place (by your method), but the uncertain digit is also considered significant. Further, with the OP's method, the variation is really in the fourth digit and therefore there are four significant digits - in agreement with the 'rule of thumb' I mentioned in post 2.
 
  • #7
brainpushups said:
That's a fine way to get the upper and lower limits, but the OP's method is preferred (for small uncertainties). Also, I disagree that there are only two significant digits based on the 'stable' digits. Indeed, the variation is in the third digit's place (by your method), but the uncertain digit is also considered significant. Further, with the OP's method, the variation is really in the fourth digit and therefore there are four significant digits - in agreement with the 'rule of thumb' I mentioned in post 2.

I must be missing something, then. If we know that 19.48 < A < 19.50, how can we say there are four significant figures? OK, I can go with three, because "A < 19.50" is really something like "A < 19.499...". However, I do not see where the fourth significant figure comes from.

As I say, I may be wrong (it has happened lots of times before), but to help the OP circumvent his difficulties, an explanation would probably be helpful.
 
  • #8
Ok. So the OP's method (which I've seen called quadrature addition of uncertainties) estimates the uncertainty at about 7 cubic millimeters. If we accept that the uncertain digit is also considered significant, then there are four significant digits.

The quadrature method avoids the overestimate built into the approach you presented (finding the highest and lowest possible values). The big idea is that, provided the uncertainties are completely random, there is a good probability of a net canceling effect (one value being above the true value and another below it). Thus, if you use the absolute highest and lowest values, you have often overestimated the overall uncertainty. See, for example, An Introduction to Error Analysis by John Taylor for an excellent, readable reference.
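The cancellation argument is easy to see in a quick Monte Carlo sketch (my own illustration, assuming independent Gaussian errors with σ equal to the stated ±0.01):

```python
import math
import random

A, B, C, u = 19.49, 19.53, 19.48, 0.01
random.seed(0)   # for reproducibility

# Simulate many volumes with an independent Gaussian error on each side
vols = [(A + random.gauss(0, u)) * (B + random.gauss(0, u)) * (C + random.gauss(0, u))
        for _ in range(100_000)]

mean = sum(vols) / len(vols)
std = math.sqrt(sum((v - mean) ** 2 for v in vols) / len(vols))
print(f"simulated spread ≈ ±{std:.2f} mm^3")
# ~6.6 mm^3: close to the quadrature estimate, well below the worst-case ±11.4
```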
 
  • #9
brainpushups said:
Ok. So the OP's method (which I've seen called quadrature addition of uncertainties) estimates the uncertainty at about 7 cubic millimeters. If we accept that the uncertain digit is also considered significant, then there are four significant digits.

The quadrature method avoids the overestimate built into the approach you presented (finding the highest and lowest possible values). The big idea is that, provided the uncertainties are completely random, there is a good probability of a net canceling effect (one value being above the true value and another below it). Thus, if you use the absolute highest and lowest values, you have often overestimated the overall uncertainty. See, for example, An Introduction to Error Analysis by John Taylor for an excellent, readable reference.

Of course, in reality the errors are random, with some unknown type of probability distribution. Hopefully the means (expected values) of the individual errors equal zero, so the "central" value of the error bar is the actual mean of the data distribution, and the part after the ##\pm## sign is a measure of spread, such as a simple multiple of the standard deviation. Typically one would expect the error distribution to peak at zero and fall off as we go away from 0 on either side, so having errors near the end of the stated interval is unlikely. And, of course, in order to have ABC near the theoretical lower bound we would need all three of A, B and C to be at their lower bounds, or nearly so, and the probability of all three being low is 1/8. It is more likely that we get partial cancellation, with some of the factors being too high and others too low. So, yes, indeed, the theoretical lower and upper bounds I gave are not likely to be realistic.

I realize that the "usual" way of estimating these things is to assume that ##A = \alpha+\epsilon_a##, ##B = \beta + \epsilon_b## and ##C = \gamma + \epsilon_c##, where ##\alpha, \beta, \gamma## are the unknown "true" values and the ##\epsilon##s are independent random errors with ##E(\epsilon_j) = 0## and some standard deviations ##\sigma_j## that are related to ##\pm 0.01## in some way; typically, ##.01 = k \sigma## for some ##k## near 1 (maybe a bit < 1 or a bit > 1). Then, keeping only first-order terms, we have (approximately) ##V \equiv ABC = \alpha \beta \gamma + \epsilon_v##, where ##E(\epsilon_v) = 0## and the variance ##\sigma_v^2## of ##\epsilon_v## is
$$ \sigma_v^2 = \sigma_1^2 + \sigma_2^2 + \sigma_3^2. $$
If all three ##\sigma##s are equal (as they are in this case), then ##\sigma_v = \sqrt{3} \sigma##, so the appropriate error would be ##\pm \sqrt{3} (.01) \doteq 0.017##.

I know all that, but I still do not get the four-significant-figure valuation that you gave. I am perfectly happy to go with three significant figures, however.

BTW: if we do not take the first-order approximation in ##V## then we have quadratic and cubic terms in the ##\epsilon_k##, so ##\epsilon_v## need no longer have mean 0 and will no longer have variance given by the simple sum-of-variances formula. If we wanted a good picture of the actual probability distribution of ##\epsilon_v##, we might have a sufficiently complicated and difficult probability problem that the best approach would be to use Monte-Carlo simulation.
 
  • #10
It seems like we're just arguing over semantics - whether the uncertain digit in the best value should be considered significant or not. Maybe I'm wrong, but in introductory physics courses, where the concept of uncertainty is usually underdeveloped and students rely on the rule of thumb significant digit rules, when a measurement is taken and the last digit is understood to be an estimate (and the uncertainty is unspecified) it is still called 'significant.' That's the only reason I was calling the fourth digit in this particular answer significant - that is the position where the variation in the calculation is estimated.
 
  • #11
brainpushups said:
It seems like we're just arguing over semantics - whether the uncertain digit in the best value should be considered significant or not. Maybe I'm wrong, but in introductory physics courses, where the concept of uncertainty is usually underdeveloped and students rely on the rule of thumb significant digit rules, when a measurement is taken and the last digit is understood to be an estimate (and the uncertainty is unspecified) it is still called 'significant.' That's the only reason I was calling the fourth digit in this particular answer significant - that is the position where the variation in the calculation is estimated.

I think it depends a lot on where the data came from and how it was obtained, and only the OP knows that for sure. For example, is A = 19.49±0.01 the result of a single reading, where the error is an estimate based on the difficulty of reading a gauge to great accuracy (although ±0.01 on a base of about 19.5 seems good to me), or is it some spec recommended in the user's manual for the equipment? Is it the result of several (hopefully independent) repetitions of the same basic measurement, with the 19.49 being a computed sample mean and the "0.01" being related somehow to the sample standard deviation? If that is the case, it is not clear whether reporting the mean as 19.49 or as 19.5 would be more appropriate.

Anyway, I cannot believe the blunder I made in the previous post, which I realized almost as soon as I pressed the enter key, but could not stay around to correct right away. From ##V = (\alpha + \epsilon_a) (\beta + \epsilon_b) (\gamma + \epsilon_c)## it follows that
$$ \begin{array}{rcl}
V &=& \alpha \beta \gamma + \alpha \beta \epsilon_c + \alpha \gamma \epsilon_b + \beta \gamma \epsilon_a \\
& & + \text{terms in }\: \epsilon_i \epsilon_j \;+\; \epsilon_a \epsilon_b \epsilon_c.
\end{array}
$$
To first order in small ##\epsilon## we keep only the terms on the first line, so
$$\epsilon_v \approx \alpha \beta \epsilon_c + \alpha \gamma \epsilon_b + \beta \gamma \epsilon_a, $$
which has mean 0 and variance
$$\sigma_v^2 \approx (\alpha \beta)^2 \sigma_c^2 + (\alpha \gamma)^2 \sigma_b^2 + (\beta \gamma)^2 \sigma_a^2$$
For purposes of getting a simple estimate, we might as well take ##\alpha \approx \beta \approx \gamma \approx 19.5## and ##\sigma_a = \sigma_b = \sigma_c = \sigma##, to get ##\sigma_v^2 \approx 3 (19.5)^4 \sigma^2##. This gives ##\sigma_v \approx (19.5)^2 \sqrt{3}\, \sigma##, and so the error in ##V## is about ##\pm 6.59\,\text{mm}^3##, consistent with the OP's value.
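Evaluating the variance formula above numerically, with each partial derivative approximated by ##19.5^2 = 380.25## (a quick check):

```python
import math

side = 19.5    # common approximate side length (mm)
sigma = 0.01   # common per-side sigma (mm)

# sigma_v^2 ≈ 3 * (side^2)^2 * sigma^2, since each partial derivative ≈ side^2
sigma_v = math.sqrt(3) * side ** 2 * sigma
print(f"sigma_v ≈ {sigma_v:.3f} mm^3")   # ≈ 6.586, matching the OP's value
```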
 
  • #12
Fair enough. And I didn't read your post carefully enough to even notice any errors. Frankly, my first response was largely based on the OP's characterization that this was his or her 'first' physics lab. I wasn't even expecting that he or she knew anything about partial derivatives. Anyway, for most introductory laboratories, or student laboratories in general, one likely does not need to consider all of the subtleties involved in the analysis of uncertainties. Reasonable approximations, like the OP's method or the one you put forth in your first post, are accurate enough for the application - even without knowing exactly how the uncertainties were determined.
 

1. What are significant figures?

Significant figures are the digits in a number that are considered to be accurate and reliable. They include all of the certain digits, as well as one estimated digit.

2. How do you determine the number of significant figures in a measurement?

The number of significant figures in a measurement is determined by counting all of the digits that are known with certainty, plus one estimated digit. Zeros at the beginning of a number are not significant; zeros between other significant digits are significant; trailing zeros are significant only when a decimal point is shown (otherwise they are ambiguous).
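As an illustration only (my own helper, treating bare trailing zeros as ambiguous and hence not counted):

```python
def count_sig_figs(numeral: str) -> int:
    """Count significant figures in a decimal numeral given as a string."""
    s = numeral.lstrip('+-')
    digits = s.replace('.', '')
    digits = digits.lstrip('0')         # leading zeros: not significant
    if '.' not in s:
        digits = digits.rstrip('0')     # bare trailing zeros: ambiguous
    return max(len(digits), 1)          # "0" still has one figure

print(count_sig_figs("19.49"))    # 4
print(count_sig_figs("0.0040"))   # 2
print(count_sig_figs("1300"))     # 2
```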

3. Why is it important to use significant figures in scientific measurements?

Using significant figures in scientific measurements ensures that the reported value is accurate and reflects the precision of the measurement. It also helps to avoid misleading results and allows for consistency and comparability in scientific data.

4. How do you perform error analysis on a set of data?

Error analysis involves quantifying the uncertainty in a result. For a set of repeated measurements, this typically means computing the mean, the standard deviation of the individual readings, and the standard error of the mean (the uncertainty to attach to the average); for quantities derived from other measurements, it means propagating the input uncertainties through the calculation.
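As a small example with made-up repeated readings (hypothetical numbers):

```python
import math

readings = [19.49, 19.51, 19.48, 19.50, 19.49]   # hypothetical repeats (mm)

n = len(readings)
mean = sum(readings) / n
# Sample standard deviation: spread of the individual readings
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
# Standard error of the mean: uncertainty to attach to the average
sem = s / math.sqrt(n)
print(f"{mean:.3f} ± {sem:.3f} mm")
```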

5. What is the difference between systematic and random error?

Systematic error is caused by consistent and predictable factors, such as faulty equipment or incorrect measurement techniques. It affects all measurements in the same direction and can be reduced by identifying and correcting the source. Random error, on the other hand, is caused by unpredictable factors and affects measurements in a random manner. It can be reduced by taking multiple measurements and calculating an average.
