Measurement Uncertainty Problem in MIT OCW 8.01x

In summary: a calculator can give the first-order change (linear approximation):
$$f(x + \Delta x, y + \Delta y) \approx f(x,y) + f_x(x,y) \Delta x + f_y(x,y) \Delta y.$$
Substituting x = 0.781, y = 0.551, Δx = 0.002, Δy = 0.002:
$$f(0.783, 0.553) \approx f(0.781, 0.551) + f_x(0.781, 0.551) (0.002) + f_y(0.781, 0.551) (0.002).$$
  • #1
AdrianMachin

Homework Statement


What's the answer to (0.781±0.002)/(0.551±0.002)? Professor Walter Lewin gives 1.417±0.008 in one of his videos, but when I checked with an online uncertainty calculator it came out as 1.417±0.006.

Homework Equations


n/a

The Attempt at a Solution


I tried the online calculator. I'm a bit confused and need an explanation, or a good guide or tutorial on these kinds of calculations. Thanks.
 
  • #2
AdrianMachin said:

What's the answer to (0.781±0.002)/(0.551±0.002)? Professor Walter Lewin gives 1.417±0.008 in one of his videos, but when I checked with an online uncertainty calculator it came out as 1.417±0.006.
If C = A/B then the fractional errors add using Pythagoras (if the errors are independent), thus ΔC/C = √((ΔA/A)² + (ΔB/B)²), which confirms 0.006 as correct for the actual error in C (i.e. C × ΔC/C).
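For anyone who wants to check that number directly, here is a minimal Python sketch of the quadrature rule using the values from the original question (the variable names are just illustrative):

```python
import math

A, dA = 0.781, 0.002
B, dB = 0.551, 0.002

C = A / B
# fractional errors added in quadrature, assuming the errors are independent
dC_over_C = math.sqrt((dA / A) ** 2 + (dB / B) ** 2)
dC = C * dC_over_C

print(f"C = {C:.3f} +/- {dC:.3f}")  # C = 1.417 +/- 0.006
```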
 
  • #3
Richard A said:
If C = A/B then the fractional errors add using Pythagoras (if the errors are independent), thus ΔC/C = √((ΔA/A)² + (ΔB/B)²), which confirms 0.006 as correct for the actual error in C (i.e. C × ΔC/C).
It is unfortunate that many courses seem to teach that as the only way to evaluate the resulting uncertainty. It is a statistical argument, based on the idea that you would be unlucky for both underlying uncertainties to be at the extremes of their ranges. That is fine for many purposes (though it pretends that the underlying uncertainties are sort of Gaussian, whereas they are often more like uniform). But to an engineer dealing with tolerances this is dangerous. If the radius of a bolt has been spec'd to a manufacturer as to be within a certain range, and the radius of the hole through which it must pass is spec'd to another manufacturer as being in some other range, it would be most unwise to discount the possibility that the bolt will be cast with the widest allowed radius and the hole bored with the smallest.
Professor Lewin used the engineer's approach. (But he was slightly off - it should be ±0.009.)
 
  • #4
haruspex said:
It is unfortunate that many courses seem to teach that as the only way to evaluate the resulting uncertainty. It is a statistical argument, based on the idea that you would be unlucky for both underlying uncertainties to be at the extremes of their ranges. ... Professor Lewin used the engineer's approach.
Agreed - I didn't actually check the source video and merely answered the mathematical question as posed, using the standard assumption of Gaussian errors. This underlines a general point about the need to make tacit assumptions clear - especially in statistics!
 
  • #5
Richard A said:
If C = A/B then the fractional errors add using Pythagoras (if the errors are independent), thus ΔC/C = √((ΔA/A)² + (ΔB/B)²), which confirms 0.006 as correct for the actual error in C (i.e. C × ΔC/C).
Thanks a lot. Is there any book or tutorial on these subjects? (I guess I should look for statistics books, right?) My native language isn't English, so I'm not sure what these are called in English: is it "errors", "uncertainties", or "accuracy"?
haruspex said:
... Professor Lewin used the engineer's approach. (But he was slightly off - it should be ±0.009.)
Thank you so much, but I didn't understand the calculation behind the engineer's approach in this case. Could you explain it?
 
  • #6
AdrianMachin said:
Thank you so much, but I didn't understand the calculation behind the engineer's approach in this case. Could you explain it?

Here are three slightly different approaches that yield three different answers.
(1) Direct computation.
largest numerator = 0.781 + 0.002 = 0.783, smallest denominator = 0.551 - 0.002 = 0.549, so largest ratio = .783/.549 ≈ 1.426.
smallest numerator = 0.781 - 0.002 = 0.779, largest denominator = 0.551 + 0.002 = 0.553, so smallest ratio = .779/.553 ≈ 1.409.
The ratio therefore lies between about 1.417 - 0.008 and 1.417 + 0.009.
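As a quick check of the endpoint arithmetic, here is a small sketch that brute-forces all four corner combinations of the extreme values (the numbers are taken from the question):

```python
from itertools import product

x, dx = 0.781, 0.002
y, dy = 0.551, 0.002

# evaluate x/y at every combination of the extreme values of numerator and denominator
ratios = [(x + sx * dx) / (y + sy * dy) for sx, sy in product((-1, 1), repeat=2)]
print(f"min = {min(ratios):.3f}, max = {max(ratios):.3f}")
# min = 1.409, max = 1.426, i.e. roughly 1.417 - 0.008 to 1.417 + 0.009
```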

(2) Calculus-based calculation (OK for small errors):
$$f(x + \Delta x, y + \Delta y) = f(x,y) + f_x(x,y) \Delta x + f_y(x,y) \Delta y + \cdots ,$$
where
$$f_x = \frac{\partial f}{\partial x}\; \text{and} \;f_y = \frac{ \partial f}{\partial y} $$
are the partial derivatives of ##f(x,y)## and "##\cdots##" stands for higher-order terms in ##\Delta x## and ##\Delta y## that we are dropping.

In our case, ##f(x,y) = x/y##, so ##f_x = 1/y## and ##f_y =- x/y^2##. For ##x = 0.781, y = 0.551## this gives
$$f(x + \Delta x, y + \Delta y) \doteq (.781/.551) + (1/.551) \Delta x + (-.781/.551^2) \Delta y.$$
The largest value occurs when ##\Delta x = 0.002, \Delta y = - 0.002## and the smallest value occurs in the opposite case. This gives an error bar of about ##\pm 0.009##.
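The same worst-case, first-order bound can be written out numerically; this is just a sketch of the linearized calculation above, taking the absolute values of the two partial-derivative terms:

```python
x, dx = 0.781, 0.002
y, dy = 0.551, 0.002

fx = 1 / y          # partial derivative of x/y with respect to x
fy = -x / y ** 2    # partial derivative of x/y with respect to y

# worst case: the two first-order error terms reinforce each other
max_error = abs(fx) * dx + abs(fy) * dy
print(f"f = {x / y:.3f} +/- {max_error:.3f}")  # f = 1.417 +/- 0.009
```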

(3) Statistical estimate. We can use the previous approximate expression for ##f(x + \Delta x, y + \Delta y)## (with the higher-order terms dropped), but now regarding ##\Delta x, \Delta y## as independent random variables with standard deviations of ##\sigma_x## and ##\sigma_y##. We are dealing with the special case in which ##\sigma_x = \sigma_y = 0.002##, but the general formula below applies whether or not the two standard deviations are equal. Well-known statistical formulas imply that the standard deviation of ##f(x + \Delta x, y + \Delta y)## is
$$\sigma_f = \sqrt{ f_x^2 \, \sigma_x^2 + f_y^2 \, \sigma_y^2}$$
In our case we have
$$\sigma_f = \sqrt{(1/.551)^2 (.002)^2 + (-.781/.551^2)^2 (.002)^2} \doteq 0.006,$$
giving a final error bar of about ##\pm 0.006##.
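And the statistical version of the same calculation, treating ##\Delta x## and ##\Delta y## as independent with ##\sigma_x = \sigma_y = 0.002## (a sketch that simply evaluates the ##\sigma_f## formula above):

```python
import math

x, sigma_x = 0.781, 0.002
y, sigma_y = 0.551, 0.002

fx = 1 / y          # partial derivative of x/y with respect to x
fy = -x / y ** 2    # partial derivative of x/y with respect to y

# standard deviation of f for independent errors in x and y
sigma_f = math.sqrt(fx ** 2 * sigma_x ** 2 + fy ** 2 * sigma_y ** 2)
print(f"f = {x / y:.3f} +/- {sigma_f:.3f}")  # f = 1.417 +/- 0.006
```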
 
  • #7
Ray Vickson said:
Here are three slightly different approaches that yield three different answers. ...
Thanks A LOT, Ray. I really appreciate your help :)
Do you have recommendations on books or websites to read further?
 

1. What is measurement uncertainty in MIT OCW 8.01x?

Measurement uncertainty in MIT OCW 8.01x refers to the potential for error or variability in the measurements taken during experiments or calculations. It quantifies the range within which the true value of a measured quantity can reasonably be expected to lie.

2. Why is measurement uncertainty important in MIT OCW 8.01x?

Measurement uncertainty is important in MIT OCW 8.01x because it affects the accuracy and reliability of experimental results. A thorough understanding of measurement uncertainty allows scientists to properly evaluate their data and make informed conclusions.

3. How is measurement uncertainty calculated in MIT OCW 8.01x?

Measurement uncertainty is calculated by considering all potential sources of error, such as limitations in equipment or human error, and quantifying their impact on the measured value. This can be done statistically, for example by adding independent uncertainties in quadrature, or by taking worst-case bounds on the contributing errors.

4. How can measurement uncertainty be reduced in MIT OCW 8.01x?

Measurement uncertainty can be reduced by using more precise and accurate equipment, carefully following experimental procedures, and repeating measurements multiple times to account for variability. It is also important to properly document and account for any potential sources of error.

5. Is measurement uncertainty the same as measurement error in MIT OCW 8.01x?

No, measurement uncertainty and measurement error are not the same. Measurement error refers to a specific mistake or discrepancy in a measurement, while measurement uncertainty takes into account all potential sources of error and their combined effect on the measured value.
