# Finding Absolute Uncertainty in Data

**ChiralSuperfields**

Homework Statement: Please see below
Relevant Equations: ##NA = \bar{NA} \pm \Delta NA##

For this data, I am trying to find the overall absolute uncertainty of NA, where NA is the numerical aperture: ##\tan \theta_{NA} = \frac{R}{L}## and ##NA = \sin \theta_{NA}##.

| Case | R (cm) | L (cm) | R/L | theta_NA [rad] | NA | error in NA |
|---|---|---|---|---|---|---|
| R0, L0 | 0.5 | 0.5 | 1 | 0.785398 | 0.707107 | 0 |
| Rmax, Lmax | 0.7 | 0.55 | 1.272727 | 0.904827 | 0.786318 | 0.079212 |
| Rmax, Lmin | 0.7 | 0.45 | 1.555556 | 0.999459 | 0.841178 | 0.134072 |
| Rmin, Lmax | 0.3 | 0.55 | 0.545455 | 0.499347 | 0.478852 | -0.228255 |
| Rmin, Lmin | 0.3 | 0.45 | 0.666667 | 0.588003 | 0.554700 | -0.152407 |

Since ##NA = \bar{NA} \pm \Delta NA## (where ##\bar{NA}## is the best estimate for NA and ##\Delta NA## is the uncertainty in NA), do we take the absolute values of the values for the error in NA shown in the last column? This is because if we solve for the absolute uncertainty, ##NA - \bar{NA} = \pm\Delta NA##, then take the absolute value of both sides: ##|NA - \bar{NA}| = \Delta NA##.

Any help much appreciated! Many thanks!

**BvU**

Hi,

If you have a 40% uncertainty in R, that dominates everything else. There is no point in doing the calculation to 9-digit accuracy! If you absolutely want, the 10% uncertainty from L can be added in quadrature, giving a 41% relative uncertainty in R/L (from ##\sqrt{40^2+10^2}##).
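The min/max bookkeeping in the table above can be reproduced with a short script. This is a sketch: the central values R = 0.5 cm and L = 0.5 cm and the spreads ΔR = 0.2 cm and ΔL = 0.05 cm are read off the Rmax/Rmin and Lmax/Lmin rows of the table.

```python
import math

R, dR = 0.5, 0.2   # cm; Rmax = 0.7, Rmin = 0.3 from the table
L, dL = 0.5, 0.05  # cm; Lmax = 0.55, Lmin = 0.45 from the table

def na(r, l):
    """Numerical aperture: NA = sin(arctan(R/L))."""
    return math.sin(math.atan(r / l))

na_best = na(R, L)  # best estimate, about 0.7071

# The four extreme combinations, as in the table's rows
cases = {
    "Rmax,Lmax": na(R + dR, L + dL),
    "Rmax,Lmin": na(R + dR, L - dL),
    "Rmin,Lmax": na(R - dR, L + dL),
    "Rmin,Lmin": na(R - dR, L - dL),
}
for name, value in cases.items():
    print(f"{name}: NA = {value:.6f}, error = {value - na_best:+.6f}")
```

The printed errors match the signed "error in NA" column; the question in the post is whether their absolute values give ##\Delta NA##.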
The subsequent ##\sin(\arctan{R\over L})## dampens this by a factor of about 0.3; the table looks like

| R/L | theta_NA [rad] | NA | deviation in NA |
|---|---|---|---|
| 1.00 | 0.785 | 0.707 | 0.000 |
| 0.59 | 0.533 | 0.508 | -0.199 |
| 1.41 | 0.954 | 0.816 | 0.109 |

So your result is ##NA = 0.71 \pm 0.15## (averaging the modulus of the deviation -- with such large uncertainties that's justified).

**kuruman said:** We've been there before. A measure of the uncertainty is $$\Delta(NA) = [(NA)_{max} - \bar{NA}] - [\bar{NA} - (NA)_{min}] = (NA)_{max} + (NA)_{min}.$$ See https://www.physicsforums.com/threa...for-area-of-a-rectangle.1051087/#post-6869154 It doesn't matter whether you are measuring an area or a numerical aperture. The idea of estimating the uncertainty on the basis of the maximum and minimum possible outcomes is the same.

**BvU**

Last part fell off? Anyway, I would propose $$\Delta(NA) = {1\over 2}\Bigl((NA_{max} - NA) + (NA - NA_{min})\Bigr) = {1\over 2}\bigl(NA_{max} - NA_{min}\bigr).$$

**ChiralSuperfields**

**kuruman said:** We've been there before. A measure of the uncertainty is $$\Delta(NA) = (NA)_{max} - (NA)_{min}.$$ See https://www.physicsforums.com/threa...for-area-of-a-rectangle.1051087/#post-6869154 It doesn't matter whether you are measuring an area or a numerical aperture. The idea of estimating the uncertainty on the basis of the maximum and minimum possible outcomes is the same.

Thank you for your reply @kuruman! True, your expression is correct. However, do you know why my original expression with the absolute value bars is wrong? Many thanks!

**BvU said:** Last part fell off? Anyway, I would propose $$\Delta(NA) = {1\over 2}\Bigl((NA_{max} - NA) + (NA - NA_{min})\Bigr) = {1\over 2}\bigl(NA_{max} - NA_{min}\bigr).$$

Thank you for your replies @BvU! Do you know why your expression is half of the expression given by @kuruman, ##\Delta NA = NA_{max} - NA_{min}##? Many thanks!

**kuruman**

**BvU said:** Last part fell off?

I edited in a hurry and it did.
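BvU's numbers can be reproduced in a few lines. This is a sketch of the recipe from the posts above: combine the 40% (R) and 10% (L) relative errors in quadrature to get the spread in R/L, push the extremes through ##\sin(\arctan x)##, and average the modulus of the two deviations.

```python
import math

def na(ratio):
    """NA = sin(arctan(R/L)) for a given ratio R/L."""
    return math.sin(math.atan(ratio))

# 40% and 10% relative errors added in quadrature: about 41% in R/L
rel = math.hypot(0.40, 0.10)

na0 = na(1.00)                       # best estimate, about 0.707
dev_lo = na(1.00 - rel) - na0        # about -0.20 (R/L at its low extreme)
dev_hi = na(1.00 + rel) - na0        # about +0.11 (R/L at its high extreme)

# Average the modulus of the two deviations, as in BvU's post
delta = 0.5 * (abs(dev_lo) + abs(dev_hi))
print(f"NA = {na0:.2f} +/- {delta:.2f}")   # NA = 0.71 +/- 0.15
```

Note the damping BvU mentions: a 41% relative error in R/L shrinks to roughly a 20% relative error in NA.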
**BvU**

**ChiralSuperfields said:** Do you know why your expression is half of the expression given by @kuruman, ##\Delta NA = NA_{max} - NA_{min}##?

I wasn't aware of the other thread on the same subject. Looked at it now, and I totally disagree with the fragment you posted there in #1. Using ##A_\text{max} - A_\text{min}## as an estimate for the uncertainty in the area is a gross exaggeration. There are several aspects; let me try to explain a little using drawings.

Suppose we have measured ##W = 3.0 \pm 0.5## and ##L = 5.0 \pm 1.0## for the blue rectangle. We have ##A = 15##, ##A_\text{max} = 21## and ##A_\text{min} = 10##. So ##A_\text{max} - A = 6## (the green shaded area) and ##A - A_\text{min} = 5## (the red shaded area). And ##A_\text{max} - A_\text{min} = 11##. Simple math so far.

According to the fragment you copied, the area would be quoted as ##A = 15 \pm 11##, so the biggest rectangle that would still be within the error margin would be like the dashed green one with area 26 in the second picture, and the smallest would be like the dashed red one with area 4. You can check that the sum of the green and red shaded areas is the difference between the area of the blue rectangle and the green dashed rectangle, and also the difference between the area of the blue rectangle and the red dashed rectangle. But the largest deviation from ##A = 15## is only 6, and the average deviation is 5.5. To me it is clear that the dashed rectangles are not justifiable from the measurements, so quoting ##A = 15 \pm 6## is the best one can do (not ##15 \pm 5.5##, because that suggests an accurate error estimate that is nonexistent).

If you agree and want to dive deeper, read on. If not, stop here.

- - - - - - - - - - - -

There is more: error handling is an exact science up to a point; then judgment and common sense come in. Errors are usually small and are treated in first order.
So in a formula like ##A = L \cdot W## we do have
$$\begin{align*}
A_{max} &= (L+\Delta L)(W+\Delta W) = LW + \Delta L\, W + L\, \Delta W + \Delta L\, \Delta W\\
A_{min} &= (L-\Delta L)(W-\Delta W) = LW - \Delta L\, W - L\, \Delta W + \Delta L\, \Delta W
\end{align*}$$
and we ignore the last term in each because it is second order: the little rectangles shown in the first picture. Well, not so little in this case with such large errors, but the concept is clear, I hope. That way we slightly underestimate ##A_{max} - A## and overestimate ##A - A_{min}##, but for the average deviation we have ##{1\over 2}(A_{max} - A_{min}) = \Delta L\, W + L\, \Delta W##.

Taking the average deviation as ##\Delta A## and rearranging: ##{\Delta A \over A} = {\Delta L \over L} + {\Delta W \over W}##.

As in post #4 (the expressions for a product and a quotient are the same).

- - - - - -

There is still more: error handling is based on statistics. Convention is to silently assume a few things unless stated otherwise (or clear from the context, for experts):
1. Errors have a Gaussian distribution; the uncertainty reported is the standard deviation.
2. Errors in separate measurements are uncorrelated (independent).

The first assumption means that ##L + \Delta L## is less likely than ##L##: the shading density should be lower when going further away from ##L##. Same for ##W##. The second assumption means that probabilities (represented by shading density) can be multiplied. I'm not good at creating suitable graphics, so I have to steal a picture (from here) that shows the probability for the position of the top-right point of a rectangle.

The consequence is that independent errors must be added in quadrature: for example ##\sigma_{a+b}^2 = \sigma_a^2 + \sigma_b^2## (I did this with the 40% and 10% in post #3).

If you are not bored to death yet, do some googling and pick up what you like and can understand.

**ChiralSuperfields**

**BvU said:** I wasn't aware of the other thread on the same subject. …

Wow, that is very informative. I'll look into this more when I have some time.

Many thanks!
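The rectangle example and the two combination rules discussed above can be checked numerically. This is a sketch using BvU's values ##W = 3.0 \pm 0.5## and ##L = 5.0 \pm 1.0##, comparing the linear first-order sum with the quadrature rule for independent Gaussian errors:

```python
import math

# BvU's example: W = 3.0 +/- 0.5, L = 5.0 +/- 1.0
L, dL = 5.0, 1.0
W, dW = 3.0, 0.5

A = L * W                                 # best estimate: 15.0
A_max = (L + dL) * (W + dW)               # 21.0
A_min = (L - dL) * (W - dW)               # 10.0

# Linear first-order estimate (average deviation): dA/A = dL/L + dW/W
dA_linear = A * (dL / L + dW / W)         # 5.5, the average of 6 and 5

# Quadrature for independent errors: (dA/A)^2 = (dL/L)^2 + (dW/W)^2
dA_quad = A * math.hypot(dL / L, dW / W)  # about 3.9, smaller than 5.5

print(f"A = {A} (max {A_max}, min {A_min})")
print(f"linear: +/- {dA_linear:.1f}, quadrature: +/- {dA_quad:.1f}")
```

The quadrature result is smaller than the linear sum because it is unlikely that both measurements err in the same direction at once; the linear sum is the conservative worst-case bound, while ##A_\text{max} - A_\text{min} = 11## overstates the uncertainty by roughly a factor of two, as argued in the thread.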
