Finding Absolute Uncertainty in Data

In summary: $$\begin{align*}A_{max} &= (L+\Delta L)(W+\Delta W) = LW + L \Delta W + W \Delta L + \Delta W \Delta L\\A_{min} &= (L-\Delta L)(W-\Delta W) = LW - L \Delta W - W \Delta L + \Delta W \Delta L\end{align*}$$ So ##A_{max} - A_{min} = 2L \Delta W + 2W \Delta L##, and because we assume that the errors are small we can ignore products like ##\Delta L \Delta W##. So the relative error in the area is ##{\Delta A \over A} = {2L\over LW} \Delta W + {2W\over LW} \Delta L = 2{\Delta W \over W} + 2{\Delta L \over L}##.
  • #1
ChiralSuperfields
Homework Statement
Please see below
Relevant Equations
##NA = \bar {NA} ± \Delta NA##
For the data in the table below, I am trying to find the overall absolute uncertainty of NA, where NA is the numerical aperture: ##\tan \theta_{NA} = \frac{R}{L}## and ##NA = \sin \theta_{NA}##.
| Case | R (cm) | L (cm) | R/L | theta_NA [rad] | NA | error in NA |
|---|---|---|---|---|---|---|
| R0, L0 | 0.5 | 0.5 | 1 | 0.785398163 | 0.707106781 | 0 |
| Rmax, Lmax | 0.7 | 0.55 | 1.272727273 | 0.904827089 | 0.786318339 | 0.079211558 |
| Rmax, Lmin | 0.7 | 0.45 | 1.555555556 | 0.999458847 | 0.841178475 | 0.134071694 |
| Rmin, Lmax | 0.3 | 0.55 | 0.545454545 | 0.499346722 | 0.478852131 | -0.228254651 |
| Rmin, Lmin | 0.3 | 0.45 | 0.666666667 | 0.588002604 | 0.554700196 | -0.152406585 |

Since ##NA = \bar {NA} ± \Delta NA## (where ##\bar {NA}## is the best estimate for NA and ##\Delta NA## is the uncertainty in NA), do we take the absolute values of the entries in the "error in NA" column? This is because if we solve for the absolute uncertainty,

##NA - \bar {NA} = ±\Delta NA##
Then take the absolute value of both sides,

## |NA - \bar {NA} | = \Delta NA##
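
Here is a minimal Python sketch (assuming the table was built from ##R = 0.5 \pm 0.2## cm and ##L = 0.5 \pm 0.05## cm, which is what the rows suggest) that reproduces the NA and "error in NA" columns:

```python
import math
from itertools import product

# Assumed readings behind the table: R = 0.5 +/- 0.2 cm, L = 0.5 +/- 0.05 cm.
R0, dR = 0.5, 0.2
L0, dL = 0.5, 0.05

def na(r, length):
    """NA = sin(theta_NA) with tan(theta_NA) = R/L."""
    return math.sin(math.atan(r / length))

na_best = na(R0, L0)                       # best estimate, ~0.707
print(f"best estimate NA = {na_best:.6f}")

# Evaluate NA at every (R +/- dR, L +/- dL) corner, in the same order as the table rows.
for r, length in product((R0 + dR, R0 - dR), (L0 + dL, L0 - dL)):
    dev = na(r, length) - na_best          # signed "error in NA" column
    print(f"R={r:.2f} L={length:.2f}  NA={na(r, length):.6f}  dev={dev:+.6f}")
```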

Any help much appreciated!

Many thanks!
 
  • #2
kuruman
We've been there before. A measure of the uncertainty is $$\Delta (NA) =(NA)_{max}-(NA)_{min}.$$See
https://www.physicsforums.com/threa...for-area-of-a-rectangle.1051087/#post-6869154
It doesn't matter if you are measuring an area or a numerical aperture. The idea of estimating the uncertainty on the basis of the maximum and minimum possible outcomes is the same.
Last edited:
  • #3
Hi,

If you have a 40% uncertainty in R, that dominates everything else. There is no point in doing the calculation to 9-digit accuracy!
If you absolutely want, the 10% uncertainty from L can be added in quadrature, giving 41% (from ##\sqrt{40^2+10^2}##) in R/L.

The subsequent ##\sin(\arctan {R\over L} ) ## dampens this by a factor of about 0.3: the table looks like

| R/L | theta_NA [rad] | NA | deviation in NA |
|---|---|---|---|
| 1.00 | 0.785 | 0.707 | 0.000 |
| 0.59 | 0.533 | 0.508 | -0.199 |
| 1.41 | 0.954 | 0.816 | 0.109 |

So your result is ##NA = 0.71\pm 0.15## (averaging the modulus of the deviation -- with such large uncertainties that's justified).
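
A minimal Python sketch of that estimate (the 40% and 10% figures are the assumptions above; the rest is just the quadrature sum and the ##\sin(\arctan)## mapping):

```python
import math

# Assumed relative uncertainties: 40% in R, 10% in L.
rel_R, rel_L = 0.40, 0.10
rel_RL = math.sqrt(rel_R**2 + rel_L**2)       # quadrature sum, ~0.41

def na(ratio):
    """NA = sin(arctan(R/L))."""
    return math.sin(math.atan(ratio))

na_best = na(1.00)                            # ~0.707
dev_lo = na(1.00 - rel_RL) - na_best          # R/L = 0.59 -> ~ -0.199
dev_hi = na(1.00 + rel_RL) - na_best          # R/L = 1.41 -> ~ +0.109

delta_na = 0.5 * (abs(dev_lo) + abs(dev_hi))  # average modulus of the deviations
print(f"NA = {na_best:.2f} +/- {delta_na:.2f}")   # ~ 0.71 +/- 0.15
```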

##\ ##
 
Last edited:
  • #4
kuruman said:
We've been there before. A measure of the uncertainty is $$\Delta (NA) =[(NA)_{max}-(\bar{NA})]-[(\bar{NA})-(NA)_{min}]=(NA)_{max}+(NA)_{min}.$$See
https://www.physicsforums.com/threa...for-area-of-a-rectangle.1051087/#post-6869154
It doesn't matter if you are measuring an area or a numerical aperture. The idea of estimating the uncertainty on the basis of the maximum and minimum possible outcomes is the same.
Last part fell off ?

Anyway, I would propose $$\Delta (NA) ={1\over 2} \Bigl( (NA_{max}-NA) + (NA-NA_{min})\Bigr) ={1\over 2} \Bigl(NA_{max}-NA_{min}\Bigr) .$$
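With the values from the table in post #1 (##NA_{max}\approx 0.841## from the ##R_{max},L_{min}## row and ##NA_{min}\approx 0.479## from the ##R_{min},L_{max}## row), that gives roughly
$$\Delta (NA) \approx \tfrac{1}{2}\,(0.841 - 0.479) \approx 0.18 .$$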
 
  • #5
kuruman said:
We've been there before. A measure of the uncertainty is $$\Delta (NA) =(NA)_{max}-(NA)_{min}.$$See
https://www.physicsforums.com/threa...for-area-of-a-rectangle.1051087/#post-6869154
It doesn't matter if you are measuring an area or a numerical aperture. The idea of estimating the uncertainty on the basis of the maximum and minimum possible outcomes is the same.
Thank you for your reply @kuruman!

True, your expression is correct. However, could you please explain why my original expression with the absolute value bars is wrong?

Many thanks!
 
  • #6
BvU said:
Last part fell off ?

Anyway, I would propose $$\Delta (NA) ={1\over 2} \Bigl( (NA_{max}-NA) + (NA-NA_{min})\Bigr) ={1\over 2} \Bigl(NA_{max}-NA_{min}\Bigr) .$$
Thank you for your replies @BvU!

Do you know why your expression is half of the expression given by @kuruman, ##\Delta NA =NA_{max}-NA_{min}##?

Many thanks!
 
  • #7
BvU said:
Last part fell off ?
I edited in a hurry and it did.
 
  • #8
ChiralSuperfields said:
Thank you for your replies @BvU!

Do you know why your expression is half of the expression given by @kuruman, ##\Delta NA =NA_{max}-NA_{min}##?

Many thanks!

I wasn't aware of the other thread on the same subject. Looked at it now and totally disagree with the fragment you posted there in #1. Using ##A_\text {max} - A_\text {min} ## as estimate for the uncertainty in the area is a gross exaggeration. There are several aspects.
Let me try to explain a little bit using drawings. Suppose we have measured ##W = 3.0 \pm 0.5## and ##L = 5.0 \pm 1.0## for the blue rectangle. We have
##A = 15##, ##A_\text {max}=21## and ##A_\text {min} = 10##.
So ##A_\text {max}-A =6## (the green shaded area) and ##A - A_\text {min} = 5## (the red shaded area).
And ##A_\text {max} - A_\text {min} =11##. Simple math so far.
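
A few lines of Python (a sketch of the same bookkeeping, using the numbers just quoted) make this explicit:

```python
# Rectangle example: W = 3.0 +/- 0.5, L = 5.0 +/- 1.0.
L0, dL = 5.0, 1.0
W0, dW = 3.0, 0.5

A     = L0 * W0                    # 15
A_max = (L0 + dL) * (W0 + dW)      # 21
A_min = (L0 - dL) * (W0 - dW)      # 10

print(A_max - A)      # 6.0  (green shaded area)
print(A - A_min)      # 5.0  (red shaded area)
print(A_max - A_min)  # 11.0
```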

According to the fragment you copied, the area would be quoted as ##A = 15\pm 11##, so the biggest rectangle that would still be within error margin would be like the dashed green one with area 26 in the second picture and the smallest would be like the dashed red one with area 4.

[Attachment: the blue measured rectangle with the green and red shaded margins, and the larger and smaller dashed rectangles]

You can check that the sum of the green and red shaded areas is the difference between the area of the blue rectangle and the green dashed rectangle.
And also the difference between the area of the blue rectangle and the red dashed rectangle.

But the largest deviation from ##A=15## is only 6 and the average deviation is 5.5. To me it is clear that the dashed rectangles are not justifiable from the measurements, so quoting ##A = 15 \pm 6## is the best one can do (not ##15\pm 5.5## because that suggests an accurate error estimate that is nonexistent).

If you agree and want to dive deeper, read on. If not, stop here :smile:.

- - - - - - - - - - - -

There is more: error handling is an exact science up to a point. Then judgment and common sense come in.

Errors are usually small and are treated to first order. So in a formula like ##A = L \cdot W## we have $$\begin{align*}A_{max} &= (L+\Delta L)(W+\Delta W) = LW + \Delta L\, W + L\, \Delta W +\Delta L\,\Delta W\\A_{min} &= (L-\Delta L)(W-\Delta W) = LW - \Delta L\, W - L\, \Delta W +\Delta L\,\Delta W \end{align*}$$ We ignore the last term in each because it is second order. These terms are the little rectangles shown in the first picture:

[Attachment: the rectangle with the small ##\Delta L\,\Delta W## corner rectangles marked]

Well, not so little in this case with such large errors, but the concept is clear, I hope.
That way we slightly underestimate ##A_{max}-A## and overestimate ##A-A_{min}##, but for
the average deviation we have ##{1\over 2}(A_{max}-A_{min}) = \Delta L\, W + L\, \Delta W##.

Taking the average deviation as ##\Delta A## and rearranging: ##{\Delta A\over A} = {\Delta L\over L} + {\Delta W\over W} ##

As in post #4 (the expressions for a product and a quotient are the same).
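
Plugging in the rectangle's numbers as a check:
$$\frac{\Delta A}{A} = \frac{1.0}{5.0} + \frac{0.5}{3.0} \approx 0.37\,, \qquad \Delta A \approx 0.37 \times 15 = 5.5\,,$$
which is the average deviation found above.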

- - - - - -

There is still more:
Error handling is based on statistics. Convention is to silently assume a few things unless stated otherwise (or clear from the context for experts :wink:) :
  1. Errors have a Gaussian distribution; the uncertainty reported is the standard deviation
  2. Errors in separate measurements are uncorrelated (independent)
The first assumption means that ##L+\Delta L## is less likely than ##L##: the shading density should be lower when going further away from ##L##. The same holds for ##W##.
The second assumption means that probabilities (represented by shading density) can be multiplied. I'm not good at creating suitable graphics, so I have to steal a picture (from here) that shows the probability for the position of the top right point of a rectangle

[Attachment: probability density for the position of the top-right corner of the rectangle]


The consequence is that independent errors must be added in quadrature: for example ##\sigma_{a+b}^2 = \sigma_a^2 + \sigma_b^2## (I did this with the 40% and 10% in post #3).
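
A minimal Monte Carlo sketch (an illustration, not from the posts above) using the rectangle numbers from earlier, and treating the ##\pm\Delta L## and ##\pm\Delta W## as one standard deviation each, shows why the quadrature sum is smaller than the linear sum used in the max/min estimate:

```python
import math
import random
import statistics

# Independent Gaussian errors in L and W, propagated into A = L * W.
random.seed(0)
L0, sigma_L = 5.0, 1.0   # assumed: the +/- ranges taken as standard deviations
W0, sigma_W = 3.0, 0.5

areas = [random.gauss(L0, sigma_L) * random.gauss(W0, sigma_W)
         for _ in range(100_000)]
rel_sigma_A = statistics.stdev(areas) / (L0 * W0)

quadrature = math.sqrt((sigma_L / L0) ** 2 + (sigma_W / W0) ** 2)  # ~0.26
linear_sum = sigma_L / L0 + sigma_W / W0                           # ~0.37

print(f"Monte Carlo : {rel_sigma_A:.3f}")
print(f"quadrature  : {quadrature:.3f}")
print(f"linear sum  : {linear_sum:.3f}")
```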

If you are not bored to death yet, do some googling and pick up what you like and can understand :wink:##\ ##
 
Last edited:
  • #9
Thank you for your reply @BvU!

Wow, that is very informative. I'll look into this more when I have some time.

Many thanks!
 

1. What is absolute uncertainty in data?

Absolute uncertainty in data is the measure of the range of possible values for a particular data point. It represents the degree of uncertainty or error associated with a measurement or calculation, and it is expressed in the same units as the measurement itself, in contrast to relative (fractional or percentage) uncertainty.

2. How is absolute uncertainty calculated?

Absolute uncertainty is calculated by taking the difference between the highest and lowest possible values for a data point and dividing it by two. This value is then rounded to an appropriate number of significant figures.
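
In symbols, with ##x_{max}## and ##x_{min}## the highest and lowest possible values of the data point:
$$\Delta x = \frac{x_{max} - x_{min}}{2}.$$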

3. Why is it important to find absolute uncertainty in data?

Finding absolute uncertainty in data is important because it allows us to understand the accuracy and reliability of our measurements and calculations. It also helps us to make informed decisions and draw meaningful conclusions from our data.

4. What factors can contribute to absolute uncertainty in data?

There are several factors that can contribute to absolute uncertainty in data, including limitations of measuring instruments, human error, and inherent variability in the data itself.

5. How can absolute uncertainty be reduced?

Absolute uncertainty can be reduced by using more precise measuring instruments, minimizing human error through careful techniques and repetition of measurements, and increasing the sample size to reduce variability in the data.
