How Is Fractional Uncertainty Defined When the Measured Value Is Zero?

In summary: The fractional uncertainty is defined as uncertainty/measured value, so it is undefined when the measured value is 0. Some suggest using the reciprocal quotient (measured value/uncertainty) instead, which gives 0 for a measured value of 0, but that definition is not meaningful in all cases either.
  • #1
RaduAndrei
<Moderator note: Thread moved from General Physics hence no formatting template shown>

The fractional uncertainty is defined as:

uncertainty/measured value.

So for 2 cm +/- 1 cm we have 50%. For 9 cm +/- 1 cm we have 11.1%.

My question is what if the measured value is 0 cm? How is the fractional uncertainty defined in this case?
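The definition above can be sketched in a few lines of Python (the helper name is illustrative, not from the thread):

```python
def fractional_uncertainty(value, uncertainty):
    """Fractional uncertainty = uncertainty / measured value."""
    if value == 0:
        # The quotient is undefined for a zero measurement,
        # which is exactly the question raised here.
        raise ValueError("undefined for a measured value of 0")
    return abs(uncertainty / value)

print(fractional_uncertainty(2, 1))  # 0.5, i.e. 50%
print(fractional_uncertainty(9, 1))  # 0.111..., i.e. 11.1%
```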
 
  • #2
I moved this into the homework section, as it belongs there.

With this definition, the fractional uncertainty isn't defined when the measured value is zero. You could call the result infinitely large, but I would rather consider the reciprocal quotient, which makes more sense in my opinion.
 
  • #3
The reciprocal quotient? English is not my first language. Could you explain what that means?

Does it mean to consider instead measured value/uncertainty?
 
  • #4
RaduAndrei said:
The reciprocal quotient? English is not my first language. Could you explain what that means?

Does it mean to consider instead measured value/uncertainty?
Yes, that is what I meant. In this case a zero measurement would give a zero quotient, which can be interpreted as "no result", as it doesn't allow any statement about the accuracy. Furthermore, this quotient becomes undefined (or infinitely large) if the uncertainty is zero, which makes sense.

However, you should of course use whatever your book or teacher says. It's simply my opinion that (measured value / uncertainty range) is easier to interpret: zero if the measurement is zero, infinitely large if the uncertainty is zero. The original quotient is simply undefined for zero measurements.
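This reciprocal quotient can be sketched the same way (the function name is assumed for illustration):

```python
def precision_quotient(value, uncertainty):
    """Reciprocal quotient: measured value / uncertainty."""
    if uncertainty == 0:
        # Zero uncertainty reads as infinite precision.
        return float("inf")
    return abs(value / uncertainty)

print(precision_quotient(0, 100))  # 0.0 -> "no result"
print(precision_quotient(9, 1))    # 9.0
print(precision_quotient(5, 0))    # inf
```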
 
  • #5
Ok, but still it does not really make sense for a measured value of 0.

Consider the definition uncertainty/measured value. The fractional uncertainty is 50% for 2+/-1 and 11.1% for 9+/-1. If I change the uncertainty, e.g. to 2, then it is 100% for 2+/-2 and 22.2% for 9+/-2. So it is kind of intuitive: as the measured value gets much bigger than the uncertainty, the fractional uncertainty decreases. For a measured value of 0, the definition breaks down, whatever the uncertainty. But I can say intuitively that for 0+/-1 the 'fractional uncertainty' (whatever its definition is) should be smaller than for 0+/-100. Right?

Consider the definition measured value/uncertainty. Whatever the uncertainty is, if the measured value is 0, then the fractional uncertainty is 0 always. But clearly there is a difference between having 0+/-1 or even 0+/-0.001 and having 0+/-100.

With the first definition the fractional uncertainty is not defined; with the second it is always 0. Intuitively there should be a difference between 0+/-1 and 0+/-100, and the definition should capture this difference.

For me there is no difference between getting a zero result or an infinite result for a measured value of 0; the two definitions would work equally well. What I am interested in is a definition that says "as the gap between the uncertainty and the measured value of 0 grows larger and larger, the fractional uncertainty becomes larger and larger". Maybe the definition could be: uncertainty/(measured value + 1).
 
  • #6
RaduAndrei said:
I can say intuitively that for 0+/-1 the 'fractional uncertainty' (whatever its definition is) should be smaller than for 0+/-100.
You can say what you like intuitively, but that does not make it meaningful. It is unbounded in both cases.
RaduAndrei said:
Consider the definition measured value/uncertainty.
That would be a definition of fractional precision, I assume. So 0 uncertainty is infinite precision, and vice versa.
 
  • #7
haruspex said:
You can say what you like intuitively, but that does not make it meaningful. It is unbounded in both cases.

But it is meaningful. Say you want to measure the speed of some object. It is one thing to say that you measured 0 +/- 1 m/s and another to say you measured 0 +/- 100 m/s. The uncertainty is different, and this must be reflected in the fractional uncertainty too. Right?
 
  • #8
RaduAndrei said:
One thing is to say that you measured 0+/-1 m/s and another thing is to say you measured 0+/- 100 m/s
Those are meaningful statements, but there is no reason that "fractional uncertainty" should be meaningful for both.
My supermarket used to provide plastic bags for no charge, now they cost 5c. What is the percentage increase?
 
  • #9
haruspex said:
Those are meaningful statements, but there is no reason that "fractional uncertainty" should be meaningful for both.

Why not?
For the measured value 9, if I vary the uncertainty from 1 to 9, then the fractional uncertainty varies from 11.1% to 100%. Maybe I want to quantify this change for a measured value of zero as well.
Those are meaningful statements, but I want to express them in numbers, not words.

haruspex said:
My supermarket used to provide plastic bags for no charge, now they cost 5c. What is the percentage increase?

I see your point.
 

What is fractional uncertainty?

Fractional uncertainty is a measure of the uncertainty or error associated with a measurement or calculation. It is expressed as a fraction or percentage of the measured value.

How is fractional uncertainty calculated?

Fractional uncertainty is calculated by dividing the uncertainty or error in a measurement by the measured value, and then multiplying by 100 to express it as a percentage.
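For example, the calculation for a reading of 9 cm with an uncertainty of 1 cm (the values used in the thread above) looks like:

```python
measured = 9.0      # cm
uncertainty = 1.0   # cm

# uncertainty / measured value, times 100 for a percentage
percent = uncertainty / measured * 100
print(f"{percent:.1f}%")  # 11.1%
```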

Why is fractional uncertainty important in scientific measurements?

Fractional uncertainty is important because all measurements have some degree of error or uncertainty associated with them. By expressing this uncertainty as a fraction or percentage, it allows scientists to quantify and compare the reliability of different measurements.

What factors can contribute to fractional uncertainty?

Factors that can contribute to fractional uncertainty include limitations in measurement equipment, human error in taking measurements, and variations in the environment or conditions during the measurement process.

How can scientists reduce fractional uncertainty in their measurements?

Scientists can reduce fractional uncertainty by using more precise measurement equipment, taking multiple measurements and averaging the results, and controlling for variables that could affect the measurement. It is also important to accurately document and report the uncertainty associated with a measurement.
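The averaging idea can be sketched with the Python standard library (the readings below are made-up numbers): the standard error of the mean shrinks roughly as 1/sqrt(n), so the fractional uncertainty falls as more measurements are combined.

```python
import math
import statistics

# Hypothetical repeated measurements of the same length (cm)
readings = [9.1, 8.9, 9.0, 9.2, 8.8]

mean = statistics.mean(readings)
# Standard error of the mean: sample stdev / sqrt(number of readings)
sem = statistics.stdev(readings) / math.sqrt(len(readings))

print(f"{mean:.2f} +/- {sem:.2f} cm, fractional uncertainty {sem / mean:.1%}")
```

With five readings the fractional uncertainty is already well below the ~1% spread of a single reading; quadrupling the number of readings would halve it again.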
