Calculating error with known-to-be-inaccurate decimals

  • Thread starter: hardmath
  • Tags: Error
In summary, the thread discusses approximation in addition as it applies to decimal fractions. The book states that each given decimal may be greater or less than the true value by as much as .0000005 due to rounding. The thread starter questions this logic and suggests that the error in one direction can be at most .0000004. The replies explain that this is not the case: both directions of rounding must be considered, and the error in either direction can come arbitrarily close to .0000005, so .0000005 is the bound to use in order to avoid underestimating the error.
  • #1
hardmath
My book says:
Approximation in Addition and Subtraction. As an illustration of approximation in addition, let us add the decimal fractions .234673, .322135, .114342, .563217, each being known to be correct to six figures. The addition gives 1.234367, but it is not certain that this result is correct because, in accordance with the discussion of the preceding article, each of the given decimals may be either greater or less than the true value by as much as .0000005 (due to rejection of the seventh figure). Since there are four numbers the total error in the sum may be as much as 4 X .0000005, or .000002. That is, the true sum may be as much as 1.234369 or as little as 1.234365. In either case the result correct to six figures would be written 1.23437.

The bit I don't get is:

each of the given decimals may be either greater or less than the true value by as much as .0000005 (due to rejection of the seventh figure).

Why? Apply this logic to one of the numbers at random for argument's sake, say the first number, .234673. It couldn't be .2346735-.2346739, or else it would have been rounded to .234674. Its true value (if it's greater) has to lie somewhere between .2346731 and .2346734, or else it would have been rounded up to .234674.

So it can't be .0000005 greater, as that would get it rounded up to .234674, and you can't count .2346730 when calculating error, since .2346730 is the same as .234673, which would be exactly correct and not a possible error you have to factor in. Knowing the figures are accurate to 6 significant figures, it would appear to me that logically the possible error range is

.2346731-.2346734: at most .0000004 greater

.2346725-.2346729: at most .0000005 less

And this would apply to all 4 figures, so the sum could be 4 X .0000005 = .000002 less, or 4 X .0000004 = .0000016 greater.

Now it's in the book, so obviously there's a flaw in my reasoning, and since this is very basic math it's probably a very basic error. So that's my question; thanks for reading, and I apologize for the long-windedness.
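For reference, the book's arithmetic can be checked numerically. Here is a minimal sketch in Python's decimal module, using the four values and the per-term bound of .0000005 from the excerpt above:

Code:
from decimal import Decimal

# the four decimals from the book, each assumed correct to six figures
values = [Decimal(".234673"), Decimal(".322135"),
          Decimal(".114342"), Decimal(".563217")]

total = sum(values)                        # Decimal('1.234367')
bound = Decimal(".0000005") * len(values)  # Decimal('0.0000020'), the book's worst case

print(total)          # 1.234367
print(total - bound)  # 1.2343650
print(total + bound)  # 1.2343690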
 
  • #2
In fact round(x) = .234673 implies
.2346725 <= x <= .2346735.
Different rules are used to round depending on the circumstances. The rule you cite, that 5 should be rounded up, is questionable since a 5 is exactly between the two choices; for some statistical purposes it is better to round up half the time and down half the time. In any case it does not affect the error bound, because whether our maximal error is .0000005 or .00000049999999999, we must allow for .0000005.
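As a small illustration of the different rounding rules mentioned above, here is a sketch using Python's decimal module (ROUND_HALF_UP is the "always round 5 up" rule; ROUND_HALF_EVEN is one of the rules that rounds up half the time and down half the time):

Code:
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

x = Decimal(".2346725")  # exactly halfway between .234672 and .234673

# "always round 5 up"
print(x.quantize(Decimal("1e-6"), rounding=ROUND_HALF_UP))    # 0.234673
# "round half to even" (ties go to the even digit)
print(x.quantize(Decimal("1e-6"), rounding=ROUND_HALF_EVEN))  # 0.234672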
 
  • #3
lurflurf said:
In fact round(x) = .234673 implies
.2346725 <= x <= .2346735.
Different rules are used to round depending on the circumstances. The rule you cite, that 5 should be rounded up, is questionable since a 5 is exactly between the two choices; for some statistical purposes it is better to round up half the time and down half the time. In any case it does not affect the error bound, because whether our maximal error is .0000005 or .00000049999999999, we must allow for .0000005.

So if I understand the first part right, you are saying that .234673 correct to 6 significant figures = somewhere between .2346725 and .2346735? (I'm not familiar with lots of these math symbols and am just seeing if I'm reading your meaning right). Is that the actual definition and a number that is known to 6 significant figures like .234673 could not be say .2346739?

Am I right or wrong in saying that when you use the method of always rounding 5 up like the author of the book does, then the error, as in the excerpt above, would only be .0000004 if greater? As he specifically refers to numbers being discarded and rounded, presumably using his method?

I don't understand what you mean when you say:
In any case it does not affect the error bound, because whether our maximal error is .0000005 or .00000049999999999, we must allow for .0000005.

Does this statement only apply when you round up half the time and down the other half, like you said in the first half of the sentence, or are you saying something about calculating error in general?

Thanks for helping! These questions probably seem finicky but I want to actually understand what I'm doing.
 
  • #4
A rounded number has a certain error range.
Usually, if .234673 is the rounded value of x, then
.2346725 <= x <= .2346735
that is, x = .234673 + or - .0000005.
Different rounding rules are in use; always rounding 5's up is one rule, but it sometimes causes problems. In any case it does not affect the error bound, because whether our maximal error is .0000005 or .00000049999999999, we must allow for .0000005.

This applies to any rounding rule. Your error is to think the error is bounded on one side by .0000004. In fact, under "round 5 up" we could have
round(x) - actual(x):
.234673 - .2346725 = .0000005 (exactly, since .2346725 rounds up)
or
actual(x) - round(x):
.2346734999999999999 - .234673 = .0000004999999999999

So we can effectively be off by as much as .0000005 in either direction, even though we can only be off by exactly that much in one direction.

In other words, one direction cannot be .0000005, but it can be much closer to .0000005 than to .0000004, so we use .0000005 to be safe.
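A quick numerical check of this, assuming the "round 5 up" rule (a sketch in Python; round6 is just an illustrative helper):

Code:
from decimal import Decimal, ROUND_HALF_UP

def round6(x):
    """Round to six decimal places, always rounding a trailing 5 up."""
    return x.quantize(Decimal("1e-6"), rounding=ROUND_HALF_UP)

# rounded value above the true value: the error can be exactly .0000005
x = Decimal(".2346725")
print(round6(x) - x)  # 5E-7, i.e. exactly .0000005

# true value above the rounded value: the error only approaches .0000005
y = Decimal(".2346734999999999999")
print(y - round6(y))  # 4.999999999999E-7, just under .0000005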
 
  • #5
lurflurf said:
A rounded number has a certain error range.
Usually, if .234673 is the rounded value of x, then
.2346725 <= x <= .2346735
that is, x = .234673 + or - .0000005.
Different rounding rules are in use; always rounding 5's up is one rule, but it sometimes causes problems. In any case it does not affect the error bound, because whether our maximal error is .0000005 or .00000049999999999, we must allow for .0000005.

This applies to any rounding rule. Your error is to think the error is bounded on one side by .0000004. In fact, under "round 5 up" we could have
round(x) - actual(x):
.234673 - .2346725 = .0000005 (exactly, since .2346725 rounds up)
or
actual(x) - round(x):
.2346734999999999999 - .234673 = .0000004999999999999

So we can effectively be off by as much as .0000005 in either direction, even though we can only be off by exactly that much in one direction.

In other words, one direction cannot be .0000005, but it can be much closer to .0000005 than to .0000004, so we use .0000005 to be safe.

I understand now. Thanks very much, it's been bothering me for a while.
 
  • #6
Before answering your question, maybe a slightly more practical example will be useful. Suppose that I want to know how tall you are. By visual inspection - as it's often formally called - I can easily determine that your height is about 2 m. But of course, it can be 1.90, or 1.84. In fact, just by looking at you it would be crazy to pretend that I know your height to even within 10 cm. Now I could specify the result of my "measurement" as 2.000 m. But that would imply that I know that it is 2 m to great accuracy, while really all I can say is that it is "somewhere between 1.5 m and 2.5 m - well, 2.5 is a bit high but it could be even 2.20 - in any case it will round to 2 m." This is what I express by saying "2" instead of "2.000".

Now let me get my measuring tape which has a cm scale with small ticks between the integer values, and measure it more accurately to be 1.84 m. Now of course, I could again say "You are 2 m tall", but that would give less information than I actually have. On the other hand, I could interpolate between the marks and say "You are 1.843 m tall". But of course my interpolation is a bit uncertain - my eye is not good enough to distinguish between 1.8424 and 1.8437. So stating the number as 1.843 would again fool you into thinking I measured it more accurately than I did. Basically, by reading off the number on a 0.5 cm scale, all I know is that it is between 1.835 and 1.845. All these values (possibly with the exception of 1.845 exactly) round to 1.84. So I should say 1.84; in contrast 1.840 would imply that I actually measured it up to mm.

Now when you add or subtract values, you have to take this accuracy into account. Suppose I also measure my own height in the first way - I will also find 2 m! Subtracting the two gives 2 m - 2 m = 0 m. This does not mean, of course, that we have the same height. We do have the same height within the accuracy of our measurement - i.e. we are equally tall up to a difference of +/- 1 m. To sketch a "worst-case" scenario: a more accurate measurement may reveal us to be 2.49 m and 1.51 m respectively, giving a difference of 0.98 m. In contrast, if I had measured us both to be 1.84 m (i.e. to the nearest centimetre), the biggest difference we could have would be between 1.835 and (slightly under) 1.845, so about 0.01 m.
Now of course, when I report this difference, again I have to take this into account. In the first case, where I only have 1 significant digit in each measurement, I cannot report the difference between the measurements more accurately than that, so I should not pretend that I can by writing 0.0 m; I simply report 0 m.
Even if I knew from my ID that my height is 1.84 m, but I only knew yours to 1 significant digit as 2 m, I could not say that the difference is 0.16 m. After all, this would imply that the actual difference is known to within 0.005 m. This would be true if a more accurate measurement turned out to give 1.84 for you as well, but you could be 2.4 m, for all I know - the "2 m" does not give me more information. Therefore, even if I knew my height to a fraction of a centimeter, knowing yours to one significant digit still forces me to give a less accurate result. A little thought shows that the "worst" cases here are that you turn out to be 2.49 m or 1.51 m, giving actual differences of about 0.65 m and 0.33 m (the other way) against the calculated 0.16 m. So in any difference I calculate, there is a (big!) uncertainty even in the first decimal, meaning I cannot give the difference more accurately than meters (again giving 2 m - 1.84 m = 0 m, with the right significance).
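To put rough numbers on this (a small sketch in Python; difference_range is just an illustrative helper, and the half-widths assume each reported value is within half a unit in its last stated digit):

Code:
def difference_range(a, half_a, b, half_b):
    """Smallest and largest possible true value of a - b."""
    spread = half_a + half_b
    return (a - b) - spread, (a - b) + spread

# both of us reported as "2 m" (one significant digit, so +/- 0.5 m each)
print(difference_range(2.0, 0.5, 2.0, 0.5))    # (-1.0, 1.0)

# your height known only as "2 m" (+/- 0.5 m), mine exactly 1.84 m
print(difference_range(2.0, 0.5, 1.84, 0.0))   # roughly (-0.34, 0.66)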

hardmath said:
So if I understand the first part right, you are saying that .234673 correct to 6 significant figures = somewhere between .2346725 and .2346735? (I'm not familiar with lots of these math symbols and am just seeing if I'm reading your meaning right).
Yes.

hardmath said:
Is that the actual definition and a number that is known to 6 significant figures like .234673 could not be say .2346739?
Yes, otherwise you should either round it correctly - if you know that it is that much bigger than .234673 - or you should give it in fewer significant digits.

hardmath said:
Am I right or wrong in saying that when you use the method of always rounding 5 up like the author of the book does, then the error, as in the excerpt above, would only be .0000004 if greater? As he specifically refers to numbers being discarded and rounded, presumably using his method?
You're not entirely right, and I think you are worrying too much about the edge case. The "rounding 5 up" rule - whichever one you use - is only relevant if the number ends in ...5. If the actual value were ...50001 you would round it up anyway, and if it were ...499990 you would round it down. So either the error would be .00000049999... (repeating) or .0000005 - and these numbers are the same (by the famous 0.9999... = 1).
 
  • #7
Thank you both! My mistake was in thinking of .0000004 as the number that comes just before .0000005, when the error could actually be, say, .000000499, which is as close to .0000005 as makes no practical difference for the sake of accuracy, or even .0000004999... repeating, which is the same number as .0000005.
 

What is the purpose of calculating error with decimals known to be inaccurate?

The purpose of calculating error with decimals known to be inaccurate is to determine the level of uncertainty in a measurement or calculation. This helps scientists assess the reliability and validity of their data and results.

What is the formula for calculating error with decimals known to be inaccurate?

One common formula is the percentage error: (|measured value - actual value| / actual value) x 100%. This gives the error in the measurement or calculation as a percentage of the true value.
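As a quick illustration of that formula, a minimal Python sketch (percent_error is just an illustrative helper; the numbers are the sum from the thread above and one of its possible true values):

Code:
def percent_error(measured, actual):
    """Percentage error: |measured - actual| / actual * 100."""
    return abs(measured - actual) / actual * 100

# a computed sum of 1.234367 against a possible true sum of 1.234369
print(percent_error(1.234367, 1.234369))  # roughly 0.00016 (percent)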

What is considered a significant error when using decimals known to be inaccurate?

A significant error is typically anything greater than about 5%. An error of this size means the measurement or calculation carries a large amount of uncertainty and may need to be repeated or adjusted.

Can calculating error with decimals known to be inaccurate be used for any type of measurement or calculation?

Yes, as long as there is a known actual (or reference) value to compare against. This includes physical measurements, mathematical calculations, and scientific experiments.

Why is it important to calculate error with decimals known to be inaccurate in scientific research?

Calculating error is important in scientific research because it allows scientists to assess the reliability and validity of their data and results. It also helps to identify potential sources of error or uncertainty in their methods and procedures, which leads to more accurate and trustworthy conclusions.
