Rounding errors in computer arithmetic

In summary, the conversation discusses round-off errors in computer calculations, prompted by a homework problem about base-10 computer arithmetic. The participants point out that the issue is not purely theoretical and can cause real-world problems, and they discuss whether modern computers do anything to mitigate these errors beyond the rounding rules specified by IEEE 754.
  • #1
xDocMath
Homework Statement
Give an example in base-10 computer arithmetic when
a. (a + b) + c != a + (b + c)
b. (a * b) * c != a * (b * c)
Relevant Equations
machine epsilon for double = 2^-53
For the first part I have used a = 2, b = eps/2, c = eps/2, which I believe works and which I have tested in MATLAB; however, I haven't had any luck reproducing the second part in MATLAB with any numbers. Any hints? Thanks
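One way to experiment with "base-10 computer arithmetic" directly is Python's decimal module, which performs base-10 arithmetic with a configurable number of significant digits. A minimal sketch for part (a); the three-digit precision and the values 1.00 and 0.004 are illustrative choices, not part of the problem:

from decimal import Decimal, getcontext

getcontext().prec = 3        # emulate a base-10 machine keeping 3 significant digits

a = Decimal("1.00")
b = c = Decimal("0.004")

lhs = (a + b) + c            # 1.004 rounds to 1.00, and adding 0.004 again still gives 1.00
rhs = a + (b + c)            # b + c = 0.008 is exact, and 1.00 + 0.008 rounds to 1.01

print(lhs, rhs, lhs == rhs)  # 1.00 1.01 False

The same mechanism appears in binary doubles, as the replies below explain.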
 
  • #2
Well, first of all, I know of no computer (save my old TI-59) that does its arithmetic in base 10. I mention this because I wonder whether that wording is a clue to what you should be attempting; what you're doing (correct as far as I can see) has nothing to do with base 10.

Your first example seems to add a number that's at the limit of IEEE precision. The added bit gets dumped off the end so that a+b ends up equalling a, but double it (b+c) and the bit just makes it onto the tail of the result.

I suspect that you need to do a similar thing for the multiply example. Have b and c be small numbers on either side of 1. Given a first number that's right at the edge of needing one more bit, a*b will put it over but a*(b*c) will not, so more precision is retained. Just a guess, and one that doesn't involve base 10 at all, but the same concept can be used with base-10 numbers as well.
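That addition mechanism is easy to see with the half-ulp of 1 in double precision; a Python sketch (Python floats are the same IEEE 754 doubles as MATLAB's default type, and the choice a = 1 is an illustrative one):

# 2**-53 is half the spacing between 1.0 and the next double, so adding it once is lost,
# but adding the pre-summed 2**-52 lands exactly on the next representable value.
a = 1.0
b = c = 2.0 ** -53

lhs = (a + b) + c   # 1 + 2**-53 rounds back to 1.0 (round half to even), and so does the second add
rhs = a + (b + c)   # b + c = 2**-52 is exact, and 1 + 2**-52 is exactly representable

print(lhs, rhs)     # 1.0 1.0000000000000002
print(lhs == rhs)   # False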
 
  • #3
Testing it on a computer can be misleading because the computer might have hidden bits in the processor for some calculations. So you might need to just think this through assuming a basic computer implementation.
For part (b), suppose you have ##a=b=2^{-27}## and ##c=2^{27}##.
 
  • #4
Thank you so much for the explanations. I actually went back and noticed that even the first part doesn't work properly with the numbers I mentioned in MATLAB, which makes me think this is more of a theoretical problem. In that case I think the numbers you have mentioned would satisfy the second part. Thanks
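For reference, a quick Python check (same IEEE 754 doubles as MATLAB; assuming MATLAB's eps = 2^-52, so eps/2 = 2^-53) agrees that the a = 2 triple comes back equal: 2 + 2^-53 is only a quarter of an ulp above 2, and 2 + 2^-52 is an exact tie that round-to-nearest-even resolves back to 2.

eps = 2.0 ** -52                   # spacing between 1.0 and the next double (MATLAB's eps)
a, b, c = 2.0, eps / 2, eps / 2

print((a + b) + c)                 # 2.0
print(a + (b + c))                 # 2.0
print((a + b) + c == a + (b + c))  # True -- so this triple does not demonstrate part (a)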
 
  • #5
xDocMath said:
which makes me think this is more of a theoretical problem
Real-world computer rounding/cutoff issues are NOT "theoretical" problems; they are practical problems and they cause real-world consequences.
 
  • #6
xDocMath said:
I actually went back and noticed that even the first part doesn't work properly with the numbers I mentioned in MATLAB, which makes me think this is more of a theoretical problem.
"Theoretical" is probably not the right description. They seem like theoretical problems only because computers mitigate them now. Not that long ago, computers did calculations in single precision and did nothing to protect against round-off errors. We had to be careful.
 
  • #7
FactChecker said:
Testing it on a computer can be misleading because the computer might have hidden bits in the processor for some calculations.
There are no "hidden bits" in IEEE 754, versions of which are implemented by most processors.

FactChecker said:
"Theoretical" is probably not the right description. They seem like theoretical problems only because computers mitigate them now.
How do computers mitigate round-off errors (other than by switching to a higher precision, which is not generally what happens)?
 
  • #8
xDocMath said:
Thank you so much for the explanations. I actually went back and noticed that even the first part doesn't work properly with the numbers I mentioned in MATLAB, which makes me think this is more of a theoretical problem. In that case I think the numbers you have mentioned would satisfy the second part. Thanks
Until you explain what is meant by "base-10 computer arithmetic" in the question, I don't think we can help you.
 
  • #9
pbuk said:
How do computers mitigate round-off errors (other than by switching to a higher precision, which is not generally what happens)?
They don't
 
  • Like
Likes pbuk
  • #10
phinds said:
They don't
Indeed.
 
  • #11
pbuk said:
There are no "hidden bits" in IEEE 754, versions of which are implemented by most processors.
I was thinking about the GRS bits (Guard, Round, and Sticky) that are beyond what can be stored. But I have to admit that I am not an expert in IEEE 754 implementations.
 
  • #12
TL;DR we don't need to worry about guard, round or sticky bits.

IEEE 754 specifies (5 different alternatives for) how a result is rounded, not how that rounding is achieved; guard digits are a method to achieve the rounding according to the specification. IEEE 754 also specifies that any bits beyond the defined precision (e.g. guard bits) must be discarded at each stage of any calculation. We can therefore predict precisely the outcome of any operation.

Of course an application may decide to apply further rounding before storing or displaying a result but this is not what is happening in the CPU.
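A small illustration of that predictability in Python (the familiar 0.1 + 0.2 case, assuming the default round-to-nearest-even mode):

# Each IEEE 754 operation returns the correctly rounded result, so the outcome is
# exactly reproducible on any conforming machine.
x = 0.1 + 0.2
print(x == 0.30000000000000004)  # True: the correctly rounded sum of the doubles nearest 0.1 and 0.2
print(x == 0.3)                  # False: 0.3 itself rounds to a slightly different double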
 
  • #13
  • Like
Likes xDocMath
  • #14
pbuk said:
TL;DR we don't need to worry about guard, round or sticky bits.

IEEE 754 specifies (5 different alternatives for) how a result is rounded, not how that rounding is achieved; guard digits are a method to achieve the rounding according to the specification. IEEE 754 also specifies that any bits beyond the defined precision (e.g. guard bits) must be discarded at each stage of any calculation. We can therefore predict precisely the outcome of any operation.

Of course an application may decide to apply further rounding before storing or displaying a result but this is not what is happening in the CPU.
I picture the rounding that IEEE 754 specifies as being implemented using the GRS bits. I think that makes it a lot harder to find examples of the problems the OP asks about that can be demonstrated on a modern computer. That said, there are still some examples; they are just not as easy to come up with.
 
  • #15
xDocMath said:
Thank you so much for the explanations. I actually went back and noticed that even the first part doesn't work properly with the numbers I mentioned in MATLAB, which makes me think this is more of a theoretical problem. In that case I think the numbers you have mentioned would satisfy the second part. Thanks
For part (a), add two large numbers that have enough digits so that you know some least significant digits will have to be truncated from the sum (but not from the individual numbers). Then subtract the second number.
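One concrete choice along these lines, as a Python sketch (the specific values 2^52 + 1 and 2^52 are illustrative, picked so that the sum needs one bit more than a double can hold):

a = 2.0 ** 52 + 1   # 53 significant bits: exactly representable
b = 2.0 ** 52       # exactly representable
c = -b              # "then subtract the second number"

lhs = (a + b) + c   # a + b = 2**53 + 1 rounds to 2**53 (tie to even); subtracting b leaves 2**52
rhs = a + (b + c)   # b + c = 0 exactly, so this is just a = 2**52 + 1

print(lhs == rhs)   # False: 4503599627370496.0 vs 4503599627370497.0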
 
  • #16
FactChecker said:
I picture the rounding that IEEE 754 specifies as being implemented using the GRS bits. I think that makes it a lot harder to find examples of the problems the OP asks about that can be demonstrated on a modern computer.
No, it is still really easy to find examples because it doesn't matter how rounding is implemented in the ALU; all we can see is the result. And the result of ## 1 + \frac \epsilon 2 ## will always be the same as ## 1 ##.

We can use this knowledge to easily find an example for Q1 (which the OP has already done); Q2 is a little harder, here we need to use something like $$ \left ( 1 + \sqrt{\frac \epsilon 2} \right ) ^ 2 = 1 + 2 \left ( \sqrt{\frac \epsilon 2} \right ) + \frac \epsilon 2 \equiv 1 + 2 \left ( \sqrt{\frac \epsilon 2} \right ) $$
to help.
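For anyone who wants a concrete triple for part (b) without going through the square-root route, here is a separate integer-based construction (a Python sketch; the values are illustrative and not taken from the thread). It relies on a*b needing 57 significant bits, so it must be rounded, while b*c is a small exact integer:

a = 2.0 ** 53 - 1   # the largest odd integer exactly representable in a double
b = 9.0
c = 13.0

lhs = (a * b) * c   # a*b = 9*2**53 - 9 rounds to 9*2**53 - 16; times 13, that rounds to 117*2**53 - 256
rhs = a * (b * c)   # b*c = 117 is exact; a*117 = 117*2**53 - 117 rounds to 117*2**53 - 128

print(lhs == rhs)   # False
print(rhs - lhs)    # 128.0 -- the two groupings differ by one ulp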
 
  • Like
Likes FactChecker

What are rounding errors in computer arithmetic?

Rounding errors in computer arithmetic occur when a computer performs calculations using finite precision numbers, resulting in a loss of accuracy. This can happen due to limitations in the number of digits a computer can store and represent.
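For instance, the decimal value 0.1 has no exact binary representation, so the double that gets stored is already slightly off; a short Python sketch:

from decimal import Decimal

print(Decimal(0.1))            # the exact stored value, which begins 0.1000000000000000055511...
print(0.1 + 0.1 + 0.1 == 0.3)  # False: the small representation errors do not cancel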

How do rounding errors affect calculations?

Rounding errors can accumulate and result in incorrect or imprecise calculations. This can be especially problematic in scientific and financial calculations where accuracy is crucial.
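A small sketch of such accumulation in Python, repeatedly adding 0.1 (which is not exactly representable in binary):

total = 0.0
for _ in range(10):
    total += 0.1     # each addition picks up a tiny rounding error

print(total)         # 0.9999999999999999, not 1.0
print(total == 1.0)  # False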

What causes rounding errors in computer arithmetic?

Rounding errors can be caused by a variety of factors, including the use of finite precision numbers, the order of operations in a calculation, and the rounding method used by the computer.
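The dependence on the rounding method is easiest to see in base-10 arithmetic; a Python sketch using the decimal module (the value 1.005 is an illustrative choice):

from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

x = Decimal("1.005")
print(x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))    # 1.01
print(x.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))  # 1.00 -- ties go to the even digit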

How can rounding errors be minimized?

Rounding errors can be minimized by using higher precision numbers, avoiding unnecessary calculations, and using rounding methods that are appropriate for the type of calculation being performed.
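As one illustration in Python, a compensated-summation routine such as math.fsum recovers the correctly rounded total that naive left-to-right summation loses:

import math

values = [0.1] * 10
print(sum(values))        # 0.9999999999999999 -- naive summation accumulates rounding error
print(math.fsum(values))  # 1.0 -- fsum keeps track of the lost low-order bits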

Can rounding errors be completely eliminated?

No, rounding errors cannot be completely eliminated in computer arithmetic. However, they can be reduced to a negligible level by using appropriate techniques and being aware of their potential impact on calculations.
