Comp Sci: Rounding errors in computer arithmetic

xDocMath
Homework Statement
Give an example in base-10 computer arithmetic when
a. (a + b) + c != a + (b + c)
b. (a * b) * c != a * (b * c)
Relevant Equations
machine epsilon for double = 2^-53
For the first part, I used a = 2, b = eps/2, c = eps/2, which I believe works, and I have tested it in MATLAB. However, I haven't had any luck reproducing the second part in MATLAB with any numbers. Any hints? Thanks
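For reference, here is that test as a MATLAB sketch (MATLAB's eps is 2^-52, so eps/2 is the 2^-53 machine epsilon quoted above):

a = 2; b = eps/2; c = eps/2;
(a + b) + c == a + (b + c) % true means the grouping made no difference for these values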
 
Well first of all, I know of no computer (save my old TI-59) that does its arithmetic in base 10. I say this because I wonder if that wording is a clue as to what you should be attempting, because what you're doing (correct as far as I can see) has nothing to do with base 10.

Your first example seems to add a number that's at the limit of IEEE precision. The added bit gets dumped off the end so that a+b ends up equalling a, but double it (b+c) and the bit just makes it onto the tail of the result.
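To make that concrete, here is a minimal MATLAB sketch using a = 1 (b and c are each half a unit in the last place of a):

a = 1; b = eps/2; c = eps/2;
(a + b) + c % 1: each half-ulp addend is rounded off the end, one at a time
a + (b + c) % 1 + eps: b + c = eps is computed exactly and makes it onto the tail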

I suspect that you need to do a similar thing for the multiplication example. Have b and c be numbers just on either side of 1. Given a first number that's right at the edge of needing one more bit, a*b will push it over but a*(b*c) will not, so more precision is retained. Just a guess, and one that doesn't involve base 10 at all, but the same concept can be used with base-10 numbers as well.
 
Testing it on a computer can be misleading because the computer might have hidden bits in the processor for some calculations. So you might need to just think this through assuming a basic computer implementation.
For part (b), suppose you have ##a=b=2^{-27}## and ##c=2^{27}##.
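As a quick illustration of the caveat above: in IEEE binary doubles these particular values are exact powers of two, so every product is computed exactly and MATLAB will report equality; the example is meant to be worked through in limited-precision base-10 arithmetic.

a = 2^-27; b = 2^-27; c = 2^27;
(a * b) * c == a * (b * c) % true: powers of two multiply without rounding in binary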
 
Thank you so much for the explanations. I actually went back and noticed that even the first part doesn't work properly with the numbers I mentioned in MATLAB, which makes me think this is more of a theoretical problem. In that case, I think the numbers you have mentioned would satisfy the second part. Thanks
 
xDocMath said:
which makes me think this is more of a theoretical problem
Real-world computer rounding/cutoff issues are NOT "theoretical" problems; they are practical and cause real-world problems.
 
xDocMath said:
I actually went back and noticed that even the first part doesn't work properly with the numbers I mentioned in MATLAB, which makes me think this is more of a theoretical problem.
"Theoretical" is probably not the right description. They seem like theoretical problems only because computers mitigate them now. Not that long ago, computers did calculations in single precision and did nothing to protect against round-off errors. We had to be careful.
 
FactChecker said:
Testing it on a computer can be misleading because the computer might have hidden bits in the processor for some calculations.
There are no "hidden bits" in IEEE 754, versions of which are implemented by most processors.

FactChecker said:
"Theoretical" is probably not the right description. They seem like theoretical problems only because computers mitigate them now.
How do computers mitigate round-off errors (other than by switching to a higher precision, which is not generally what happens)?
 
xDocMath said:
Thank you so much for the explanations. I actually went back and noticed that even the first part doesn't work properly with the numbers I mentioned in MATLAB, which makes me think this is more of a theoretical problem. In that case, I think the numbers you have mentioned would satisfy the second part. Thanks
Until you explain what is meant by "base-10 computer arithmetic" in the question, I don't think we can help you.
 
pbuk said:
How do computers mitigate round-off errors (other than by switching to a higher precision, which is not generally what happens)?
They don't
 
  • #10
phinds said:
They don't
Indeed.
 
  • #11
pbuk said:
There are no "hidden bits" in IEEE 754, versions of which are implemented by most processors.
I was thinking about the GRS bits (Guard, Round, and Sticky) that are beyond what can be stored. But I have to admit that I am not an expert in IEEE 754 implementations.
 
  • #12
TL;DR we don't need to worry about guard, round or sticky bits.

IEEE 754 specifies (five alternative modes for) how a result is rounded, not how that rounding is achieved; guard digits are one method of achieving rounding that conforms to the specification. IEEE 754 also specifies that any bits beyond the defined precision (e.g. guard bits) must be discarded at each stage of a calculation. We can therefore predict precisely the outcome of any operation.

Of course an application may decide to apply further rounding before storing or displaying a result but this is not what is happening in the CPU.
 
  • #14
pbuk said:
TL;DR we don't need to worry about guard, round or sticky bits.
I picture the rounding that IEEE 754 specifies as being implemented using the GRS bits. I think that makes it a lot harder to find examples of the problems the OP asks about that can be demonstrated on a modern computer. That said, there are still some examples; they are just not as easy to come up with.
 
  • #15
xDocMath said:
Thank you so much for the explanations. I actually went back and noticed that even the first part doesn't work properly with the numbers I mentioned in MATLAB, which makes me think this is more of a theoretical problem. In that case, I think the numbers you have mentioned would satisfy the second part. Thanks
For part (a), add two large numbers that have enough digits so that you know some least significant digits will have to be truncated from the sum (but not from the individual numbers). Then subtract the second number.
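A MATLAB sketch of this recipe (the numbers are mine; 2^53 + 1 is the smallest positive integer that a double cannot represent):

a = 2^52; b = 2^52 + 1; c = -b;
(a + b) + c % 2^52 - 1: the exact sum 2^53 + 1 is rounded to 2^53, losing the 1
a + (b + c) % 2^52: b + c = 0 exactly, so nothing is lost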
 
  • #16
FactChecker said:
I picture the rounding that IEEE 754 specifies as being implemented using the GRS bits. I think that makes it a lot harder to find examples of the problems the OP asks about that can be demonstrated on a modern computer.
No, it is still really easy to find examples because it doesn't matter how rounding is implemented in the ALU; all we can see is the result. And the result of ## 1 + \frac \epsilon 2 ## will always be the same as ## 1 ##.

We can use this knowledge to easily find an example for Q1 (which the OP has already done); Q2 is a little harder: here we need to use something like $$ \left ( 1 + \sqrt{\frac \epsilon 2} \right ) ^ 2 = 1 + 2 \left ( \sqrt{\frac \epsilon 2} \right ) + \frac \epsilon 2 \equiv 1 + 2 \left ( \sqrt{\frac \epsilon 2} \right ) $$
to help.
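The first claim is easy to check in MATLAB (where eps is 2^-52, so eps/2 is the ## \frac \epsilon 2 ## above). For Q2, here is one concrete instance; the numbers differ from the hint but fail in the same spirit, with a rounding error that one grouping avoids:

(1 + eps/2) == 1 % true: a half-ulp tie rounds to the even neighbour, which is 1
(0.1 * 0.1) * 10 == 0.1 * (0.1 * 10) % false: 0.1*10 rounds to exactly 1, but the rounding error in 0.1*0.1 survives the final *10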
 
