Insights Why Can't My Computer Do Simple Arithmetic? - Comments

The discussion centers around the challenges of performing simple arithmetic on computers, particularly due to the limitations of floating-point arithmetic. Participants highlight that computers often produce slightly inaccurate results when performing calculations, such as using 1/3, due to the way floating-point numbers are stored in memory. A common recommendation is to use an approximate equality check instead of direct comparison, as values may be very close but not exactly equal. The conversation also touches on historical computing experiences, including the evolution from early programming methods using keypunch machines to modern programming practices. Participants share insights about various hardware and software solutions for handling floating-point arithmetic, including Intel's decimal libraries and the IEEE-754 standard. There is a consensus that while floating-point arithmetic is efficient, it comes with inherent inaccuracies, and some suggest that improvements to the IEEE-754 standard may be necessary. Overall, the discussion emphasizes the importance of understanding floating-point limitations in programming and the need for better methods to handle numerical precision in computing.
  • #31
jim mcnamara said:
Remember the Pentium FDIV bug?
That was half my life ago! No, I don't remember it specifically, nor do I remember using anything that might have had it. Once Windows 95 came out I went through several older computers before buying a new Dell 10 years ago (P4, 2.4 GHz), and I haven't programmed much since...
 
  • #32
jim mcnamara said:
Remember the Pentium FDIV bug?
I definitely do! I wrote a small x86 assembly program that checked your CPU to see if it was a Pentium, and, if so, used the FDIV instruction to do one of the division operations that was broken. It then compared the computed result against the correct answer to see if they were the same.

I sent it to Jeff Duntemann, Editor in Chief of PC Techniques magazine. He published it in the Feb/Mar 1995 issue. I thought he would publish it as a small article (and send me some $). Instead, he published it on their HAX page, as if it had been just some random code that someone had sent in. That wouldn't have been so bad, but he put a big blurb on the cover of that issue: "Detect Faulty Pentiums! Simple ASM Test"
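
For anyone curious, the essence of such a check fits in a few lines of C++ today. This is only a sketch of the widely circulated test (not the assembly program from the magazine): a flawed chip computed 4195835/3145727 incorrectly, so the remainder below famously came out as 256 instead of 0.

C++:
#include <iostream>

int main() {
    // Well-known FDIV test operands: a flawed Pentium computed
    // 4195835 / 3145727 incorrectly around the 5th significant digit.
    volatile double x = 4195835.0;  // volatile keeps the compiler from
    volatile double y = 3145727.0;  // folding the division at compile time
    double r = x - (x / y) * y;     // 0 on a correct FPU, 256 on a flawed one
    std::cout << (r == 0.0 ? "FDIV looks OK" : "Possible FDIV bug!") << '\n';
}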
 
  • #33
Mark44 said:
The Pentium would have been the 80586, but I guess the marketing people got involved and changed the name to Pentium. If it was the first model, that was the one with the division problem, where some divisions gave an answer that was incorrect in about the sixth decimal place. It cost Intel about $1 billion to recall and replace the bad chips. I think that was in '94, not sure.
There was a joke that Intel changed the name to Pentium because they used the first Pentium to add 100 to 486 and got the answer 585.9963427...
 
Last edited:
  • Like
Likes Ibix and Silicon Waffle
  • #34
vela said:
There was a joke that Intel changed the name to Pentium because they used the first Pentium to add 100 to 486 and got the answer 585.9963427...
That cracked me up!
 
  • Like
Likes Silicon Waffle
  • #35
eltodesukane said:
Maybe it's time to review/modify/improve the IEEE-754 standard for floating-point arithmetic (IEEE floating point).
It's not a problem with the standard; it's a limitation on how accurately you can represent arbitrary floating-point numbers (no matter the representation). I believe the common implementations can represent powers of 2 exactly, correct?

One application where I used this is computing a reduction through additive blending in the GPU pipeline. Integers were not supported in OpenGL for this, so I instead chose a very small power of 2 and relied on the fact that it and its multiples can be represented exactly in IEEE floating point, allowing me to do an exact summation/reduction purely through GPU hardware.
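
A quick C++ sketch of that property (an illustration I put together, not the actual GPU code): every multiple of a small power of 2 is exact until the sum outgrows the mantissa, so the loop below loses nothing.

C++:
#include <cstdio>

int main() {
    // 2^-8 and all of its multiples up to about 2^24 fit exactly
    // in a float, so every partial sum is exact.
    float step = 1.0f / 256.0f;
    float sum = 0.0f;
    for (int i = 0; i < 256; ++i)
        sum += step;
    std::printf("%.17g\n", sum);  // prints exactly 1
}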
 
  • Like
Likes Silicon Waffle
  • #36
Jarvis323 said:
It's not a problem with the standard; it's a limitation on how accurately you can represent arbitrary floating-point numbers (no matter the representation). I believe the common implementations can represent powers of 2 exactly, correct?
If the fractional part is a sum of negative powers of 2, and the smallest of those powers fits in the mantissa, then yes, the representation is exact. So, for example, 7/8 = 1/2 + 1/4 + 1/8 is represented exactly.
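
A quick check in C++ (a sketch; %.17g prints enough digits to show the stored value):

C++:
#include <cstdio>

int main() {
    // 7/8 = 1/2 + 1/4 + 1/8 is a finite sum of negative powers of 2,
    // so the comparison against 0.875 holds exactly.
    double x = 0.5 + 0.25 + 0.125;
    std::printf("%d\n", x == 0.875);  // prints 1

    // 1/10 has no finite binary expansion, so it is stored rounded.
    std::printf("%.17g\n", 0.1);      // prints 0.10000000000000001
}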
 
  • #37
The simplest example of a misfire I can think of is 1/3 + 1/3 + 1/3 = 0.99999999. The main thing I remember from way back when (in my early teens) was that my computer showed 8 decimal digits, like a calculator, with the power-of-ten exponent tacked on after an "E". The oddity that sticks out is that I remember trying to comprehend the 5-byte floating-point scheme it used.

But regardless, you have to compromise in some direction, giving up speed in favor of accuracy. Or does the library "skim" through easier values to increase efficiency? (I mean in the newest systems.) I'm guessing there are different implementations of the newest standards? I'm almost clueless about how it works!
Another example is sine tables; I had to devise my own system to increase precision there, too! (Never mind, I found this:)
"The standard also includes extensive recommendations for advanced exception handling, additional operations (such as trigonometric functions), expression evaluation, and for achieving reproducible results."
from here: https://en.wikipedia.org/wiki/IEEE_floating_point
 
Last edited:
  • #38
Mathematica does rational arithmetic, so it gets it exactly right: Sum[1/10, {i, 10}] = 1
;-)
 
  • #39
Mark Harder said:
Mathematica does rational arithmetic, so it gets it exactly right: Sum[1/10, {i, 10}] = 1
;-)
Mathematica is written in Wolfram Language, C/C++, and Java.
As the link states, the Wolfram Language deals with symbolic computation, functional programming, and rule-based programming, so arithmetic and the other computational math solvers are implemented in C or C++, whose main aim is also to boost the software's performance.

C++:
#include <iostream>

int main() {
    float a = 1.0f / 10;  // 0.1 rounded to the nearest float
    float sum = 0;
    for (int i = 0; i < 10; i++)
    {
        sum += a;
    }
    std::cout << sum << std::endl;  // Output: 1
}
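
The output is 1 only because std::cout rounds to six significant digits by default, though. Asking for more digits (a sketch using <iomanip>) exposes the accumulated error:

C++:
#include <iomanip>
#include <iostream>

int main() {
    float a = 1.0f / 10;
    float sum = 0;
    for (int i = 0; i < 10; i++)
        sum += a;
    std::cout << sum << '\n';                          // 1 (display rounding)
    std::cout << std::setprecision(9) << sum << '\n';  // 1.00000012
    std::cout << (sum == 1.0f) << '\n';                // 0: not exactly 1
}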
 
  • #40
Silicon Waffle said:
Until now, nothing I could find in the C and C++ standard libraries can do correct computation with floats. I don't understand why the committee has done nothing about this even though there are already external libraries that do it, some of which are free online.
What, exactly, do you mean by "correct computation with floats"?

Strictly speaking, no computer can ever do correct computations with the reals. What a Turing machine (which has infinite memory) can do is limited by computability theory, and almost all of the real numbers are not computable. The best a Turing machine can do is represent the computable numbers. The best a realistic computer (which has finite memory and a finite amount of time in which to perform computations) can do is represent a finite subset of the computable numbers.

As for why the C and C++ committees haven't made arbitrary-precision arithmetic part of the standard,
YAGNI: You Ain't Gonna Need It.
 
  • Like
Likes Silicon Waffle
  • #41
Silicon Waffle said:
Mathematica is written in Wolfram Language, C/C++ and Java.
Mathematica is an implementation of the Wolfram Language. It's not written in the language it implements.

Depending on how you enter an expression, Mathematica may use machine representation of numbers. For example, if you enter Sum[0.1, {i, 1, 10}] - 1 into Mathematica, you'll get ##-1.11022\times10^{-16}##, but if you enter the sum the way Mark did, Sum[1/10, {i, 1, 10}] - 1, the result will be 0.
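
That ##-1.11022\times10^{-16}## is exactly what IEEE doubles give for this sum; a short C++ check (my sketch, assuming 64-bit IEEE doubles) reproduces it:

C++:
#include <cstdio>

int main() {
    // Ten copies of the double nearest to 0.1 do not sum to exactly 1.
    double sum = 0.0;
    for (int i = 0; i < 10; ++i)
        sum += 0.1;
    std::printf("%g\n", sum - 1.0);  // prints -1.11022e-16
}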
 
  • #42
vela said:
Mathematica is an implementation of the Wolfram Language. It's not written in the language it implements.

Depending on how you enter an expression, Mathematica may use machine representation of numbers. For example, if you enter Sum[0.1, {i, 1, 10}] - 1 into Mathematica, you'll get ##-1.11022\times10^{-16}##, but if you enter the sum the way Mark did, Sum[1/10, {i, 1, 10}] - 1, the result will be 0.
I assume the symbols you enter, e.g. "Sum[...]", are manipulated by the Wolfram Language, which calls its already-implemented methods written in C/C++ to compute the "sum" as requested. The result is then sent back to the Wolfram Language to be displayed as output.
 
  • #43
Silicon Waffle said:
I assume the symbols you enter, e.g. "Sum[...]", are manipulated by the Wolfram Language, which calls its already-implemented methods written in C/C++ to compute the "sum" as requested. The result is then sent back to the Wolfram Language to be displayed as output.
That wouldn't necessarily imply that the C/C++ code is using primitive floating-point types. I don't know about Wolfram, but I think a good math package would let you choose.
 
  • #44
Early computers used BCD (binary coded decimal), including one of the first electronic digital computers, the ENIAC. The IBM 1400 series was also decimal based. IBM mainframes since the 360 include support for variable-length fixed-point BCD, and variable-length BCD (packed or unpacked) is a native type in COBOL. Intel processors have just enough BCD instructions to let programs work with variable-length BCD fields (the software has to take care of the fixed-point issues).

As for binary floating-point compares, APL (A Programming Language), created in the 1960s, has a program-adjustable "fuzz" variable used to set the comparison tolerance for floating-point "equal".
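
In C or C++ you have to roll that yourself; a minimal sketch, with the tolerance playing the role of APL's fuzz (the name nearly_equal and the default tolerance are my own choices):

C++:
#include <algorithm>
#include <cmath>
#include <cstdio>

// Approximate "equal": treat a and b as equal when they differ by
// less than tol, scaled by the larger magnitude (absolute near zero).
bool nearly_equal(double a, double b, double tol = 1e-12) {
    double scale = std::max({1.0, std::fabs(a), std::fabs(b)});
    return std::fabs(a - b) <= tol * scale;
}

int main() {
    std::printf("%d\n", 0.1 + 0.2 == 0.3);              // 0: exact compare fails
    std::printf("%d\n", nearly_equal(0.1 + 0.2, 0.3));  // 1: fuzzy compare passes
}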

There are fraction libraries that represent rational numbers as integer fractions, maintaining separate values for the numerator and denominator, and some calculators include fraction support. Reducing the fraction after an operation means finding the greatest common divisor, probably with Euclid's algorithm, so this type of math is significantly slower. Something like the sketch below.
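
A bare-bones version of that representation (a sketch, using C++17's std::gcd for the Euclid step):

C++:
#include <cstdio>
#include <numeric>  // std::gcd (C++17)

// A rational number held as an integer numerator/denominator pair,
// reduced with Euclid's algorithm after every operation.
struct Fraction {
    long long num, den;  // den is assumed positive
    Fraction(long long n, long long d) : num(n), den(d) { reduce(); }
    void reduce() {
        long long g = std::gcd(num < 0 ? -num : num, den);
        num /= g;
        den /= g;
    }
    Fraction operator+(const Fraction& o) const {
        return Fraction(num * o.den + o.num * den, den * o.den);
    }
};

int main() {
    Fraction sum(0, 1);
    for (int i = 0; i < 10; ++i)
        sum = sum + Fraction(1, 10);
    std::printf("%lld/%lld\n", sum.num, sum.den);  // prints 1/1, exactly
}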
 
Last edited:
  • #45
Mark44 said:
Mark44 submitted a new PF Insights post

Why Can't My Computer Do Simple Arithmetic?

Continue reading the Original PF Insights Post.
 
  • #46
Possibly it can; I am trying to figure out just what your question is about. If it is a PC (maybe a Mac also), then click the START button in the lower left, then ALL PROGRAMS, next ACCESSORIES, and there should be a "Calculator" choice. Within it, a choice probably called VIEW will show several different models of hand calculators that you can click through to run.
 
  • #47
davidNwillems said:
Possibly it can; I am trying to figure out just what your question is about. If it is a PC (maybe a Mac also), then click the START button in the lower left, then ALL PROGRAMS, next ACCESSORIES, and there should be a "Calculator" choice. Within it, a choice probably called VIEW will show several different models of hand calculators that you can click through to run.
You missed the point of my article, which is this -- if you write a program in one of many programming languages (such as C or its derivative languages, or Fortran, or whatever) to do some simple arithmetic, you are likely to get an answer that is a little off. It doesn't matter whether you use a PC or a Mac. The article discusses why this happens, based on the way that floating point numbers are stored in the computer's memory.
 
  • Like
Likes Silicon Waffle
