# Why Can't My Computer Do Simple Arithmetic? - Comments

Until now, nothing I could find in the C and C++ standard libraries can do correct computation with floats. I don't understand why the committee has done nothing about this even though external libraries already exist, some of them free online.
I also don't know why Microsoft includes only part of a solution in its Decimal class (System.Decimal), which is limited to 28-29 significant digits of precision, even though I believe they could extend it to an arbitrary number of digits (could a string even be long enough to verify the output's accuracy anyway?).

jim mcnamara
Mentor
As a guess, how about ROI? The Fujitsu Sparc64 M10 (16 cores), according to my friendly local Oracle peddler, supports it by putting the library onto the chip itself. Anyway, one chip costs more than you and I make in a month, so it is definitely not a commodity chip. The IBM z9 CPU, like the ones in POWER machines, supports decimal too.

So in a sense some companies "decimalized" in hardware. Other companies saw it as a way to lose money - my opinion. BTW, this problem has been around forever, hence the other formats: Oracle's internal math is BCD with 32(?) decimal digits, and Micro Focus COBOL uses packed decimal with up to 18 digits, per their website:
http://supportline.microfocus.com/documentation/books/sx20books/prlimi.htm

Mentor
As an addendum: somebody may want to consider how to compare two floating-point numbers for equality. Not just with FLT_EPSILON or DBL_EPSILON, which are nothing like a general solution.
This isn't anything I've given a lot of thought to, but it wouldn't be difficult to compare two floating-point numbers in memory (32 bits, 64 bits, or more), as their bit patterns would be identical if they were equal. I'm not including unusual cases like NaN, denormals, and such. The trick is in the step going from a literal such as 0.1 to its representation as a bit pattern.

Maybe it's time to review/modify/improve the IEEE-754 standard for floating-point arithmetic (IEEE floating point).

jim mcnamara
Mentor
This is mostly from code by Doug Gwyn. FP comparison is a pain because you cannot do a bitwise compare the way you can with int datatypes.
You also cannot do this reliably:
Code:
  if (float_datatype_variable == double_datatype_variable)
whereas you can test most int datatypes (char, int, long) for equality against each other.

Code:
#include <errno.h>   // errno, ERANGE, EINVAL
#include <float.h>   // DBL_MAX
#include <limits.h>  // INT_MIN
#include <math.h>    // fpclassify and the FP_* classes
#include <stdio.h>

// compile with gcc -std=c99 for fpclassify
// filter out bad numbers; INT_MIN is the error return
static inline int classify(double x)
{
    switch (fpclassify(x))
    {
    case FP_INFINITE:
        errno = ERANGE;
        return INT_MIN;
    case FP_NAN:
        errno = EINVAL;
        return INT_MIN;
    case FP_NORMAL:
        return 2;
    case FP_SUBNORMAL:
    case FP_ZERO:
        return 0;
    }
    return 2;  // default: treat as normal
}

// Doug Gwyn's reldif function
//    usage:  if (reldif(a, b) <= TOLERANCE) ...
#define Abs(x)    ((x) < 0 ? -(x) : (x))
#define Max(a, b) ((a) > (b) ? (a) : (b))

double maxulp = 0;  // tweak this into TOLERANCE

static inline double reldif(double a, double b)
{
    double c = Abs(a);
    double d = Abs(b);
    int rc = 0;

    d = Max(c, d);
    d = (d == 0.0) ? 0.0 : Abs(a - b) / d;
    rc = classify(d);
    if (!rc)            // correct almost-zeroes to zero
        d = 0.;
    if (rc == INT_MIN)
    {                   // error return
        errno = ERANGE;
        d = DBL_MAX;
        perror("Error comparing values");
    }
    return d;
}

Remember the Pentium FDIV bug?
That was half my life ago! No, I don't remember it specifically, nor do I remember using anything that might have had it. Once Windows 95 came out I went through several older computers before buying a new Dell 10 years ago (P4, 2.4 GHz), and I haven't programmed much since...

Mentor
Remember the Pentium FDIV bug?
I definitely do! I wrote a small x86 assembly program that checked your CPU to see if it was a Pentium, and, if so, used the FDIV instruction to do one of the division operations that was broken. It then compared the computed result against the correct answer to see if they were the same.

I sent it to Jeff Duntemann, Editor in Chief of PC Techniques magazine. He published it in the Feb/Mar 1995 issue. I thought he would publish it as a small article (and send me some $). Instead he published it on their HAX page, as if it had been just some random code that someone had sent in. That wouldn't have been so bad, but he put a big blurb on the cover of that issue: "Detect Faulty Pentiums! Simple ASM Test"

vela
Staff Emeritus
Science Advisor
Homework Helper
Education Advisor
The Pentium would have been the 80586, but I guess the marketing people got involved and changed it to Pentium. If it was the first model, that was the one that had the division problem, where some divisions gave an incorrect answer out in the 6th or so decimal place. It cost Intel about $1 billion to recall and replace those bad chips. I think that was in '94, not sure.
There was a joke that Intel changed the name to Pentium because they used the first Pentium to add 100 to 486 and got the answer 585.9963427...

Mentor
There was a joke that Intel changed the name to Pentium because they used the first Pentium to add 100 to 486 and got the answer 585.9963427...
That cracked me up!

Maybe it's time to review/modify/improve the IEEE-754 standard for floating-point arithmetic (IEEE floating point).
It's not a problem with the standard; it's a limitation of how accurately you can represent arbitrary real numbers, no matter the representation. I believe the common implementations can represent powers of 2 exactly, correct?

One application where I used this is computing a reduction through additive blending in the GPU pipeline. Integers were not supported in OpenGL for this, so I instead chose a very small power of 2 and relied on the fact that it and its multiples can be exactly represented in IEEE floating point, allowing me to do an exact summation/reduction purely in GPU hardware.

Mentor
It's not a problem with the standard; it's a limitation of how accurately you can represent arbitrary real numbers, no matter the representation. I believe the common implementations can represent powers of 2 exactly, correct?
If the fractional part is a sum of negative powers of 2, and the smallest power of 2 can be represented in the mantissa, then yes, the representation is exact. So, for example, 7/8 = 1/2 + 1/4 + 1/8 is represented exactly.

The simplest example of a malfunction I can think of is 1/3 + 1/3 + 1/3 = 0.99999999. The main thing I remember from way back when (in my early teens) is that my computer had 8 decimal digits, like a calculator, plus a power-of-ten exponent written after an "E". The oddity that sticks out is trying to comprehend the 5-byte floating-point scheme it used.

But regardless, you have to compromise in some direction, giving up speed in favor of accuracy; or does the library "skim" through easier values to increase efficiency? (I mean the newest systems.) I'm guessing there are different implementations of the newest standards? I'm almost clueless how it works!
Another issue is sine tables; I had to devise my own system to increase precision there, too! (Never mind, I found this:)
"The standard also includes extensive recommendations for advanced exception handling, additional operations (such as trigonometric functions), expression evaluation, and for achieving reproducible results."
from here: https://en.wikipedia.org/wiki/IEEE_floating_point

Mark Harder
Gold Member
Mathematica does rational arithmetic, so it gets it exact: Sum[1/10,{i,10} ] = 1
;-)

Mathematica does rational arithmetic, so it gets it exact: Sum[1/10,{i,10} ] = 1
;-)
Mathematica is written in the Wolfram Language, C/C++, and Java.
As the link states, the Wolfram Language deals only with symbolic computation, functional programming, and rule-based programming. So arithmetic and other computational math problems are computed and handled in C or C++, whose main aim is also to boost the software's performance.

Code:
#include <iostream>

int main()
{
    float a = 1.0f / 10;
    float sum = 0;
    for (int i = 0; i < 10; i++)
    {
        sum += a;
    }
    // Prints 1, but only because cout's default 6-digit precision
    // rounds the stored sum (slightly more than 1) for display.
    std::cout << sum << std::endl;
}

D H
Staff Emeritus
Until now, nothing I could find in the C and C++ standard libraries can do correct computation with floats. I don't understand why the committee has done nothing about this even though external libraries already exist, some of them free online.
What, exactly, do you mean by "correct computation with floats"?

Strictly speaking, no computer can ever do correct computations with the reals. The number of things even a Turing machine (which has infinite memory) can do is limited by results from the theory of computation, and almost all of the real numbers are not computable. The best that can be done with a Turing machine is to represent the computable numbers. The best that can be done with a realistic computer (which has finite memory and a finite amount of time in which to perform computations) is to represent a finite subset of the computable numbers.

As for why the C and C++ committees haven't made arbitrary-precision arithmetic part of the standard,
YAGNI: You Ain't Gonna Need It.

vela
Staff Emeritus
Homework Helper
Mathematica is written in Wolfram Language, C/C++ and Java.
Mathematica is an implementation of the Wolfram Language. It's not written in the language it implements.

Depending on how you enter an expression, Mathematica may use machine representation of numbers. For example, if you enter Sum[0.1, {i, 1, 10}] - 1 into Mathematica, you'll get ##-1.11022\times10^{-16}##, but if you enter the sum the way Mark did, Sum[1/10, {i, 1, 10}] - 1, the result will be 0.

Mathematica is an implementation of the Wolfram Language. It's not written in the language it implements.

Depending on how you enter an expression, Mathematica may use machine representation of numbers. For example, if you enter Sum[0.1, {i, 1, 10}] - 1 into Mathematica, you'll get ##-1.11022\times10^{-16}##, but if you enter the sum the way Mark did, Sum[1/10, {i, 1, 10}] - 1, the result will be 0.
I assume the symbols you enter, e.g. "Sum[...]", are manipulated by Wolfram, which calls its already-implemented methods written in C/C++ to compute the "sum" as requested. The result is then sent back to Wolfram to be displayed as output.

I assume the symbols you enter, e.g. "Sum[...]", are manipulated by Wolfram, which calls its already-implemented methods written in C/C++ to compute the "sum" as requested. The result is then sent back to Wolfram to be displayed as output.
That wouldn't necessarily imply that the C/C++ code is using primitive floating-point types. I don't know about Wolfram, but I think a good math package would let you choose.

rcgldr
Homework Helper
Early computers used decimal arithmetic, including the ENIAC, one of the first digital computers. The IBM 1400 series was also decimal based. IBM mainframes since the 360 include support for variable-length fixed-point BCD, and variable-length BCD (packed or unpacked) is a native type in COBOL. Intel processors have just enough BCD instructions to let programs work with variable-length BCD fields (the software has to take care of the fixed-point issues).

As for binary floating-point compares, APL (A Programming Language), created in the 1960s, has a program-adjustable "fuzz" variable that sets the comparison tolerance for floating-point "equal".

There are fraction libraries that represent rational numbers as integer fractions, maintaining separate values for the numerator and denominator. Some calculators include fraction support. Finding common divisors to reduce the fractions after an operation is typically done with Euclid's algorithm, so this type of math is significantly slower.

Possibly it can. I am trying to figure out just what your question is about. If it is a PC (maybe a Mac also), click the START button in the lower left, then ALL PROGRAMS, then ACCESSORIES, and there should be a "Calculator" choice. Within it, a choice probably called VIEW will show several different models of hand calculators that you can click through to run.

Mentor
Possibly it can. I am trying to figure out just what your question is about. If it is a PC (maybe a Mac also), click the START button in the lower left, then ALL PROGRAMS, then ACCESSORIES, and there should be a "Calculator" choice. Within it, a choice probably called VIEW will show several different models of hand calculators that you can click through to run.
You missed the point of my article, which is this -- if you write a program in one of many programming languages (such as C or its derivative languages, or Fortran, or whatever) to do some simple arithmetic, you are likely to get an answer that is a little off. It doesn't matter whether you use a PC or a Mac. The article discusses why this happens, based on the way that floating point numbers are stored in the computer's memory.
