Why Can't My Computer Do Simple Arithmetic? - Comments

In summary: I remember the day I finally got a program to print out the Fibonacci sequence on a terminal. It was a relief! Thanks for the memories.
  • #1
Mark44 submitted a new PF Insights post

Why Can't My Computer Do Simple Arithmetic?

Continue reading the Original PF Insights Post.
 
  • #3
I learned a lot with this Insight!
 
  • #5
The same problem happens in decimal, for example if you use 1/3 instead. You could use rational numbers instead, but this is why you should not write if (a == 1.0): the value may be pretty close to 1.0 but not exactly 1. So you need to do if (abs(a - 1.0) < 0.00001); I wish there were an approximately-equals function built in. The other problem with floats is that once you get above about 16 million you can only store even integer values, then multiples of 4 past about 32 million, and so on.
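Here is a minimal sketch in C of both points above: the exact comparison failing, the tolerance-based comparison, and the even-integers-only behavior above roughly 16 million (2^24). The 0.00001 tolerance is just the example value from the paragraph, not a general-purpose choice.
Code:
#include <stdio.h>
#include <math.h>

int main(void)
{
    float a = 0.0f;
    for (int i = 0; i < 10; ++i)
        a += 0.1f;                  /* ten additions of "one tenth" */

    /* Exact comparison fails: a is very close to 1.0, but not exactly 1.0. */
    printf("a == 1.0f            : %s\n", (a == 1.0f) ? "true" : "false");
    printf("a                    : %.9f\n", a);

    /* Tolerance-based comparison, as suggested above. */
    printf("|a - 1.0| < 0.00001  : %s\n",
           (fabsf(a - 1.0f) < 0.00001f) ? "true" : "false");

    /* Above 2^24 (16,777,216), a float can only hold even integers. */
    float big = 16777217.0f;        /* 2^24 + 1 */
    printf("16777217 stored as   : %.1f\n", big);
    return 0;
}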
 
  • #6
Great! Just learned something new today. Thanks for sharing!
 
  • #7
Good Insight. It brings back memories of the good old days (bad old days?) when we had no printers, screens or keyboards. I had to learn to read and enter 24 bit floating point numbers in binary, using 24 little lights and 24 buttons. It was made easier by the fact that almost all the numbers were close to 1.0.
 
  • #8
Great post, Mark. I was thinking of doing this very thing. Saved me from making a noodle of myself... Thanks for that. FWIW decimal libraries from Intel correctly do floating point decimal arithmetic. The z9 power PC also has a chip that supports IEEE-754-2008 (decimal floating point) as does the new Fujitsu SPARC64 M10 with "software on a chip". Cool idea.
Cool post, too.
 
  • #10
anorlunda said:
Good Insight. It brings back memories of the good old days (bad old days?) when we had no printers, screens or keyboards. I had to learn to read and enter 24 bit floating point numbers in binary, using 24 little lights and 24 buttons. It was made easier by the fact that almost all the numbers were close to 1.0.
The first programming class I took was in 1972, using a language named PL/C. I think the 'C' meant it was a compact subset of PL/I. Anyway, you wrote your program and then used a keypunch machine to punch holes in a Hollerith (AKA 'IBM') card for each line of code, and added a few extra cards for the job control language (JCL). Then you would drop your card deck in a box, and one of the computer techs would eventually put all the card decks into a card reader to be transcribed onto a tape that would then be mounted on the IBM mainframe computer. Turnaround was usually a day, and most of my programs came back (on 17" wide fanfold paper) with several pages of what to me was gibberish, a core dump, as my program didn't work right. With all this rigamarole, programming didn't hold much interest for me.

Even as late as 1980, when I took a class in Fortran at the Univ. of Washington, we were still using keypunch machines. It was when personal computers started to really get big, and compilers could compile and run your program in about a minute or so, that I could see there was something to this programming thing.
jim mcnamara said:
Great post, Mark. I was thinking of doing this very thing. Saved me from making a noodle of myself... Thanks for that. FWIW decimal libraries from Intel correctly do floating point decimal arithmetic. The z9 power PC also has a chip that supports IEEE-754-2008 (decimal floating point) as does the new Fujitsu SPARC64 M10 with "software on a chip". Cool idea.
Cool post, too.
Thanks, Jim, glad you enjoyed it. Do you have any links to the Intel libraries, or a name? I'd like to look into that more.
 
  • #11
I certainly remember the days when you'd expect an integer and get "12.00000001". I think Algodoo still has that "glitch"! I can't even remember if my first computer had floating point operations in the CPU; it was a Tandy (Radio Shack) Color Computer 2, 64 KB, 8086, 1 MHz processor. When I graduated to the 80386 the floating point wasn't precise enough, so I devised a method to use 2 bytes for the integer (+/- 32,767) and 2 bytes for the fraction (1/65,536ths). It was limited in flexibility but exact and quite fast!
 
  • #12
Mark44 said:
It was when personal computers started to really get big, and compilers could compile and run your program in about a minute or so, that I could see there was something to this programming thing.
My first computer was so slow I would work out the machine code for routines in my head and just type a data string of values to POKE into memory through BASIC, because even the compiler was slow and buggy... it was the object-oriented programming platforms where I really saw the "next level something" for programming.
 
  • #13
jerromyjon said:
I certainly remember the days when you'd expect an integer and get "12.00000001". I think Algodoo still has that "glitch"! I can't even remember if my first computer had floating point operations in the CPU; it was a Tandy (Radio Shack) Color Computer 2, 64 KB, 8086, 1 MHz processor.
I don't think it had an Intel 8086 cpu. According to this wiki article, https://en.wikipedia.org/wiki/TRS-80_Color_Computer#Color_Computer_2_.281983.E2.80.931986.29, the Coco 2 had a Motorola MC6809 processor. I'm 99% sure it didn't have hardware support for floating point operations.
jerromyjon said:
When I graduated to the 80386 the floating point wasn't precise enough, so I devised a method to use 2 bytes for the integer (+/- 32,767) and 2 bytes for the fraction (1/65,536ths). It was limited in flexibility but exact and quite fast!
You must have had the optional Intel 80387 math processing unit or one of its competitors (Cyrix and another I can't remember). The 80386 didn't have any hardware floating point instructions.
 
  • #14
Mark44 said:
I'm 99% sure it didn't have hardware support for floating point operations.
That's what I thought at first. The 68B09E was in the CoCo 3, which was the last one, now that I remember; I second-guessed myself.
Mark44 said:
The 80386 didn't have any hardware floating point instructions.
It was a Pentium, whatever number I forget, but I was only programming to the 386 instruction set at that time... I printed out the instruction set on that same old fanfold paper stack and just started coding.
 
  • #15
jerromyjon said:
It was a Pentium, whatever number I forget, but I was only programming to the 386 instruction set at that time...
The Pentium would have been 80586, but I guess the marketing people got involved and changed to Pentium. If it was the first model, that was the one that had the division problem where some divisions gave an incorrect answer out in the 6th or so decimal place. It cost Intel about $1 billion to recall and replace those bad chips. I think that was in '94, not sure.
 
  • #16
What C will do with that is do the computations as doubles but store the result in a float. Floats save no time. They are an old-fashioned thing in there to save space.
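A small sketch of that promote-then-narrow behavior, assuming a typical C compiler: the literals 0.1 and 0.2 below are doubles, the addition is carried out in double, and only the stored result is narrowed to float.
Code:
#include <stdio.h>

int main(void)
{
    float  f = 0.1 + 0.2;   /* double addition, result narrowed to float */
    double d = 0.1 + 0.2;   /* double addition, result kept as double    */

    printf("float : %.17g\n", f);   /* about 0.30000001192092896 */
    printf("double: %.17g\n", d);   /* about 0.30000000000000004 */
    return 0;
}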

Numerical analysis programmers learn to be experts in dealing with floating point roundoff error. Hey, you can't expect a computer to store an infinite series, which is the definition of a real number.

There are packages that do calculations with rational numbers, but these are too slow for number crunching.
 
  • #17
Mark44 said:
I think that was in '94, not sure.
I think that was about the same time. I think it was a Pentium 1, a 25 MHz Intel chip with no math co-processor, but it had a spot on the board for it. Perhaps it was just an 80486... I can't even remember the computer model!
 
  • #18
Mark44 said:
The Pentium would have been 80586, but I guess the marketing people got involved and changed to Pentium.
Found this on wikipedia: "The i486 does not have the usual 80-prefix because of a court ruling that prohibits trademarking numbers (such as 80486). Later, with the introduction of the Pentium brand, Intel began branding its chips with words rather than numbers."
 
  • #20
As an addendum - somebody may want to consider how to compare two floating point numbers for equality. Not just with FLT_EPSILON or DBL_EPSILON, which are nothing like a general solution.
 
  • #21
@jerromyjon William Kahan consulted and John Palmer et al. at Intel worked to create the 8087 math coprocessor, which was released in 1979. You most likely would have had to have an IBM motherboard with a special socket for the 8087. This was, AFAIK, the first commodity FP math coprocessor. It was the basis for the original IEEE-754-1985 standard and for the IEEE Standard for Radix-Independent Floating-Point Arithmetic (IEEE 854-1987).

The 80486, a later Intel x86 processor, marked the start of these CPUs having an integrated math coprocessor. Still have 'em in there. Remember the Pentium FDIV bug? https://en.wikipedia.org/wiki/Pentium_FDIV_bug
 
  • #23
jim mcnamara said:
I think you mean IEEE 754 - 2008 support.

Computers can do extended precision arithmetic, they just need to be programmed to do so. How else are we calculating π to a zillion digits, or finding new largest primes all the time? Even your trusty calculator doesn't rely on "standard" data types to do its internal calculations.

Early HP calculators, for example, used 56-bit wide registers in the CPU with special decimal encoding to represent numbers internally (later calculator CPUs from HP expanded to 64-bit wide registers), giving about a 10-digit mantissa and a two-digit exponent:

http://www.hpmuseum.org/techcpu.htm

The "standard" data types (4-byte or 8-byte floating point) are a compromise so that computers can crunch lots of decimal numbers relatively quickly without too much loss of precision doing so. In situations where more precision is required, like calculating the zillionth + 1 digit of π, different methods and programming altogether are used.
 
  • #24
Thanks @SteamKing I type with my elbows, it is slow but really inaccurate.
 
  • #25
jim mcnamara said:
Thanks @SteamKing I type with my elbows, it is slow but really inaccurate.
So, a compromise then? What you lack in accuracy, you make up for in speed. :wink:
 
  • #26
So far, nothing I could find in the C and C++ standard libraries can do correct decimal computation with floats. I don't understand why the committee has done nothing about this even though there are already external libraries that do it, some of which are available for free online.
I also don't know why Microsoft only includes part of their solution in a Decimal class (System.Decimal) that can be used for only 28-29 significant digits of precision, even though I believe they could extend it to a zillion digits (can a string even be that long to verify its output accuracy anyway :biggrin: ?).
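GMP is one widely used, freely available library of this kind (whether it is one of the libraries meant above is an assumption on my part). A minimal sketch of exact rational arithmetic with its mpq type; compile with -lgmp:
Code:
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpq_t tenth, sum;
    mpq_inits(tenth, sum, NULL);

    mpq_set_ui(tenth, 1, 10);       /* the rational 1/10, stored exactly */
    mpq_set_ui(sum, 0, 1);

    for (int i = 0; i < 10; ++i)
        mpq_add(sum, sum, tenth);   /* exact: no rounding ever occurs */

    gmp_printf("sum = %Qd\n", sum); /* prints exactly 1 */

    mpq_clears(tenth, sum, NULL);
    return 0;
}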
 
  • #27
As a guess, how about ROI? The Fujitsu SPARC64 M10 (16 cores), according to my friendly local Oracle peddler, supports it by writing the library onto the chip itself. Anyway, one chip costs more than you and I make in a month, so it definitely is not a commodity chip. The z9 CPU, like the ones in PowerPCs, supports decimal, too.

So in a sense some companies "decimalized" in hardware. Other companies saw it as a way to lose money (my opinion). BTW, this problem has been around forever. Hence other formats: Oracle's internal math is BCD with 32(?) decimals; MicroFocus COBOL is packed decimal with 18 decimals, per their website:
http://supportline.microfocus.com/documentation/books/sx20books/prlimi.htm
 
  • #28
jim mcnamara said:
As an addendum - somebody may want to consider how to compare two floating point numbers for equality. Not just with FLT_EPSILON or DBL_EPSILON, which are nothing like a general solution.
This isn't anything I've ever given a lot of thought to, but it wouldn't be difficult to compare two floating point numbers in memory (32 bits, 64 bits, or more), as their bit patterns would be identical if they were equal. I'm not including unusual cases like NaN, denormals, and such. The trick is in the step going from a literal such as 0.1 to its representation as a bit pattern.
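A quick sketch of that idea: comparing the in-memory bit patterns of two doubles with memcmp. The two values below differ only because one came from the literal 0.1 as a double and the other from the same literal stored as a float first.
Code:
#include <stdio.h>
#include <string.h>

int main(void)
{
    double a = 0.1;            /* nearest double to 0.1 */
    double b = (double)0.1f;   /* nearest float to 0.1, widened to double */

    /* Equal numbers (excluding oddities like NaN and signed zero)
       have identical bit patterns, so memcmp can stand in for ==. */
    printf("bit patterns %s\n",
           memcmp(&a, &b, sizeof a) == 0 ? "match" : "differ");
    printf("a = %.17g\nb = %.17g\n", a, b);
    return 0;
}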
 
  • #29
Maybe it's time to review/modify/improve the IEEE-754 standard for floating-point arithmetic (IEEE floating point).
 
  • #30
This is mostly from code by Doug Gwyn. FP compare is a PITA because you cannot do a bitwise compare like you do with int datatypes.
You also cannot do this reliably:
Code:
  if (float_datatype_variable==double_datatype_variable)
whereas you can otherwise test equality between most int datatypes (char, int, long) with each other.

Code:
#include <stdio.h>
#include <errno.h>
#include <limits.h>
#include <float.h>
#include <math.h>   // fpclassify; compile with gcc -std=c99

// filter out bad numbers; INT_MIN is the error return
static inline int classify(double x)
{
  switch (fpclassify(x))
  {
    case FP_INFINITE:
      errno = ERANGE;
      return INT_MIN;
    case FP_NAN:
      errno = EINVAL;
      return INT_MIN;
    case FP_NORMAL:
      return 2;
    case FP_SUBNORMAL:
    case FP_ZERO:
      return 0;
  }
  return 2;   // default: treat as normal
}

// Doug Gwyn's reldif function
//    usage:  if (reldif(a, b) <= TOLERANCE) ...
#define Abs(x)    ((x) < 0 ? -(x) : (x))
#define Max(a, b) ((a) > (b) ? (a) : (b))
double maxulp = 0;   // tweak this into TOLERANCE

static inline double reldif(double a, double b)
{
  double c = Abs(a);
  double d = Abs(b);
  int rc = 0;

  d = Max(c, d);
  d = (d == 0.0) ? 0.0 : Abs(a - b) / d;
  rc = classify(d);
  if (!rc)               // correct almost-zeroes to zero
    d = 0.;
  if (rc == INT_MIN)
  {                      // error return
    errno = ERANGE;
    d = DBL_MAX;
    perror("Error comparing values");
  }
  return d;
}
//    usage:  if (reldif(a, b) <= TOLERANCE) ...
 
  • #31
jim mcnamara said:
Remember the Pentium FDIV bug?
That was half my life ago! No, I don't remember it specifically... nor do I remember using anything that might have had it. Once Windows 95 came out I went through several older computers before buying a new Dell 10 years ago (P4, 2.4 GHz), and I haven't programmed much since...
 
  • #32
jim mcnamara said:
Remember the Pentium FDIV bug?
I definitely do! I wrote a small x86 assembly program that checked your CPU to see if it was a Pentium, and, if so, used the FDIV instruction to do one of the division operations that was broken. It then compared the computed result against the correct answer to see if they were the same.

I sent it to Jeff Duntemann, Editor in Chief of PC Techniques magazine. He published it in the Feb/Mar 1995 issue. I thought he would publish it as a small article (and send me some $). Instead he published it on their HAX page, as if it had been just some random code that someone had sent in. That wouldn't have been so bad, but he put a big blurb on the cover of that issue, "Detect Faulty Pentiums! Simple ASM Test"
 
  • #33
Mark44 said:
The Pentium would have been 80586, but I guess the marketing people got involved and changed to Pentium. If it was the first model, that was the one that had the division problem where some divisions gave an incorrect answer out in the 6th or so decimal place. It cost Intel about $1 billion to recall and replace those bad chips. I think that was in '94, not sure.
There was a joke that Intel changed the name to Pentium because they used the first Pentium to add 100 to 486 and got the answer 585.9963427...
 
  • #34
vela said:
There was a joke that Intel changed the name to Pentium because they used the first Pentium to add 100 to 486 and got the answer 585.9963427...
That cracked me up!
 
  • #35
eltodesukane said:
Maybe it's time to review/modify/improve the IEEE-754 standard for floating-point arithmetic (IEEE floating point).
It's not a problem with the standard, it's a limitation of how accurately you can represent arbitrary real numbers in a finite number of bits (no matter the representation). I believe the common implementations can exactly represent powers of 2, correct?

One application where I used this is additive blending through the GPU pipeline to compute a reduction. Integers were not supported in OpenGL for this, so I instead chose a very small power of 2 and relied on the fact that it and its multiples can be exactly represented in IEEE floating point, allowing me to effectively do an exact summation/reduction purely through GPU hardware.
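Here is a CPU-side sketch of the same property (not the OpenGL blending code itself): repeatedly adding a small power of 2 stays exact in single precision, while repeatedly adding 0.1 drifts.
Code:
#include <stdio.h>

int main(void)
{
    const int n = 1000000;
    float pow2_sum = 0.0f, dec_sum = 0.0f;

    for (int i = 0; i < n; ++i) {
        pow2_sum += 0.00390625f;   /* 2^-8: exact, as are all its multiples up to 2^24 of them */
        dec_sum  += 0.1f;          /* 0.1 is not exactly representable in binary */
    }

    printf("2^-8 summed %d times: %.6f (expected %.6f)\n", n, pow2_sum, n * 0.00390625);
    printf("0.1  summed %d times: %.6f (expected %.6f)\n", n, dec_sum,  n * 0.1);
    return 0;
}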
 
