Why Can't My Computer Do Simple Arithmetic? - Comments

  • Context: Insights
  • Thread starter: Mark44
  • Tags: Arithmetic, Computer

Discussion Overview

The discussion revolves around the challenges and intricacies of computer arithmetic, particularly focusing on floating point representation and its implications in programming. Participants share insights, personal experiences, and technical details related to the limitations of floating point calculations, historical computing practices, and potential solutions.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Historical
  • Debate/contested

Main Points Raised

  • Some participants discuss the inaccuracies in floating point arithmetic, suggesting that comparisons should use a tolerance level rather than direct equality.
  • Others mention the historical context of programming, recalling experiences with early computers and the limitations of their arithmetic capabilities.
  • A few participants propose that certain modern processors and libraries handle floating point arithmetic more accurately, referencing specific technologies like Intel's decimal libraries and IEEE-754-2008 support.
  • Some express nostalgia for earlier computing days, sharing anecdotes about programming practices and the evolution of technology.
  • There are mentions of specific floating point issues, such as the infamous Pentium division bug, highlighting the real-world impact of arithmetic errors in computing.
  • Participants also note the existence of packages for rational number calculations, though they acknowledge these may not be efficient for high-performance computing.

Areas of Agreement / Disagreement

Participants express a range of views on the effectiveness of floating point arithmetic and its historical context. While some agree on the challenges posed by floating point representation, there is no consensus on the best approaches or solutions, and multiple competing perspectives remain throughout the discussion.

Contextual Notes

Limitations include varying definitions of floating point precision, the historical context of computing hardware, and unresolved technical details regarding specific processors and their capabilities.

Who May Find This Useful

This discussion may be of interest to computer science students, programmers, and those interested in the history of computing and numerical analysis.

Mark44 submitted a new PF Insights post

Why Can't My Computer Do Simple Arithmetic?

Continue reading the Original PF Insights Post.
 
BvU said:
Good insight !

Quickly fix Avogadro !
##6.022 × 10^{23}## right? I fixed it.
 
The same problem happens in decimal, for example if you write out 1/3. You could use rational numbers instead, but this is why you should not write if (a == 1.0): the value may be pretty close to 1.0 but not exactly 1. Instead you need something like if (fabs(a - 1.0) < 0.00001); I wish there were an approximately-equals function built in. The other problem with floats is that once you get above 2^24 = 16,777,216 you can only store even integers, then multiples of 4 past 2^25, and so on.
 
Great! Just learned something new today. Thanks for sharing!
 
Good Insight. It brings back memories of the good old days (bad old days?) when we had no printers, screens or keyboards. I had to learn to read and enter 24 bit floating point numbers in binary, using 24 little lights and 24 buttons. It was made easier by the fact that almost all the numbers were close to 1.0.
 
Great post, Mark. I was thinking of doing this very thing. Saved me from making a noodle of myself... Thanks for that. FWIW, decimal libraries from Intel correctly do decimal floating point arithmetic. The IBM z9 also supports IEEE 754-2008 (decimal floating point) in hardware, as does the new Fujitsu SPARC64 M10 with "software on a chip". Cool idea.
Cool post, too.
 
  • #10
anorlunda said:
Good Insight. It brings back memories of the good old days (bad old days?) when we had no printers, screens or keyboards. I had to learn to read and enter 24 bit floating point numbers in binary, using 24 little lights and 24 buttons. It was made easier by the fact that almost all the numbers were close to 1.0.
The first programming class I took was in 1972, using a language named PL/C. I think the 'C' meant it was a compact subset of PL/I. Anyway, you wrote your program and then used a keypunch machine to punch holes in a Hollerith (AKA 'IBM') card for each line of code, and added a few extra cards for the job control language (JCL). Then you would drop your card deck in a box, and one of the computer techs would eventually put all the card decks into a card reader to be transcribed onto a tape that would then be mounted on the IBM mainframe computer. Turnaround was usually a day, and most of my programs came back (on 17" wide fanfold paper) with several pages of what to me was gibberish, a core dump, as my program didn't work right. With all this rigmarole, programming didn't hold much interest for me.

Even as late as 1980, when I took a class in Fortran at the Univ. of Washington, we were still using keypunch machines. It was when personal computers started to really get big, and compilers could compile and run your program in about a minute or so, that I could see there was something to this programming thing.
jim mcnamara said:
Great post, Mark. I was thinking of doing this very thing. Saved me from making a noodle of myself... Thanks for that. FWIW, decimal libraries from Intel correctly do decimal floating point arithmetic. The IBM z9 also supports IEEE 754-2008 (decimal floating point) in hardware, as does the new Fujitsu SPARC64 M10 with "software on a chip". Cool idea.
Cool post, too.
Thanks, Jim, glad you enjoyed it. Do you have any links to the Intel libraries, or a name? I'd like to look into that more.
 
  • #11
I certainly remember the days when you'd expect an integer and get "12.00000001". I think Algodoo still has that "glitch"! I can't even remember if my first computer had floating point operations in the CPU; it was a Tandy (Radio Shack) Color Computer 2, 64 KB, 8086, 1 MHz processor. When I graduated to the 80386, the floating point wasn't precise enough, so I devised a method to use 2 bytes for the integer part (+/- 32,767) and 2 bytes for the fraction (1/65,536ths). It was limited in flexibility but exact and quite fast!
 
  • #12
Mark44 said:
It was when personal computers started to really get big, and compilers could compile and run your program in about a minute or so, that I could see there was something to this programming thing.
My first computer was so slow I would work out machine-code routines in my head and just type in a data string of values to POKE into memory through BASIC, because even the compiler was slow and buggy... it was the object-oriented programming platforms where I really saw the "next level something" for programming.
 
  • #13
jerromyjon said:
I certainly remember the days when you'd expect an integer and get "12.00000001". I think Algodoo still has that "glitch"! I can't even remember if my first computer had floating point operations in the CPU; it was a Tandy (Radio Shack) Color Computer 2, 64 KB, 8086, 1 MHz processor.
I don't think it had an Intel 8086 CPU. According to this wiki article, https://en.wikipedia.org/wiki/TRS-80_Color_Computer#Color_Computer_2_.281983.E2.80.931986.29, the CoCo 2 had a Motorola MC6809 processor. I'm 99% sure it didn't have hardware support for floating point operations.
jerromyjon said:
When I graduated to the 80386, the floating point wasn't precise enough, so I devised a method to use 2 bytes for the integer part (+/- 32,767) and 2 bytes for the fraction (1/65,536ths). It was limited in flexibility but exact and quite fast!
You must have had the optional Intel 80387 math processing unit or one of its competitors (Cyrix and another I can't remember). The 80386 didn't have any hardware floating point instructions.
 
  • #14
Mark44 said:
I'm 99% sure it didn't have hardware support for floating point operations.
That's what I thought at first. The 68B09E was in the CoCo 3, which was the last one, now that I remember; I second-guessed myself.
Mark44 said:
The 80386 didn't have any hardware floating point instructions.
It was a Pentium, whatever number I forget, but I was only programming to the 386 instruction set at that time... I printed out the instruction set on that same old fanfold paper stack and just started coding.
 
  • #15
jerromyjon said:
It was a pentium whatever number I forget but I was only programming up to the 386 instruction set at that time...
The Pentium would have been the 80586, but I guess the marketing people got involved and changed the name to Pentium. If it was the first model, that was the one with the division bug, where some divisions gave an incorrect answer out in the sixth or so decimal place. Replacing the bad chips cost Intel roughly $475 million. I think that was in '94, not sure.
 
  • #16
What C will do with that is carry out the computation in double but store the result in a float. Floats save no time; they are an old-fashioned thing kept around to save space.

Numerical analysis programmers learn to be experts in dealing with floating point roundoff error. Hey, you can't expect a computer to store an infinite series, which is the definition of a real number.

There are packages that do calculations with rational numbers, but these are too slow for number crunching.
 
  • #17
Mark44 said:
I think that was in '94, not sure.
I think it was about the same time. I think it was a Pentium 1, a 25 MHz Intel chip with no math co-processor, but it had a spot on the board for one. Perhaps it was just an 80486... I can't even remember the computer model!
 
  • #18
Mark44 said:
The Pentium would have been 80586, but I guess the marketing people got involved and changed to Pentium.
Found this on wikipedia: "The i486 does not have the usual 80-prefix because of a court ruling that prohibits trademarking numbers (such as 80486). Later, with the introduction of the Pentium brand, Intel began branding its chips with words rather than numbers."
 
  • #20
As an addendum: somebody may want to consider how to compare two floating point numbers for equality. Not just FLT_EPSILON or DBL_EPSILON, which are nothing like a general solution.
 
  • #21
@jerromyjon William Kahan consulted, and John Palmer et al. at Intel worked to create the 8087 math coprocessor, which was released in 1979. You most likely would have needed an IBM motherboard with a special socket for the 8087. This was, AFAIK, the first commodity FP math processor. It was the basis for the original IEEE 754-1985 standard and for its radix-independent successor, the IEEE Standard for Radix-Independent Floating-Point Arithmetic (IEEE 854-1987).

The 80486DX, a later Intel x86 processor, marked the start of CPUs with an integrated math coprocessor. We still have them in there. Remember the Pentium FDIV bug? https://en.wikipedia.org/wiki/Pentium_FDIV_bug
 
  • #23
jim mcnamara said:
I think you mean IEEE 754-2008 support.

Computers can do extended precision arithmetic; they just need to be programmed to do so. How else are we calculating π to a zillion digits, or finding new largest primes all the time? Even your trusty calculator doesn't rely on "standard" data types for its internal calculations.

Early HP calculators, for example, used 56-bit wide registers in the CPU with special decimal encoding to represent numbers internally (later calculator CPUs from HP expanded to 64-bit wide registers), giving about a 10-digit mantissa and a two-digit exponent:

http://www.hpmuseum.org/techcpu.htm

The "standard" data types (4-byte or 8-byte floating point) are a compromise so that computers can crunch lots of decimal numbers relatively quickly without too much loss of precision doing so. In situations where more precision is required, like calculating the zillionth + 1 digit of π, different methods and programming altogether are used.
 
  • #24
Thanks @SteamKing I type with my elbows, it is slow but really inaccurate.
 
  • #25
jim mcnamara said:
Thanks @SteamKing I type with my elbows, it is slow but really inaccurate.
So, a compromise then? What you lack in accuracy, you make up for in speed. :wink:
 
  • #26
Until now, nothing I could find in the C and C++ standard libraries can do correct decimal computation with floats. I don't understand why the committee has done nothing about this even though there are already external libraries that do it, some of which are free online.
I also don't know why Microsoft only includes part of their solution in a Decimal class (System.Decimal) that can be used for only 28-29 significant digits of precision, even though I believe they could extend it to a zillion digits (can a string even be long enough to verify the output accuracy anyway :biggrin: ?).
 
  • #27
As a guess, how about ROI? The Fujitsu SPARC64 M10 (16 cores), according to my friendly local Oracle peddler, supports it by writing the library onto the chip itself. Anyway, one chip costs more than you and I make in a month, so it is definitely not a commodity chip. The IBM z9 CPU supports decimal too, as do IBM's POWER processors.

So in a sense some companies "decimalized" in hardware. Other companies saw it as a way to lose money; my opinion. BTW, this problem has been around forever, hence the other formats: Oracle's internal math is BCD with 32(?) decimal digits, and Micro Focus COBOL uses packed decimal with up to 18 digits, per their website:
http://supportline.microfocus.com/documentation/books/sx20books/prlimi.htm
 
  • #28
jim mcnamara said:
As an addendum - somebody may want to consider how to compare as equal 2 floating point numbers. Not just FLT_EPSILON or DBL_EPSILON which are nothing like a general solution.
This isn't anything I've ever given a lot of thought to, but it wouldn't be difficult to compare two floating point numbers in memory (32 bits, 64 bits, or more), as their bit patterns would be identical if they were equal. I'm not including unusual cases like NaN, denormals, and such. The trick is in the step going from a literal such as 0.1 to its representation as a bit pattern.
 
  • #29
Maybe it's time to review/modify/improve the IEEE 754 standard for floating-point arithmetic.
 
  • #30
This is mostly from code by Doug Gwyn. FP comparison is a pain because you cannot do a bitwise compare the way you can with int datatypes.
You also cannot reliably do this:
Code:
  if (float_datatype_variable == double_datatype_variable)
whereas you can test equality between most int datatypes (char, int, long) with each other.

Code:
#include <errno.h>
#include <float.h>   /* DBL_MAX */
#include <limits.h>  /* INT_MIN */
#include <math.h>    /* fpclassify */
#include <stdio.h>
#include <stdlib.h>
// compile with gcc -std=c99 for fpclassify
static inline int classify(double x)  // filter out bad numbers; INT_MIN is the error return
{
    switch (fpclassify(x))
    {
    case FP_INFINITE:
        errno = ERANGE;
        return INT_MIN;
    case FP_NAN:
        errno = EINVAL;
        return INT_MIN;
    case FP_NORMAL:
        return 2;
    case FP_SUBNORMAL:
    case FP_ZERO:
        return 0;
    }
    return 2;  // default: treat as normal
}

// Doug Gwyn's reldif function
//    usage:  if (reldif(a, b) <= TOLERANCE) ...
#define Abs(x)    ((x) < 0 ? -(x) : (x))
#define Max(a, b) ((a) > (b) ? (a) : (b))
double maxulp = 0;  // tweak this into TOLERANCE

static inline double reldif(double a, double b)
{
    double c = Abs(a);
    double d = Abs(b);
    int rc;

    d = Max(c, d);
    d = (d == 0.0) ? 0.0 : Abs(a - b) / d;
    rc = classify(d);
    if (!rc)            // correct almost-zeroes to zero
        d = 0.0;
    if (rc == INT_MIN)  // error return
    {
        errno = ERANGE;
        d = DBL_MAX;
        perror("Error comparing values");
    }
    return d;
}
//    usage:  if (reldif(a, b) <= TOLERANCE) ...
