Why Can't My Computer Do Simple Arithmetic? - Comments

  • Insights
  • Thread starter Mark44

Answers and Replies

  • #5
The same problem happens in decimal, for example if you used 1/3 instead. You could use rational numbers instead, but this is why you should not write if (a == 1.0): the value may be very close to 1.0 without actually being 1. Instead you need something like if (abs(a - 1.0) < 0.00001). I wish there were a built-in approximately-equals function. The other problem with floats is that once you get above 16 million (2^24), a 32-bit float can only store even integers, then only multiples of 4 past 32 million, and so on.
 
  • Like
Likes Greg Bernhardt
  • #6
Great!! Just learnt something new today. Thanks for sharing!!
 
  • #7
anorlunda
Staff Emeritus
Insights Author
Good Insight. It brings back memories of the good old days (bad old days?) when we had no printers, screens or keyboards. I had to learn to read and enter 24 bit floating point numbers in binary, using 24 little lights and 24 buttons. It was made easier by the fact that almost all the numbers were close to 1.0.
 
  • Like
Likes Greg Bernhardt
  • #8
jim mcnamara
Mentor
Great post, Mark. I was thinking of doing this very thing. Saved me from making a noodle of myself... Thanks for that. FWIW, Intel's decimal floating-point libraries do decimal arithmetic correctly. IBM's z9 also has hardware support for IEEE 754-2008 (decimal floating point), as does the new Fujitsu SPARC64 M10 with "software on a chip". Cool idea.
Cool post, too.
 
  • Like
Likes Greg Bernhardt
  • #10
Mark44
anorlunda said:
Good Insight. It brings back memories of the good old days (bad old days?) when we had no printers, screens or keyboards. I had to learn to read and enter 24 bit floating point numbers in binary, using 24 little lights and 24 buttons. It was made easier by the fact that almost all the numbers were close to 1.0.
The first programming class I took was in 1972, using a language named PL/C. I think the 'C' meant it was a compact subset of PL/I. Anyway, you wrote your program and then used a keypunch machine to punch holes in a Hollerith (AKA 'IBM') card for each line of code, and added a few extra cards for the job control language (JCL). Then you would drop your card deck in a box, and one of the computer techs would eventually put all the card decks into a card reader to be transcribed onto a tape that would then be mounted on the IBM mainframe computer. Turnaround was usually a day, and most of my programs came back (on 17" wide fanfold paper) with several pages of what to me was gibberish, a core dump, as my program didn't work right. With all this rigmarole, programming didn't hold much interest for me.

Even as late as 1980, when I took a class in Fortran at the Univ. of Washington, we were still using keypunch machines. It was when personal computers started to really get big, and compilers could compile and run your program in about a minute or so, that I could see there was something to this programming thing.
jim mcnamara said:
Great post, Mark. I was thinking of doing this very thing. Saved me from making a noodle of myself... Thanks for that. FWIW, Intel's decimal floating-point libraries do decimal arithmetic correctly. IBM's z9 also has hardware support for IEEE 754-2008 (decimal floating point), as does the new Fujitsu SPARC64 M10 with "software on a chip". Cool idea.
Cool post, too.
Thanks, Jim, glad you enjoyed it. Do you have any links to the Intel libraries, or a name? I'd like to look into that more.
 
  • Like
Likes Greg Bernhardt
  • #11
jerromyjon
I certainly remember the days when you'd expect an integer and get "12.00000001". I think Algodoo still has that "glitch"! I can't even remember if my first computer had floating point operations in the CPU; it was a Tandy (Radio Shack) Color Computer 2, 64 KB, 8086, 1 MHz processor. When I graduated to the 80386, the floating point wasn't precise enough, so I devised a method using 2 bytes for the integer (+/- 32,767) and 2 bytes for the fraction (1/65,536ths). It was limited in flexibility but exact and quite fast!
 
  • #12
jerromyjon
Mark44 said:
It was when personal computers started to really get big, and compilers could compile and run your program in about a minute or so, that I could see there was something to this programming thing.
My first computer was so slow that I would work out the machine code for routines in my head and just type in a string of values to POKE into memory from BASIC, because even the compiler was slow and buggy... It was the object-oriented programming platforms where I really saw the "next level" for programming.
 
  • #13
Mark44
jerromyjon said:
I certainly remember the days when you'd expect an integer and get "12.00000001". I think Algodoo still has that "glitch"! I can't even remember if my first computer had floating point operations in the CPU; it was a Tandy (Radio Shack) Color Computer 2, 64 KB, 8086, 1 MHz processor.
I don't think it had an Intel 8086 cpu. According to this wiki article, https://en.wikipedia.org/wiki/TRS-80_Color_Computer#Color_Computer_2_.281983.E2.80.931986.29, the Coco 2 had a Motorola MC6809 processor. I'm 99% sure it didn't have hardware support for floating point operations.
jerromyjon said:
When I graduated to the 80386, the floating point wasn't precise enough, so I devised a method using 2 bytes for the integer (+/- 32,767) and 2 bytes for the fraction (1/65,536ths). It was limited in flexibility but exact and quite fast!
You must have had the optional Intel 80387 math processing unit or one of its competitors (Cyrix and another I can't remember). The 80386 didn't have any hardware floating point instructions.
 
  • #14
jerromyjon
Mark44 said:
I'm 99% sure it didn't have hardware support for floating point operations.
That's what I thought at first. The 68B09E was the CoCo 3, which was the last one, now that I remember; I second-guessed myself.
Mark44 said:
The 80386 didn't have any hardware floating point instructions.
It was a Pentium, whatever number I forget, but I was only programming up to the 386 instruction set at that time... I printed out the instruction set on that same old fanfold paper stack and just started coding.
 
  • #15
Mark44
jerromyjon said:
It was a Pentium, whatever number I forget, but I was only programming up to the 386 instruction set at that time...
The Pentium would have been the 80586, but I guess the marketing people got involved and changed it to Pentium. If it was the first model, that was the one with the division bug, where some divisions gave an incorrect answer out in the 6th or so decimal place. It cost Intel roughly $475 million to recall and replace the bad chips. I think that was in '94; not sure.
 
  • #16
What C will do with that is carry out the computation in double but store the result in a float. Floats save no time; they are an old-fashioned feature kept around to save space.

Numerical analysis programmers learn to be experts in dealing with floating point roundoff error. Hey, you can't expect a computer to store an infinite series, which is the definition of a real number.

There are packages that do calculations with rational numbers, but these are too slow for number crunching.
 
  • #17
jerromyjon
Mark44 said:
I think that was in '94; not sure.
I think it was about the same time. I think it was a Pentium 1, a 25 MHz Intel chip with no math coprocessor, but it had a spot on the board for one. Perhaps it was just an 80486... I can't even remember the computer model!
 
Last edited:
  • #18
jerromyjon
Mark44 said:
The Pentium would have been the 80586, but I guess the marketing people got involved and changed it to Pentium.
Found this on wikipedia: "The i486 does not have the usual 80-prefix because of a court ruling that prohibits trademarking numbers (such as 80486). Later, with the introduction of the Pentium brand, Intel began branding its chips with words rather than numbers."
 
  • #20
jim mcnamara
Mentor
As an addendum: somebody may want to consider how to compare two floating point numbers as equal. Not just with FLT_EPSILON or DBL_EPSILON, which are nothing like a general solution.
 
  • #21
jim mcnamara
Mentor
@jerromyjon William Kahan consulted, and John Palmer et al. at Intel did the work, to create the 8087 math coprocessor, released in 1980. You most likely would have needed an IBM motherboard with a special socket for the 8087. This was, AFAIK, the first commodity FP math processor. It was the basis for the original IEEE-754-1985 standard and the IEEE Standard for Radix-Independent Floating-Point Arithmetic (IEEE-754-1987).

With the 80486, Intel x86 processors began shipping with an integrated math coprocessor, and they still have 'em in there. Remember the Pentium FDIV bug? https://en.wikipedia.org/wiki/Pentium_FDIV_bug
 
  • #23
SteamKing
Staff Emeritus
Science Advisor
Homework Helper
I think you mean IEEE 754-2008 support.

Computers can do extended precision arithmetic, they just need to be programmed to do so. How else are we calculating π to a zillion digits, or finding new largest primes all the time? Even your trusty calculator doesn't rely on "standard" data types to do its internal calculations.

Early HP calculators, for example, used 56-bit wide registers in the CPU with special decimal encoding to represent numbers internally (later calculator CPUs from HP expanded to 64-bit wide registers), giving about a 10-digit mantissa and a two-digit exponent:

http://www.hpmuseum.org/techcpu.htm

The "standard" data types (4-byte or 8-byte floating point) are a compromise so that computers can crunch lots of decimal numbers relatively quickly without too much loss of precision doing so. In situations where more precision is required, like calculating the zillionth + 1 digit of π, different methods and programming altogether are used.
 
  • #24
jim mcnamara
Mentor
Thanks @SteamKing I type with my elbows, it is slow but really inaccurate.
 
  • #25
SteamKing
Staff Emeritus
Science Advisor
Homework Helper
jim mcnamara said:
Thanks @SteamKing I type with my elbows, it is slow but really inaccurate.
So, a compromise then? What you lack in accuracy, you make up for in speed. :wink:
 
