anorlunda said:
Who else is old enough to remember the early IBM computers, 1401, 1620, 650?
The 1620 was the second computer I worked with.
The first machine I worked with was the 402 accounting machine (programmed via a jumper panel); then the Honeywell 200, then the IBM 1620, and much later a 1401.
But getting back to the points at hand:
1) Encoding:
The central issue here is "encoding" - how a computer represents numbers - especially non-integer values.
One method is to pick units that allow integer representation. So 0.3 meters minus 0.2 meters might be an issue, but 30 centimeters minus 20 centimeters is no issue at all.
So the encoding could be a 16-bit 2's complement integer with centimeter units. The results will be precise within the range of -327.68 meters to 327.67 meters.
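Here is a minimal sketch of that idea in C (the int16_t type and the centimeter unit are just the illustration above, not a recommendation for any particular application; the floating point line assumes ordinary IEEE-754 doubles):

```c
/* Lengths stored as 16-bit two's-complement integers in centimeter units:
 * 0.3 m - 0.2 m becomes the exact integer subtraction 30 - 20. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t a_cm = 30;                        /* 0.3 m stored exactly as 30 cm */
    int16_t b_cm = 20;                        /* 0.2 m stored exactly as 20 cm */
    int16_t diff_cm = (int16_t)(a_cm - b_cm); /* exactly 10 cm, no rounding    */

    printf("fixed point   : %d cm\n", diff_cm);
    printf("floating point: %.17g m\n", 0.3 - 0.2);  /* 0.099999999999999978 */
    return 0;
}
```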
But the floating point arithmetic supported by many computer languages and (for the past few decades) most computer processors is targeted at a wide range of applications. The values could represent time, distance, or even transcendental numbers, spanning wildly different magnitudes. So fixed point arithmetic will not do.
The purpose of floating point arithmetic is convenience. If it does not suit you, use your own encoding. I have a case in point right in front of me. The device I am working with now is intended for low power to extend battery life. It has very little ROM program space and RAM - much too little to hold a floating point library. So I choose encodings that always preserve precision, as sketched below. I get better results than floating point. In fact, my target is ideal results (no loss of precision from the captured measurements to the decisions based on those measurements), and with planning I always hit that target.
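A hypothetical sketch of that approach (the reference voltage, resolution, and threshold are invented for illustration, not taken from my actual device): compare a raw 12-bit ADC reading against a threshold expressed in the same raw counts, instead of converting to volts in floating point first.

```c
#include <stdint.h>
#include <stdbool.h>

#define VREF_MILLIVOLTS   3300UL   /* assumed 3.3 V reference            */
#define ADC_FULL_SCALE    4095UL   /* assumed 12-bit converter           */

/* 1200 mV expressed in raw counts, computed once in integer arithmetic. */
#define THRESHOLD_COUNTS  ((uint16_t)((1200UL * ADC_FULL_SCALE) / VREF_MILLIVOLTS))

/* The decision is made on the measurement exactly as it was captured -
 * no float library, no intermediate rounding. */
bool over_threshold(uint16_t raw_counts)
{
    return raw_counts > THRESHOLD_COUNTS;
}
```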
2) The floating point exponent:
A lot of the discussion has focused on how 1/10, 2/10, and 3/10 are not precisely encoded. But there's a floating point vulnerability in play that is more central to the results that
@SamRoss is describing. When 0.1 or 1/10 is evaluated, the floating point encoding will be as close to 0.1 as the encoding allows. And the same is true with 2/10 and 3/10.
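On any machine with IEEE-754 doubles, printing those literals with enough digits makes the point directly:

```c
#include <stdio.h>

int main(void)
{
    printf("0.1 -> %.20f\n", 0.1);  /* 0.10000000000000000555 */
    printf("0.2 -> %.20f\n", 0.2);  /* 0.20000000000000001110 */
    printf("0.3 -> %.20f\n", 0.3);  /* 0.29999999999999998890 */
    return 0;
}
```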
Fundamentally, the floating point encoding is a signed binary exponent (power of 2) and a mantissa. If I wanted to be precise, I would discuss the phantom bit and other subtleties related to collating sequence - but I will stay general to stay on point.
The precision problem involves both the mantissa precision (which is limited by the number of bits reserved for the mantissa) and the absolute precision (which is a function of the mantissa bits and that binary exponent).
And we will keep our mantissa in the range of 0.5 to 1.0.
So the 1/10 will be something like 4/5 times 2^-3. The 2/10 becomes 4/5 times 2^-2 and the 3/10 becomes 3/5 times 2^-1.
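Those three encodings can be checked with the standard frexp() function, which splits a double into exactly these pieces - a mantissa in the range 0.5 to 1.0 and a binary exponent:

```c
#include <stdio.h>
#include <math.h>

static void show(double x)
{
    int e;
    double mant = frexp(x, &e);          /* x == mant * 2^e, 0.5 <= mant < 1 */
    printf("%.1f = %.17g * 2^%d\n", x, mant, e);
}

int main(void)
{
    show(0.1);   /* 0.1 = 0.80000000000000004 * 2^-3 */
    show(0.2);   /* 0.2 = 0.80000000000000004 * 2^-2 */
    show(0.3);   /* 0.3 = 0.59999999999999998 * 2^-1 */
    return 0;
}
```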
During the subtraction, the intermediate (internally hidden) results will be:
3/5 times 2^-1 minus 4/5 times 2^-2
aligning the mantissas: 6/5 times 2^-2 minus 4/5 times 2^-2 = 2/5 times 2^-2
then readjusting the result for floating point encoding: 4/5 times 2^-3.
During that final readjustment, the mantissa is shifted leaving the low-order bit unspecified. A zero is filled in - but that doesn't create any precision.
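The visible consequence, on an IEEE-754 double implementation, is that the computed 0.3 - 0.2 is not the same double as the literal 0.1, even though both look like 0.1 when printed with only a few digits:

```c
#include <stdio.h>

int main(void)
{
    double d = 0.3 - 0.2;
    printf("0.3 - 0.2 = %.20f\n", d);    /* 0.09999999999999997780 */
    printf("0.1       = %.20f\n", 0.1);  /* 0.10000000000000000555 */
    printf("equal?      %s\n", (d == 0.1) ? "yes" : "no");   /* no */
    return 0;
}
```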
The problem would be even more severe (and easier to catch) if the subtraction were 10000.3 - 10000.2.
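Doubles near 10000 are spaced about 1.8e-12 apart, so each operand already carries a rounding error of up to roughly 9e-13, and that error survives the subtraction intact:

```c
#include <stdio.h>

int main(void)
{
    printf("10000.3 - 10000.2 = %.17g\n", 10000.3 - 10000.2);
    /* prints 0.099999999998544808: off from 0.1 by about 1.5e-12      */
    printf("    0.3 -     0.2 = %.17g\n", 0.3 - 0.2);
    /* prints 0.099999999999999978: off from 0.1 by only about 2.2e-17 */
    return 0;
}
```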
3) Is the computer "wrong":
Clearly this is an issue of semantics. But I would note that the statements in a program are imperative. Even statements described as "declarations" (such as "int n") are instructions (i.e., imperatives) to the compiler and the computer. Assuming there is no malfunction, the results, right or wrong, are the programmer's.