Python: Why do 0.1 and 1/10 work but 0.3 - 0.2 doesn't?

  • Thread starter: SamRoss
  • Tags: Work
AI Thread Summary
The discussion centers on floating-point arithmetic in Python, specifically why an operation like 0.3 - 0.2 yields an unexpected result such as 0.09999999999999998, while 0.1 and 1/10 both display as 0.1. The discrepancy arises because numbers like 0.3 and 0.2 cannot be represented exactly in binary, so each is stored with a small rounding error. The conversation highlights that when two such floating-point numbers are subtracted, those rounding errors surface, producing a value that is slightly off from the exact decimal answer. It is noted that Python's default output rounds displayed values, which can further obscure these precision issues. Understanding these limitations is crucial for users to avoid misinterpreting the results of floating-point arithmetic.
  • #51
PeroK said:
Based on this thread, your GPS always has you on the correct road.
You are going off the rails here. Nobody is claiming anything like this.
 
  • #52
PeroK said:
Can a computer ever be wrong?
Yes. There could be a bug in the CPU. Intel infamously knew of such a bug but didn't bother to tell anyone. A researcher painstakingly tracked the obscure bug down after noticing that his simulation produced different results when run on different computers.
 
  • #53
PeroK said:
Based on this thread, your GPS always has you on the correct road. As the end user, you just didn't study the GPS code and/or specification carefully enough! :smile:
Ha! My ex also would have liked it if I went the wrong way or circled forever. ;-)
 
  • #54
PeterDonis said:
No, you haven't. The print() function does not change the underlying value; if the underlying value is an intermediate step in a computation with many steps, it's still going to be the floating point binary value that @DrClaude described, not the rounded value. That's most likely not going to create a problem.
I wasn't referring to how the numbers are represented by the computer internally. Mathematica, C, APL, Python all appear to produce the same result internally. I was talking about how such a number is displayed to the user. Python seems to be the only one that displays the result to so many digits of precision. The others seem to reflect the idea that displaying the rounded value by default is more sensible, presumably because the rounded result is the more useful one to most users.
PeterDonis said:
If you want to understand what the computer is actually doing and what values it is actually working with, it's the kind of thing you need to know. (For example, such knowledge is essential when debugging.)
I see this scenario as an edge case. If you're trying to track down some puzzling result, then sure, knowledge of how floating point numbers work might be useful, but I'd argue most people using a computer to do calculations would rather know that 0.3-0.2 is 0.1.

Anyway, Python does what Python does. It's a design choice, but it's just one I think is weird and questionable.
 
  • Like
Likes PeroK
  • #55
vela said:
Python seems to be the only one that displays the result to so many digits of precision.
As I pointed out earlier, the print() function in Python is not intended for nicely formatted output if you just print objects directly. It is intended only for "quick and dirty" uses like debugging. You are expected to use the formatting capabilities if you want nicely formatted output.
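For example (a minimal illustration of those formatting capabilities; the exact outputs assume CPython with IEEE 754 doubles):
Python:
x = 0.3 - 0.2
print(f"{x:.2f}")          # 0.10 -- f-string format spec, 2 decimal places
print("{:.4g}".format(x))  # 0.1  -- str.format, 4 significant digits
print(round(x, 10))        # 0.1  -- round to 10 decimal places before printing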

vela said:
most people using a computer to do calculations would rather know that 0.3-0.2 is 0.1.
The issue here is what the notation 0.3 - 0.2 should correspond to. When Python was originally developed, there was no decimal support, so the only option for translating such a notation was floating point.

One could perhaps argue that in modern Python, which has decimal support, simple decimal notations like 0.3 - 0.2 should be interpreted as decimals, not floats, but I don't know if anyone has ever proposed that. It would likely be viewed as breaking too much existing code to be workable. (I don't think any other commonly used language has such a notation defaulting to decimal instead of floating point.)
 
  • #56
anorlunda said:
Regulars in several threads seem so peckish today.
Perhaps you meant "peevish." "Peckish" relates to being hungry.
vela said:
There could be a bug in the CPU. Intel infamously knew of such a bug but didn't bother to tell anyone.
The infamous Pentium division bug cost Intel somewhere around half a billion dollars to replace the flawed CPUs. Incidentally, I believe I had the first published test of how a knowledgeable user could determine whether he had a bug-afflicted Pentium processor. The article was published in PC Techniques, in Feb/Mar of 1995.
 
  • Love
  • Like
Likes PeroK and pbuk
  • #57
vela said:
The others seem to reflect the idea that displaying the rounded value by default is more sensible, presumably because the rounded result is the more useful one to most users.
But rounded to what precision? C rounds by default to 7 digits. (Other languages might have different defaults.) But the "right" result being claimed here has only one digit to the right of the decimal point. The extra 6 digits in C contribute nothing except 6 extra zeros after the 0.1. Is that a problem? If not, why not? At what point do we have "too many" extra digits? And how is such a point to be determined in a way that is not arbitrary?
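For what it's worth, Python will happily produce any of these conventions if asked explicitly; the values below assume IEEE 754 doubles:
Python:
x = 0.3 - 0.2
print(f"{x:.7f}")   # 0.1000000 -- the "0.1 plus extra zeros" case described above
print(f"{x:.17g}")  # 0.099999999999999978 -- 17 significant digits distinguish any two doubles
print(x)            # 0.09999999999999998 -- the default: the shortest string that round-trips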
 
  • Like
Likes pbuk
  • #58
Edit: I seem to have missed a whole page of posts! Never mind, I'll leave this here for posterity.

vela said:
Because according to the Python documentation, "That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead."
Here is that statement in context:
Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. On most machines, if Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display
Code:
>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead

vela said:
I guess it depends on whether you think the user should cater to the computer or the other way around. Most people don't need or want floating-point results to 15~17 decimal places, so Python should display the useful result, not the pedantically correct one, as the default.
Most people don't write software. People that do generally want the computer to
  • do what they tell it to
  • in a predictable manner
  • to the best of its abilities
so that they can make the decisions about the trade-off between precision and accuracy themselves.

This is not unique to Python; this is how print() works in every language I can think of running with IEEE 754 (C++'s std::cout << is not print(), and prints only 6 sig figs!).
anorlunda said:
My point is that precision is a function of the underlying hardware, not the programming language.
But most modern languages (including Python) require IEEE 754, and if the hardware did not implement it then it would need to be emulated in software.
 
Last edited:
  • #59
PeterDonis said:
(I don't think any other commonly used language has such a notation defaulting to decimal instead of floating point.)
Depends on whether you count COBOL as "commonly used" - there is still a lot in the wild!
 
  • #60
pbuk said:
Depends on whether you count COBOL as "commonly used" - there is still a lot in the wild!
Hm, yes, COBOL is an interesting example because it was, AFAIK, mainly intended for financial applications, where you want decimal arithmetic, not floating point. (Also, IIRC, floating point hadn't even been well standardized when COBOL was originally developed.)
 
  • Like
Likes pbuk
  • #61
Here is that statement in context:

Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. On most machines, if Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display

>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead
@pbuk, This confuses me. What is the point of storing and working with that many decimal places when the last 37 are garbage? Are you sure that it is not printing a lot more than its calculations support? Is the Python language assuming that the computer supports so much more than 64-bit accuracy or does it try to use software to achieve more significant digits?
 
  • #62
PeterDonis said:
Hm, yes, COBOL is an interesting example because it was, AFAIK, mainly intended for financial applications, where you want decimal arithmetic, not floating point.
Yes indeed. One of my first jobs was fixing a payroll application which had been translated from COBOL (fixed place BCD) to Pascal using floating point arithmetic instead of integers.
PeterDonis said:
(Also, IIRC, floating point hadn't even been well standardized when COBOL was originally developed.)
COBOL was around for 25 years before IEEE 754!
 
  • Like
Likes PeroK
  • #63
FactChecker said:
What is the point of storing and working with that many decimal places when the last 37 are garbage?
It's not storing decimal digits, it's storing binary digits. The article is simply giving the mathematically exact decimal equivalent of the binary number that is stored as the closest floating point number to 0.1.
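For anyone who wants to check this from the interpreter: converting the float to Decimal reproduces the stored binary value exactly, digit for digit (values assume IEEE 754 doubles):
Python:
from decimal import Decimal

# The exact decimal equivalent of the binary double closest to 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Asking for more digits only appends zeros, because the value above is already exact:
print(f"{0.1:.60f}")
# 0.100000000000000005551115123125782702118158340454101562500000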
 
  • Like
Likes FactChecker
  • #64
PeterDonis said:
It's not storing decimal digits, it's storing binary digits. The article is simply giving the mathematically exact decimal equivalent of the binary number that is stored as the closest floating point number to 0.1.
I see. Thanks. So it is only exact to the extent that the print wants to print the decimal equivalent of the binary. They could have gone on as far as they wanted.
 
  • #65
FactChecker said:
What is the point of storing and working with that many decimal places when the last 37 are garbage?
PeterDonis said:
It's not storing decimal digits, it's storing binary digits. The article is simply giving the mathematically exact decimal equivalent of the binary number that is stored as the closest floating point number to 0.1.
^^^ This. And there is nothing special about Python here: the basic operations of https://en.wikipedia.org/wiki/IEEE_754 are implemented in hardware in almost all general purpose floating point processors produced in the last 30 years - including the one in whatever device you are using now.
 
  • Like
Likes DrClaude
  • #66
FactChecker said:
it is only exact to the extent that the print wants to print the decimal equivalent of the binary. They could have gone on as far as they wanted.
The decimal number given is the exact equivalent of the binary number that is the closest floating point number to 0.1. Adding more decimal places would just add zeros to the right. See the "Representation Error" section at the end of the Python doc article I referenced earlier, which gives the details, including how to verify this using the interactive interpreter.
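One such check from the interactive interpreter (the integers shown assume IEEE 754 doubles):
Python:
from fractions import Fraction

# The float 0.1 is exactly an integer divided by a power of two:
print((0.1).as_integer_ratio())    # (3602879701896397, 36028797018963968)
print(Fraction(0.1))               # Fraction(3602879701896397, 36028797018963968)
print(36028797018963968 == 2**55)  # True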
 
  • Like
Likes pbuk
  • #67
FactChecker said:
I see. Thanks. So it is only exact to the extent that the print wants to print the decimal equivalent of the binary. They could have gone on as far as they wanted.
No, it is exact. Every finite length binary string representing a number with a fractional part is represented exactly by a finite length decimal string (with a 5 as the last digit).

The converse is not true (with the simplest example being ## 0.1_{10} ##).
 
  • Like
Likes FactChecker
  • #68
PeroK said:
##0.3 - 0.2 = 0.1## whatever any computer says. The incompetence of computers does not alter mathematical facts.
And ##x = x + 2## is false, but perfectly valid in many programming languages.
 
  • Like
Likes Vanadium 50
  • #69
DrClaude said:
And ##x = x + 2## is false, but perfectly valid in many programming languages.
That's purely syntactical.
 
  • #70
PeroK said:
That's purely syntactical.
So is ##0.3-0.2##. That's the entire point I am making.
 
  • Like
  • Skeptical
Likes Vanadium 50, pbuk and PeroK
  • #71
DrClaude said:
So is ##0.3-0.2##. That's the entire point I am making.
We are never going to agree on this. You say that whatever the computer does is correct. And I say that the computer may give the wrong answer.

If I know the computer will calculate something wrongly, then I have to work round that. But, how you conclude that makes the computer right is beyond me. It's the equivalent of a known bug, IMO.
 
  • #72
PeroK said:
We are never going to agree on this. You say that whatever the computer does is correct. And I say that the computer may give the wrong answer.

If I know the computer will calculate something wrongly, then I have to work round that. But, how you conclude that makes the computer right is beyond me. It's the equivalent of a known bug, IMO.
Writing 0.3 indicates the 64-bit floating-point binary closest to the decimal number 0.3. It is not the exact decimal 0.3, just like "=" is not the equality sign, but the assignment operator.
 
  • Like
Likes rbelli1
  • #73
DrClaude said:
Writing 0.3 indicates the 64-bit floating-point binary closest to the decimal number 0.3. It is not the exact decimal 0.3, just like "=" is not the equality sign, but the assignment operator.
The worst thing you can do when designing software is to have the software produce unexpected results that only a detailed knowledge of the inner workings of that particular release of that particular code can explain.

As others have pointed out, the problem is not with binary calculations per se, but with Python outputting more decimal places than are supported by its internal calculations.
 
  • #74
PS even if you say that ##0.3 - 0.2 = 0.09999999999999998## is right, you must admit it's a pretty dumb answer!

It's not a great advert for Python.
 
  • #75
PeroK said:
The worst thing you can do when designing software is to have the software produce unexpected results that only a detailed knowledge of the inner workings of that particular release of that particular code can explain.
Which is why many or most introductory programming classes devote time to exactly this problem of the limitations of real-number arithmetic.

A good programmer must have some knowledge of the inner workings of the machine he or she is working with.
PeroK said:
PS even if you say that ##0.3 - 0.2 = 0.09999999999999998## is right, you must admit it's a pretty dumb answer!

It's not a great advert for Python.
This isn't limited to just Python. Just about any programming language that doesn't support (more) exact Decimal representation of numbers will produce results like this.
 
  • #76
PeroK said:
You say that whatever the computer does is correct.
No, that's not what we've been saying. What we've been saying is that if the computer does something wrong, it's not the computer's fault, it's the human's fault. The computer can only do what the human tells it to do. When a Python program outputs ##0.09999999999999998## in response to you inputting ##0.3 - 0.2##, it's not arguing with you about the answer to a mathematical question. It's giving the mathematically correct answer to the question you actually told it to answer, even though that's a different mathematical question than the one you wanted it to answer.

We've been over all this already, but I'm repeating it because you still apparently don't get it.

PeroK said:
As others have pointed out, the problem is not with binary calculations per se, but with Python outputting more decimal places than are supported by its internal calculations.
And as other others have pointed out, this is missing the point. The actual number that the computer is working with is not ##0.1## in the case given; it is ##0.09999999999999998##. Even if it outputs ##0.1## because of rounding, that doesn't mean "oh, great, it's giving me the right answer". It just means you now are telling it to do two wrong things instead of one: first you're telling it to subtract the wrong numbers (the actual representable floats closest to ##0.3## and ##0.2## instead of those exact decimal numbers), and then you're telling it to output the wrong number (a rounded ##0.1## instead of the actual number it got from the subtraction).

Do you have any actual programming experience? Because this sort of thing leads to disaster in actual programming. If you want exact Decimal math, you tell the computer to do exact Decimal math. Python supports that. The right thing to do is to tell the computer to use that support. Not to complain that it didn't use it when you didn't tell it to.

PeroK said:
It's not a great advert for Python.
Tell that to the people who developed the IEEE 754 standard for floating point math, since that's where this originally comes from. It is not limited to Python, as has already been pointed out.
 
  • #77
Mark44 said:
Just about any programming language that doesn't support (more) exact Decimal representation of numbers will produce results like this.
Even languages that do have support for exact Decimal representation of numbers will produce results like this if their default interpretation of notations like ##0.3 - 0.2## is as floats, not decimals, most likely because Decimal support was added well after the original introduction of the language, when decisions on how to interpret such notations had to be made and couldn't be changed later because it would break too much existing code. Which, as I have already pointed out, is the case for Python.
 
  • #78
PeroK said:
The worst thing you can do when designing software
I'm not so sure I agree. The plausible but wrong answer seems to me to be worse.

When I was young, I wrote programs that used sensible defaults. Now I write code so that if something important is missing, the program doesn't just guess. It throws some sort of exception.
 
  • Like
Likes PeroK
  • #79
PeroK said:
Python outputting more decimal places than are supported by its internal calculations.
This is wrong. All representable floating point numbers have exact decimal equivalents. If anything, Python's print() generally outputs fewer decimal places than the exact decimal equivalents of the floating point numbers would require.
 
  • #80
PeroK said:
the problem is not with binary calculations per se, but with
...you wanting Python to interpret the notation ##0.3 - 0.2## as telling it to do the exact decimal subtraction of three-tenths minus two-tenths, instead of as telling it to subtract the closest representable floating point numbers to those decimal numbers. But Python doesn't, and, as I have already said, I think the default interpretation of such notations in Python (not to mention in all the other languages that interpret such notations the same way) is highly unlikely to change. But that doesn't mean you can't do the exact decimal subtraction in Python; it just means you have to tell Python to do it the way Python's API is set up for you to tell it that.
 
  • #81
PeroK said:
the problem is not with binary calculations per se
The problem is with the input. The issue is that 0.3 is not exactly representable as a binary floating point number. So we should make the computer program throw an exception if a user inputs 0.3 and force the user to input 1.0011001100110011001100110011001100110011001100110011 * 2^(-2) instead. I am sure that users will be thrilled that their programs are no longer wrong as they were before.
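As an aside, Python can show that bit pattern directly: float.hex() prints the stored value in hexadecimal (outputs assume IEEE 754 doubles):
Python:
print((0.3).hex())        # 0x1.3333333333333p-2 -- the binary value quoted above, in hex
print((0.2).hex())        # 0x1.999999999999ap-3
print((0.1).hex())        # 0x1.999999999999ap-4
# The subtraction result differs from 0.1 in the last hex digit of the mantissa:
print((0.3 - 0.2).hex())  # 0x1.9999999999998p-4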
 
  • Haha
  • Skeptical
Likes vela and PeroK
  • #82
PeterDonis said:
Tell that to the people who developed the IEEE 754 standard for floating point math, since that's where this originally comes from. It is not limited to Python, as has already been pointed out.
There's nothing wrong with the floating point math, given that it must produce an approximation. It's how you present that approximation to the programmer that's the point. If there were a syntax to say "show me everything", then that's fine: you get all the decimal places possible. But to default to an invalid arithmetic answer in simple cases seems wrong to me. And it does seem ironic that the average person can do basic arithmetic better than a computer in these simple cases. That I find wrong in the sense that IT should be better than that. It looks lazy and substandard to me.
 
  • #83
PeterDonis said:
This is wrong.
It's not wrong because it's what I intended to say! Ergo, by the logic of this thread, it's correct. That's a biological computer functioning according to its specification.
 
  • #84
PeroK said:
an invalid arithmetic answer
The answer is not invalid. It's the correct mathematical answer to the mathematical problem you told the computer to solve. This point has been made repeatedly and you continue to ignore it.

PeroK said:
It's not wrong because it's what I intended to say! Ergo by the logic of this thread it's correct.
This invalid claim has been rebutted repeatedly, yet you continue to make it.

I strongly suggest that you take a step back. You are not actually engaging with any of the points that have been made in response to you. That is not up to your usual quality of discussion.
 
  • #85
Okay, but I would say that there may be those who agree with me but are inhibited from taking my side of the argument by the aggressive and browbeating tone of the discussion.

I am surprised that challenging the supposed perfection of the silicon chip has elicited so strong a response.

There are places I am sure where the belief that a computer can never be wrong would be met by the same condemnation with which my belief that they can be wrong has been met here.
 
  • #86
PeroK said:
I would say that there may be those who agree with me
Agree with you about what? That's what you've never even bothered to specify. Or, to put it another way, what, exactly, do you think should be done by programming languages like Python in cases like the one under discussion?

Do you think Python should by default interpret ##0.3 - 0.2## as telling it to do decimal arithmetic instead of floating point? If so, then why can't you just say so? And then respond to the reasons that have already been given why that's unlikely to happen?

Do you think Python's interpreting ##0.3 - 0.2## as telling it to do floating point arithmetic is fine, but it should always round print() output of floats differently than it currently does? If so, how? And what justifies your preferred choice as compared to what Python currently does?

Or do you think Python's default parsing of ##0.3 - 0.2## and its defaults for print() output of floats are both fine, but there's some other problem lurking here that hasn't been identified? If so, what?

Having a substantive discussion with you about your responses to questions like the above would be great. But it can only happen if you stop getting hung up on irrelevancies that nobody else is even claiming and focus on the actual substantive issues.

PeroK said:
the supposed perfection of the silicon chip
PeroK said:
the belief that a computer can never be wrong
Nobody has made either of these claims. You are attacking straw men and failing to engage with the actual substantive issues.
 
  • #87
PeroK said:
I am surprised that challenging the supposed perfection of the silicon chip has elicited so strong a response.
What you're seeming to ignore is that the usual floating-point math that is implemented in languages such as Python, C, C++, Java, etc., is incapable of dealing precisely with certain fractional amounts. Most of these languages are capable of working with such expressions as 0.3 - 0.2, provided that you use the appropriate types and library routines; i.e., Decimal and associated routines.
PeroK said:
There are places I am sure where the belief that a computer can never be wrong would be met by the same condemnation with which my belief that they can be wrong has been met here.
I don't think anyone in this thread has asserted that computers can never be wrong. If computers are wrong, it is usually because programmers have written incorrect programs, including firmware such as the aforementioned Pentium division bug.
 
  • #88
I am going to try showing @PeroK, because telling doesn't seem to be working...
Python:
from fractions import Fraction
# Prints 1/10: the Fraction class is designed for working with fractions!
print(Fraction(3, 10) - Fraction(2, 10))

from decimal import Decimal
# Prints 0.1: the Decimal class is designed for working with decimals!
print(Decimal('0.3') - Decimal('0.2'))
# Be careful: why do you think this prints 0.09999999999999997779553950750?
print(Decimal(0.3) - Decimal(0.2))

# Prints 0.1 (this is one way to implement accounting software avoiding FP problems).
print((30 - 20) / 100)

# Prints 0.09999999999999998: the float class is designed to work with floats!
print(float(0.3) - float(0.2))

# Prints <class 'float'> because this is the default type for a numeric literal containing
# a decimal point.
print(type(0.3 - 0.2))

# Prints 0.09999999999999998: float (i.e. IEEE 754 double precision) is the default.
print(0.3 - 0.2)
 
  • Informative
Likes Mark44
  • #89
Mark44 said:
I don't think anyone in this thread has asserted that computers can never be wrong.
Actually, @PeterDonis has. But besides bugs in the design, like the Pentium bug, there's always a chance that a circuit fails randomly. Engineers design chips so that the mean time between failure is so long you should virtually never run into an error of that type under normal conditions. But that's not a guarantee it couldn't happen.
 
  • #90
vela said:
Actually, @PeterDonis has.
I did in an earlier post, but that was intended in a particular context, namely, the computer is correctly executing its program code, but the program code is not doing what the user intended. My point (which others in this thread have agreed with) was that in such a case, the problem is not that the computer is wrong; the computer is doing what the user told it to do. The issue is that the user is telling the computer to do the wrong thing--something other than what the user actually intended.

I did not intend, and more generally most of the discussion in this thread has not intended, to cover the cases you describe, in which, for a variety of reasons, the computer does not correctly execute its program code. In those cases, yes, the computer is wrong, in the sense that it is not doing what the user told it to do. Fortunately, as you note, such cases are extremely rare.
 
  • #91
anorlunda said:
Who else is old enough to remember the early IBM computers, 1401, 1620, 650?
The 1620 was the second computer I worked with.
The first machine I worked with was the 402 accounting machine (programmed via a jumper panel); then the Honeywell 200, then the IBM 1620 - and much later a 1401.

But getting back to the points in hand:

1) Encoding:
The central issue here is "encoding" - how a computer represents numbers - especially non-integer values.
One method is to pick units that allow integer representation. So 0.3 meters minus 0.2 meters might be an issue, but 30 centimeters minus 20 centimeters is no issue at all.
So the encoding could be a 16-bit 2's complement integer with centimeter units. The results will be precise within the range of -327.68 meters to 327.67 meters.
But the floating point arithmetic supported by many computer languages and (for the past few decades) most computer processors is targeted to support a wide range of applications. The values could represent time, distance, or non-transcendental numbers. So fixed point arithmetic will not do.
The purpose of floating point arithmetic is to attempt to provide convenience. If it does not suit you, use your own encoding. I have a case in point right in front of me. The device I am working with now is intended for low power to extend battery life. It has very little ROM programming area and RAM - much too little to hold a floating point library. So my encoding is always to preserve precision. I get better results than floating point. In fact, my target is ideal results (no loss of precision from the captured measurements to the decisions based on those measurements) and with planning, I always hit that target.

2) The floating point exponent:
A lot of the discussion has focused on how 1/10, 2/10, and 3/10 are not precisely encoded. But there's a floating point vulnerability in play that is more central to the results that @SamRoss is describing. When 0.1 or 1/10 is evaluated, the floating point encoding will be as close as the encoding allows to 0.1. And the same is true with 2/10 and 3/10.
Fundamentally, the floating point encoding is a signed binary exponent (power of 2) and a mantissa. If I wanted to be precise, I would discuss the phantom bit and other subtleties related to collating sequence - but I will stay general to stay on point.
The precision problem comes with mantissa precision (which is limited to the number of bits reserved for the mantissa) and the absolute precision (which is a function of the mantissa bits and that binary exponent).
And we will keep our mantissa in the range of 0.5 to 1.0.
So ##1/10## will be something like ##\frac{4}{5} \times 2^{-3}##. The ##2/10## becomes ##\frac{4}{5} \times 2^{-2}## and the ##3/10## becomes ##\frac{3}{5} \times 2^{-1}##.
During the subtraction, the intermediate (internally hidden) results will be:
##\frac{3}{5} \times 2^{-1} - \frac{4}{5} \times 2^{-2}##
aligning the mantissas: ##\frac{6}{5} \times 2^{-2} - \frac{4}{5} \times 2^{-2} = \frac{2}{5} \times 2^{-2}##
then readjusting the result for floating point encoding: ##\frac{4}{5} \times 2^{-3}##.
During that final readjustment, the mantissa is shifted, leaving the low-order bit unspecified. A zero is filled in - but that doesn't create any precision.
The problem would be even more severe (and easier to catch) if the subtraction was 10000.3 - 10000.2.

3) Is the computer "wrong":
Clearly this is an issue of semantics. But I would note that compiler statements are imperative. Even compiler statements described as "declarations" (such as "int n") are instructions (i.e., "imperatives") to the compiler and computer. Assuming there is no malfunction, the results, right or wrong, are the programmer's.
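A quick way to see the mantissa/exponent picture above from Python itself: math.frexp returns a mantissa in [0.5, 1) and a power-of-two exponent (printed values assume IEEE 754 doubles):
Python:
import math

# frexp(x) returns (m, e) with x == m * 2**e and 0.5 <= m < 1.
print(math.frexp(0.1))        # (0.8, -3)
print(math.frexp(0.2))        # (0.8, -2)
print(math.frexp(0.3))        # (0.6, -1)
# After the subtraction the result is renormalised back to exponent -3, and the
# mantissa comes out slightly below 0.8 -- the low-order information is gone:
print(math.frexp(0.3 - 0.2))  # (0.7999999999999998, -3)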
 
  • Like
Likes anorlunda
  • #92
PeroK said:
Okay, but I would say that there may be those who agree with me but are inhibited from taking my side of the argument by the aggressive and browbeating tone of the discussion.

I am surprised that challenging the supposed perfection of the silicon chip has elicited so strong a response.

There are places I am sure where the belief that a computer can never be wrong would be met by the same condemnation with which my belief that they can be wrong has been met here.
The current behaviour is caused by two things:
1. Python uses the IEEE floating point standard. This is important for compatibility with everything else.
2. Python will produce a different output for every different IEEE floating point number.
Since IEEE floating point can't represent most decimal values exactly, this will necessarily cause long decimal expansions for numbers that are close to a round number. This can be inconvenient for quick programs, and I might have to look up how format works again in Python, because I always forget.

I think it's much more important that, if a floating point number changes, I can see that it changed. I think you might get undetected errors if you can't see that a floating point number changed. So I really want point 2. The Python designers seem to think this also.

I don't understand the whole discussion of "the computer can never be wrong". The design of Python might be stupid, although I don't think so, but apart from faulty hardware or bugs, Python can't really be wrong if it does as it is designed.

I think challenging the supposed perfection of the silicon chip has elicited a strong response because no one understood what you meant. AFAIK we are just talking about the design of Python.
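A small illustration of point 2, i.e. that Python's default float output distinguishes every float and round-trips exactly (CPython behaviour):
Python:
x = 0.3 - 0.2
print(repr(x))              # 0.09999999999999998
print(float(repr(x)) == x)  # True: the default string maps back to exactly the same float
print(repr(0.1))            # 0.1 -- the shortest string that round-trips to this float
print(x == 0.1)             # False: different floats, so they get different default output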
 
  • #93
The problem discussed here actually has nothing to do with the design of Python specifically. This is the standard floating point implementation used by all computers in the underlying hardware.

Programmers of all languages are expected to understand how this works because of how standard it is.
 
  • Like
Likes Mark44
  • #94
No. At least since post #7, we have been discussing the design of Python, and how it prints floats by default. Other languages do not do this in the same way, even if they also use the same hardware and the same bit patterns to represent floats.
 
  • Skeptical
  • Like
Likes vela and Dale
  • #95
PeroK said:
There's nothing wrong with the floating point math, given that it must produce an approximation.
That is not the only way to think about it.

Floating point math does not deliver an approximation. It delivers an exact result. Given a pair of 64 bit floating point operands, their sum (or difference or product or quotient) is a defined 64 bit floating point result. The result is an algebra.

It is not the algebra of the real numbers under addition, subtraction, multiplication and division. It is the algebra of the 64 bit IEEE floats under addition, subtraction, multiplication and division.

The floating point algebra does not have all of the handy mathematical properties that we are used to in the algebra of the real numbers. For instance, ##\frac{3x}{3}## may not be equal to ##3\frac{x}{3}## in the floating point algebra. That does not make floating point wrong. It merely makes it different.

The two algebras deliver results that closely approximate each other in most cases. Programmers need to be aware that there are differences.
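A concrete instance of this, using a different pair of expressions than ##\frac{3x}{3}## versus ##3\frac{x}{3}## but making the same point (results assume IEEE 754 doubles):
Python:
# Addition in the float algebra is not associative:
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a)       # 0.6000000000000001
print(b)       # 0.6
print(a == b)  # False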
 
  • Like
Likes DrClaude, Dale, anorlunda and 1 other person
  • #96
jbriggs444 said:
That does not make floating point wrong. It merely makes it different.
Well said.

I just want to add that your point is not limited to floating point. My first big project was a training simulator. All of the models were implemented with 24 bit fixed point integer arithmetic. That was because of the dismally slow performance of floating point in those days.

Everything you said about floating point algebra being different, yet delivering results that closely approximate other algebras, applies to fixed point algebras too.
 
  • Like
Likes jbriggs444
  • #97
willem2 said:
No. At least since post #7, we have been discussing the design of Python, and how it prints floats by default. Other languages do not do this in the same way, even if they also use the same hardware and the same bit patterns to represent floats.
As was mentioned in post #13 by @pbuk , Python does provide ways of printing out rounded results.
That said, Python rounds a floating point 0.1 to "0.1" on output. I would not suggest a change to Python that would, by default, also round 0.3-0.2 to "0.1" on output.

I don't know what "other language" you are using for comparison.
Try this in javascript: console.log(0.1);console.log(0.3-0.2);
 
  • #98
Dale said:
The problem discussed here actually has nothing to do with the design of Python specifically. This is the standard floating point implementation used by all computers in the underlying hardware.
It's the point most here seem to be fixated on. I think everyone who posted in the thread understands why the result of 0.3-0.2 is slightly different than 0.1 as approximated by the computer hardware. So the repeated explanations of why the results are different miss the point.

Dale said:
Programmers of all languages are expected to understand how this works because of how standard it is.
Sure, programmers should be aware of the ins and outs of floating-point arithmetic on a computer, but should the average user have to be?

Which way of displaying the result is more useful most of the time: 0.1 or 0.09999999999999998? For the same calculation in Excel, APL, Mathematica, Wolfram Alpha, Desmos, and, I would expect, most user-centric software, the result is rendered as 0.1. Why? Because most people don't care that the computer approximates 0.3-0.2 as 0.09999999999999998.

Note that both 0.09999999999999998 and 0.1 are rounded results. Neither is the exact representation of the computer's result from calculating 0.3-0.2. As mentioned in the Python documentation, displaying the exact result wouldn't be very useful to most people, so it displays a rounded result. The question is why round to 16 significant digits instead of, say, 7 or 10, by default?

To me, the choice of 16 digits reveals a computer-centric mindset, i.e., "53 bits of precision is about 16 decimal digits, so this is what you should see." This is the mindset most people who posted seem to have. The choice of fewer digits, on the other hand, points to a user-focused mindset, i.e., "if I have $0.30 and spend $0.20, then I'll have $0.10 left over, not $0.09999999999999998." (Don't bother with a "you should represent these as integers" tangent.)
 
  • Love
Likes PeroK
  • #99
vela said:
Which way of displaying the result is more useful most of the time
Which kind of user is more likely to be typing in "0.3 - 0.2" directly as Python code or using simple print() calls to display results?

The kind of "average user" you mention is probably not doing that, because they're probably not writing code or using the Python interactive prompt. They're probably using an application written by someone else, which is displaying results to them based on the programmer's understanding of the needs of typical users of that application. That application is going to be using specific input and output functions that are suitable for that application.

For example, if the application is a typical calculator program, it is not going to take the input "0.3 - 0.2" from the user and interpret it as floating point arithmetic. It's going to interpret it as decimal arithmetic. So it's not going to take the user input "0.3 - 0.2" and interpret it directly as Python source code. Nor is it going to just use print() to display results; it's going to format the output suitably for the output of a calculator program.
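As a rough sketch of what such a front end might do (the function below is hypothetical, not taken from any real calculator program):
Python:
from decimal import Decimal

def calculate(user_input: str) -> str:
    # Toy parser for "<number> <op> <number>", interpreted as exact decimals.
    left, op, right = user_input.split()
    a, b = Decimal(left), Decimal(right)
    result = a - b if op == "-" else a + b
    # A real application would apply its own display formatting here.
    return str(result)

print(calculate("0.3 - 0.2"))  # 0.1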

In other words, the ultimate answer to the OP's question is that Python source code and the Python interactive prompt are not user interfaces for average users. They're user interfaces for Python programmers. So criticizing them on the basis that they don't do what would be "the right thing" for average users is beside the point.

vela said:
The question is why round to 16 decimal places instead of, say, 7 or 10, by default?
You give the answer to this later in your post:

vela said:
To me, the choice of 16 digits reveals a computer-centric mindset, i.e., "53 bits of precision is about 16 decimal digits, so this is what you should see."
And for a user interface for programmers, I think this is perfectly reasonable.

I agree it would not be reasonable for a user interface for average users, but, as above, that is not what Python source code or the Python interactive prompt are. I believe I have already said earlier in this discussion that interfaces for average users should be built using the formatting functions that are provided by Python for that purpose. Nobody writing a program for average users should be using raw print() statements for output or interpreting user input as if it were raw Python source code. And as far as I know, nobody writing a program for average users does that.
 
  • Like
Likes .Scott
  • #100
vela said:
Sure, programmers should be aware of the ins and outs of floating-point arithmetic on a computer, but should the average user have to be?
If you are writing Python code and using the Python print function then you are by definition a programmer and not an average user.
 
Last edited:
  • Like
Likes .Scott and PeterDonis
