# Why do 0.1 and 1/10 work but 0.3-0.2 doesn't?

Homework Helper
Gold Member
Then the language specification is wrong. Or, at least, is flawed.
I disagree. It fits the purpose of controlling computer processor hardware, which has limits. That is the job of any programming language.

pbuk
Mentor
Then the language specification is wrong.
As I have already said, that complaint does not belong here, it belongs in the Python mailing lists. Good luck convincing them that their language specification, developed over several decades in response to tons of user input, is wrong.

If you write a computer programme that specifies that the capital of the USA is New York, then that is wrong.
That's not what Python is doing here. The program code literals 0.3 and 0.2 are bit strings. They have no intrinsic meaning. They only have meaning based on their semantic relationships with the rest of the world. When their semantic relationship with the rest of the world is that you (or a mathematician) are using them in sentences, they mean the decimal numbers three-tenths and two-tenths. When their semantic relationship with the rest of the world is that they are part of Python program code, they mean what @DrClaude described in post #16. The second meaning is not wrong. It's just different.
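To make that second meaning concrete, the interpreter can display the exact values those literals denote (a minimal sketch; the standard library's Decimal(float) conversion is exact, so it exposes the stored binary64 values):

Python:
from decimal import Decimal

# Decimal(float) converts exactly, exposing the stored binary64 values
print(Decimal(0.3))  # 0.299999999999999988897769753748434595763683319091796875
print(Decimal(0.2))  # 0.200000000000000011102230246251565404236316680908203125
print(0.3 - 0.2)     # 0.09999999999999998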

Homework Helper
Gold Member
2022 Award
I disagree. It fits the purpose of controlling computer processor hardware, which has limits. That is the job of any programming language.
Can a computer ever be wrong?

Mentor
Can a computer ever be wrong?
I would say no, because computers can only do what their human programmers tell them to do. So if the humans think what the computer is doing is wrong, they need to tell it to do something else.

If we ever develop computers that could be considered to be independent agents the way humans are, then those computers would be capable of being wrong. But the computers we use now to run Python programs aren't like that.

Homework Helper
Gold Member
Can a computer ever be wrong?
Sure. It is often not perfectly accurate. It only has to be accurate enough for the job that it is designed for. The history of computers includes machines with mechanical, analog, fixed-point, 32-bit, 64-bit, and (a few) 128-bit accuracy. They all had/have limitations.

Mentor
Then the language specification is wrong.
By the way, this feature of the language specification is not unique to Python. Consider this C program and its output:

C:
#include <stdio.h>

int main() {
    printf("%.17f\n", 0.1);
    printf("%.17f\n", 0.3 - 0.2);
}

Code:
0.10000000000000001
0.09999999999999998
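For comparison, a Python sketch of the same two lines (the same IEEE 754 doubles underneath):

Python:
print(f"{0.1:.17f}")        # 0.10000000000000001
print(f"{0.3 - 0.2:.17f}")  # 0.09999999999999998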

FactChecker
Staff Emeritus
What is the phase of the moon? Regulars in several threads seem so peckish today.

Homework Helper
Gold Member
C:
#include <stdio.h>

int main() {
    printf("%.17f\n", 0.1);
    printf("%.17f\n", 0.3 - 0.2);
}

Code:
0.10000000000000001
0.09999999999999998
Ha! I didn't expect an extra bit in the first printout!

Staff Emeritus
Homework Helper
Python's floating point display conventions have evolved over several decades now, in response to a lot of user input through their mailing lists. I strongly suspect that the issue you raised was discussed.

Also, Python already has rounding functions, so users who want results rounded to a "useful" number of digits already have a way of getting them. Since different users will want different numbers of "useful" digits, saying that "Python should display the useful result" doesn't help since there is no single "useful result" that will work for all users. So it seems better to just leave the "pedantically correct" result as the default to display, and let users who want rounding specify the particular "useful" rounding they need for their use case.
It does seem to come down to a question of philosophy. If you see Python as a tool for solving problems, then you'd prefer to see 0.1 as the result of 0.3-0.2. As long as the language gives you a way to display more digits, you haven't lost any capabilities, but you've removed the nuisance of having to round values manually. If you see Python as simply a front end to the computer's guts, then you may perhaps prefer Python not do anything without explicitly being told to, so seeing 0.09999999999999998 might be more to your liking.
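Both views are easy to demonstrate in a minimal sketch (the variable name is illustrative):

Python:
x = 0.3 - 0.2
print(x)            # 0.09999999999999998 -- the "front end to the guts" view
print(round(x, 1))  # 0.1                 -- the "tool for solving problems" view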

Mentor
I didn't expect an extra bit in the first printout!
It's the same behavior that Python gave in the interactive interpreter output I posted in post #33.

Staff Emeritus
Homework Helper
By the way, this feature of the language specification is not unique to Python. Consider this C program and its output:
But does C have a format it uses when one is not specified in the code?

Mentor
you've removed the nuisance of having to round values manually
No, you haven't. The print() function does not change the underlying value; if the underlying value is an intermediate step in a computation with many steps, it's still going to be the floating point binary value that @DrClaude described, not the rounded value. That's most likely not going to create a problem, but if you want to understand what the computer is actually doing and what values it is actually working with, it's the kind of thing you need to know. (For example, such knowledge is essential when debugging.)
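A minimal sketch of that point (the variable name is illustrative):

Python:
x = 0.3 - 0.2
print(f"{x:.1f}")  # 0.1 -- rounding affects the display only
print(x == 0.1)    # False: the stored value is unchanged
print(x)           # 0.09999999999999998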

Mentor
does C have a format it uses when one is not specified in the code?
I think C defaults to 6 decimal digits for floating point in printf, so it would display both values rounded to that number of digits, which would of course disguise the difference. But it would not, as I noted in post #47, change the underlying values themselves; if 0.3 - 0.2 were an intermediate step in a computation with many steps, the program would be working with the actual binary floating point value, not the decimal number 0.1.

Homework Helper
Gold Member
2022 Award
My GPS does not always have me on the correct road when two roads are side-by-side (like an interstate and its access road). I would prefer something embedded in the road.
Based on this thread, your GPS always has you on the correct road. As the end user, you just didn't study the GPS code and/or specification carefully enough!

Homework Helper
Gold Member
It's the same behavior that Python gave in the interactive interpreter output I posted in post #33.
Sorry. I missed that. Thanks!

Mentor
You are going off the rails here. Nobody is claiming anything like this.

Staff Emeritus
Homework Helper
Can a computer ever be wrong?
Yes. There could be a bug in the CPU. Intel infamously knew of such a bug but didn't bother to tell anyone. A researcher painstakingly tracked the obscure bug down after noticing that his simulation produced different results on different computers.

Homework Helper
Gold Member
Based on this thread, your GPS always has you on the correct road. As the end user, you just didn't study the GPS code and/or specification carefully enough!
Ha! My ex also would have liked it if I went the wrong way or circled forever. ;-)

Staff Emeritus
Homework Helper
No, you haven't. The print() function does not change the underlying value; if the underlying value is an intermediate step in a computation with many steps, it's still going to be the floating point binary value that @DrClaude described, not the rounded value. That's most likely not going to create a problem.
I wasn't referring to how the numbers are represented by the computer internally. Mathematica, C, APL, Python all appear to produce the same result internally. I was talking about how such a number is displayed to the user. Python seems to be the only one that displays the result to so many digits of precision. The others seem to reflect the idea that displaying the rounded value by default is more sensible, presumably because the rounded result is the more useful one to most users.
If you want to understand what the computer is actually doing and what values it is actually working with, it's the kind of thing you need to know. (For example, such knowledge is essential when debugging.)
I see this scenario as an edge case. If you're trying to track down some puzzling result, then sure, knowledge of how floating point numbers work might be useful, but I'd argue most people using a computer to do calculations would rather know that 0.3-0.2 is 0.1.

Anyway, Python does what Python does. It's a design choice, just one I find weird and questionable.

PeroK
Mentor
Python seems to be the only one that displays the result to so many digits of precision.
As I pointed out earlier, the print() function in Python is not intended for nicely formatted output if you just print objects directly. It is intended only for "quick and dirty" uses like debugging. You are expected to use the formatting capabilities if you want nicely formatted output.

most people using a computer to do calculations would rather know that 0.3-0.2 is 0.1.
The issue here is what the notation 0.3 - 0.2 should correspond to. When Python was originally developed, there was no decimal support, so the only option for translating such a notation was floating point.

One could perhaps argue that in modern Python, which has decimal support, simple decimal notations like 0.3 - 0.2 should be interpreted as decimals, not floats, but I don't know if anyone has ever proposed that. It would likely be viewed as breaking too much existing code to be workable. (I don't think any other commonly used language has such a notation defaulting to decimal instead of floating point.)
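For anyone who does want decimal semantics today, the standard decimal module already provides them, at the cost of writing the literals as strings (a sketch):

Python:
from decimal import Decimal

# String construction gives true decimal arithmetic
print(Decimal("0.3") - Decimal("0.2"))  # 0.1
# Float construction captures the binary approximations instead
print(Decimal(0.3) - Decimal(0.2) == Decimal("0.1"))  # False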

Mentor
Regulars in several threads seem so peckish today.
Perhaps you meant "peevish." "Peckish" relates to being hungry.
There could be a bug in the CPU. Intel infamously knew of such a bug but didn't bother to tell anyone.
The infamous Pentium division bug cost Intel somewhere around half a billion dollars to replace the flawed CPUs. Incidentally, I believe I had the first published test of how a knowledgeable user could determine whether he had a bug-afflicted Pentium processor. The article was published in PC Techniques, in Feb/Mar of 1995.

Mentor
The others seem to reflect the idea that displaying the rounded value by default is more sensible, presumably because the rounded result is the more useful one to most users.
But rounded to what precision? C rounds by default to 6 digits. (Other languages might have different defaults.) But the "right" result being claimed here has only one digit to the right of the decimal point. The extra 5 digits in C contribute nothing except 5 extra zeros after the 0.1. Is that a problem? If not, why not? At what point do we have "too many" extra digits? And how is such a point to be determined in a way that is not arbitrary?
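The arbitrariness is easy to see by sweeping the display precision (a sketch):

Python:
x = 0.3 - 0.2
for p in (1, 6, 17):
    print(f"{x:.{p}f}")
# 0.1
# 0.100000
# 0.09999999999999998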

pbuk
Homework Helper
Gold Member
Edit: I seem to have missed a whole page of posts! Never mind, I'll leave this here for posterity.

Because according to the Python documentation, "That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead."
Here is that statement in context:
Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. On most machines, if Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display
Code:
>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead.

I guess it depends on whether you think the user should cater to the computer or the other way around. Most people don't need or want floating-point results to 15~17 decimal places, so Python should display the useful result, not the pedantically correct one, as the default.
Most people don't write software. People that do generally want the computer to
• do what they tell it to
• in a predictable manner
• to the best of its abilities
so that they can make the decisions about the trade-off between precision and accuracy themselves.

This is not unique to Python; this is how print() works in every language I can think of that runs on IEEE 754 (C++'s std::cout << is not print(), and prints only 6 significant figures by default!).
My point is that precision is a function of the underlying hardware, not the programming language.
But most modern languages (including Python) require IEEE 754, and if the hardware did not implement it then it would need to be emulated in software.
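On a typical CPython build this is easy to confirm (a minimal sketch):

Python:
import sys

print(sys.float_info.mant_dig)  # 53 -- IEEE 754 binary64 significand bits
print(sys.float_info.epsilon)   # 2.220446049250313e-16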

Homework Helper
Gold Member
(I don't think any other commonly used language has such a notation defaulting to decimal instead of floating point.)
Depends on whether you count COBOL as "commonly used" - there is still a lot in the wild!

Mentor
Depends on whether you count COBOL as "commonly used" - there is still a lot in the wild!
Hm, yes, COBOL is an interesting example because it was, AFAIK, mainly intended for financial applications, where you want decimal arithmetic, not floating point. (Also, IIRC, floating point hadn't even been well standardized when COBOL was originally developed.)

pbuk
Homework Helper
Gold Member
Here is that statement in context:

Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. On most machines, if Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display

>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead.
@pbuk, This confuses me. What is the point of storing and working with that many decimal places when the last 37 are garbage? Are you sure that it is not printing a lot more than its calculations support? Is the Python language assuming that the computer supports so much more than 64-bit accuracy or does it try to use software to achieve more significant digits?

Homework Helper
Gold Member
Hm, yes, COBOL is an interesting example because it was, AFAIK, mainly intended for financial applications, where you want decimal arithmetic, not floating point.
Yes indeed. One of my first jobs was fixing a payroll application that had been translated from COBOL (fixed-point BCD) to Pascal using floating point arithmetic instead of integers.
(Also, IIRC, floating point hadn't even been well standardized when COBOL was originally developed.)
COBOL was around for 25 years before IEEE 754!

PeroK
Mentor
What is the point of storing and working with that many decimal places when the last 37 are garbage?
It's not storing decimal digits, it's storing binary digits. The article is simply giving the mathematically exact decimal equivalent of the binary number that is stored as the closest floating point number to 0.1.

FactChecker
Homework Helper
Gold Member
It's not storing decimal digits, it's storing binary digits. The article is simply giving the mathematically exact decimal equivalent of the binary number that is stored as the closest floating point number to 0.1.
I see. Thanks. So it is only exact to the extent that the print wants to print the decimal equivalent of the binary. They could have gone on as far as they wanted.

Homework Helper
Gold Member
What is the point of storing and working with that many decimal places when the last 37 are garbage?
It's not storing decimal digits, it's storing binary digits. The article is simply giving the mathematically exact decimal equivalent of the binary number that is stored as the closest floating point number to 0.1.
^^^ This. And it is nothing special about Python: the basic operations of https://en.wikipedia.org/wiki/IEEE_754 are implemented in hardware in almost all general-purpose floating point processors produced in the last 30 years, including the one in whatever device you are using now.

DrClaude
Mentor
it is only exact to the extent that the print wants to print the decimal equivalent of the binary. They could have gone on as far as they wanted.
The decimal number given is the exact equivalent of the binary number that is the closest floating point number to 0.1. Adding more decimal places would just add zeros to the right. See the "Representation Error" section at the end of the Python doc article I referenced earlier, which gives the details, including how to verify this using the interactive interpreter.
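This is easy to verify directly (a sketch; compare with the docs string quoted above):

Python:
print(f"{0.1:.55f}")  # 0.1000000000000000055511151231257827021181583404541015625
print(f"{0.1:.60f}")  # the same 55 digits followed by five zeros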

pbuk
Homework Helper
Gold Member
I see. Thanks. So it is only exact to the extent that the print wants to print the decimal equivalent of the binary. They could have gone on as far as they wanted.
No, it is exact. Every finite length binary string representing a number with a fractional part is represented exactly by a finite length decimal string (with a 5 as the last digit).

The converse is not true (with the simplest example being ## 0.1_{10} ##).
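A quick interpreter check of both directions (a sketch; again, Decimal(float) conversion is exact):

Python:
from decimal import Decimal

# Finite binary fractions are exact finite decimals, ending in 5
print(Decimal(0.25))    # 0.25
print(Decimal(2**-10))  # 0.0009765625
# The converse fails: 0.1 has no finite binary expansion
print(Decimal(0.1) == Decimal("0.1"))  # False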

FactChecker
Mentor
##0.3 - 0.2 = 0.1## whatever any computer says. The incompetence of computers does not alter mathematical facts.
And ##x = x + 2## is false, but perfectly valid in many programming languages.
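That is because = in program code is assignment, not an equation; a trivial sketch:

Python:
x = 5
x = x + 2  # rebinds x to the value of x + 2, i.e. 7
print(x)   # 7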