##0.3 - 0.2 = 0.1## whatever any computer says. The incompetence of computers does not alter mathematical facts.
PeroK said: "##0.3 - 0.2 = 0.1## whatever any computer says."

I'm sorry, but you are not even responding to the counter-arguments that have been made. You are simply declaring by fiat that the notation ##0.3 - 0.2## means what you say it means, not what the actual specification of the language we are discussing, Python, says it means for program code in that language. Sorry, but the Humpty Dumpty principle doesn't work for program code. The language is going to interpret your code the way the language specification says, not the way you say.
FactChecker said: "That makes me wonder what it would print for 0.1 or 1/10 if we force a format with the number of digits of the 0.09999999999999998 printout."

Testing this in the interactive interpreter is simple:
>>> print("{:.17f}".format(0.1))
0.10000000000000001
>>> print("{:.17f}".format(1/10))
0.10000000000000001
>>> print("{:.17f}".format(0.3 - 0.2))
0.09999999999999998
>>> print("{:.17f}".format(0.3))
0.29999999999999999
>>> print("{:.17f}".format(0.2))
0.20000000000000001
PeroK said: "The incompetence of computers does not alter mathematical facts."

This is the programming forum, not the math forum. Nobody is arguing about the mathematical facts. We are simply pointing out, and you are apparently refusing to acknowledge, that which mathematical facts a particular notation in a particular programming language refers to is determined by the language specification, not by your personal desires.
PeterDonis said: "You are simply declaring by fiat that the notation ##0.3 - 0.2## means what you say it means, not what the actual specification of the language we are discussing, Python, says it means for program code in that language. ..."

Then the language specification is wrong. Or, at least, is flawed. If you write a computer programme that specifies that the capital of the USA is New York, then that is wrong. It's no defence to say that as far as the computer is concerned it's the right answer. It's wrong.
PeroK said: "Then the language specification is wrong. Or, at least, is flawed."

I disagree. It fits the purpose of controlling computer processor hardware that has limits. That is the job of any programming language.
PeroK said: "Then the language specification is wrong."

As I have already said, that complaint does not belong here; it belongs on the Python mailing lists. Good luck convincing them that their language specification, developed over several decades in response to tons of user input, is wrong.
PeroK said: "If you write a computer programme that specifies that the capital of the USA is New York, then that is wrong."

That's not what Python is doing here. The program code literals 0.3 and 0.2 are bit strings. They have no intrinsic meaning; they only have meaning through their semantic relationships with the rest of the world. When that relationship is that you (or a mathematician) are using them in sentences, they mean the decimal numbers three-tenths and two-tenths. When it is that they are part of Python program code, they mean what @DrClaude described in post #16. The second meaning is not wrong. It's just different.
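For concreteness, the "bit strings" point can be checked directly in the interpreter. A minimal sketch using only the standard library's struct module; the exact bits shown assume IEEE 754 binary64 doubles, which is what CPython uses on essentially all platforms:

>>> import struct
>>> # the literal 0.3 compiles to this 64-bit pattern, not to the decimal 3/10
>>> format(struct.unpack(">Q", struct.pack(">d", 0.3))[0], "064b")
'0011111111010011001100110011001100110011001100110011001100110011'
>>> (0.3).hex()
'0x1.3333333333333p-2'

That bit pattern is the nearest representable double to three-tenths, which is not three-tenths exactly.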
FactChecker said: "I disagree. It fits the purpose of controlling computer processor hardware that has limits. That is the job of any programming language."

Can a computer ever be wrong?
PeroK said: "Can a computer ever be wrong?"

I would say no, because computers can only do what their human programmers tell them to do. So if the humans think what the computer is doing is wrong, they need to tell it to do something else.
PeroK said: "Can a computer ever be wrong?"

Sure. It is often not perfectly accurate. It only has to be accurate enough for the job it is designed for. The history of computers includes machines of mechanical, analog, fixed-point, 32-bit, 64-bit, and (a few) 128-bit accuracy. They all had, or have, limitations.
PeroK said: "Then the language specification is wrong."

By the way, this feature of the language specification is not unique to Python. Consider this C program and its output:
#include <stdio.h>

int main() {
    /* print each value to 17 digits after the decimal point,
       enough to expose the underlying binary values */
    printf("%.17f\n", 0.1);
    printf("%.17f\n", 0.3 - 0.2);
    return 0;
}
0.10000000000000001
0.09999999999999998
PeterDonis said: (the C program and its output quoted above)

Ha! I didn't expect an extra bit in the first printout!
PeterDonis said: "Python's floating point display conventions have evolved over several decades now, in response to a lot of user input through their mailing lists. I strongly suspect that the issue you raised was discussed."

It does seem to come down to a question of philosophy. If you see Python as a tool for solving problems, then you'd prefer to see 0.1 as the result of 0.3 - 0.2. As long as the language gives you a way to display more digits, you haven't lost any capabilities, but you've removed the nuisance of having to round values manually. If you see Python as simply a front end to the computer's guts, then you may prefer that Python not do anything without being explicitly told to, so seeing 0.09999999999999998 might be more to your liking.
Also, Python already has rounding functions, so users who want results rounded to a "useful" number of digits have a way of getting them. Since different users will want different numbers of "useful" digits, saying that "Python should display the useful result" doesn't help: there is no single "useful result" that will work for all users. So it seems better to leave the "pedantically correct" result as the default display, and let users who want rounding specify the particular "useful" rounding they need for their use case.
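For reference, a quick interpreter sketch of those existing facilities (round() and the format mini-language, both standard Python):

>>> x = 0.3 - 0.2
>>> round(x, 1)
0.1
>>> "{:.3f}".format(x)
'0.100'
>>> x
0.09999999999999998

Each user picks the "useful" precision they want; only the default display shows the unrounded value.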
FactChecker said: "I didn't expect an extra bit in the first printout!"

It's the same behavior that Python gave in the interactive interpreter output I posted in post #33.
PeterDonis said: "By the way, this feature of the language specification is not unique to Python. Consider this C program and its output:"

But does C have a format it uses when one is not specified in the code?
vela said: "you've removed the nuisance of having to round values manually"

No, you haven't. The print() function does not change the underlying value; if the underlying value is an intermediate step in a computation with many steps, it's still going to be the floating point binary value that @DrClaude described, not the rounded value. That's most likely not going to create a problem, but if you want to understand what the computer is actually doing and what values it is actually working with, it's the kind of thing you need to know. (For example, such knowledge is essential when debugging.)
vela said: "does C have a format it uses when one is not specified in the code?"

C's printf defaults to 6 digits after the decimal point for %f, so it would display both values rounded to that precision, which would of course disguise the difference. But it would not, as I noted in post #47, change the underlying values themselves; if 0.3 - 0.2 were an intermediate step in a computation with many steps, the program would be working with the actual binary floating point value, not the decimal number 0.1.
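To illustrate the display-versus-value distinction in Python terms (the exact residue shown assumes IEEE 754 doubles):

>>> x = 0.3 - 0.2
>>> "{:.1f}".format(x)
'0.1'
>>> x == 0.1
False
>>> x - 0.1
-2.7755575615628914e-17

The formatted string says 0.1, but the value the program keeps computing with differs from the double 0.1 by about 2.8e-17.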
FactChecker said: "My GPS does not always have me on the correct road when two roads are side-by-side (like an interstate and its access road). I would prefer something embedded in the road."

Based on this thread, your GPS always has you on the correct road. As the end user, you just didn't study the GPS code and/or specification carefully enough!
PeterDonis said: "It's the same behavior that Python gave in the interactive interpreter output I posted in post #33."

Sorry. I missed that. Thanks!
PeroK said: "Based on this thread, your GPS always has you on the correct road."

You are going off the rails here. Nobody is claiming anything like this.
PeroK said: "Can a computer ever be wrong?"

Yes. There could be a bug in the CPU. Intel infamously knew of such a bug but didn't bother to tell anyone. A researcher painstakingly tracked the obscure bug down when he noticed that his simulation produced different results on different computers.
PeroK said: "Based on this thread, your GPS always has you on the correct road. As the end user, you just didn't study the GPS code and/or specification carefully enough!"

Ha! My ex also would have liked it if I went the wrong way or circled forever. ;-)
PeterDonis said: "No, you haven't. The print() function does not change the underlying value ..."

I wasn't referring to how the numbers are represented internally by the computer. Mathematica, C, APL, and Python all appear to produce the same result internally. I was talking about how such a number is displayed to the user. Python seems to be the only one that displays the result to so many digits of precision. The others seem to reflect the idea that displaying the rounded value by default is more sensible, presumably because the rounded result is the more useful one to most users.
PeterDonis said: "If you want to understand what the computer is actually doing and what values it is actually working with, it's the kind of thing you need to know. (For example, such knowledge is essential when debugging.)"

I see this scenario as an edge case. If you're trying to track down some puzzling result, then sure, knowledge of how floating point numbers work might be useful, but I'd argue most people using a computer to do calculations would rather know that 0.3 - 0.2 is 0.1.
vela said: "Python seems to be the only one that displays the result to so many digits of precision."

As I pointed out earlier, the print() function in Python is not intended for nicely formatted output if you just print objects directly. It is intended only for "quick and dirty" uses like debugging. You are expected to use the formatting capabilities if you want nicely formatted output.
vela said: "most people using a computer to do calculations would rather know that 0.3 - 0.2 is 0.1."

The issue here is what the notation 0.3 - 0.2 should correspond to. When Python was originally developed, there was no decimal support, so the only option for translating such a notation was floating point.
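Python did later gain exact decimal arithmetic, in the standard library's decimal module (added in Python 2.4). It gives exactly the behavior being argued for here, at the cost of asking for it explicitly:

>>> from decimal import Decimal
>>> Decimal("0.3") - Decimal("0.2")
Decimal('0.1')
>>> Decimal("0.3") - Decimal("0.2") == Decimal("0.1")
True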
anorlunda said: "Regulars in several threads seem so peckish today."

Perhaps you meant "peevish." "Peckish" relates to being hungry.
vela said: "There could be a bug in the CPU. Intel infamously knew of such a bug but didn't bother to tell anyone."

The infamous Pentium division bug cost Intel somewhere around half a billion dollars to replace the flawed CPUs. Incidentally, I believe I had the first published test of how a knowledgeable user could determine whether he had a bug-afflicted Pentium processor. The article was published in PC Techniques, in the Feb/Mar 1995 issue.
vela said: "The others seem to reflect the idea that displaying the rounded value by default is more sensible, presumably because the rounded result is the more useful one to most users."

But rounded to what precision? C's printf rounds %f output to 6 digits after the decimal point by default. (Other languages might have different defaults.) But the "right" result being claimed here has only one digit to the right of the decimal point, so the extra digits in C contribute nothing except a run of zeros after the 0.1. Is that a problem? If not, why not? At what point do we have "too many" extra digits? And how is such a point to be determined in a way that is not arbitrary?
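For what it's worth, Python's own answer to the "arbitrary digit count" question, since version 3.1, has been: display the shortest decimal string that converts back to exactly the same double. A short sketch:

>>> x = 0.3 - 0.2
>>> repr(x)
'0.09999999999999998'
>>> float(repr(x)) == x
True
>>> float("0.1") == x
False

So 17 digits appear for 0.3 - 0.2 only because no shorter string names that particular double, while the literal 0.1 displays as 0.1 because that one-digit string already round-trips.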
vela said: "Because according to the Python documentation, 'That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead.'"

Here is that statement in context:

"Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. On most machines, if Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display

>>> 0.1
0.1000000000000000055511151231257827021181583404541015625

That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead."
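The full value the docs refer to can be reproduced with the decimal module, since Decimal(float) performs an exact binary-to-decimal conversion (values assume IEEE 754 doubles):

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal(0.3 - 0.2)
Decimal('0.09999999999999997779553950749686919152736663818359375')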
vela said: "I guess it depends on whether you think the user should cater to the computer or the other way around. Most people don't need or want floating-point results to 15~17 decimal places, so Python should display the useful result, not the pedantically correct one, as the default."

Most people don't write software. People who do generally want the computer to do exactly what they tell it to do, not guess at what might be more "useful."
anorlunda said: "My point is that precision is a function of the underlying hardware, not the programming language."

(And std::cout << is not print(), and it prints only 6 sig figs!) But most modern languages (including Python) require IEEE 754, and if the hardware did not implement it then it would need to be emulated in software.
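One quick way to confirm what the hardware/language combination gives you in Python (on an IEEE 754 platform, which is nearly all of them today): sys.float_info reports a 53-bit significand, i.e. binary64, and 15 decimal digits guaranteed to survive a round trip.

>>> import sys
>>> sys.float_info.mant_dig
53
>>> sys.float_info.dig
15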
PeterDonis said: "(I don't think any other commonly used language has such a notation defaulting to decimal instead of floating point.)"

Depends on whether you count COBOL as "commonly used" - there is still a lot of it in the wild!
pbuk said: "Depends on whether you count COBOL as 'commonly used' - there is still a lot of it in the wild!"

Hm, yes, COBOL is an interesting example, because it was, AFAIK, mainly intended for financial applications, where you want decimal arithmetic, not floating point. (Also, IIRC, floating point hadn't even been well standardized when COBOL was originally developed.)