.Scott
vela said:
Which way of displaying the result is more useful most of the time: 0.1 or 0.09999999999999998? When the same calculation is done in Excel, APL, Mathematica, Wolfram Alpha, Desmos, and, I would expect, most user-centric software, the result is rendered as 0.1. Why? Because most people don't care that the computer approximates 0.3-0.2 as 0.09999999999999998.

Note that both 0.09999999999999998 and 0.1 are rounded results. Neither is the exact representation of the computer's result from calculating 0.3-0.2. As mentioned in the Python documentation, displaying the exact result wouldn't be very useful to most people, so it displays a rounded result. The question is why round to 16 decimal places instead of, say, 7 or 10, by default?

There is more to it than "most useful most of the time". If you are going to go with 7 or 10 digits, it is important that you document that and that programmers know what it is. By defaulting to the full precision of the floating-point encoding, you are setting a rule that is easy to recognize and easy to keep track of.
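For concreteness, here is roughly how the two displays compare at a Python prompt (a quick sketch; the format spec is just illustrative):

Python:
from decimal import Decimal

x = 0.3 - 0.2

print(x)            # default display: 0.09999999999999998
print(f"{x:.1f}")   # rounded display: 0.1
print(Decimal(x))   # the exact value stored in the 64-bit float (many more digits)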
If you wanted to use a different number, I might go with 2 places to the right of the decimal. Aside from being great for US currency, it would also be enough of a nuisance that Python programmers would quickly figure out how to be explicit about the displayed decimal precision.
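Being explicit is already a one-liner with the format-spec mini-language; something like:

Python:
subtotal = 0.1 + 0.2

print(subtotal)            # 0.30000000000000004
print(f"{subtotal:.2f}")   # 0.30 - explicit two-decimal display
print(round(subtotal, 2))  # 0.3  - rounds the value itself, not just the display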
One of the primary uses of the default precision is in writing debug code - throwaway code added while tracking down a bug and deleted soon after. In that case, defaulting to the full precision of the floating-point encoding is perfect.
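For example, a throwaway debug line might look like this (debug_dump is just a hypothetical helper, not anything standard):

Python:
def debug_dump(name, value):
    # Throwaway helper: !r prints the full repr of the float, and
    # float.hex() shows the exact bits if display rounding is suspect.
    print(f"DEBUG {name} = {value!r}  ({value.hex()})")

debug_dump("delta", 0.3 - 0.2)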