Is it possible to parse a string to a double with only 1 pass

  • Thread starter: Jamin2112
  • Tags: String

Discussion Overview

The discussion revolves around the challenge of parsing a string representation of a double in a single pass. Participants explore various methods for converting a valid decimal string into a double, considering both efficiency and precision issues.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant suggests that knowing the position of the decimal point in advance is necessary for an efficient parsing method.
  • Another participant proposes a method of accumulating the value by iterating through each character, counting digits after the decimal point, and adjusting the final value accordingly.
  • Some participants argue that searching for the decimal point first could lead to a more efficient parsing process, suggesting that character-by-character parsing may be slower and less accurate.
  • Options for using standard library functions like std::strtod and std::stod are presented as preferable methods for parsing, with emphasis on their error handling and representation capabilities.
  • Concerns are raised about precision loss when using iterative multiplication and division methods, especially with long strings of zeros after the decimal point.
  • A participant mentions that some C compilers support a Decimal datatype, which could mitigate binary floating-point issues, although it may introduce other problems.
  • Another participant notes that scanning the string backwards could potentially reduce precision loss if the length is known in advance.

Areas of Agreement / Disagreement

Participants express a range of views on the best approach to parsing the string, with no consensus reached on a single method. Multiple competing strategies are discussed, highlighting the complexity of the problem.

Contextual Notes

Participants acknowledge the limitations of various methods, including potential precision loss and the need for careful handling of floating-point representations.

Jamin2112
through the string?

I'm at a point in a program I'm making where I have a valid decimal string representation of a double. By "valid representation" I mean that it contains only characters from the set {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '.'}, with at most one '.'. I want to convert it to a double.

Here's the problem: The only procedure I can think of would require knowing in advance where the decimal place is. Or is there another way?
 
Starting from the first character in the string s, and using c(s) to represent the character at the current position in the string:
Code:
0. Set x to 0
1. If c(s) != '.'
   1a. x *= 10
   1b. add the digit value of c(s) to x
   1c. advance to the next character
   1d. go to 1
2. (Code to handle the digits after the decimal point)
 
For each digit, accumulate the converted value with x = 10*x + d.
Count the number of digits k after the '.' as you work along the string.
At the end, set x = x / 10^k.

That isn't the best possible way to do it, because you will get unnecessary rounding errors with a string like 1.0000000000000000000000000000000000000000000000000 (with more than 16 0's after the decimal point).

Fixing that is left as an exercise - you can still do it in one pass :smile:
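The accumulate-and-scale recipe above might be sketched like this (the function name is mine; it assumes the input is "valid" in the thread's sense, i.e. digits plus at most one '.'):

```cpp
#include <cmath>
#include <string>

// One-pass sketch of the accumulate-and-scale method: build the digits up
// as an integer value, count the fractional digits, and divide once at the end.
double parse_double_one_pass(const std::string& s) {
    double x = 0.0;
    int k = 0;                 // number of digits seen after the '.'
    bool after_point = false;
    for (char c : s) {
        if (c == '.') {
            after_point = true;
        } else {
            x = 10.0 * x + (c - '0');  // accumulate the digit's value
            if (after_point) ++k;
        }
    }
    return x / std::pow(10.0, k);      // scale down once at the end
}
```

As noted above, very long inputs still round: once the accumulated integer (or 10^k) exceeds 2^53, the single division can no longer recover the exact value.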
 
Last edited:
Jamin2112 said:
Here's the problem: The only procedure I can think of would require knowing in advance where the decimal place is. Or is there another way?
Option #1: Just search for the period and be done with it. Your obsessing over small optimizations is creating disoptimizations. If you find that period (which is fast; std::string::find), you now have two integers to parse. That's fast. Parsing one character at a time is slow, and it will also hurt accuracy. The goal in parsing should be to be within half an ULP. A one-character-at-a-time parser isn't going to attain that.

Option #2: (Better): use std::strtod from #include <cstdlib>. This has nice hooks for error handling (which you can ignore at the start), it knows all the different ways doubles can be represented, and it knows all the little gotchas with those representations. Use the standard library. Don't reinvent the wheel.

Option #3: (Best): Use std::stod from #include <string> (new in C++11). This is just a wrapper around std::strtod, but with a more C++-like interface.
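A minimal sketch of Option #2's error hooks (the wrapper name is mine): std::strtod sets an end pointer to the first unparsed character, so checking that the whole string was consumed catches bad input.

```cpp
#include <cstdlib>     // std::strtod
#include <stdexcept>
#include <string>

// Hypothetical wrapper around std::strtod that rejects trailing junk.
double parse_strict(const std::string& s) {
    char* end = nullptr;
    double d = std::strtod(s.c_str(), &end);
    if (end != s.c_str() + s.size())   // not fully consumed: bad input
        throw std::invalid_argument("not a valid double: " + s);
    return d;
}
```

With Option #3 the check is built in: std::stod(s) throws std::invalid_argument on a bad string and std::out_of_range on overflow.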
 
This is actually a lot trickier than it sounds, unless you're allowed to lose the last couple bits of precision. See this answer on stackoverflow.

If you're fine with losing bits of precision, just start with a double total of 0. Iteratively multiply the total by ten and add the current digit until you hit the decimal point. Then start iteratively dividing a scale factor (starting from 1) by ten and adding the current digit multiplied by the scale factor into the total. You'll lose precision when dividing by ten, and there will also be rounding in the additions. But, like I said, avoiding that entirely is hard.
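That running-scale variant might look like this (function name mine; same "valid input" assumption as the rest of the thread):

```cpp
#include <string>

// Sketch of the scale-factor method: integer digits accumulate as usual;
// after the '.', each digit is weighted by a scale repeatedly divided by 10.
double parse_scale_factor(const std::string& s) {
    double total = 0.0;
    double scale = 1.0;
    bool after_point = false;
    for (char c : s) {
        if (c == '.') { after_point = true; continue; }
        if (!after_point) {
            total = 10.0 * total + (c - '0');
        } else {
            scale /= 10.0;                 // each division can round
            total += (c - '0') * scale;    // each addition can round, too
        }
    }
    return total;
}
```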
 
Or just use the library functions described in post #4. Or

Option #4: Use a std::stringstream from #include <sstream>.
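For completeness, Option #4 might look like this (helper name mine):

```cpp
#include <sstream>
#include <string>

// Sketch of Option #4: let stream extraction do the parsing.
double parse_stream(const std::string& s) {
    std::istringstream iss(s);
    double d = 0.0;
    iss >> d;            // on failure, d is set to 0.0 and iss.fail() is set
    return d;
}
```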
 
You really should consider reading 'What every computer scientist should know about floating point' By David Goldberg.

https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/02Numerics/Double/paper.pdf.

An ULP is the ultimate limit of precision. I am defining ULP minimally since D H mentions it, and it seems possible that because of the way you phrased your question you may not know about it.

Some C compilers support the Decimal datatype, which is a floating representation without the binary floating point problems.
 
jim mcnamara said:
An ULP is the ultimate limit of precision.
Or unit of least precision. Or unit in the last place. They're all ULP, there's a tiny angel dancing on the point of a needle distinction between some of these.

Think of that tiny dancing angel as an ULP.
 
jim mcnamara said:
Some C compilers support the Decimal datatype, which is a floating representation without the binary floating point problems.

Except that it probably just swaps the binary floating point problems for a different set.

For example the IBM 360 and 370 mainframes worked in floating point to base 16 not base 2, which had some "interesting" consequences - like calculating the mean of a set of n numbers by adding them and dividing by n, and getting a result that was outside of the range of the largest and smallest input numbers. (And that "interesting mathematical result" didn't involve any underflows or overflows).
 
If the length of the string is known in advance, then the string could be scanned backwards to reduce the loss of precision. You could assume the string is an integer until and/or unless a '.' is encountered.
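One way that backwards scan might look (function name mine; for simplicity this sketch assumes the string actually contains a '.'): building the fraction from the rightmost digit inward as (frac + digit) / 10 keeps every intermediate value close to the final fraction, which limits how much rounding error can accumulate.

```cpp
#include <string>

// Backwards-scan sketch: fractional digits first (rightmost inward), then
// integer digits with a growing place value. Assumes the input contains '.'.
double parse_backwards(const std::string& s) {
    double frac = 0.0, intpart = 0.0, place = 1.0;
    bool seen_dot = false;
    for (std::size_t i = s.size(); i-- > 0; ) {
        char c = s[i];
        if (c == '.') {
            seen_dot = true;
        } else if (!seen_dot) {
            frac = (frac + (c - '0')) / 10.0;   // e.g. "25" -> 0.25
        } else {
            intpart += (c - '0') * place;
            place *= 10.0;
        }
    }
    return intpart + frac;
}
```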
 
