Is it possible to parse a string to a double with only 1 pass

  • Thread starter: Jamin2112
  • Tags: String
AI Thread Summary
Parsing a string to a double in one pass is challenging, particularly when the decimal point's position is unknown. Several methods are discussed, including using standard library functions like std::strtod and std::stod, which handle various representations of doubles and provide better accuracy. A character-by-character approach is slower and may lead to rounding errors, particularly with long decimal sequences. It is suggested that finding the decimal point first can optimize the parsing process, while also acknowledging the potential precision loss when using iterative multiplication and division. Ultimately, leveraging existing library functions is recommended for efficiency and accuracy.
Jamin2112
through the string?

I'm at a point in a program I'm writing where I have a string holding a valid decimal representation of a double. By "valid representation" I mean that it contains only characters from the set {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '.'}, with at most one '.'. I want to convert it to a double.

Here's the problem: The only procedure I can think of would require knowing in advance where the decimal place is. Or is there another way?
 
Starting from the first character in the string s, and using c(s) to represent the character at the current position in the string:
Code:
0. Set x to 0
1. If not at the end of s and c(s) != '.'
   1a. x *= 10
   1b. add the digit value c(s) - '0' to x
   1c. advance to the next character
   1d. go to 1
2. (Code to treat the decimals)
 
For each digit, accumulate the converted value with x = 10*x + d.
Count the number of digits k after the '.' as you work along the string.
At the end, set x = x / 10^k.

That isn't the best possible way to do it, because you will get unnecessary rounding errors with a string like 1.0000000000000000000000000000000000000000000000000 (with more than 16 zeros after the decimal point).

Fixing that is left as an exercise - you can still do it in one pass :smile:
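
As a rough illustration of the scheme described in this post and the previous one (the function name and structure are mine, not the poster's), here is a minimal single-pass C++ sketch. It assumes the string has already been validated as described in the original post, and it still suffers the rounding caveat mentioned above for very long digit strings:

Code:
#include <string>

// Minimal sketch: one pass over a pre-validated string of digits and at most one '.'.
// Accumulate x = 10*x + digit throughout, count the fractional digits, divide once at the end.
double parse_double_one_pass(const std::string& s)
{
    double x = 0.0;        // all digits accumulated as if the '.' were not there
    double divisor = 1.0;  // 10^k, where k is the number of digits after the '.'
    bool seen_point = false;

    for (char c : s) {
        if (c == '.') {
            seen_point = true;
            continue;
        }
        x = 10.0 * x + (c - '0');
        if (seen_point)
            divisor *= 10.0;
    }
    return x / divisor;
}

Called as parse_double_one_pass("3.14"), this returns a value very close to 3.14; as noted above, strings with many digits (or many trailing zeros) can still come out less accurate than a correctly rounded conversion.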
 
Jamin2112 said:
Here's the problem: The only procedure I can think of would require knowing in advance where the decimal place is. Or is there another way?
Option #1: Just search for the period and be done with it. Your obsessing about small optimizations is creating disoptimizations. If you find that period (which is fast; std::string::find), you now have two integers to parse. That's fast. Parsing one character at a time is slow, and it is also going to hurt accuracy. The goal in parsing should be to be within half an ULP. A one-character-at-a-time parser isn't going to attain that.

Option #2: (Better): use std::strtod from #include <cstdlib>. This has nice hooks for error handling (which you can ignore at the start), it knows all the different ways doubles can be represented, and it knows all the little gotchas with those representations. Use the standard library. Don't reinvent the wheel.

Option #3: (Best): Use std::stod from #include <string> (new in C++11). This is just a wrapper around std::strtod, but with a more C++-like interface.
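
For reference, a minimal sketch of how Options #2 and #3 are typically called (the value "123.456" is only an illustrative input, and error handling is deliberately skipped at the strtod call, as suggested above):

Code:
#include <cstdlib>   // std::strtod
#include <iostream>
#include <string>    // std::stod (C++11)

int main()
{
    std::string s = "123.456";

    // Option #2: std::strtod reports where parsing stopped via the end pointer.
    char* end = nullptr;
    double a = std::strtod(s.c_str(), &end);  // end points just past the parsed number

    // Option #3: std::stod throws std::invalid_argument or std::out_of_range on bad input.
    double b = std::stod(s);

    std::cout << a << ' ' << b << '\n';  // prints "123.456 123.456"
}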
 
This is actually a lot trickier than it sounds, unless you're allowed to lose the last couple of bits of precision. See this answer on Stack Overflow.

If you're fine with losing a few bits of precision, just start with a double total of 0. Then iteratively multiply the total by ten and add the current digit, until you hit the decimal point. After that, iteratively divide a scale factor (starting from 1) by ten and add the current digit multiplied by the scale factor into the total. You'll lose precision when dividing by ten, and there will also be rounding in the additions. But, like I said, avoiding that entirely is hard.
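
Sketched in C++ (the function name and layout are mine, not from the post), that scale-factor variant might look like this:

Code:
#include <string>

// Scale-factor variant: multiply-and-add before the '.', then add digit * scale,
// with the scale divided by ten for each digit after the '.'.
double parse_with_scale_factor(const std::string& s)
{
    double total = 0.0;
    double scale = 1.0;
    bool after_point = false;

    for (char c : s) {
        if (c == '.') {
            after_point = true;
            continue;
        }
        if (!after_point) {
            total = total * 10.0 + (c - '0');
        } else {
            scale /= 10.0;                  // repeated division is where precision leaks
            total += (c - '0') * scale;
        }
    }
    return total;
}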
 
Or just use the library functions described in post #4. Or

Option #4: Use a std::stringstream from #include <sstream>.
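
A minimal sketch of Option #4 (the input value is just an example):

Code:
#include <iostream>
#include <sstream>  // std::stringstream
#include <string>

int main()
{
    std::string s = "123.456";
    std::stringstream ss(s);

    double d = 0.0;
    if (ss >> d)                 // stream extraction parses the double
        std::cout << d << '\n';  // prints "123.456"
    else
        std::cout << "parse failed\n";
}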
 
You really should consider reading 'What Every Computer Scientist Should Know About Floating-Point Arithmetic' by David Goldberg.

https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/02Numerics/Double/paper.pdf.

An ULP is the ultimate limit of precision. I am defining ULP minimally since D H mentions it, and it seems possible that because of the way you phrased your question you may not know about it.

Some C compilers support the Decimal datatype, which is a floating representation without the binary floating point problems.
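
As a concrete illustration of an ULP (my example, not from the post), the gap between 1.0 and the next representable double can be queried directly from the standard library:

Code:
#include <cmath>     // std::nextafter
#include <iostream>
#include <limits>

int main()
{
    // The ULP of 1.0 is the gap to the next representable double above it.
    double ulp_of_one = std::nextafter(1.0, 2.0) - 1.0;

    // For IEEE 754 doubles this equals machine epsilon, about 2.22e-16.
    std::cout << ulp_of_one << '\n'
              << std::numeric_limits<double>::epsilon() << '\n';
}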
 
jim mcnamara said:
An ULP is the ultimate limit of precision.
Or unit of least precision. Or unit in the last place. They're all ULP; the difference between some of these is a tiny angel dancing on the point of a needle.

Think of that tiny dancing angel as an ULP.
 
jim mcnamara said:
Some C compilers support the Decimal datatype, which is a floating representation without the binary floating point problems.

Except that it probably just swaps the binary floating point problems for a different set.

For example, the IBM 360 and 370 mainframes worked in floating point to base 16 rather than base 2, which had some "interesting" consequences, like calculating the mean of a set of n numbers by adding them and dividing by n and getting a result outside the range spanned by the smallest and largest input numbers. (And that "interesting mathematical result" didn't involve any underflows or overflows.)
 
If the length of the string is known in advance, then the string could be scanned backwards to reduce the loss of precision. You could assume the string is an integer unless and until a '.' is encountered.
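
One possible reading of that idea, sketched in C++ (my code, not the poster's): treat everything scanned so far as an integer, and if a '.' turns up, what has been accumulated was really the fractional part, so divide it by its place value and start over on the integer part.

Code:
#include <string>

double parse_backwards(const std::string& s)
{
    double value = 0.0;     // digits accumulated since the end (or since the last '.')
    double place = 1.0;     // place value of the next digit to the left
    double fraction = 0.0;  // fractional part, once a '.' has been seen

    for (std::size_t i = s.size(); i-- > 0; ) {
        if (s[i] == '.') {
            fraction = value / place;  // everything so far was after the '.'
            value = 0.0;
            place = 1.0;
        } else {
            value += (s[i] - '0') * place;
            place *= 10.0;
        }
    }
    return value + fraction;  // integer part plus fractional part
}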
 