# Is it possible to parse a string to a double with only 1 pass

1. May 15, 2014

### Jamin2112

through the string?

I'm at a point in a program where I have a string holding a valid decimal representation of a double. By "valid representation" I mean that it contains only characters from the set {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '.'}, with at most one '.'. I want to convert it to a double.

Here's the problem: the only procedure I can think of requires knowing in advance where the decimal point is. Is there another way?

2. May 15, 2014

### Staff: Mentor

Starting from the first character in the string s, and using c(s) to represent the character at the current position in the string:
Code (Text):

0. Set x to 0
1. If c(s) != '.'
1a.   x *= 10
1b.   x += c(s) - '0'
1c.   advance to the next character
1d.   go to 1
2. (Code to treat the decimals)

3. May 15, 2014

### AlephZero

For each digit, accumulate the converted value with x = 10*x + d.
Count the number of digits k after the '.' as you work along the string.
At the end, set x = x / 10^k.

That isn't the best possible way to do it, because you will get unnecessary rounding errors with a string like 1.0000000000000000000000000000000000000000000000000 (with more than 16 0's after the decimal point).

Fixing that is left as an exercise; you can still do it in one pass.
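The accumulate-then-scale recipe above can be sketched in C++ as follows. This is only a minimal illustration of the basic (rounding-prone) approach, assuming the input really does contain only digits and at most one '.'; `parse_simple` is a name I made up, not something from the thread.

```cpp
#include <cmath>
#include <string>

// One pass: accumulate all digits as if the string were an integer,
// count the digits after the '.', then divide once at the end.
double parse_simple(const std::string& s) {
    double x = 0.0;
    int k = 0;              // number of digits seen after the '.'
    bool after_dot = false;
    for (char c : s) {
        if (c == '.') {
            after_dot = true;
        } else {
            x = 10.0 * x + (c - '0');
            if (after_dot) ++k;
        }
    }
    return x / std::pow(10.0, k);
}
```

As the post notes, this is not correctly rounded in general: long inputs overflow the 53-bit significand during accumulation, and the final division adds one more rounding.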

Last edited: May 15, 2014
4. May 15, 2014

### D H

Staff Emeritus
Option #1: Just search for the period and be done with it. Your obsessing over small optimizations is creating disoptimizations. If you find that period (which is fast; std::string::find), you now have two integers to parse. That's fast. Parsing one character at a time is slow, and it will also hurt accuracy. The goal in parsing should be to land within half an ULP of the true value, and a one-character-at-a-time parser isn't going to attain that.
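Option #1 might look like the sketch below. The name `split_parse` is mine, and it assumes short inputs (each half must fit in a long long); it also still carries the rounding caveats discussed elsewhere in this thread, so the library functions below remain the robust choice.

```cpp
#include <cmath>
#include <string>

// Find the '.' once, then parse the two halves as integers.
double split_parse(const std::string& s) {
    std::size_t dot = s.find('.');
    if (dot == std::string::npos)
        return static_cast<double>(std::stoll(s));   // pure integer
    long long whole = dot > 0 ? std::stoll(s.substr(0, dot)) : 0;
    std::string frac = s.substr(dot + 1);
    long long part = frac.empty() ? 0 : std::stoll(frac);
    // Scale the fractional half down by 10^(number of fraction digits).
    return whole + part / std::pow(10.0, frac.size());
}
```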

Option #2 (better): Use std::strtod from <cstdlib>. It has nice hooks for error handling (which you can ignore at first), it knows all the different ways doubles can be represented, and it knows all the little gotchas with those representations. Use the standard library; don't reinvent the wheel.

Option #3 (best): Use std::stod from <string> (new in C++11). This is just a wrapper around std::strtod, but with a more C++-like interface.
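A minimal sketch of both library calls; the small wrapper names `via_stod` and `via_strtod` are mine, not anything from the thread.

```cpp
#include <cstdlib>
#include <string>

// C++11 interface: throws std::invalid_argument / std::out_of_range.
double via_stod(const std::string& s) {
    return std::stod(s);
}

// C-style interface: the end pointer reports where parsing stopped,
// which is the hook for error handling mentioned above.
double via_strtod(const std::string& s, std::string* rest = nullptr) {
    char* end = nullptr;
    double x = std::strtod(s.c_str(), &end);
    if (rest) *rest = end;   // whatever strtod did not consume
    return x;
}
```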

5. May 15, 2014

### Strilanc

This is actually a lot trickier than it sounds, unless you're allowed to lose the last couple of bits of precision. See this answer on stackoverflow.

If you're fine with losing a few bits of precision, just start with a double total of 0. Iteratively multiply the total by ten and add the current digit until you hit the decimal point. After that, iteratively divide a scale factor (starting at 1) by ten and add the current digit times the scale factor to the total. You'll lose precision when dividing by ten, and there will also be rounding in the additions. But, like I said, avoiding that entirely is hard.
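The scale-factor variant described above, as a sketch (`scale_parse` is a hypothetical name; same input assumptions as the rest of the thread). The repeated division of the scale factor by ten is exactly where the extra rounding creeps in.

```cpp
#include <string>

// Multiply-and-add before the point; after it, shrink a scale factor
// by 10 per digit and add digit * scale into the running total.
double scale_parse(const std::string& s) {
    double total = 0.0;
    double scale = 1.0;
    bool after_dot = false;
    for (char c : s) {
        if (c == '.') { after_dot = true; continue; }
        if (!after_dot) {
            total = 10.0 * total + (c - '0');
        } else {
            scale /= 10.0;                // rounding happens here...
            total += (c - '0') * scale;   // ...and here
        }
    }
    return total;
}
```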

6. May 15, 2014

### D H

Staff Emeritus
Or just use the library functions described in post #4. Or

Option #4: Use a std::stringstream from #include <sstream>.
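Option #4 in sketch form; the wrapper name `stream_parse` is mine. Stream extraction delegates the actual conversion to the locale machinery, so the '.' handling and rounding are no longer your problem.

```cpp
#include <sstream>
#include <string>

// Let an input string stream do the parsing.
double stream_parse(const std::string& s) {
    std::istringstream in(s);
    double x = 0.0;
    in >> x;   // extraction failure leaves x == 0.0 here (unchecked sketch)
    return x;
}
```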

7. May 15, 2014

### Staff: Mentor

You really should consider reading "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg.

https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/02Numerics/Double/paper.pdf

An ULP is the ultimate limit of precision. I am defining ULP minimally since D H mentioned it, and from the way you phrased your question it seems possible that you may not know about it.

Some C compilers support a Decimal data type, which is a floating-point representation without the binary floating-point problems.

8. May 15, 2014

### D H

Staff Emeritus
Or unit of least precision. Or unit in the last place. They're all ULP; there's only a tiny-angel-dancing-on-the-point-of-a-needle distinction between some of these.

Think of that tiny dancing angel as an ULP.

9. May 15, 2014

### AlephZero

Except that it probably just swaps the binary floating-point problems for a different set.

For example, the IBM 360 and 370 mainframes worked in floating point to base 16, not base 2, which had some "interesting" consequences - like calculating the mean of a set of n numbers by adding them and dividing by n, and getting a result outside the range of the largest and smallest input numbers. (And that "interesting mathematical result" didn't involve any underflows or overflows.)

11. May 16, 2014

### rcgldr

If the length of the string is known in advance, the string could be scanned backwards to reduce the loss of precision. You could treat the string as an integer unless and until a '.' is encountered.
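The backwards idea above might be sketched as follows (`reverse_parse` is a hypothetical name, and the usual input assumptions apply). Accumulating the fraction from the right with frac = (frac + digit) / 10 means each addition combines values of similar magnitude, which is where the precision gain comes from; it is still not guaranteed to be correctly rounded.

```cpp
#include <string>

double reverse_parse(const std::string& s) {
    std::size_t dot = s.find('.');
    if (dot == std::string::npos) dot = s.size();   // pure integer

    // Fractional digits, scanned from the right end back toward the '.'.
    double frac = 0.0;
    for (std::size_t i = s.size(); i-- > dot + 1; )
        frac = (frac + (s[i] - '0')) / 10.0;

    // Integer digits, scanned left to right as usual.
    double whole = 0.0;
    for (std::size_t i = 0; i < dot; ++i)
        whole = 10.0 * whole + (s[i] - '0');

    return whole + frac;
}
```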