valenumr said:
though perhaps not applicable in this specific case
Yes, I did get it, but I don't think this is a good implementation, for the reasons mentioned.
You are correct that I wasn't thinking about very small values initially, but...
In Python, you can use frexp(x) to obtain the binary exponent of x and choose h from that directly (e.g., h = 2**(frexp(x)[1] - 53) for double precision, since frexp returns a (mantissa, exponent) pair), without all the ugliness involved in using h = (1 + e)x. It requires some knowledge of the platform-specific floating-point implementation, but IEEE 754 doubles are pretty standard, and IIRC, Python has some methods and constants that will help out.
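A minimal sketch of that idea in Python (the helper name is mine; 53 is the double-precision significand width, counting the implicit leading bit):

```python
import math

def exponent_step(x, sig_bits=53):
    """Pick a finite-difference step h from the binary exponent of x.

    math.frexp(x) returns (m, e) with x == m * 2**e and 0.5 <= |m| < 1,
    so 2.0 ** (e - sig_bits) is on the order of one ulp of x for
    IEEE 754 doubles when sig_bits == 53.
    """
    _, e = math.frexp(x)
    return 2.0 ** (e - sig_bits)
```

Note that every x with the same binary exponent gets the same h, so h is piecewise constant across each binade rather than varying with every distinct x.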
As an aside, you can abuse C and just reinterpret the bits of abs(x) as a uint with the proper number of bits, add 1, and then reapply the sign (IEEE 754 floats are lexicographically ordered by their bit patterns, excepting the sign). Of course, in Python or C you will need to deal with over/underflow. It might look messier, but I think it is better.
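The same bit trick can be sketched in Python with struct (modern Python also offers math.nextafter, which handles the edge cases properly; the illustrative helper below ignores overflow at the top of the range):

```python
import struct

def next_away_from_zero(x):
    """Return the adjacent double of larger magnitude, keeping x's sign.

    Reinterprets the bits of abs(x) as a 64-bit unsigned integer and
    adds 1. For a fixed sign, IEEE 754 doubles sort the same way as
    their bit patterns, so the incremented pattern is the next float
    up in magnitude. Overflow is not handled here: the largest finite
    double would step to infinity.
    """
    bits = struct.unpack("<Q", struct.pack("<d", abs(x)))[0]
    bumped = struct.unpack("<d", struct.pack("<Q", bits + 1))[0]
    return -bumped if x < 0 else bumped
```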
I would personally avoid the OP's implementation because it is entirely possible to implement a better algorithm that is probably even more efficient, and overall consistent over any interval.
But regarding your example: choosing h with my approach would yield smaller values for most inputs than scaling with x would. For example, h on the interval [0.25, 0.5) would actually be the same for every x, whereas that is not the case for the implementation in question.
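To make the comparison concrete, here is a quick sketch (proportional_step stands in for the x-proportional scheme under discussion; eps = 2**-26 is an arbitrary illustrative constant, not taken from the original code):

```python
import math

def exponent_step(x, sig_bits=53):
    # h chosen from the binary exponent of x, as suggested above
    _, e = math.frexp(x)
    return 2.0 ** (e - sig_bits)

def proportional_step(x, eps=2.0 ** -26):
    # h proportional to x, standing in for the criticized scheme
    return eps * x

# Over [0.25, 0.5) the exponent-based h is identical for every x...
assert exponent_step(0.26) == exponent_step(0.49) == 2.0 ** -54
# ...while the proportional h changes with every distinct x.
assert proportional_step(0.26) != proportional_step(0.49)
```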