Say we want to find a zero of a function f. Of course we can use the iterative Newton-Raphson method, given by x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.
But say f typically takes very large values, and say you can, without much trouble, rewrite your problem as solving for the zero of a function g which takes much smaller values. Is it then "better" to use x_{n+1} = x_n - \frac{g(x_n)}{g'(x_n)} instead of x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}? Or does it make no difference? After all, the "bigness" of f doesn't seem to affect the relative error on x: if [.] denotes the computed value of something, then [f(x_n)] = (1+\epsilon_r^{(1)})f(x_n) and [f'(x_n)] = (1+\epsilon_r^{(2)})f'(x_n), such that \frac{[f(x_n)]}{[f'(x_n)]} = \frac{f(x_n)}{f'(x_n)}\,\frac{1+\epsilon_r^{(1)}}{1+\epsilon_r^{(2)}} \approx \frac{f(x_n)}{f'(x_n)}(1+2\epsilon_r) to first order, with |\epsilon_r| bounded by machine epsilon, so no problem there.
On the other hand, although the method with the big f might be well conditioned(?), it might still be better to use g, since you'll avoid overflow?
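To make the overflow point concrete, here is a toy sketch (my own hypothetical example, not from any particular application): take g(x) = x^2 - 2 and f(x) = C·g(x) with C chosen near the double-precision overflow threshold (~1.8e308). The ratio f/f' is mathematically identical to g/g', yet evaluating f at the starting point already overflows to inf, and inf/inf = nan poisons the f-iteration, while the rescaled g converges normally:

```python
import math

def newton(func, dfunc, x0, tol=1e-12, maxit=50):
    """Plain Newton-Raphson iteration: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(maxit):
        step = func(x) / dfunc(x)
        x -= step
        if abs(step) < tol * max(1.0, abs(x)):
            return x
    return x

BIG = 1e308  # constant near the IEEE double overflow limit (~1.7977e308)

# f = BIG * g, so f/f' == g/g' in exact arithmetic...
f  = lambda x: BIG * (x * x - 2.0)
df = lambda x: BIG * 2.0 * x

g  = lambda x: x * x - 2.0
dg = lambda x: 2.0 * x

# ...but at x0 = 3, f(3) = 7e308 overflows to inf, df(3) = 6e308 too,
# and inf/inf = nan, so the f-iteration stalls at nan.
root_f = newton(f, df, 3.0)   # nan
root_g = newton(g, dg, 3.0)   # converges to sqrt(2)
```

Note that the scaling constant cancels exactly in f/f', so in this example the two iterations are algebraically identical; only the intermediate evaluations differ, which is precisely where overflow bites.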