TL;DR Summary: Convergence criteria for the Newton-Raphson method
The Newton-Raphson algorithm is well-known:
##x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}##
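In code, a single update might look like the following minimal Python sketch (the names `f`, `fprime`, and `newton_step` are illustrative placeholders, not taken from any particular implementation):

```python
def newton_step(f, fprime, x):
    """One Newton-Raphson update: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    return x - f(x) / fprime(x)
```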
Looking at a few implementations online, I have encountered two methods for testing convergence:
1) The first method uses the function value at the latest estimate, ##f(x_n)## or ##f(x_{n+1})##. Since the function value is zero at the root, this limits us to specifying a maximum absolute error on ##f(x)##, as a relative error with respect to zero doesn't make much sense.
2) The second method uses the change in the root estimate over the last iteration. If this change falls below a certain threshold value, the algorithm is assumed to have converged. Here, one has a bit more freedom: one can look at the relative change ##(\frac{x_{n+1}}{x_n}-1)## in the root, or at the absolute step size ##x_{n+1}-x_n##.
Obviously, the two criteria (or three, if you count the last two as distinct) are different, so the question is: what are the consequences of choosing one criterion over the other? All three tests are sketched in the code below.
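To make the comparison concrete, here is a minimal Python sketch of the three tests side by side. Everything in it (the `newton` function, its `criterion` argument, and the tolerance defaults) is my own illustration, not taken from any of the implementations mentioned above:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50, criterion="step"):
    """Newton-Raphson iteration with a selectable convergence test.

    criterion = "residual": stop when |f(x_{n+1})| < tol          (method 1)
    criterion = "step":     stop when |x_{n+1} - x_n| < tol       (method 2, absolute)
    criterion = "relative": stop when |x_{n+1}/x_n - 1| < tol     (method 2, relative)
    """
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)  # the Newton-Raphson update
        if criterion == "residual":
            done = abs(f(x_new)) < tol
        elif criterion == "step":
            done = abs(x_new - x) < tol
        else:  # "relative" -- ill-defined if an iterate lands exactly on zero
            done = abs(x_new / x - 1) < tol
        if done:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# Example: the positive root of f(x) = x^2 - 2 is sqrt(2).
for crit in ("residual", "step", "relative"):
    print(crit, newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0, criterion=crit))
```

One structural asymmetry is already visible in the sketch: the relative test on ##x## breaks down when the root is at (or very near) zero, which mirrors the problem noted above for a relative test on ##f(x)##.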