
- TL;DR Summary
- I'm looking to numerically solve equations where the Newton-Raphson method only converges for certain initial guesses, so I'm wondering if there are ways to perform NR repeatedly with initial guesses that are more likely to converge.

I'm writing code to numerically solve a single-variable equation, currently with the Newton-Raphson method. Right now, I'm just using an initial guess of 1, and reporting a failure if it doesn't converge. While it usually works, it does of course fail for many functions with asymptotes or other discontinuous behavior. I can improve things slightly by just incrementing the initial guess (e.g. 1, -1, 2, -2, etc.), and performing Newton-Raphson repeatedly until it finds a solution or fails after n attempts. I'm wondering if there's a more intelligent method though, perhaps one that uses previous attempts to determine a better initial guess for the next try.
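For reference, the multi-start scheme described above (alternating guesses 1, -1, 2, -2, ...) can be sketched like this. This is just an illustration of the retry loop, not a recommendation of the guess sequence; the function names and tolerances are my own choices:

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Plain Newton-Raphson from a single starting point.
    Returns a root, or None if the iteration diverges or stalls."""
    x = x0
    for _ in range(max_iter):
        d = df(x)
        if d == 0 or not math.isfinite(d):
            return None  # flat spot or bad derivative: abandon this start
        step = f(x) / d
        x -= step
        if not math.isfinite(x):
            return None  # iterate blew up (e.g. jumped past an asymptote)
        if abs(step) < tol:
            return x
    return None  # no convergence within max_iter

def multistart_newton(f, df, n=20):
    """Retry Newton-Raphson from the guesses 1, -1, 2, -2, ..., n, -n."""
    for k in range(1, n + 1):
        for x0 in (k, -k):
            root = newton(f, df, x0)
            if root is not None:
                return root
    return None
```

For example, `multistart_newton(lambda x: x*x - 2, lambda x: 2*x)` succeeds on the very first guess and returns an approximation of the square root of 2.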

To be clear, I know there are always going to be some functions with roots the method can't find -- I'm just looking for an efficient method that finds a root in most cases. I'm also only looking for one root, not every root.

I've been using Newton-Raphson since that's what I'm familiar with, but if an entirely different algorithm would be better for this, I would be happy to try something else.
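One derivative-free alternative (not something the question proposes, just a common approach worth comparing against) is to scan an interval for a sign change and then bisect the resulting bracket; once a bracket is found, convergence is guaranteed for continuous functions. The interval bounds and grid resolution below are arbitrary choices for illustration:

```python
def bisect_after_scan(f, lo=-100.0, hi=100.0, steps=400, tol=1e-10):
    """Scan [lo, hi] on a uniform grid for a sign change, then
    bisect the first bracketing subinterval found.
    Assumes f is continuous on the bracket; returns None if no
    sign change shows up on the grid."""
    xs = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    for a, b in zip(xs, xs[1:]):
        fa, fb = f(a), f(b)
        if fa == 0:
            return a
        if fa * fb < 0:  # sign change: a root lies in (a, b)
            while b - a > tol:
                m = 0.5 * (a + b)
                fm = f(m)
                if fa * fm <= 0:
                    b = m
                else:
                    a, fa = m, fm
            return 0.5 * (a + b)
    return None
```

Note the trade-off: bisection is slower than Newton-Raphson when Newton converges, but it cannot be thrown off by asymptotes inside the bracket the way Newton's iterates can, and it needs no derivative.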

Are there any existing methods for something like this? Does anyone have ideas how this could be done?

Thanks!