
Simple Fixed Point Iteration for Root-solving

  1. Jan 22, 2007 #1
    I just want to know why in the world this works?

    I am speaking about the simple iteration of taking a function f(x), setting it to 0, f(x) = 0, and rearranging the equation into the form x = g(x)... and then iterating.

    For example, the function: f(x) = x^2 + 2x - 1

    Setting it to 0 and solving for x: x = (1 - 2x)/x

    and then you pick a 'guess' and iterate.
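
    In code, what I mean is something like this (a rough Python sketch; the function name g and the starting guess are just examples):

    [code]
    # Fixed-point form of f(x) = x^2 + 2x - 1: rewrite f(x) = 0 as x = g(x) = (1 - 2x)/x.
    # Rough sketch -- the name g and the starting guess 2.0 are arbitrary choices.
    def g(x):
        return (1 - 2 * x) / x

    x = 2.0  # initial guess
    for i in range(20):
        x = g(x)
        print(i, x)
    # The printed values settle down on a root of x^2 + 2x - 1 = 0.
    [/code]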

    Why in the world does the iteration narrow down onto the root? It seems so arbitrary. We just keep plugging the values back in, keep iterating, and it just converges on the root.

    Why oh why is this behavior so? Btw, I promise to learn LaTeX soon..so it won't look as funky next time.
  2. Jan 23, 2007 #2

    Gib Z

    Homework Helper

    Because each iteration gets you a bit closer to the root. Even if your initial guess is inaccurate, the iteration will still work; it will just take longer to converge. The iteration narrows down onto the root because the algorithm is designed so that each step moves closer to it.

    E.g. Newton's method for taking square roots:

    [tex]E=\frac{\frac{N}{E_0} + E_0}{2}[/tex] where [itex]E[/itex] is the new estimate, [itex]E_0[/itex] the previous estimate, and [itex]N[/itex] the number whose square root we want.

    The simple theory behind it is: when we divide a number by its square root, we get the root back. When we divide it by a number bigger than its root, the result is smaller than the root; when we divide it by a number smaller than its root, the result is bigger than the root.

    So, say [itex]E_0[/itex] is less than the root. We divided the number by something less than the root, so we get a number bigger than the root. By averaging the one that is too big with the one that is too small, we get a new approximation, which can be iterated again and again.

    It doesn't matter how bad the initial approximation is (as long as it is positive): the iteration eventually converges, because each step traps the root between a number above it and a number below it and then averages the two.
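
    In code, the averaging step looks like this (a quick Python sketch; the function name, starting estimate, and step count are just illustrative choices):

    [code]
    # Babylonian / Newton iteration for sqrt(N): E_new = (N/E_0 + E_0) / 2.
    # Quick sketch -- newton_sqrt, E = 1.0 and steps = 8 are illustrative choices.
    def newton_sqrt(N, E=1.0, steps=8):
        for _ in range(steps):
            E = (N / E + E) / 2.0  # average a value above the root with one below
        return E

    print(newton_sqrt(2.0))   # ~1.41421356
    print(newton_sqrt(10.0))  # ~3.16227766
    [/code]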
  3. Jan 23, 2007 #3
    If you want to know the arithmetical reasons why all those algorithms converge to a value, then take a look at:


    There you will find the true story about root solving.


    Ing. Domingo Gomez
  4. Jan 23, 2007 #4


    Staff Emeritus
    Science Advisor
    Gold Member

    Suppose x is small; do you know an approximate formula for f(a+x)? If you set f(a+x) ~ 0, can you solve for x?

    (this may be easier if you forget that you know exactly what f is)
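
    (Spelling that hint out: for small x the linear approximation is
    [tex]f(a+x) \approx f(a) + f'(a)x[/tex]
    and setting this to 0 and solving for x gives
    [tex]x \approx -\frac{f(a)}{f'(a)}[/tex]
    which is exactly one step of Newton's method.)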
  5. Jan 23, 2007 #5


    Staff Emeritus
    Science Advisor

    It doesn't necessarily. For example, if you pick 1/2 as your "estimate", the calculation "blows up" almost immediately: g(1/2) = 0, and the next step divides by 0. For the iteration to work, the fixed point has to attract nearby iterates. Here x = (1 - 2x)/x has the two fixed points [itex]-1\pm\sqrt{2}[/itex], and the iteration actually converges to [itex]-1-\sqrt{2}[/itex]; one safe choice is an initial guess larger than 1 in absolute value.

    Notice that
    [tex]\frac{1-2x}{x}-\frac{1-2y}{y}= \frac{y-2xy-x+2xy}{xy}= \frac{y-x}{xy}[/tex]
    As long as |xy| > 1 (which is certainly true if both |x| and |y| are larger than 1), that difference is smaller in absolute value than y - x. That's a "contraction" map: g(x) is closer to g(y) than x is to y. The "Banach fixed point theorem" shows that any sequence produced by iterating a contraction map converges to the unique fixed point of the map.
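
    A quick numerical check of that contraction ratio (a Python sketch; the test points are arbitrary):

    [code]
    # For g(x) = (1 - 2x)/x, |g(x) - g(y)| / |x - y| = 1/|xy|,
    # so g shrinks distances wherever |xy| > 1. The point pairs are arbitrary.
    def g(x):
        return (1 - 2 * x) / x

    for x, y in [(2.0, 3.0), (2.0, 5.0), (-2.0, -3.0)]:
        print(abs(g(x) - g(y)) / abs(x - y), 1 / abs(x * y))  # columns agree
    [/code]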
  6. Jan 23, 2007 #6


    Staff Emeritus
    Science Advisor
    Gold Member

    IIRC you must have |g'(x)| < 1, where g is the iteration function, around your root and initial guess for convergence of the fixed-point method.

    I "discovered" fixed point iteration by repeatly taking the square root on one of my first calculators.