
Numerical method: iteration to converge on a root

  1. May 30, 2012 #1
    Hello!

    I'm having trouble understanding the method/reasoning behind finding the root of an equation through iterative convergence.

    [itex]x^2 - 4x + 1 = 0[/itex]
    [itex]x^2 = 4x - 1[/itex]
    [itex]x = 4 - 1/x[/itex]   (dividing both sides by x)

    I can understand that once we input a 'root', the equation will be equal on both sides (due to the remainder theorem). However, I can't grasp why it converges to a root in all cases...

    I also can't seem to understand how, if we set x to 3, we then get x (or is it x0?) as 3.667...
    x = 4 - 1/(3) = 3.667
    I obviously understand the arithmetic; it just doesn't quite fit how x can be equal to two different values, neither of which is a root?

    How would the method handle an equation that gives a much 'higher' y value for x0, one which 'goes past' the root? (I've drawn a blue line over the graph.)
     


  3. May 30, 2012 #2

    CAF123

    Gold Member

    Are you familiar with the Newton-Raphson method, in particular the form which involves the derivative of a function?
    I don't think the iterative process needs to converge to a root in all cases. It depends on the value chosen for the initial x. (E.g. if you take the initial x to be close to a stationary point, x(1) will be given by drawing a tangent line at the stationary point, and thus it might take you further from your initial approximation.)
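
    For concreteness, a minimal Python sketch (my own illustration, not part of the thread) of the Newton-Raphson iteration [itex]x_{n+1} = x_n - f(x_n)/f'(x_n)[/itex] applied to [itex]f(x) = x^2 - 4x + 1[/itex]:

[code]
# Newton-Raphson sketch for f(x) = x^2 - 4x + 1, with f'(x) = 2x - 4.
# Start at x_0 = 3, away from the stationary point at x = 2.
x = 3.0
for _ in range(6):
    x = x - (x*x - 4*x + 1) / (2*x - 4)
    print(x)
# the printed values approach 2 + sqrt(3) ~ 3.7320508
[/code]

    Starting close to the stationary point x = 2 would make the denominator 2x - 4 nearly zero, which is exactly the tangent-line problem described above.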
     
  4. May 30, 2012 #3

    HallsofIvy

    Staff Emeritus
    Science Advisor

    (Replying to "I can't grasp why it converges to a root in all cases":) Good! You shouldn't grasp that, because it isn't true. IF the sequence converges THEN it must converge to a solution of the equation. But there will always exist initial values for which the sequence does not converge.

    (Replying to "how x is equal to 2 different values":) They are not. What you should have is [itex]x_{n+1} = 4 - 1/x_n[/itex] where [itex]x_n[/itex] and [itex]x_{n+1}[/itex] are different numbers. They are successive terms in the sequence.
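
    As a minimal Python sketch (illustrative only, not from the original post), printing the successive terms makes the distinction clear:

[code]
# Fixed-point sketch: x_{n+1} = 4 - 1/x_n, starting from x_0 = 3.
import math

x = 3.0
for n in range(8):
    x = 4.0 - 1.0 / x      # x_{n+1} = g(x_n)
    print(n + 1, x)
print("2 + sqrt(3) =", 2 + math.sqrt(3))   # about 3.7320508
[/code]

    The first printed term is 3.6667, the next 3.7273, and the terms settle down to 2 + √3 ≈ 3.732; x0 and x1 are simply different members of that sequence.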

    (Replying to the question about the blue line on the graph:) I don't know what you mean by that.
     
  5. Jun 8, 2012 #4
    Thank you for your help!

    Let's say the function (the iterative function?) is [itex]x_{n+1} = x_n^2 + 9999999999999999[/itex], which translates to [itex]y = x^2 + 9999999999999999[/itex].
    If we assume x is a root and hence the curve intersects the y = x line, let's say there is a root at the x-coordinate 3... If we input 2.5/2.6/2.8/2.99999999, however, we would get a very large y value, and if we put that very large y value back into the iterative function it would go 'way past' the root (3 in this case).

    Obviously the root isn't 3 in this example, but I'm only asking if this could 'occur'; I'm not too sure how I could 'find' an actual equation that replicates this behavior, if that makes sense.
     
  6. Jun 8, 2012 #5

    HallsofIvy

    Staff Emeritus
    Science Advisor

    You have chosen an example which clearly does not have a real root! It is impossible for your sequence to converge to a non-existent root!
     
  7. Jun 8, 2012 #6
    I do realize that. I'm guessing, though, that I'm thinking of something that doesn't exist.

    Thank you for everyone's help!
     
  8. Jun 8, 2012 #7

    D H

    Staff Emeritus
    Science Advisor

    What you have done is to re-write the expression [itex]f(x)=0[/itex] with [itex]f(x)=x^2-4x+1[/itex] into the form [itex]x=g(x)[/itex] with [itex]g(x)=4-1/x[/itex] and then used this latter expression as an iterative method [itex]x_{n+1} = g(x_n)[/itex]. There's a name for this technique: It's called "fixed point iteration". The reason it's called this is that the point at which [itex]x=g(x)[/itex] is called a "fixed point" of the function [itex]g(x)[/itex].

    And it doesn't always work.

    What if there isn't such a point? For example, the expression [itex]x^2-x+1=0[/itex] can be rewritten as [itex]x=1-1/x[/itex]. The sequence [itex]x_{n+1} = 1 - 1/x_n[/itex] won't converge. It will instead either blow up or it will cycle between three values. Note that there are no real zeros of [itex]x^2-x+1[/itex]. The function [itex]g(x)=1-1/x[/itex] does have fixed points, but they're complex.
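
    A quick Python sketch (my own illustration) shows the 3-cycle:

[code]
# g(x) = 1 - 1/x has no real fixed point; iterating it cycles with period 3.
x = 2.0
for n in range(6):
    x = 1.0 - 1.0 / x
    print(n + 1, x)    # prints 0.5, -1.0, 2.0, 0.5, -1.0, 2.0
[/code]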

    Another problem arises with the example at hand. There are two real roots of [itex]f(x)=x^2-4x+1[/itex], 2+√3 and 2-√3. Pick an initial value that is not one of these roots and the iterator [itex]x_{n+1} = 4-1/x_n[/itex] will either blow up because you happened to hit [itex]x_n=1/4[/itex] or it will converge to 2+√3. It won't find that other root, 2-√3. The reason is that for small epsilon in the vicinity of x=2-√3, [itex]g(x+\epsilon) \approx x + (7+4\sqrt 3)\epsilon[/itex]. That factor of 7+4√3 makes an initial guess near 2-√3 diverge away from that solution.
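
    A sketch of this behavior in Python (illustrative; the starting offset of 10⁻⁶ is an arbitrary choice):

[code]
import math

# Start x_{n+1} = 4 - 1/x_n a tiny distance above the smaller root 2 - sqrt(3).
x = (2 - math.sqrt(3)) + 1e-6
for n in range(12):
    x = 4.0 - 1.0 / x
    print(n + 1, x)
# the iterates drift away from 0.26794... and settle near 3.73205... = 2 + sqrt(3)
[/code]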

    Yet another example is [itex]x^2-2x+1=0[/itex]. Here the fixed point iterator is [itex]x_{n+1} = 2-1/x_n[/itex]. This will converge to 1 for any initial guess greater than 1, but the convergence is extremely slow. The reason for this slow convergence is that the derivative of [itex]g(x)=2-1/x[/itex] is exactly 1 at the solution point x=1.
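
    The slow convergence is easy to see numerically (a sketch; starting from x_0 = 2 the iterates are in fact x_n = (n+2)/(n+1), so the error is 1/(n+1)):

[code]
# x_{n+1} = 2 - 1/x_n converges to the double root x = 1, but only like 1/n.
x = 2.0
for n in range(1, 10001):
    x = 2.0 - 1.0 / x
    if n in (10, 100, 1000, 10000):
        print(n, x, x - 1.0)   # the error after n steps is still roughly 1/n
[/code]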

    Fixed point iteration can be a useful tool because it is so easy to formulate. It doesn't always work. A solution has to exist, and the absolute value of the derivative needs to be less than 1 in the vicinity of the solution in order for the technique to converge to that solution.
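
    Wrapped up as a generic routine, a fixed-point iteration might look like the following sketch (the name fixed_point and its stopping rule are my own choices, not a standard library call):

[code]
def fixed_point(g, x0, tol=1e-12, max_steps=1000):
    """Iterate x <- g(x) until successive terms agree to tol, or give up."""
    x = x0
    for _ in range(max_steps):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next          # (approximately) a fixed point of g
        x = x_next
    return None                    # did not settle down within max_steps

print(fixed_point(lambda x: 4 - 1/x, 3.0))   # ~3.7320508, i.e. 2 + sqrt(3)
print(fixed_point(lambda x: 1 - 1/x, 2.0))   # None: the 3-cycle never converges
[/code]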
     
  9. Jun 8, 2012 #8
    This is exactly what I'm asking about, if I understand what you mean by 'small epsilon' correctly. I assume a question isn't likely to be set which has this behavior present?
     
  10. Jun 8, 2012 #9

    D H

    Staff Emeritus
    Science Advisor

    Suppose you use the iterator [itex]x_{n+1} = 4-1/x_n[/itex] with an initial value that is just 1/100 away from one of the roots 2±√3. If you start with [itex]x_0 = 2+\sqrt 3 + 1/100[/itex], the next element in the sequence is closer to 2+√3 than is the initial guess. That initial error of 1/100 shrinks by a factor of almost 14. Now look at what happens with a guess near the smaller root. If you start with [itex]x_0 = 2-\sqrt 3 + 1/100[/itex], the next element in the sequence is further from 2-√3 than is the initial guess. Instead of shrinking, that initial error of 1/100 grows by a factor of over 13.
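
    Numerically (a one-step sketch in Python, using the 1/100 offsets described above):

[code]
import math

g = lambda x: 4 - 1 / x
for r in (2 + math.sqrt(3), 2 - math.sqrt(3)):
    x0 = r + 0.01                       # start 1/100 away from the root r
    e0, e1 = abs(x0 - r), abs(g(x0) - r)
    print(r, e1 / e0)   # ~0.072 near 2+sqrt(3): the error shrinks ~14x;
                        # ~13.4 near 2-sqrt(3): the error grows ~13x
[/code]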
     