
Root-finding by iteration

  1. Jun 16, 2012 #1
    Dear all,

    I have a question about root-finding. My actual problem is solving a system of nonlinear equations, but I have simplified my question as follows:

    Suppose I would like to find the root of the following function by iteration:

    [itex]y(x) = f(x,\,p(x)) = 0[/itex]
    If I could calculate the total derivative with respect to x, I could use the Newton-Raphson method directly. However, for some reason I can't calculate the total derivative; I can calculate the partial derivative, though.

    So I am using a naive "partial derivative" method instead: in each iteration I replace p(x) by its value evaluated at the previous x, treating it as a constant, and apply a Newton-Raphson step. Convergence is achieved in most cases, but I'm not sure this is the right way.
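    The scheme described above can be sketched as follows. This is only a minimal illustration: the functions f and p here are hypothetical placeholders, and the sample equation x - exp(-x) = 0 (written as f(x, p) = x - exp(-p) with p(x) = x) is chosen just because its frozen-p step happens to converge.

```python
from math import exp

def frozen_p_newton(f, dfdx, p, x0, tol=1e-10, max_iter=100):
    """Root-find f(x, p(x)) = 0 when only the partial derivative
    dfdx = df/dx (with p held fixed) is available."""
    x = x0
    for _ in range(max_iter):
        p_val = p(x)                              # freeze p at the current x
        x_new = x - f(x, p_val) / dfdx(x, p_val)  # Newton step, p constant
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative example: x - exp(-x) = 0 as f(x, p) = x - exp(-p), p(x) = x.
# With p frozen, the Newton step reduces to x_{k+1} = exp(-x_k).
root = frozen_p_newton(lambda x, p: x - exp(-p),
                       lambda x, p: 1.0,
                       lambda x: x,
                       x0=0.5)
```

    Whether this converges depends on the particular f and p, which is exactly the question below.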

    My question is:

    1. Is there a theory behind this method? Is it related to fixed-point iteration?

    2. Does the convergence depend on p(x)?

    Your help would be appreciated,

  3. Jun 16, 2012 #2


    Science Advisor

    Hey Hassan2.

    Is this function just a 'complicated' (because of p(x)) one-dimensional function? This is what you seem to imply since y is only a function of x.

    If this is the case, then in terms of theory and application, one-dimensional root-finding methods like Newton-Raphson and other similar ones should suffice, provided that the function has the desired properties (continuity and differentiability over the given interval). Hopefully this answers your first question.

    For the second question, it depends on whether the function, given p(x), still satisfies those continuity and differentiability requirements.

    In terms of whether a root actually exists, we can use the first derivative (and possibly the second, for points of inflection) together with the intermediate value theorem (or something similar) to show that a root exists, given that the function is continuous (and differentiable).
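    The sign-change criterion from the intermediate value theorem is exactly what the bisection method exploits; a minimal sketch (the sample function x^3 - 6 is an illustration only):

```python
def bisect(f, a, b, tol=1e-12):
    """Find a root of f in [a, b]; requires f(a) and f(b) of opposite sign,
    which by the intermediate value theorem guarantees a root exists."""
    fa = f(a)
    assert fa * f(b) <= 0, "no sign change: IVT gives no guarantee"
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m               # root is in the left half
        else:
            a, fa = m, f(m)     # root is in the right half
    return 0.5 * (a + b)

root = bisect(lambda x: x**3 - 6, 1.0, 2.0)
```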

    If your function has the above properties, you will be able to use any root-finding algorithm on y (assuming it depends only on x), and the algorithm should be able to tell you whether or not a real root exists (some algorithms can locate complex roots as well).
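    For instance, a bare-bones Newton-Raphson for a one-dimensional function with a known total derivative looks like this (the sample function is illustrative only):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Standard Newton-Raphson; requires the (total) derivative df."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:   # step size below tolerance: converged
            return x
    raise RuntimeError("Newton-Raphson did not converge")

root = newton(lambda x: x**3 - 6, lambda x: 3 * x**2, x0=2.0)
```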
  4. Jun 16, 2012 #3
    Thanks a lot,

    I think I should explain the original problem.

    I have the following system of "nonlinear" equations in matrix form:

    [itex]A(x,\,p(x))\,x = b[/itex]
    A=A(x,p(x)) is a symmetric sparse matrix and the number of unknowns could be as many as one million. The matrix elements depend on p(x), with x being the vector of unknowns. Since p depends on all the unknowns, the matrix of derivatives becomes non-sparse and, I guess, non-symmetric. That's why I can't use the derivatives. Besides that, the dependence of p(x) on x is complicated: it is not available in closed form but is evaluated by numerical methods.

    The only method I know for solving systems of nonlinear equations is the Newton-Raphson method. The method described in my problem seems to be different, and I have no material to support it. I am looking for material that discusses this method.
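    For what it's worth, the scheme you describe for the matrix system, freezing p(x) in A, solving the resulting linear system, and repeating, is usually called Picard (successive substitution) iteration. A small dense sketch with a made-up assemble_A (your real A is sparse and would use a sparse solver):

```python
import numpy as np

def picard_solve(assemble_A, b, x0, tol=1e-10, max_iter=200):
    """Fixed-point iteration for A(x) x = b: rebuild A at the current
    iterate, then solve the frozen *linear* system for the next one."""
    x = x0
    for _ in range(max_iter):
        x_new = np.linalg.solve(assemble_A(x), b)
        if np.linalg.norm(x_new - x) < tol * (1.0 + np.linalg.norm(x_new)):
            return x_new
        x = x_new
    return x

# Toy 2x2 example with a weak, made-up nonlinearity (illustration only):
def assemble_A(x):
    return np.array([[4.0 + 0.1 * np.tanh(x[0]), 1.0],
                     [1.0, 4.0 + 0.1 * np.tanh(x[1])]])

b = np.array([1.0, 2.0])
x = picard_solve(assemble_A, b, np.zeros(2))
```

    It converges when the frozen-coefficient map is a contraction, which is not guaranteed in general.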

    Back to one dimensional problem,

    can I solve the equation [itex]x^{3}-6=0[/itex]

    by writing it as [itex]x^{2}x-6=0[/itex] and iterating as

    [itex]x_{k+1}=\frac{6}{x_{k}^{2}}[/itex] ?
    This example doesn't converge in my code.
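    That failure matches fixed-point theory: iterating g(x) = 6/x^2 gives |g'(x)| = 12/x^3, which equals 2 at the root x* = 6^(1/3), so the fixed point repels nearby iterates. A quick numerical check:

```python
root = 6 ** (1 / 3)

def g(x):
    return 6 / x**2   # the non-converging rewrite of x**3 = 6

x = 1.8               # start very close to the root, ~1.8171
errors = []
for _ in range(6):
    x = g(x)
    errors.append(abs(x - root))
# the error roughly doubles each step instead of shrinking
```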

    Thanks again.
  5. Jun 18, 2012 #4
    Sorry for the confusing setup of the problem.

    After some research, I realized that the method is in fact the fixed-point iteration method. In each iteration, the Newton-Raphson method first solves the following equation for [itex]x_{k+1}[/itex]:

    [itex]f(x_{k+1},\,p(x_{k}))=0[/itex]

    and this yields a fixed-point iteration of the form:

    [itex]x_{k+1}=g(x_{k})[/itex]
    In the fixed-point method, convergence depends on both g(x) and the starting point [itex]x_{0}[/itex]; in particular, the iteration converges locally only if [itex]|g^{\prime}(x)|<1[/itex] near the root.
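    To illustrate the |g'| < 1 condition on the same equation x^3 = 6: the alternative rewrite g(x) = sqrt(6/x) has g'(x*) = -1/2 at the root, so this rearrangement converges where 6/x^2 did not. A quick check:

```python
def g(x):
    return (6 / x) ** 0.5   # another rewrite of x**3 = 6; |g'(root)| = 1/2

x = 2.0
for _ in range(80):         # error shrinks by about half each step
    x = g(x)
```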