What is the theory behind the partial derivative method for root-finding?

  • Thread starter Hassan2
  • Start date
  • #1
Hassan2
Dear all,

I have a question about root-finding. In fact my problem is about solving a system of nonlinear equations, but I have simplified my question as follows:

Suppose I would like to find the root of the following function by iteration:

[itex]y=f(x,p(x))[/itex]

If I could calculate the total derivative with respect to x, I could use the Newton-Raphson method effectively. However, for some reason, I can't calculate the total derivative. I can calculate the partial derivative, though.

I am using a naive method of finding the root by "partial derivative": in each iteration I replace p(x) by its value evaluated at the previous iterate of x, treat it as a constant, and apply the Newton-Raphson method. Convergence is achieved in most cases, but I'm not sure this is the right way.
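A minimal one-dimensional sketch of this frozen-p iteration (the example f and p below are my own illustration, not from the actual problem):

```python
import math

def lagged_newton(f, df_dx, p, x0, tol=1e-10, max_iter=100):
    """Newton iteration with p(x) frozen at the previous iterate.

    f(x, p)    : the residual function
    df_dx(x, p): the *partial* derivative of f w.r.t. x, with p held fixed
    p(x)       : auxiliary quantity, re-evaluated once per iteration
    """
    x = x0
    for _ in range(max_iter):
        pk = p(x)                          # freeze p at the current iterate
        step = f(x, pk) / df_dx(x, pk)     # partial-derivative Newton step
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative equation: x + 0.5*sin(x) - 2 = 0, written as
# f(x, p) = x + p - 2 with p(x) = 0.5*sin(x).
root = lagged_newton(lambda x, p: x + p - 2.0,
                     lambda x, p: 1.0,      # df/dx with p held fixed
                     lambda x: 0.5 * math.sin(x),
                     x0=2.0)
```

Here the iteration converges because p varies slowly (|p'(x)| ≤ 0.5); when p depends strongly on x, the frozen-p step can diverge, which bears on the second question below.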

My question is:

1. Is there a theory behind this method? Is it related to fixed-point iteration?

2. Does the convergence depend on p(x)?

Your help would be appreciated,

Hassan
 
  • #2

Hey Hassan2.

Is this function just a 'complicated' (because of p(x)) one-dimensional function? This is what you seem to imply since y is only a function of x.

If this is the case, then in terms of both theory and application, one-dimensional root-finding methods like Newton-Raphson and similar ones should suffice, provided the function has the desired properties (such as continuity and differentiability over a given interval). Hopefully this answers your first question.

For the second question: it depends on whether the function, with p(x) included, still satisfies the continuity and differentiability requirements.

In terms of whether a root actually exists, we can use the first derivative (and possibly the second, for points of inflection) together with the intermediate value theorem to show that a root exists, given that the function is continuous (and differentiable).

If your function has the above properties, you will be able to use any root-finding algorithm for y (assuming it depends only on x), and the algorithm should be able to tell you whether or not a real root exists (some algorithms can locate complex roots as well).
 
  • #3
Thanks a lot,
I think I should explain the original problem.

I have the following system of "nonlinear" equations in matrix form:

[itex]Ax=b[/itex]

A=A(x,p(x)) is a symmetric sparse matrix, and the number of unknowns could be as large as one million. The matrix elements depend on p(x), with x being the vector of unknowns. Since p depends on all the unknowns, the matrix of derivatives becomes non-sparse and, I guess, non-symmetric. That's why I can't use the derivatives. Besides that, the dependence of p(x) on x is complicated and is not available in closed form; it is evaluated by numerical methods.

The only method I know for solving a system of nonlinear equations is the Newton-Raphson method. The method described in my problem seems to be different, and I have no reference to support it. I am looking for material that discusses this method.

Back to one dimensional problem,

can I solve the equation [itex]x^{3}-6=0[/itex]

by writing it as [itex]x^{2}x-6=0[/itex] and iterate as

[itex]x_{k}^{2}x_{k+1}-6=0[/itex]?

This example doesn't converge in my code.
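As a quick numerical check (my own illustration): the iteration above is g(x) = 6/x², for which |g'(x*)| = 2 > 1 at the root x* = 6^(1/3), so it diverges from any nearby start. Rewriting the same equation with the other splitting, x_{k+1} = √(6/x_k), gives |g'(x*)| = 1/2 < 1 and converges:

```python
# Fixed-point iteration x_{k+1} = 6 / x_k**2 for x**3 - 6 = 0.
# At x* = 6**(1/3), g(x) = 6/x**2 has g'(x*) = -2, so |g'(x*)| > 1
# and the iterates are pushed *away* from the root.
root = 6.0 ** (1.0 / 3.0)

x = root + 1e-3            # start very close to the root
errors = []
for _ in range(8):
    x = 6.0 / x ** 2
    errors.append(abs(x - root))   # error roughly doubles each step

# The splitting matters: iterating x_{k+1} = sqrt(6 / x_k) instead
# gives |g'(x*)| = 1/2 < 1, so the same equation now converges.
y = root + 1e-3
for _ in range(40):
    y = (6.0 / y) ** 0.5
```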

Thanks again.
 
  • #4
Sorry for the confusing setup of the problem.

After some research, I realized that the method is in fact fixed-point iteration. In each iteration, the Newton-Raphson step solves the following equation for [itex]x_{k+1}[/itex]:

[itex]f(x_{k+1},p(x_{k}))=0[/itex]

and yields the following fixed-point iteration:

[itex]x_{k+1}=g(x_{k})[/itex]

In the fixed-point method, the convergence depends both on g(x) and the starting point [itex]x_{0}[/itex] .
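A toy version of the matrix problem (an assumed form for illustration, not the actual system): freeze the x-dependence of the coefficients at the previous iterate, so each sweep only needs a linear solve, and sparsity/symmetry of A are preserved:

```python
import math

def solve2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

def build_A(x1, x2):
    # Coefficients depend weakly on the unknowns -- a stand-in for A(x, p(x)).
    return (4.0 + 0.1 * math.tanh(x1), 1.0,
            1.0, 3.0 + 0.1 * math.tanh(x2))

# Lagged-coefficient (fixed-point) sweeps: freeze A at the previous
# iterate, then solve the resulting *linear* system A(x_k) x_{k+1} = b.
x1 = x2 = 0.0
for _ in range(100):
    a11, a12, a21, a22 = build_A(x1, x2)
    x1, x2 = solve2(a11, a12, a21, a22, 1.0, 2.0)
```

Because A depends only weakly on x here, the sweep is a contraction and converges; a strong dependence could make the same scheme diverge, just as in the scalar case.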
 

1. What is "Root-finding by iteration" and how does it work?

Root-finding by iteration is a numerical method used to approximate the root of a given function. It works by starting with an initial guess and then repeatedly applying a specific formula or algorithm to refine the guess until the desired accuracy is achieved. This method is particularly useful for functions that cannot be solved algebraically.
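A minimal sketch of one such scheme, fixed-point iteration, applied to an equation of my own choosing:

```python
import math

# Fixed-point iteration for x = cos(x): start from a guess and apply the
# map repeatedly.  It converges because |d/dx cos(x)| < 1 near the root.
x = 1.0
for _ in range(100):
    x = math.cos(x)
# x now approximates the unique solution of x = cos(x).
```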

2. What are some common iteration techniques used in root-finding?

Some of the most commonly used iteration techniques in root-finding include the bisection method, the secant method, and the Newton-Raphson method. Each technique has its own advantages and disadvantages, so the choice of which one to use depends on the specific problem at hand.
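Two of these techniques can be sketched side by side (the target equation x² − 2 = 0 is my own example):

```python
import math

def bisection(f, a, b, tol=1e-12):
    """Bisection: repeatedly halve a bracketing interval with f(a)*f(b) < 0."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m            # root lies in [a, m]
        else:
            a, fa = m, fm    # root lies in [m, b]
    return 0.5 * (a + b)

def newton(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: follow the tangent line to the next iterate."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Both applied to f(x) = x**2 - 2, whose positive root is sqrt(2).
f = lambda x: x * x - 2.0
r1 = bisection(f, 0.0, 2.0)
r2 = newton(f, lambda x: 2.0 * x, 1.0)
```

Bisection is slow but guaranteed once a sign change is bracketed; Newton is much faster but needs the derivative and a good starting point.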

3. How do I choose an appropriate initial guess for root-finding by iteration?

The choice of an initial guess for root-finding by iteration is crucial as it can greatly affect the convergence rate and accuracy of the solution. A good initial guess can be obtained by visually inspecting the function and selecting a value that is close to the root or by using other methods such as interval bisection or the method of false position.

4. What are some common sources of error in root-finding by iteration?

Some common sources of error in root-finding by iteration include rounding errors, errors in the initial guess, and the possibility of encountering a singularity or discontinuity in the function. It is important to carefully choose the iteration method and initial guess to minimize these errors.

5. Are there any limitations to root-finding by iteration?

While root-finding by iteration is a powerful numerical method, it does have limitations. It may fail to converge if the initial guess is too far from the root or if the function has multiple roots. Newton-type methods also require the function to be differentiable, which may not always be the case. In these situations, other methods such as interval bisection or the method of false position may be more appropriate.
