[Numerical analysis] Stability and condition of Newton's method

SUMMARY

This discussion focuses on the concepts of stability and condition in numerical analysis, particularly in relation to Newton's method. Condition refers to how output changes with input variations, while stability pertains to the behavior of an algorithm under small perturbations. The conversation highlights that Newton's method can exhibit sensitivity to initial values, leading to different roots, which raises questions about its conditioning compared to other methods like Müller's method. The participants also explore the quantification of stability and discrepancies in accuracy between different numerical methods.

PREREQUISITES
  • Understanding of numerical analysis concepts, specifically stability and condition.
  • Familiarity with Newton's method for root-finding.
  • Knowledge of algorithmic performance metrics.
  • Basic grasp of error analysis in numerical computations.
NEXT STEPS
  • Research the mathematical foundations of stability and condition in numerical algorithms.
  • Explore the differences between Newton's method and Müller's method in depth.
  • Learn how to quantify stability using numerical experiments and theoretical comparisons.
  • Investigate the implications of the butterfly effect in numerical computations.
USEFUL FOR

Mathematicians, computer scientists, and engineers interested in numerical methods, particularly those analyzing the performance and reliability of root-finding algorithms.

srn
I am confused by the concepts of stability and condition. As I understand it, condition is defined by how much the output changes when the input changes. But why is it linked to the problem and not to the algorithm? If I have two algorithms that calculate the same thing in completely different ways, and the same change in input causes algorithm (a) to return more erroneous results than algorithm (b), can I not conclude that (a) is worse conditioned than (b)? Or is that what stability is about? From Wikipedia: stability concerns the difference between the function and its approximation, or "whether or not errors blow up". Is condition simply f(x) - f(x+\epsilon) and stability f_{\text{theory}}(x) - f(x)? If so, I more or less understand it, but I can't apply it to Newton's method, for example.
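A small sketch of the distinction being asked about (my own toy example, not from the thread, using f(x) = sqrt(1 + x) - 1): condition belongs to the problem x -> f(x), while stability belongs to a particular algorithm that computes it.

```python
import math
from decimal import Decimal, getcontext

def f_naive(x):
    # algorithm (a): direct formula, loses digits to cancellation for small x
    return math.sqrt(1.0 + x) - 1.0

def f_rewritten(x):
    # algorithm (b): algebraically equivalent form that avoids the cancellation
    return x / (math.sqrt(1.0 + x) + 1.0)

x, eps = 1e-8, 1e-12

# Condition: how much the exact output moves when the input moves (roughly
# |f'(x)| ~ 0.5 here).  It is a property of f, the same whichever algorithm is used.
cond_estimate = abs(f_rewritten(x + eps) - f_rewritten(x)) / eps

# Stability: how far each algorithm lands from an accurate value at the *same* input.
getcontext().prec = 50
reference = float((Decimal(1) + Decimal(x)).sqrt() - 1)

print("condition estimate:", cond_estimate)
print("error of (a):      ", abs(f_naive(x) - reference))      # noticeably nonzero
print("error of (b):      ", abs(f_rewritten(x) - reference))  # down at roundoff level
```

Both algorithms attack the same, well-conditioned problem, yet (a) is less accurate than (b); that gap is what stability describes, not condition.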

Consider the fractal in http://mathworld.wolfram.com/NewtonsMethod.html. It is obvious that a small change in starting value can cause it to converge to a completely different root. Is the method then ill-conditioned? But condition is linked to the problem, so it would be the same for a different root-solving method; yet e.g. Müller's method has less trouble with changing start values. So it depends on the algorithm and hence is not condition. But then what does the fractal tell you?
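A quick sketch of the sensitivity behind that fractal (my own code; just the textbook Newton iteration for f(z) = z^3 - 1): walking a short segment of starting values shows nearby starts ending up at different cube roots of 1.

```python
def newton(z, steps=100):
    for _ in range(steps):
        z = z - (z**3 - 1) / (3 * z**2)   # Newton step for f(z) = z^3 - 1
    return z

# walk a short vertical segment of starting values that crosses a basin boundary
for k in range(11):
    z0 = complex(0.5, 0.6 + 0.06 * k)
    r = newton(z0)
    print(z0, "->", complex(round(r.real, 6), round(r.imag, 6)))
```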

Is it stability? I don't think so, because there's 'nothing wrong': it still converges to a root, it isn't diverging or anything. I understand that subtracting two nearly equal numbers is unstable because the error "blows up", but what about in this case? Speaking of that: 1) how would you quantify stability when you have no theoretical values? Would you use a function built into a computer program, assume it is implemented as accurately as possible, and then compare with that? 2) How do you even get differences in accuracy between methods? In this case you would iteratively calculate a root until the difference between consecutive terms is lower than a tolerance. If you use the same tolerance, how can two values differ? It would mean that, going from the step where the error > tol to the step where it is <= tol, one method "added" more than the other.
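A minimal sketch of the scenario in question 2) (my own code, with the secant method standing in for Müller's): two iterations stopped with the same tolerance on the step size can still return slightly different values, because the last step of each method lands a different distance from the root.

```python
import math

def newton(f, df, x0, tol=1e-6):
    while True:
        x1 = x0 - f(x0) / df(x0)
        if abs(x1 - x0) <= tol:
            return x1
        x0 = x1

def secant(f, x0, x1, tol=1e-6):
    while True:
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) <= tol:
            return x2
        x0, x1 = x1, x2

f  = lambda x: math.cos(x) - x          # root near 0.739085
df = lambda x: -math.sin(x) - 1.0

r_newton = newton(f, df, 1.0)
r_secant = secant(f, 0.0, 1.0)
print(r_newton, r_secant, abs(r_newton - r_secant))
# same stopping rule, yet the returned values need not be identical
# (here the difference is far below the tolerance, since both converge fast)
```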
 
I like Newton's method. It avoids, without breaking a sweat, the absurdity of the square root of a number being equal in magnitude to its square. That way, I'm comfortable with these neat equations:
sqrt(1) = 0.9r
and conversely
sqrt(0.9r) = 1.
(I noticed that square roots of fractions have a higher numerical value than their squares, while square roots of whole numbers are smaller than their squares.)
NB: I am not insinuating that 0.9r is any less than 1. I've seen the FAQs.
 
I presume that by "fraction" you mean "a number between 0 and 1". Mathematically, a fraction is any number of the form a/b where a and b are integers. That is, 30/5 is a fraction and 0.5 is not (unless you use the bastard phrase "decimal fraction").

Yes, if 0 < x < 1 then, multiplying each part by the positive number x, 0 < x^2 < x, while if 1 < x, then x < x^2.
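A tiny numeric check of that inequality (my own illustration): for 0 < x < 1 the square is smaller than x (so the square root is larger), while for x > 1 the square is larger.

```python
for x in (0.25, 0.81, 4.0, 9.0):
    print(x, "square:", x**2, "square root:", x**0.5)
```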
 
