[Numerical analysis] Stability and condition of Newton's method

Discussion Overview

The discussion revolves around the concepts of stability and condition in the context of numerical methods, particularly Newton's method. Participants explore the relationship between the algorithm used and the problem being solved, questioning how changes in input affect the output and the implications for different algorithms.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant expresses confusion about the definitions of stability and condition, suggesting that condition relates to how output changes with input, while stability concerns the behavior of errors in approximations.
  • The same participant questions whether the fractal nature of Newton's method indicates ill-conditioning, noting that small changes in starting values can lead to convergence on different roots.
  • Another participant introduces the concept of the butterfly effect as a potential analogy for understanding sensitivity in numerical methods.
  • A different participant shares a personal appreciation for Newton's method, discussing its implications for square roots and numerical values, although this does not directly address stability or condition.
  • Further clarification is offered regarding the definition of fractions, with a focus on mathematical terminology rather than numerical analysis concepts.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the definitions and implications of stability and condition, with multiple competing views and interpretations remaining unresolved.

Contextual Notes

There are limitations in the discussion regarding the definitions of stability and condition, as well as the implications of the fractal behavior of Newton's method. The relationship between algorithm performance and problem characteristics is also not fully explored.

srn
I am confused by the concepts of stability and condition. As I understand it, condition is defined by how much the output changes when the input changes. But why is it linked to the problem and not the algorithm? If I have two algorithms that calculate the same thing in completely different ways, and the same change in input causes algorithm (a) to return more erroneous results than algorithm (b), can I not conclude that (a) is worse conditioned than (b)? Or is that what stability is about? From Wikipedia: the difference between the function and its approximations, or "whether or not errors blow up". Is condition simply f(x) - f(x+\epsilon) and stability f_{theory}(x) - f(x)? If so, I more or less understand it, but I can't apply it to Newton's method, for example.
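One way to make the distinction concrete: the relative condition number kappa(x) = |x f'(x) / f(x)| measures how the output responds to a relative change in the input, and it depends only on the function, not on any algorithm. A minimal Python sketch (the helper `rel_condition` and the two example functions are my own illustration, not from the thread):

```python
import math

def rel_condition(f, x, h=1e-6):
    """Estimate the relative condition number |x * f'(x) / f(x)|
    of evaluating f at x, using a central difference for f'(x)."""
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return abs(x * fprime / f(x))

# sqrt is well conditioned everywhere: kappa = 1/2 for every x > 0.
print(rel_condition(math.sqrt, 4.0))

# f(x) = x - 1 is badly conditioned near x = 1: a tiny relative change
# in x produces a huge relative change in f(x).
print(rel_condition(lambda x: x - 1.0, 1.0001))
```

Whatever algorithm you use to evaluate sqrt or x - 1, these numbers stay the same; that is the sense in which condition belongs to the problem.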

Consider the fractal in http://mathworld.wolfram.com/NewtonsMethod.html. It is obvious that a small change in starting value can cause the iteration to converge to a completely different root. Is the method then ill-conditioned? But condition is linked to the problem, so it would be the same for any other root-finding method; yet, e.g., Müller's method has less trouble with changing start values. So the behaviour depends on the algorithm and hence is not condition. But then what does the fractal tell you?
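The sensitivity the fractal shows can be reproduced on the real line. A small sketch (the cubic x^3 - x and the two starting values are my own example, not taken from the linked page):

```python
def newton(f, fprime, x0, tol=1e-12, maxit=100):
    """Plain Newton iteration; stops when the step is below tol."""
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f  = lambda x: x**3 - x        # roots at -1, 0 and 1
fp = lambda x: 3 * x**2 - 1

# Two nearby starting values are thrown into different basins:
print(newton(f, fp, 0.55))     # settles on -1
print(newton(f, fp, 0.60))     # settles on +1
```

Both starts sit near a basin boundary, where f'(x) = 3x^2 - 1 is small: 0.55 is flung far to the left and settles on -1, while 0.60 overshoots to the right and settles on +1. The equation being solved is the same in both runs.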

Is it stability? I don't think so, because there's 'nothing wrong': it still converges to a root, it's not diverging or anything. I understand that subtracting two nearly equal numbers is unstable because the error "blows up", but in this case? Speaking of that: 1) how would you quantify stability when you have no theoretical values? Would you use a function built into a computer program, assume it is implemented as accurately as possible, and then compare against that? 2) how do you even get differences in accuracy between methods? In this case you would iterate until the difference between consecutive terms is lower than a tolerance. If you use the same tolerance, how can two values differ? It would mean that, going from the step where error > tol to the step where error <= tol, one method "added" more than the other.
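On question 2: stopping when consecutive iterates agree to within a tolerance bounds the size of the last step, not the distance to the true root, and different methods leave very different gaps between the two. A rough sketch comparing Newton's method with bisection on x^2 - 2 = 0 (my own example; both use the same 1e-6 tolerance):

```python
import math

def newton_sqrt2(x0, tol=1e-6):
    """Newton's method for x^2 - 2 = 0; stop when the step is within tol."""
    x = x0
    while True:
        xn = 0.5 * (x + 2.0 / x)
        if abs(xn - x) <= tol:
            return xn
        x = xn

def bisect_sqrt2(a=1.0, b=2.0, tol=1e-6):
    """Bisection for x^2 - 2 = 0; stop when the bracket is within tol."""
    while (b - a) > tol:
        m = 0.5 * (a + b)
        if m * m < 2.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

root = math.sqrt(2.0)
print(abs(newton_sqrt2(1.0) - root))   # far below the tolerance
print(abs(bisect_sqrt2() - root))      # close to the tolerance
```

Because Newton converges quadratically, once a step falls below the tolerance the remaining error is roughly the square of that step; bisection's error is of the same order as the final bracket. Same tolerance, very different accuracy, which is exactly the "one method added more than the other" effect.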
 
I like Newton's method. It prevents the absurdity of the square root of a number being equal in magnitude to its square, without breaking a sweat. That way, I'm comfortable having these neat equations
sqrt(1) = 0.9r
and conversely
sqrt(0.9r) = 1
(I noticed that square roots of fractions have a higher numerical value than their squares, while square roots of whole numbers are smaller.)
NB. I'm not insinuating that 0.9r is any less than 1. I've seen the FAQs.
 
I presume that by "fraction" you mean "a number between 0 and 1". Mathematically, a fraction is any number of the form a/b where a and b are integers. That is, 30/5 is a fraction and 0.5 is not (unless you use the bastard phrase "decimal fraction").

Yes: if 0 < x < 1 then, multiplying each part by the positive number x, 0 < x^2 < x, while if 1 < x, then x < x^2.
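The ordering can be checked numerically; a quick sketch (the sample values are my own):

```python
import math

# For 0 < x < 1:  x^2 < x < sqrt(x).  For x > 1 the ordering reverses.
for x in (0.25, 0.9, 4.0, 9.0):
    print(x, x**2, math.sqrt(x))
```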
 
