[Numerical analysis] Stability and condition of Newton's method

  1. Dec 20, 2012 #1

    srn


    I am confused by the concepts of stability and condition. As I understand it, condition describes how much the output changes when the input changes. But why is it linked to the problem and not the algorithm? If I have two algorithms that compute the same thing in completely different ways, and the same change in input causes algorithm (a) to return more erroneous results than algorithm (b), can I not conclude that (a) is worse conditioned than (b)? Or is that what stability is about? From Wikipedia: the difference between the exact function and its approximation, or "whether or not errors blow up". Is condition simply [itex]f(x) - f(x+\epsilon)[/itex] and stability [itex]f_{theory}(x) - f(x)[/itex]? If so I more or less understand it, but I can't apply it to Newton's method, for example.
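
    To make the distinction concrete, here is a minimal sketch (plain Python, a toy example of my own): the root [itex]r[/itex] of [itex]f[/itex] moves by roughly [itex]\delta/|f'(r)|[/itex] when [itex]f[/itex] is perturbed by [itex]\delta[/itex], no matter which algorithm is used to find it, so a nearly-double root is ill-conditioned for every method.

    [code]
    def newton(f, df, x0, tol=1e-14, maxit=100):
        """Plain Newton iteration; returns the final iterate."""
        x = x0
        for _ in range(maxit):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    # Well-conditioned root: f(x) = (x^2 - 1)/2 has f'(1) = 1 at the root r = 1.
    r1 = newton(lambda x: (x * x - 1.0) / 2.0, lambda x: x, 2.0)

    # Ill-conditioned root: f(x) = (x - 1)^2 - delta is a perturbation of a
    # double root; the root sits at 1 + sqrt(delta), so a delta of 1e-10 in f
    # moves the root by about 1e-5 -- a property of the problem, not the solver.
    delta = 1e-10
    r2 = newton(lambda x: (x - 1.0) ** 2 - delta,
                lambda x: 2.0 * (x - 1.0), 2.0)

    print(r1 - 1.0)   # ~1e-16: the answer barely reacts to perturbations
    print(r2 - 1.0)   # ~1e-5: a 1e-10 perturbation amplified by the problem
    [/code]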

    Consider the fractal at http://mathworld.wolfram.com/NewtonsMethod.html. It is obvious that a small change in starting value can cause it to converge to a completely different root. Is the method then ill-conditioned? But condition is linked to the problem, so it would be the same for a different root-finding method; yet, e.g., Müller's method has less trouble with changing start values. So it depends on the algorithm and hence is not condition. But then what does the fractal tell you?
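
    The sensitivity is easy to reproduce; here is a small experiment (my own throwaway Python, with the iteration hard-coded for [itex]z^3 - 1[/itex]) that classifies starting points along a short segment crossing the negative real axis, which lies on the basin boundary:

    [code]
    import cmath

    ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of 1

    def newton_root(z, steps=100):
        """Iterate Newton on f(z) = z^3 - 1; return the index of the nearest root."""
        for _ in range(steps):
            z = z - (z ** 3 - 1) / (3 * z ** 2)
        return min(range(3), key=lambda k: abs(z - ROOTS[k]))

    # Walk a short vertical segment through z = -2 on the negative real axis;
    # the root you land on changes as the start crosses the basin boundary.
    for i in range(11):
        z0 = complex(-2.0, -0.5 + 0.1 * i)
        print(z0, '-> root', newton_root(z0))
    [/code]

    Every run converges to some root; only which root changes with the start, which matches the "nothing wrong" feeling below.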

    Is it stability? I don't think so, because there's 'nothing wrong': it's still going to a root, it's not diverging or anything. I understand that subtracting two nearly equal numbers is unstable because the error "blows up", but in this case? Speaking of that: 1) how would you quantify stability when you have no theoretical values? Would you use a function built into a computer program, assume that it is implemented as accurately as possible, and then compare with that? 2) How do you even get differences in accuracy between methods? In this case you would iterate toward a root until the difference between consecutive iterates is below a tolerance. If you use the same tolerance, how can two values differ? It would mean that, going from the step where error > tol to the step where it is <= tol, one method "added" more than the other.
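
    On question 2), here is a sketch of one way the same tolerance can give different final values (my own comparison, Newton vs. plain bisection on [itex]x^2 - 2[/itex]): the stopping test only says when to quit, and each method overshoots the tolerance by a different amount on its last step.

    [code]
    import math

    def f(x):
        return x * x - 2.0          # root at sqrt(2)

    TOL = 1e-6

    # Newton: stop when the last step is smaller than TOL.  Because the
    # convergence is quadratic, the final error is far smaller than that step.
    x = 2.0
    while True:
        step = f(x) / (2.0 * x)
        x -= step
        if abs(step) < TOL:
            break

    # Bisection: stop when the bracket is narrower than TOL.  The final error
    # is on the order of TOL itself.
    a, b = 1.0, 2.0
    while b - a >= TOL:
        m = 0.5 * (a + b)
        if f(m) > 0.0:
            b = m
        else:
            a = m
    m = 0.5 * (a + b)

    print('newton   ', x, 'error', abs(x - math.sqrt(2.0)))
    print('bisection', m, 'error', abs(m - math.sqrt(2.0)))
    [/code]

    Both stop "at tolerance 1e-6", but Newton's last quadratic step leaves it essentially at machine precision while bisection stops a few times 1e-7 away, so the two returned values differ even though the tolerance is identical.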
     
  3. Dec 22, 2012 #2

    chiro

    Science Advisor

  4. Dec 29, 2012 #3
    I like Newton's method. It prevents the absurdity of having the square root of a number be equal in magnitude to its square, without breaking a sweat. That way, I'm comfortable having these neat equations
    sqrt. 1 = 0.9r
    and conversely
    sqrt. 0.9r = 1
    (I noticed that square roots of fractions have a higher numerical value than their squares, while square roots of whole numbers are lesser).
    NB: I am not insinuating that 0.9r is any less than 1. I've seen the FAQs.
     
  5. Dec 29, 2012 #4

    HallsofIvy

    Staff Emeritus
    Science Advisor

    I presume that by "fraction" you mean "a number between 0 and 1". Mathematically, a fraction is any number of the form a/b where a and b are integers. That is, 30/5 is a fraction and 0.5 is not (unless you use the bastard phrase "decimal fraction").

    Yes, if 0 < x < 1 then, multiplying each part by the positive number x, 0 < x^2 < x, while if 1 < x, then x < x^2.
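
    A quick numerical check of the inequality (my numbers, not from the posts above): for [itex]x = 0.25[/itex] we get [itex]x^2 = 0.0625 < 0.25[/itex] and [itex]\sqrt{0.25} = 0.5 > 0.25[/itex], while for [itex]x = 4[/itex] we get [itex]x^2 = 16 > 4[/itex] and [itex]\sqrt{4} = 2 < 4[/itex].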
     