I think there is something wrong

  • Context: Graduate 
  • Thread starter: MIB
SUMMARY

The discussion centers on the convergence of the Newton sequence for a differentiable function f: (a,b) → ℝ, where f' is bounded and has a root r in (a,b). The original assertion that the Newton sequence converges for any initial point x0 in (a,b) is challenged, particularly when f'(x0) = 0. The example f(x) = x^3 illustrates that if the initial point is the root, convergence is trivial. The conclusion is that the statement should specify that the derivative does not vanish at the initial point x0 for the theorem to hold true.

PREREQUISITES
  • Differentiable functions and their properties
  • Newton's method for finding roots
  • Convexity and concavity of functions
  • Understanding of bounded derivatives
NEXT STEPS
  • Explore the implications of Newton's method when f'(x0) = 0
  • Investigate examples of functions with bounded derivatives and their roots
  • Study the conditions for convergence in Newton's method
  • Learn about convex and concave functions in detail
USEFUL FOR

Mathematicians, calculus students, and anyone interested in numerical methods for root-finding, particularly those studying the convergence properties of Newton's method.

MIB
I think there is something wrong with this exercise, which I came across by chance in a book on calculus and analysis while looking for a rigorous definition of angle. It says:

Let f : (a,b) → ℝ be a differentiable function. Suppose that f' is bounded and that f has a root r in (a,b). For x ≠ r, let Jx denote the open interval between x and r, and suppose that if f(x) > 0 then f is convex on Jx, while if f(x) < 0 then f is concave on Jx. Prove that for any x0 in (a,b), the Newton sequence with initial point x0 converges to a root.

The problem here is that we can take an initial point x0 where f'(x0) = 0. For simplicity, consider f(x) := x^3 for all x in (-4,4).
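
For reference, the Newton sequence here is x_{n+1} = x_n - f(x_n)/f'(x_n). A minimal Python sketch (my own, not from the book) of what goes wrong when the iteration is started at a point where the derivative vanishes:

```python
def newton_step(f, fprime, x):
    """One Newton iteration: x - f(x)/f'(x). Undefined when f'(x) = 0."""
    return x - f(x) / fprime(x)

f = lambda x: x**3          # the example above, on (-4, 4)
fprime = lambda x: 3 * x**2

x0 = 0.0                    # f'(0) = 0, and 0 also happens to be the root
try:
    newton_step(f, fprime, x0)
except ZeroDivisionError:
    print("Newton step undefined at x0 = 0 because f'(x0) = 0")
```

For any other starting point in (-4, 4) the step reduces to x_{n+1} = (2/3) x_n, which does converge to the root; the only problematic point for x^3 is the root itself.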
 
I think we must replace "f' is bounded on (a,b)" by "f' is non-zero throughout (a,b)"
 
MIB said:
I think there is something wrong with this exercise, which I came across by chance in a book on calculus and analysis while looking for a rigorous definition of angle. It says:

Let f : (a,b) → ℝ be a differentiable function. Suppose that f' is bounded and that f has a root r in (a,b). For x ≠ r, let Jx denote the open interval between x and r, and suppose that if f(x) > 0 then f is convex on Jx, while if f(x) < 0 then f is concave on Jx. Prove that for any x0 in (a,b), the Newton sequence with initial point x0 converges to a root.

The problem here is that we can take an initial point x0 where f'(x0) = 0. For simplicity, consider f(x) := x^3 for all x in (-4,4).

There is no problem in this particular example because the initial point you chose is the root of your function, so you've already converged to the root and aren't going anywhere.

Try coming up with an example where the derivative vanishes at a point that is not the root of the function. Be sure to double-check that the concavity conditions of the theorem are still met in this case. If you can show that you can't pick a function with f'(x0) = 0 for which the concavity requirements are satisfied, then there is no problem with the theorem as stated.
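
One numerical way to carry out this check is sketched below, with a candidate of my own choosing, f(x) = sin x on (-1, 3), where f'(π/2) = 0 at a point that is not the root; the helper function is illustrative, not something from the thread.

```python
import math

f = math.sin                   # root r = 0 in (-1, 3); f'(pi/2) = 0 away from the root
f2 = lambda x: -math.sin(x)    # second derivative, used to test convexity/concavity
r = 0.0

def hypothesis_holds(x, samples=200):
    """Check the theorem's condition on Jx, the open interval between x and r:
    f convex (f'' >= 0) on Jx when f(x) > 0, concave (f'' <= 0) when f(x) < 0."""
    lo, hi = min(x, r), max(x, r)
    ts = [lo + (hi - lo) * (k + 1) / (samples + 1) for k in range(samples)]
    if f(x) > 0:
        return all(f2(t) >= 0 for t in ts)
    if f(x) < 0:
        return all(f2(t) <= 0 for t in ts)
    return True

print(hypothesis_holds(math.pi / 2))  # False: f > 0 at pi/2 but f is concave on (0, pi/2)
```

As expected, the concavity hypothesis fails for this candidate, consistent with the suggestion that no counterexample of this kind exists.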
 
Mute said:
There is no problem in this particular example because the initial point you chose is the root of your function, so you've already converged to the root and aren't going anywhere.

Try coming up with an example where the derivative vanishes at a point that is not the root of the function. Be sure to double-check that the concavity conditions of the theorem are still met in this case. If you can show that you can't pick a function with f'(x0) = 0 for which the concavity requirements are satisfied, then there is no problem with the theorem as stated.

I know it can be very hard to think of an example where the derivative vanishes at a point which is not a root, and I think it is impossible. The problem is that the statement says "for any x0 in (a,b) the Newton sequence with initial point x0 converges to a root", and this is still wrong, because the root is in (a,b), so x0 can be the root itself, where the derivative may vanish and the Newton step is not defined.
 
OK, I restated it as follows:

Let f : (a,b) → ℝ be a differentiable function. Suppose that f' is bounded and that f has a root r in (a,b). For x ≠ r, let Jx denote the open interval between x and r, and suppose that if f(x) > 0 then f is convex on Jx, while if f(x) < 0 then f is concave on Jx. Prove that for any x0 in (a,b) at which the derivative does not vanish, the Newton sequence with initial point x0 converges to a root.

And then I proved it easily.
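
As a quick sanity check on the restated version, here is a sketch with a function of my own choosing that satisfies all of the hypotheses: f(x) = x^3 + x on (-4, 4), where f' = 3x^2 + 1 is bounded and never zero, the root is r = 0, and f'' = 6x has the required sign on each Jx.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton sequence x_{n+1} = x_n - f(x_n)/f'(x_n); assumes f'(x_n) != 0 along the way."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x**3 + x
fprime = lambda x: 3 * x**2 + 1

print(newton(f, fprime, 3.0))  # converges to the root r = 0, as the restated exercise predicts
```

The tolerance and iteration cap are just conveniences for a numerical check; the exercise itself concerns the exact sequence.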
 
