I think there is something wrong

AI Thread Summary
The discussion revolves around a calculus exercise that claims the Newton sequence converges to a root for any initial point x0 in the interval (a,b) under certain conditions. The original statement is challenged because one can select an initial point where the derivative f'(x0) equals zero, which could prevent convergence. An example using the function f(x) = x^3 illustrates that if x0 is chosen as the root, convergence is trivial. The conclusion reached is that the theorem should specify that the derivative does not vanish at the initial point x0; with this added condition, the convergence claim can then be proved.
MIB
I think there is something wrong with this exercise, which I came across by chance in a book on calculus and analysis while looking for a rigorous definition of angle. It says:

Let f : (a,b) → ℝ be a differentiable function. Suppose that f' is bounded and that f has a root r in (a,b). Suppose that for x ≠ r, Jx denotes the open interval between x and r, and that if f(x) > 0 then f is convex on Jx, while if f(x) < 0 then f is concave on Jx. Prove that for any x0 in (a,b), the Newton sequence with initial point x0 converges to a root.

The problem here is that we can take an initial point x0 where f'(x0) = 0. For simplicity, consider f(x) := x^3 for all x in (-4,4).
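
For concreteness, here is a minimal sketch (in Python, purely for illustration and not from the book) of the Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n) applied to this f: from a point like x0 = 2 it converges to the root, but at x0 = 0 the step is 0/0 and the sequence is not even defined.

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Plain Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0:
            # The problematic case: the Newton step is not defined here.
            raise ZeroDivisionError(f"f'({x}) = 0, Newton step undefined")
        x -= fx / dfx
        if abs(f(x)) < tol:
            break
    return x

f = lambda x: x**3         # root r = 0, and f'(0) = 0
df = lambda x: 3 * x**2

print(newton(f, df, 2.0))  # converges to 0 (only linearly, since the root is degenerate)
# newton(f, df, 0.0)       # would raise: f'(0) = 0, so the very first step is 0/0
```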
 
I think we must replace "f' is bounded on (a,b)" by "f' is non-zero throughout (a,b)"
 
MIB said:
I think there is something wrong with this exercise, which I came across by chance in a book on calculus and analysis while looking for a rigorous definition of angle. It says:

Let f : (a,b) → ℝ be a differentiable function. Suppose that f' is bounded and that f has a root r in (a,b). Suppose that for x ≠ r, Jx denotes the open interval between x and r, and that if f(x) > 0 then f is convex on Jx, while if f(x) < 0 then f is concave on Jx. Prove that for any x0 in (a,b), the Newton sequence with initial point x0 converges to a root.

The problem here is that we can take an initial point x0 where f'(x0) = 0. For simplicity, consider f(x) := x^3 for all x in (-4,4).

There is no problem in this particular example because the initial point you chose is the root of your function, so you've already converged to the root and aren't going anywhere.

Try coming up with an example where the derivative vanishes at a point that is not the root of the function. Be sure to double-check that the concavity conditions of the theorem are still met in this case. If you can show that you can't pick a function with f'(x0) = 0 for which the concavity requirements are satisfied, then there is no problem with the theorem as stated.
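
To make this suggestion concrete with a made-up candidate (not from the thread): take f(x) = x(2 - x) on (0.5, 3), which has its root at r = 2 and f'(1) = 0 at the non-root x0 = 1. Since f(1) = 1 > 0, the theorem would require f to be convex on J1 = (1, 2), but f'' = -2 there, so the concavity conditions rule this candidate out. A quick numerical check, as a sketch:

```python
# Hypothetical candidate (illustration only): root at r = 2, but f'(1) = 0 with 1 != r.
f = lambda x: x * (2 - x)
x0, r = 1.0, 2.0

# Convexity on J_x0 = (1, 2) would require f((a+b)/2) <= (f(a)+f(b))/2 for all
# a, b in the interval; sample a grid of pairs and count violations.
pts = [x0 + (r - x0) * i / 39 for i in range(40)]
violations = sum(
    1
    for a in pts for b in pts
    if a < b and f((a + b) / 2) > (f(a) + f(b)) / 2 + 1e-12
)

print(f"f(x0) = {f(x0)}  (positive, so convexity on (1, 2) would be required)")
print(f"midpoint-convexity violations: {violations}")  # nonzero: f is not convex on J_x0
```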
 
Mute said:
There is no problem in this particular example because the initial point you chose is the root of your function, so you've already converged to the root and aren't going anywhere.

Try coming up with an example where the derivative vanishes at a point that is not the root of the function. Be sure to double-check that the concavity conditions of the theorem are still met in this case. If you can show that you can't pick a function with f'(x0) = 0 for which the concavity requirements are satisfied, then there is no problem with the theorem as stated.

I know it can be very hard to think of an example where the derivative vanishes at a point that is not a root, and I think it is impossible. The problem is that the statement says "for any x0 in (a,b) the Newton sequence with initial point x0 converges to a root", and this is wrong, because the root itself is in (a,b), so we are allowed to take x0 = r, where the derivative may vanish (as it does for x^3).
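
For what it's worth, here is a sketch (my own reasoning, not from the book) of why it indeed seems impossible under the concavity conditions. Suppose x > r and f(x) > 0, so f is convex on Jx = (r, x). By the mean value theorem there is a ξ in (r, x) with f'(ξ) = (f(x) - f(r))/(x - r) = f(x)/(x - r) > 0, and since convexity makes the slopes at x at least as large as those at ξ, we get f'(x) ≥ f'(ξ) > 0. The other sign and side cases are symmetric, so f' cannot vanish at any x with f(x) ≠ 0. The only way the Newton step can fail is at a point where f and f' vanish together, which is exactly the case x0 = r = 0 for f(x) = x^3.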
 
OK, I restated it as follows:

Let f : (a,b) → ℝ be a differentiable function. Suppose that f' is bounded and that f has a root r in (a,b). Suppose that for x ≠ r, Jx denotes the open interval between x and r, and that if f(x) > 0 then f is convex on Jx, while if f(x) < 0 then f is concave on Jx. Prove that for any x0 in (a,b) at which the derivative does not vanish, the Newton sequence with initial point x0 converges to a root.

And then I proved it easily.
 