Approximating 96^(1/96) with Newton's Method

In summary, to approximate 96^(1/96) to 8 decimal places using Newton's Method, apply the iteration to f(x) = x^96 - 96 (or to the shifted form f(x) = (1+x)^96 - 96, which solves for just the decimal part and can reduce rounding error). The initial guess must be chosen carefully: it should be close to the root and preferably above it, since a guess from below makes the first step overshoot. It is also important to use a calculator or math program with a few digits more precision than the eight places required, as rounding errors can greatly affect the results.
  • #1
skyturnred

Homework Statement



Use Newton's Method to approximate 96^(1/96) correct to 8 decimal places.

Homework Equations



Newton's Method equation: Second Approx = (first approx) - (f(first approx))/(f'(first approx))

The Attempt at a Solution



So I know that 96^(1/96) must = some number x, so I set it up as 96^(1/96)-x=0, but what that does is cause f'(x)=1. This must not be right because when I approximate using this, I get a negative value which is significantly farther away than my first approx. Please help!
 
  • #2
You might have better luck if you write this using

x = 96^(1/96)  -->  x^96 = 96,

and apply Newton's Method to x^96 - 96 = 0.

This looks like it's going to be a bit challenging using Newton's Method (which involves only a first derivative from the tangent line), so unless a first guess is already pretty close to the answer, you can expect some pretty violent swings in the iterations at first. (And use all the precision your calculator offers, because "round-off error" will be a killer when 95th-powers are involved.)

You know the answer has to be a number not much larger than one, so try a first guess, say, 1/96 to a few hundredths above one (that is, x0 ≈ 1.01 - 1.03 ) , and hang on for the ride...
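If it helps to see the arithmetic laid out, here is a minimal sketch of the iteration in Python (just one possible tool; the function name, tolerance, and starting value are illustrative, not part of the assignment):

Code:
# Minimal sketch: Newton's Method applied to f(x) = x^96 - 96.
def newton_96th_root(x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        f  = x**96 - 96          # f(x)
        fp = 96 * x**95          # f'(x)
        x_next = x - f / fp      # Newton step
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(newton_96th_root(1.05))   # should land near 1.0486937...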
 
  • #3
skyturnred said:
So I know that 96^(1/96) must = some number x, so I set it up as 96^(1/96)-x=0, but what that does is cause f'(x)=1. This must not be right because when I approximate using this, I get a negative value which is significantly farther away than my first approx. Please help!
You don't have the right f(x).

Suppose you wanted to find the 4th root of 15. That means you are looking for x such that
x = 15^(1/4), or
x^4 = 15, or
x^4 - 15 = 0.
So f(x) for this example is
f(x) = x^4 - 15.
Now try your problem again.
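For instance, a quick sketch of a few iterations for this smaller example (Python; the starting guess of 2 is chosen only for illustration):

Code:
# Illustrative: a few Newton steps for f(x) = x^4 - 15, with f'(x) = 4x^3.
x = 2.0                          # initial guess, for illustration only
for _ in range(5):
    x = x - (x**4 - 15) / (4 * x**3)
print(x)                         # approaches 15**(1/4), roughly 1.9679897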


EDIT: Beaten to it. :smile:
 
  • #4
eumyang said:
EDIT: Beaten to it. :smile:

I actually wrote and deleted my answer twice before because I started trying to solve this and found unexpected complications. Newton's Method is actually a pretty terrible technique to use for extracting roots that large... :eek:
 
  • #5
I recommend f(x)=(1+x)^96 - 96.

This will only give you the decimal part (the root minus one), but I think it will rid you of serious rounding errors.
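A minimal sketch of that variant (Python; the names and starting guess are illustrative), which iterates on d = 96^(1/96) - 1 rather than on the root itself:

Code:
# Illustrative sketch: Newton's Method on f(d) = (1 + d)^96 - 96,
# so d approximates only the decimal part, 96**(1/96) - 1.
d = 0.05                          # starting guess for the decimal part
for _ in range(100):
    f  = (1 + d)**96 - 96
    fp = 96 * (1 + d)**95
    d_next = d - f / fp
    if abs(d_next - d) < 1e-12:
        break
    d = d_next
print(d)                          # roughly 0.0486937
print(1 + d)                      # roughly 1.0486937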
 
  • #6
I'm thinking Taylor expansions to an appropriate degree for both f(x) and f'(x), and in turn their quotient, to get rid of rounding errors (if that is relevant).
 
  • #7
I like Serena said:
I recommend f(x)=(1+x)^96 - 96.

This will only give you the decimals, but I think it will rid you of serious rounding errors.

Unfortunately, it looks like that just relocates the difficulty: it seems the cause of the trouble is the extreme steepness of the function (that 96th-power is really touchy).

The insistence of whoever posed the problem on having eight decimal places is going to be a problem in any case if OP is using a calculator with only nine decimal places on the display. It is generally a good idea with this Method to compute with at least a couple digits more precision than the intended level (speaking from painful personal experience with this technique)...
I like Serena said:
I'm thinking Taylor expansions to an appropriate degree for both f(x) and f'(x), and in turn their quotient, to get rid of rounding errors (if that is relevant).

If the OP is in a numerical analysis class, they may have seen Taylor series and will be able to apply those. If they are in a conventional calculus sequence, Newton's method is often a first-semester topic and power series are second-semester...
 
  • #8
dynamicsolo said:
If the OP is in a numerical analysis class, they may have seen Taylor series and will be able to apply those. If they are in a conventional calculus sequence, Newton's method is often a first-semester topic and power series are second-semester...

I think the OP will either need a calculator or math program with sufficient precision, or else more advanced numerical techniques than plain Newton-Raphson, to reach the required eight decimal places.
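For what it's worth, here is a sketch of the kind of higher-precision computation meant here, using Python's decimal module (the precision setting, tolerance, and starting guess are only examples):

Code:
# Illustrative: Newton's Method with ~20 significant digits of working precision.
from decimal import Decimal, getcontext

getcontext().prec = 20                     # a couple of digits more than needed

x = Decimal("1.05")                        # starting guess above the root
for _ in range(100):
    f  = x**96 - 96
    fp = 96 * x**95
    x_next = x - f / fp
    if abs(x_next - x) < Decimal("1e-12"):
        break
    x = x_next
print(round(x, 8))                         # eight decimal places, about 1.04869370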
 
  • #9
All right, having actually run this through, I'll say that the first guess is extremely critical. When I started from x0 = 1.02, the first "correction" ran to about 1.5 · 10^6 (!). So I "cheated" a bit and started again from a value of x0 = 1.04 (1.05 would be even better!) and got the process to converge in eight cycles. I used a calculator to nine decimal places of precision throughout. (If you don't use more precision in your computation than you're after in the final answer, the process will never fully settle, but just sort of "wander around" close to the solution.)

So the Method will converge, but the region of convergence is fairly "tight" around the actual solution...
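If it's useful for comparison, here is a small sketch that prints the iterates from two of the starting guesses mentioned above (Python; purely illustrative):

Code:
# Illustrative: watch the Newton iterates for f(x) = x^96 - 96
# from different starting guesses.
def trace(x0, steps=12):
    x = x0
    print(f"x0 = {x0}")
    for n in range(1, steps + 1):
        x = x - (x**96 - 96) / (96 * x**95)
        print(f"  x{n} = {x:.10f}")

trace(1.02)   # starts below the root; the first step jumps well above it
trace(1.05)   # starts just above the root and settles within a few steps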
 
  • #10
If the initial guess is below the root, then the first iteration jumps to the other side and actually moves farther from the root instead of closer.
After that it will converge.

So the initial guess should be above the root.
Then it should always converge...
 
  • #11
I like Serena said:
If the initial guess is below the root, then the first iteration diverges instead of converges.
After that it will converge.

So the initial guess should be above the root.
Then it should always converge...

I've never heard of this as a rule. What I did find is that 1.04 produced a "positive correction" (subtraction of a negative f(x)/f'(x) ratio) that "kicked" the second guess above the root. From there, it did (generally) converge toward the root "from above". (Starting from 1.02 [and probably 1.03] produced a "kick" so huge that resolving the process would have taken much longer, if it would have worked at all -- I didn't feel encouraged to continue.)
 
  • #12
dynamicsolo said:
I've never heard of this as a rule.

The second derivative determines from which side N-R converges.
If you start from the "wrong" side, the first iteration jumps to the other side and, worse, can leave you farther from the root than you started.
In this animation you can see why that is:
[Animation: NewtonIteration_Ani.gif - successive Newton's Method iterations homing in on a root]

Note how in this case the 2nd iteration worsens the approximation.

I believe this underscores your signature!
 
  • #13
I like Serena said:
The 2nd order derivative determines from which side N-R converges.

That I had seen before. I think I generally overlook it because most of the problems where I've ever used this method had broader regions of convergence than in this problem.
 
  • #14
Thank you to ALL of you for your help; you guys helped so much! I figured it out!
 

FAQ: Approximating 96^(1/96) with Newton's Method

1. How does Newton's Method work?

Newton's Method is an iterative method used to approximate the roots of a function. It starts from an initial guess, uses the slope (derivative) of the function at that point to draw the tangent line, and takes the point where that tangent crosses the x-axis as the next, better guess. This process is repeated until the desired level of accuracy is achieved.
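In symbols, if x_n is the current guess, the next guess is x_(n+1) = x_n - f(x_n)/f'(x_n).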

2. Why is Newton's Method useful for approximating roots?

Newton's Method is useful because, when it works, it converges to the root much faster than simpler methods such as the bisection method, so far fewer iterations are needed to reach the desired level of accuracy.

3. How can Newton's Method be used to approximate 96^(1/96)?

To use Newton's Method to approximate 96^(1/96), we first define the function f(x) = x^96 - 96; its positive root is 96^(1/96). We then choose an initial guess for the root, apply the Newton's Method formula to get a better guess, and repeat the process until the desired level of accuracy is achieved.
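For example, a compact self-contained sketch of the whole procedure (Python; the helper name, tolerance, and starting guess are illustrative):

Code:
# Illustrative: generic Newton's Method, applied here to f(x) = x^96 - 96.
def newton(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x**96 - 96, lambda x: 96 * x**95, x0=1.05)
print(f"{root:.8f}")   # 96**(1/96) to eight decimal places, about 1.04869370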

4. What is the convergence rate of Newton's Method?

The convergence rate of Newton's Method is quadratic, meaning that once the iterates are close to the root, the number of correct digits roughly doubles with each iteration. This makes it a highly efficient method for approximating roots.

5. Are there any limitations to using Newton's Method for approximation?

Yes, there are some limitations to using Newton's Method. It requires the function to be differentiable, and the initial guess must be close enough to the root in order for the method to converge. It may also fail to converge or converge to the wrong root if the function has multiple roots or if there are discontinuities or singularities in the function.
