MHB Program for Approximating the nth Root of a Number

annie122
I have a question for a programming exercise I'm working on for C.

The problem is to "Write a program that uses Newton's method to approximate the nth root of a number to six decimal places." The problem also said to terminate after 100 trials if it failed to converge.

Q1. What does "converge" mean?
Does it mean the difference between two approximations can be made as small as I like?

Q2. On what condition should the program terminate?
There are two such conditions: 1) if the loop has been executed 100 times, 2) the difference between the "true" answer and the approximation is less than 0.000001.
I know how to set 1), but how should I express 2)?
Right now, I am setting the condition as
|approximation - root| < 0.000001,
but I feel it's kind of cheating, because I'm not supposed to know the real answer if I'm making approximations.
Are there any other ways to express the condition, especially one involving the function f(x) = x^n - c (where x is the nth root of c)?
 
Re: question on program for approximating nth root of a number

Hi Yuuki! :)

Yuuki said:
I have a question for a programming exercise I'm working on for C.

The problem is to "Write a program that uses Newton's method to approximate the nth root of a number to six decimal places." The problem also said to terminate after 100 trials if it failed to converge.

Q1. What does "converge" mean?
Does it mean the difference between two approximations can be made as small as I like?

Yes. Basically.
More specifically, that you can get as close to the nth root as you want by just taking enough trials.

Q2. On what condition should the program terminate?
There are two such conditions: 1) if the loop has been executed 100 times, 2) the difference between the "true" answer and the approximation is less than 0.000001.
I know how to set 1), but how should I express 2)?
Right now, I am setting the condition as
|approximation - root| < 0.000001,
but I feel it's kind of cheating, because I'm not supposed to know the real answer if I'm making approximations.
Are there any other ways to express the condition, especially one involving the function f(x) = x^n - c (where x is the nth root of c)?

Exactly. You're not supposed to use the real root.
But what you can do is set the condition to, for instance,
$$|\text{approximation} - \text{previous approximation}| < 0.0000001$$
If you achieve that, it is unlikely that the first 6 decimal digits will change in further iterations.
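For reference, here is a minimal sketch in C of the kind of loop this describes (my own illustrative code, not the exact program from this thread; the names nth_root, MAX_TRIALS and EPS, and the starting guess, are just placeholder choices):

#include <math.h>
#include <stdio.h>

#define MAX_TRIALS 100   /* give up after 100 trials */
#define EPS 1e-7         /* one digit tighter than the six decimals required */

double nth_root(double c, int n)
{
    double x = c / n + 1.0;  /* crude positive starting guess */
    for (int i = 0; i < MAX_TRIALS; i++) {
        /* Newton step for f(x) = x^n - c:  x <- x - f(x)/f'(x) */
        double next = x - (pow(x, n) - c) / (n * pow(x, n - 1));
        if (fabs(next - x) < EPS)  /* compare with the previous approximation */
            return next;
        x = next;
    }
    return x;  /* did not converge within MAX_TRIALS iterations */
}

int main(void)
{
    printf("%.6f\n", nth_root(66.0, 6));  /* prints 2.010284 */
    return 0;
}

(On most Unix-like systems this needs the math library, e.g. cc newton.c -lm.)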
 
Re: question on program for approximating nth root of a number

Thanks. :)

I set a new variable root0 to store the previous approximation, and set the condition as you said.
It worked beautifully.
 
It is possible to estimate the error in an iterate directly (assuming it is small anyway).

Let $$x$$ be the 6-th root of $$k$$ and $$x_n$$ an estimate of $$x$$ with error $$\varepsilon_n$$ such that:

$$x=x_n+\varepsilon_n$$

Then raising this to the 6-th power gives:

$$x^6=x_n^6 + 6 \varepsilon_n x_n^5 + O(\varepsilon_n^2)$$

Now ignoring second and higher order terms in $$\varepsilon$$ and rearranging we get:

$$\varepsilon_n=\frac{x^6-x_n^6}{6x_n^5}=\frac{k-x_n^6}{6x_n^5}$$

OK let's look at an example: Take $$k=66$$, and $$x_n=2$$, so $$x_n^6=64$$, then

$$\varepsilon_n=\frac{66-64}{6\times32}\approx 0.01042$$

which compares nicely with $$66^{1/6}=2.01028...$$

The above is very similar (for similar read identical) to computing the next iterate and taking the difference of the iterates as an estimate of the error in the first.
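For concreteness, the estimate is easy to check numerically; a throwaway C snippet along these lines (illustrative only, the variable names k, xn and est are my own):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double k = 66.0, xn = 2.0;
    double est = (k - pow(xn, 6)) / (6.0 * pow(xn, 5));  /* (66 - 64) / (6 * 32) */
    double actual = pow(k, 1.0 / 6.0) - xn;              /* true error in xn */
    printf("estimated error: %.5f\n", est);     /* 0.01042 */
    printf("actual error:    %.5f\n", actual);  /* 0.01028 */
    return 0;
}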
 
zzephod said:
It is possible to estimate the error in an iterate directly (assuming it is small anyway).

It is also possible to give an upper bound for the remaining error in a specific iteration (in this specific case).

First off, after the first (positive) iteration, all subsequent iterates are guaranteed to be above the root.
And once an iterate is reasonably close to the root, its remaining error is also less than the change made in that step.
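Here is a small numerical illustration of both points for the 6th root of 66 (the example and the starting value 1.5 are my own choice, not from the thread):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double k = 66.0, root = pow(k, 1.0 / 6.0);
    double x = 1.5;  /* deliberately start below the root */
    for (int i = 1; i <= 7; i++) {
        double next = x - (pow(x, 6) - k) / (6.0 * pow(x, 5));
        printf("iter %d: x = %.8f  change = %.8f  error = %.8f\n",
               i, next, fabs(next - x), next - root);
        x = next;
    }
    return 0;
}

In this run every computed iterate lands above the root even though the starting guess was below it, and the error column stays smaller than the change column from the first step on.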
 