Calculating something correct up to ##x## decimal places

AI Thread Summary
The discussion focuses on determining the accuracy of series approximations, particularly in calculating sums to a specified number of decimal places. The proposed method suggests stopping calculations when the difference between consecutive sums is less than a specified threshold, but concerns arise about potential corrections in later terms affecting accuracy. It is noted that this issue primarily occurs with divergent series, while convergent series tend to stabilize as more terms are added. Examples illustrate that even with small terms, a subsequent larger term can significantly alter the sum, challenging the initial stopping condition. Ultimately, careful analysis of series behavior and derivative bounds is essential for ensuring accurate approximations.
Wrichik Basu
Homework Statement
Calculate ##\sin 10## using the sine series, correct up to two decimal places.
Relevant Equations
The sine series
The above is one specific example of this type of problem that we often encounter in our course. The series can be, in general, anything like the log series, cosine series, etc. We are supposed to solve this problem in Python, but that doesn't matter here.

When this question was put forward for the first time in class, I proposed that if the difference between the sum up to ##(n+1)## terms and the sum up to ##n## terms is less than ##10^{-x}##, then we have obtained the sum correct to ##x## decimal places. Basically the condition boils down to this: $$S_{n+1} - S_n \ = \ t_{n + 1} \ < \ 10^{-x},$$ where ##t_{n + 1}## is the ##(n+1)##th term. The professor said that this is correct, and applicable to any series.

I stop calculating the series at ##t_{n + 1}## when the above condition is met. But is it possible that somewhere down the line, after adding many more terms, a correction suddenly pops up in the ##x##th decimal place? Say for some series the ##n##th term is ##<10^{-2}##, so, as per the question, I stop calculating the sum. But if I had calculated many more terms, I might have reached a state where the sum was ##2.33995223##, and after adding the next term (which is of the order of ##10^{-3}##), it becomes ##2.346995##. Is this possible? If yes, then the condition stated above is wrong, isn't it?
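Since the problem is anyway to be solved in Python, the proposed stopping rule might be sketched like this (a minimal sketch; the function name and structure are my own):

```python
import math

def sin_series(x, x_places):
    """Sum the sine series term by term, stopping when the next term's
    magnitude drops below 10**(-x_places) -- the stopping rule proposed
    above.  (Whether that guarantees x_places correct decimals is
    exactly the question of this thread.)"""
    threshold = 10.0 ** (-x_places)
    total = 0.0
    term = x                      # first term: x^1 / 1!
    k = 0
    while abs(term) >= threshold:
        total += term
        k += 1
        # next term: t_k = t_{k-1} * (-x^2) / ((2k)(2k+1))
        term *= -x * x / ((2 * k) * (2 * k + 1))
    return total

print(sin_series(10, 2), math.sin(10))
```

Note that for ##x = 10## the early terms grow before they shrink, so the loop necessarily runs past that hump before the condition can trigger.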
 
Basically you're asking whether, even if we know that the ##(n+1)##th term, at which we terminated the calculation, is smaller than some number, there can be a higher-order term in the series that violates this condition.

That can be a problem only if the series is divergent, I think. In that case, when you're calculating an asymptotic value represented by such a series (one that approaches the chosen value for a finite number of terms and then diverges from it), you need to be careful about the point at which you terminate.

For a convergent series, the difference ##S_{n+1} - S_{n}## between partial sums tends to zero as ##n## goes to infinity. Even if it is not monotonically decreasing, there exists some number ##n## after which your requirement is surely going to be satisfied. That is, there may exist a series (I think, at least) in which your requirement is fulfilled for ##n=5## and then violated for ##n=6##; but since the series is convergent, the difference between partial sums will definitely fall towards zero, so there must be some number, say ##n=10##, after which your requirement stops being violated by higher-order terms.
 
Wrichik Basu said:
I stop calculating the series at ##t_{n + 1}## when the above condition is met. But is it possible that somewhere down the line, after adding many more terms, a correction suddenly pops up in the ##x##th decimal place? ... If yes, then the condition put above is wrong, isn't it?

It depends on the series. I wish you hadn't used ##x## for an integer. I'll change this to ##m##.

For a power series the ##k##th term is ##\frac{f^{(k)}(x_0)}{k!} x^k## (with ##x## measured from the expansion point ##x_0##).

It depends on the kth derivatives being well-behaved. Most power series will have nicely bounded derivatives at ##x_0##.

Let's assume we can ignore the case where the derivatives get larger, i.e. take consecutive derivatives to be of comparable size. Then the ratio of the magnitudes of consecutive terms is approximately:

##\left|\frac{a_{k+1}}{a_k}\right| \approx \frac{x}{k+1}##

If ##k## is large enough that ##k + 1 > 10x##, then each subsequent term is at least 10 times smaller than the last, so the remaining terms can only add up to less than the previous decimal place. (Compare this with a decimal fraction.)

But, if ##k## is small, then this may not be the case.

It would be interesting to find a good counter-example to what your Prof said.
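For the sine series, where consecutive non-zero terms differ by two powers, the corresponding magnitude ratio is ##|t_{k+1}/t_k| = x^2/((2k+2)(2k+3))##. A quick numerical check at ##x = 10## (a throwaway sketch) shows it exceeds ##1## for small ##k##:

```python
# Magnitude ratio |t_{k+1} / t_k| = x^2 / ((2k+2)(2k+3)) for the sine
# series at x = 10: it exceeds 1 for small k, so the early terms grow
# before the factorial in the denominator takes over.
x = 10.0
ratios = [x * x / ((2 * k + 2) * (2 * k + 3)) for k in range(8)]
print([round(r, 2) for r in ratios])
# → [16.67, 5.0, 2.38, 1.39, 0.91, 0.64, 0.48, 0.37]
```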
 
PS To find a counterexample, you'll need to do something clever with the derivatives.

A trivial counterexample is where two consecutive derivatives are zero.

A better example would not make use of this, but would have two small derivatives followed by a larger one. That's what you need.
 
Here's a (still quite trivial) counterexample. Assume the Taylor series about ##0## is:

##f(x) = 1 + x + x^2 + x^3 + 50000x^4 \dots##

Evaluating this at ##x = 0.1##, gives the following partial sums:

##1, 1.1, 1.11, 1.111, 6.111 \dots##

After four terms, you might conclude that you have the value to ##1## or even ##2## decimal places. But, you don't. The next term, thanks to the large 4th derivative, blows this up.
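A quick check of those partial sums in Python (a throwaway sketch):

```python
# Partial sums of f(x) = 1 + x + x^2 + x^3 + 50000*x^4 at x = 0.1.
# The first differences shrink to 10^-3, yet the fifth term adds 5.
coeffs = [1, 1, 1, 1, 50000]
x = 0.1
total = 0.0
partial_sums = []
for k, c in enumerate(coeffs):
    total += c * x ** k
    partial_sums.append(round(total, 6))
print(partial_sums)   # [1.0, 1.1, 1.11, 1.111, 6.111]
```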

Technically any finite polynomial that begins with those terms is a counterexample. I can't immediately see how to construct that sort of thing from a standard (non-polynomial) function. But, that doesn't matter. You could always add a ##\sin x## to it if you want an infinite series:

##f(x) = \sin x + 1 + x + x^2 + x^3 + 50000x^4##

is a counterexample, if you insist on an infinite series.

Technically, you need to know an upper bound for your derivatives as well before you know for sure it's safe to stop calculating.
 
PeroK said:
Technically, you need to know an upper bound for your derivatives as well before you know for sure it's safe to stop calculating.
You're right. I just want to add that investigating the monotonicity of the sequence of partial sums gives a good estimate of where to stop for a desired precision. This is the Taylor series error estimation and analysis commonly discussed in calculus. For example, if we look at the derivatives of ##a^x## for large ##a##, the ##k##th derivative at a fixed point ##x_0## is ##(\ln a)^k a^{x_0}##; that is, every derivative is larger than the last one (possibly a lot larger). However, this growth is only geometric with ratio ##\ln a##, so it will eventually be overwhelmed by the factorial in the denominator. It is an example where the derivatives grow without bound, yet the series is still convergent (and, obviously, the smaller the ##x## at which we evaluate the series, the better the quality of the approximation).
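This can be checked numerically. In the sketch below (my own; ##a = 10^6## is chosen arbitrarily), the Taylor terms ##(\ln a)^k x^k / k!## of ##a^x## about ##0## grow at first and then collapse once the factorial wins:

```python
import math

# Taylor terms of a**x about x0 = 0 are (ln a)^k * x^k / k!.
# For a = 10^6 the derivatives (ln a)^k grow without bound, yet the
# factorial eventually dominates and the terms collapse.
a, x = 1e6, 1.0
log_a = math.log(a)                       # about 13.8
terms = [log_a ** k * x ** k / math.factorial(k) for k in range(60)]
peak = max(range(60), key=lambda k: terms[k])
print(peak, terms[peak], terms[-1], sum(terms))
```

The largest term occurs at ##k = 13##, near ##\ln a \approx 13.8##; by ##k = 59## the terms are around ##10^{-13}##, and the 60-term sum already reproduces ##a^x = 10^6##.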

@Wrichik Basu Also, just to be clear: the analysis I mentioned before is applicable to general asymptotic series, which don't have to converge, nor be power series. Taylor expansions of known functions are usually well behaved, so problems with estimation of the sort @PeroK and I mentioned usually won't arise. It's good to keep in mind how this works, though, so you can spot it if you're dealing with some irregular function.
 
In terms of very simple and general tools you can always reach for (i) geometric series and (ii) triangle inequality.

You should be aware of a simple if crude way of bounding the tail of the power series of the exponential function via the geometric series.

Then, for the cosine and sine series, apply the triangle inequality and re-use the exponential error bound above.
- - - - - -
Coming up with explicit, usable error bounds is hard work, so quite often people don't do it.
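As a concrete instance of the geometric-series trick (my own sketch, not from the thread): once ##n + 1 \ge 2|x|##, each term of the exponential series is at most half the previous one, so the tail beyond term ##n## is less than twice the first omitted term:

```python
import math

def exp_tail_bound(x, n):
    """Crude geometric bound on the exponential-series tail
    sum_{k > n} |x|^k / k!, valid once n + 1 >= 2*|x|: from there each
    term is at most half the previous one, so the tail is < 2*t_{n+1}."""
    assert n + 1 >= 2 * abs(x)
    return 2 * abs(x) ** (n + 1) / math.factorial(n + 1)

# By the triangle inequality, the sine and cosine series truncated at
# the same point have their errors bounded by the same quantity.
x, n = 10.0, 40
partial = sum(x ** k / math.factorial(k) for k in range(n + 1))
print(abs(math.exp(x) - partial), exp_tail_bound(x, n))
```

The actual truncation error is safely below the bound, and the bound itself is already below ##10^{-8}## for ##x = 10## at ##n = 40##.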
 
Of course, with real series we need to worry about whether they converge to the value of the function at the point, unlike with complex series. And we can consider the rate of convergence of the series: https://en.wikipedia.org/wiki/Rate_of_convergence
 