Calculating something correct up to ##x## decimal places

  • Thread starter Wrichik Basu
In summary, the thread discusses a problem, encountered in a course on summing series, of how to determine the accuracy of a partial sum. A proposed condition for stopping the calculation is discussed, and it is asked whether a correction can suddenly appear after adding more terms. The conclusion is that for most power series the derivatives are well-behaved and the condition holds true, but there may be cases where the derivatives grow fast enough to cause a "leap" in the value of the series.
  • #1
Wrichik Basu
Homework Statement
Calculate ##\sin 10## using the sine series, correct up to two decimal places.
Relevant Equations
The sine series
The above is one specific example of this type of problem that we often encounter in our course. The series can be, in general, anything like the log series, cosine series, etc. We are supposed to solve this problem in Python, but that doesn't matter here.

When this question was put forward for the first time in class, I proposed that if the difference between the sum up to ##(n+1)## terms and the sum up to ##n## terms is less than ##10^{-x}##, then we have obtained the sum correct to ##x## decimal places. Basically the condition boils down to this: $$S_{n+1} - S_n \ = \ t_{n + 1} \ < \ 10^{-x},$$ where ##t_{n + 1}## is the ##(n+1)##th term. The professor said that this is correct, and applicable to any series.

I stop calculating the series at ##t_{n + 1}## when the above condition is met. But is it possible that somewhere down the line, after adding many more terms, a correction suddenly pops up in the ##x##th decimal place? Say for some series the ##n##th term is ##< 10^{-2}##, and as per the condition, I stop summing there. But if I had calculated many more terms, I might have reached a state where the sum is ##2.33995223##, and after adding the next term (which is of the order of ##10^{-3}##), it becomes ##2.346995##. Is this possible? If yes, then the condition put above is wrong, isn't it?
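For reference, the stopping criterion I proposed can be sketched in Python roughly like this (the function name and structure are just illustrative, not from the course):

```python
import math

def sin_partial(x, places):
    """Sum the sine series term by term, stopping once the magnitude
    of the next term drops below 10**(-places).  A sketch of the
    criterion above; the name and structure are illustrative."""
    tol = 10.0 ** (-places)
    term = x        # first term: x**1 / 1!
    total = 0.0
    k = 0           # term index: t_k = (-1)**k * x**(2k+1) / (2k+1)!
    while abs(term) >= tol:
        total += term
        k += 1
        # recurrence: t_k = -t_{k-1} * x**2 / ((2k) * (2k+1))
        term *= -x * x / ((2 * k) * (2 * k + 1))
    return total

# sin(10) is about -0.5440; once the terms are decreasing, the
# truncation error of this alternating series is bounded by the
# first omitted term, so the result is within 10**(-places)
print(sin_partial(10, 2), math.sin(10))
```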
 
  • #2
Basically you're asking whether, even if the ##(n+1)##th term at which we terminated the calculation is smaller than some tolerance, a later term in the series can still violate that tolerance.

That can be a problem only if the series is divergent, I think. In that case, when you're calculating an asymptotic value represented by such a series (a series approaching the chosen value for a finite number of terms and then diverging from it), you need to be careful about the point at which you terminate.

For a convergent series, the difference ##S_{n+1} - S_{n}## of partial sums tends to zero as ##n## goes to infinity. Even if it is not monotonically decreasing, there exists some number ##n## after which your requirement is surely going to be satisfied. That is, there may exist a series (I think, at least) in which your requirement is, for example, fulfilled for ##n=5## and then violated for ##n=6##; but since the series is convergent, the difference between partial sums must fall towards zero, so there must be a number, say ##n=10##, after which your requirement will no longer be violated by terms of higher order.
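As a toy illustration of that last point (my own construction, purely for the sake of example), here is a convergent series whose terms are not monotonically decreasing:

```python
# A convergent series whose terms are NOT monotonically decreasing:
# t_n = (2 + (-1)**n) / 2**n alternates between 1/2**n and 3/2**n.
# This is an illustrative toy construction, not from any textbook.
terms = [(2 + (-1) ** n) / 2 ** n for n in range(1, 15)]

tol = 0.04
# The tolerance is met at n = 5 (t_5 = 1/32, about 0.031), violated
# again at n = 6 (t_6 = 3/64, about 0.047), but holds for every
# term after that, since the terms still tend to zero.
met = [t < tol for t in terms]
print(terms[4], terms[5], met[4:7])
```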
 
  • #3
Wrichik Basu said:
Is it possible that somewhere down the line, after adding many more terms, a correction suddenly pops up in the ##x##th decimal place? [...] If yes, then the condition put above is wrong, isn't it?

It depends on the series. I wish you hadn't used ##x## for an integer. I'll change this to ##m##.

For a power series about ##x_0##, the ##k##th term is ##\frac{f^{(k)}(x_0)}{k!} (x - x_0)^k##.

It depends on the kth derivatives being well-behaved. Most power series will have nicely bounded derivatives at ##x_0##.

Let's assume that we can ignore the case where the derivatives get larger, so the terms behave like ##a_k = \frac{x^k}{k!}## (up to a bounded factor). Then:

##\frac{a_{k+1}}{a_k} = \frac{x}{k+1}##

If ##k## is large enough that ##k + 1 > 10x##, then each subsequent term is at least 10 times smaller than the last, so the remaining terms can only add up to less than the previous decimal place. (Compare this with a decimal fraction.)

But if ##k## is small, this may not be the case.

It would be interesting to find a good counter-example to what your Prof said.
 
  • #4
PS To find a counterexample, you'll need to do something clever with the derivatives.

A trivial counterexample is where two consecutive derivatives are zero.

A better example would not rely on this, but have two small derivatives followed by a larger one. That's what you need.
 
  • #5
Here's a (still quite trivial) counterexample. Assume the Taylor series about ##0## is:

##f(x) = 1 + x + x^2 + x^3 + 50000x^4 \dots##

Evaluating this at ##x = 0.1##, gives the following partial sums:

##1, 1.1, 1.11, 1.111, 6.111 \dots##

After four terms, you might conclude that you have the value to ##1## or even ##2## decimal places. But, you don't. The next term, thanks to the large 4th derivative, blows this up.

Technically any finite polynomial that begins with those terms is a counterexample. I can't immediately see how to construct that sort of thing from a standard (non-polynomial) function. But, that doesn't matter. You could always add a ##\sin x## to it if you want an infinite series:

##f(x) = \sin x + 1 + x + x^2 + x^3 + 50000x^4##

is a counterexample, if you insist on an infinite series.

Technically, you need to know an upper bound for your derivatives as well before you know for sure it's safe to stop calculating.
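For what it's worth, a throwaway numerical check of the partial sums above:

```python
# Partial sums of f(x) = 1 + x + x**2 + x**3 + 50000*x**4 at x = 0.1,
# verifying the jump described in the post above.
coeffs = [1, 1, 1, 1, 50000]
x = 0.1
partial_sums, s = [], 0.0
for k, a in enumerate(coeffs):
    s += a * x ** k
    partial_sums.append(round(s, 4))
print(partial_sums)  # [1.0, 1.1, 1.11, 1.111, 6.111]
```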
 
  • #6
PeroK said:
Here's a (still quite trivial) counterexample. [...] Technically, you need to know an upper bound for your derivatives as well before you know for sure it's safe to stop calculating.
You're right. I just want to add that investigating the monotonicity of the sequence of partial sums gives a good estimate of where to stop for a desired precision; this is the Taylor-series error estimation and analysis commonly discussed in calculus. For example, if we look at the derivatives of ##a^x## for large ##a##, the ##k##th derivative at a fixed point ##x_0## is ##(\ln a)^k a^{x_0}##; that is, every derivative is larger than the last one (possibly a lot larger). However, this growth is only geometric in ##k## (with ratio ##\ln a##), so it is eventually overwhelmed by the factorial in the denominator. It is an example where the derivatives are unbounded but the series is still convergent (and of course, the smaller the ##x## at which we evaluate the series, the better the quality of the approximation).
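A quick sketch of that claim, with illustrative values only:

```python
import math

# The k-th Taylor coefficient of a**x about 0 is (ln a)**k / k!.
# Even for large a, the factorial eventually wins over the
# geometric growth of the numerator.
a = 1000.0
L = math.log(a)                      # ln(1000), about 6.91
coeffs = [L ** k / math.factorial(k) for k in range(31)]

# The coefficients grow while k < ln(a), peak near k ~ ln(a),
# then fall off rapidly towards zero.
print(max(coeffs), coeffs[30])
```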

@Wrichik Basu Also, just to be clear: the analysis I mentioned before applies to general asymptotic series, which need not converge, nor be power series. Taylor expansions of known functions are usually well behaved, so problems with estimation of the sort @PeroK and I mentioned won't usually arise. It's good to keep in mind how this works, though, so you can spot it if you're dealing with some irregular function.
 
  • #7
In terms of very simple and general tools, you can always reach for (i) the geometric series and (ii) the triangle inequality.

You should be aware of a simple, if crude, way of bounding the tail of the power series of the exponential function via the geometric series.

Then, for the cosine and sine series, apply the triangle inequality and re-use the exponential error bound above.
- - - - - -
Coming up with explicit, usable error bounds is hard work, so quite often people don't do them.
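As a sketch of the exponential tail bound mentioned above (my own formulation of the geometric-series estimate; it is valid only when ##n + 1 > x##):

```python
import math

def exp_tail_bound(x, n):
    """Geometric bound on the tail sum_{k >= n} x**k / k! of the
    exponential series: the j-th tail term is at most
    (x**n / n!) * (x / (n + 1))**j, so summing the geometric series
    gives the bound below, valid for n + 1 > x."""
    assert n + 1 > x, "bound only valid for n + 1 > x"
    return (x ** n / math.factorial(n)) / (1 - x / (n + 1))

x, n = 2.0, 10
partial = sum(x ** k / math.factorial(k) for k in range(n))
true_tail = math.exp(x) - partial
print(true_tail, exp_tail_bound(x, n))  # the bound dominates the true tail
```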
 
  • #8
Of course, with real series we need to worry about whether they converge to the value of the function at the point, unlike with complex series. And we can consider the rate of convergence of the series: https://en.wikipedia.org/wiki/Rate_of_convergence
 

FAQ: Calculating something correct up to ##x## decimal places

1. How do you calculate something correct up to a certain number of decimal places?

To calculate something up to a certain number of decimal places, you must first determine the level of accuracy needed. Then, you can use a rounding method, such as rounding half to even or truncation, to limit the number of decimal places in your calculation result.
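For instance, in Python (a minimal sketch; the `decimal` module is used here to sidestep binary floating-point surprises in half-way cases):

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_DOWN

# Using decimal avoids binary floating-point surprises in
# half-way cases like 2.675.
x = Decimal("2.675")
half_even = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
truncated = x.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
print(half_even, truncated)  # 2.68 2.67
```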

2. What is the importance of calculating something up to a specific number of decimal places?

Calculating something up to a specific number of decimal places allows for more precise and accurate results. This is particularly important in scientific and mathematical calculations, where even small errors can have a significant impact.

3. Can you give an example of calculating something up to a certain number of decimal places?

For example, the fraction 22/7 ≈ 3.1429 agrees with pi (π ≈ 3.1415927) only up to two decimal places, so it is not correct up to 4. A better approximation such as 355/113 ≈ 3.1415929 is correct up to 4 (in fact 6) decimal places.

4. How do you know when to stop calculating decimal places?

The level of accuracy needed for a calculation usually depends on the purpose or application of the result. In some cases, a few decimal places may be enough, while in others, more decimal places may be required. It is important to consider the significance and precision needed for the specific calculation.

5. What are some common rounding methods used in calculating up to a certain number of decimal places?

Some common rounding methods include rounding half to even, rounding half away from zero, and truncation. These methods are used to determine which digit to round up or down in order to achieve the desired number of decimal places in the final result.
