Undergrad Infinite Series of Infinite Series

  • Thread starter: Drakkith
  • Tags: series, convergence
SUMMARY

This discussion centers on the evaluation of infinite series where each term is itself an infinite series, specifically examining the series defined by ##a_n = (\frac{1}{2})^n + (\frac{1}{4})^n + (\frac{1}{8})^n + \cdots##. The series converges, and the resulting double sum, computed with Mathematica, evaluates to the Erdős–Borwein constant, approximately 1.6067. The discussion highlights the importance of careful manipulation of summations to avoid issues such as "mass vanishing at infinity," which can occur when reordering sums or exchanging a sum with an integral. The Erdős–Borwein constant also appears in the average-case analysis of the heapsort algorithm.

PREREQUISITES
  • Understanding of infinite series and convergence criteria
  • Familiarity with the q-Polygamma Function, ##\psi_{q}^{n}(z)##
  • Proficiency in using Mathematica for mathematical computations
  • Knowledge of the Erdős–Borwein constant and its applications
NEXT STEPS
  • Research the properties and applications of the Erdős–Borwein constant
  • Learn about the q-Polygamma Function and its significance in series evaluation
  • Explore the implications of "mass vanishing at infinity" in series manipulation
  • Study the average case analysis of sorting algorithms, particularly heapsort
USEFUL FOR

Mathematicians, computer scientists, and anyone interested in advanced series evaluation and its applications in algorithm analysis.

Drakkith (Mentor)
TL;DR Summary: How to solve an infinite series where each term is itself an infinite series?
I had a random thought about infinite series the other day while watching a math video. Let's say we have an infinite series where each term in the series is itself another infinite series. How would one go about finding the sum?

For example, let's say we have the series ##a_1+a_2+a_3+\cdots## where ##a_n = (\frac{1}{2})^n+(\frac{1}{4})^n+(\frac{1}{8})^n+\cdots##
I assume the series converges, since each term ##a_n## is the reciprocal-powers-of-two series with every term raised to the ##n##th power (without the leading 1/1 term, which would make it diverge), so it should fall off quite quickly. However, it's been a while since I did any math work involving series, so I'm a bit unsure. Thoughts? Is solving something like this fundamentally any different from solving a 'plain' series?
 
Sometimes one can reorganize the series by grabbing the first term of each sub-series and grouping them as the first term of a new series.
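As a quick numerical sanity check on that regrouping idea (a sketch in plain Python; the truncation level and variable names are my own choices), summing each sub-series first and then adding them up gives the same total as regrouping the terms ##2^{-kn}## by position, as expected for a double series of nonnegative terms:

Python:
# Truncate both indices at M (arbitrary); the terms decay fast enough that this
# is already accurate to roughly double precision.
M = 60

# Sum each sub-series a_n = sum_k (1/2^k)^n first, then add the a_n together:
by_subseries = sum(sum(0.5**(k * n) for k in range(1, M)) for n in range(1, M))

# Regroup: collect the k-th term of every sub-series into a new series, then add those:
by_position = sum(sum(0.5**(k * n) for n in range(1, M)) for k in range(1, M))

print(by_subseries, by_position)  # both print ~1.6067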
 
Drakkith said:
For example, let's say we have the series ##a_1+a_2+a_3+\cdots## where ##a_n = (\frac{1}{2})^n+(\frac{1}{4})^n+(\frac{1}{8})^n+\cdots##
This is a double sum (akin to a double integral) that we may call ##S##. The inner sum is trivial and Mathematica can compute the remaining sum:$$S\equiv\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}2^{-mn}=\sum_{m=1}^{\infty}\left(\frac{1}{2^{m}-1}\right)=1-\frac{\psi_{1/2}^{0}\left(1\right)}{\ln2}$$where ##\psi_{q}^{n}\left(z\right)## is the q-Polygamma Function.
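For a quick numerical cross-check of that closed form (a sketch in plain Python; nothing here relies on Mathematica): the inner geometric series collapses to ##\frac{1}{2^m-1}##, and the remaining sum over ##m## converges geometrically, so a modest truncation suffices.

Python:
# Inner sum over n is geometric: sum_{n>=1} 2^(-m n) = 2^(-m) / (1 - 2^(-m)) = 1/(2^m - 1).
# The outer sum then converges like 2^(-m), so ~60 terms reach double precision.
S = sum(1.0 / (2.0**m - 1.0) for m in range(1, 60))
print(S)  # ~1.6066951524 (identified below in the thread as the Erdős–Borwein constant)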
 
Excellent! I would have never thought about the series being like a double integral! Thanks so much!
 
renormalize said:
Mathematica can compute the remaining sum...
But don't leave us in suspense: the result is the Erdős–Borwein constant (approximately 1.6) and, remarkably, is also equal to $$ \sum_{n=1}^{\infty} \frac{\sigma_0(n)}{2^n} $$ where ## \sigma_0(n) ## is the number of divisors of ## n ##.
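The two forms are easy to check against each other numerically (a sketch in plain Python; `sigma0` is just a brute-force divisor counter introduced for illustration). The identity itself comes from collecting the terms of ##\sum_m\sum_n 2^{-mn}## by the value of ##N=mn##: each ##N## is hit exactly ##\sigma_0(N)## times.

Python:
# Brute-force divisor count; fine for the small n needed here.
def sigma0(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# Divisor-weighted form: terms are at most n/2^n, so ~60 terms reach double precision.
divisor_form = sum(sigma0(n) / 2.0**n for n in range(1, 60))

# Closed inner-sum form from the earlier post:
geometric_form = sum(1.0 / (2.0**m - 1.0) for m in range(1, 60))

print(divisor_form, geometric_form)  # both ~1.6066951524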
 
Drakkith said:
TL;DR Summary: How to solve an infinite series where each term is itself an infinite series?

Is solving something like this fundamentally any different from solving a 'plain' series?
You only have to be careful if you reorder the summations, e.g. swap the two sums, or replace one sum by an integral and then exchange sum and integration. In such cases a phenomenon called "mass vanishing at infinity" can occur. For example, consider the telescoping sum
$$
\sum_{k=0}^n\underbrace{
\left(\chi_{[k,k+1]}-\chi_{[k+1,k+2]}\right)
}_{=f_k}=\chi_{[0,1]}-\chi_{[n+1,n+2]}\stackrel{n\to \infty }{\longrightarrow }\chi_{[0,1]}
$$
then we have
$$
0=\sum_{k=0}^\infty \left(\int_\mathbb{R}f_k(x)\,dx\right)\neq \int_\mathbb{R} \left(\sum_{k=0}^\infty f_k(x)\right)\,dx=1
$$
This isn't a problem in your example. I just wanted to mention that it can be tricky if two infinite sums are involved.
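If it helps to see that concretely, here is a rough numerical illustration (a sketch in plain Python; the indicator and midpoint-rule helpers are my own): each ##f_k## integrates to zero, while the limit of the partial sums keeps unit mass on ##[0,1]##.

Python:
# Each f_k = chi_[k,k+1] - chi_[k+1,k+2] integrates to 0, so the sum of the integrals is 0.
# The partial sums telescope to chi_[0,1] - chi_[n+1,n+2], whose pointwise limit is chi_[0,1],
# and that limit integrates to 1: the negative "mass" has escaped to infinity.

def indicator(a, b):
    """Characteristic function of the closed interval [a, b]."""
    return lambda x: 1.0 if a <= x <= b else 0.0

def f(k):
    chi1, chi2 = indicator(k, k + 1), indicator(k + 1, k + 2)
    return lambda x: chi1(x) - chi2(x)

def integral(g, lo, hi, steps=20000):
    """Crude midpoint rule; good enough for step functions on a short interval."""
    h = (hi - lo) / steps
    return sum(g(lo + (i + 0.5) * h) for i in range(steps)) * h

n = 50
print(sum(integral(f(k), k, k + 2) for k in range(n + 1)))  # ~0.0  (sum of the integrals)
print(integral(indicator(0, 1), 0, 2))                      # ~1.0  (integral of the limit)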
 
pbuk said:
But don't leave us in suspense: the result is the Erdős–Borwein constant (approximately 1.6) and, remarkably, is also equal to $$ \sum_{n=1}^{\infty} \frac{\sigma_0(n)}{2^n} $$ where ## \sigma_0(n) ## is the number of divisors of ## n ##.
And here is the link to the Wikipedia article:
https://en.wikipedia.org/wiki/Erdős–Borwein_constant
I found it interesting that it actually occurs somewhere:
The Erdős–Borwein constant comes up in the average case analysis of the heapsort algorithm, where it controls the constant factor in the running time for converting an unsorted array of items into a heap.
 
Well, I'm glad I picked an interesting example!
 
