Convergence of an infinite series

In summary: A series can diverge even though its terms tend to zero; the harmonic series ##\sum 1/n## is the standard counterexample. More generally, if ##a_n## is any sequence that tends to infinity while ##a_{n+1}-a_n## tends to zero (for example ##a_n=\sqrt{n}## or ##a_n=\log n##), then ##\sum_{n=1}^\infty (a_{n+1}-a_n)## diverges even though its terms tend to zero. For geometric series, the partial-sum formula ##S_n=\sum_{k=0}^n r^k=\frac{1-r^{n+1}}{1-r}## shows exactly when convergence occurs.
  • #1
Jazzyrohan
For a series to be convergent, it must have a finite sum, i.e., a limiting value of the partial sums. As the sum of ##n## terms approaches a limit, the ##n##th term must be getting smaller and tending to 0. But why is the converse not true? Shouldn't the sum approach a finite value if the ##n##th term of the series tends to 0?

This is in relation to the preliminary test for convergence of infinite series.
 
  • #2
I believe the sum of ## \sum\limits_{n=1}^{+\infty} \frac{1}{n} ## does not converge. This is a good example of why the converse is not true.
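A quick numeric illustration of this (a sketch; the function name `harmonic` is my own): the partial sums of ##\sum 1/n## never settle down, they just grow very slowly.

```python
# Partial sums of the harmonic series keep growing without bound,
# even though the individual terms 1/n tend to zero.

def harmonic(n):
    """Return the n-th harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 100, 10_000, 1_000_000):
    print(n, harmonic(n))  # grows roughly like ln(n), with no upper bound
```

Doubling the number of terms always adds at least another fixed amount, which is the heart of the divergence argument given later in the thread.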
 
  • #3
I guess the standard example here is the harmonic sum: ##\sum_{n\in\mathbb{N}}\dfrac{1}{n}## does not converge.
 
  • #4
fresh_42 said:
I guess the standard example here is the harmonic sum: ##\sum_{n\in\mathbb{N}}\dfrac{1}{n}## does not converge.
I am aware of the example, but what is the reason its sum is infinite? Since its terms are getting smaller, shouldn't the sum approach a finite value, as only smaller and smaller values are being added?
 
  • #5
Jazzyrohan said:
I am aware of the example, but what is the reason its sum is infinite? Since its terms are getting smaller, shouldn't the sum approach a finite value, as only smaller and smaller values are being added?
Obviously not. The reason lies in its proof. Intuition often fails with notions like infinity: you add less and less, but you still add something.
 
  • #6
fresh_42 said:
Obviously not. The reason lies in its proof. Intuition often fails with notions like infinity: you add less and less, but you still add something.
Is that not true for every series (even convergent ones)? We keep adding something, even if it's less, but still the convergent ones approach a finite value.

Is it related to the rate at which the terms get smaller?
 
  • #7
Sure, it is true for convergent series, too. What do you want to know? ##\sum \frac{1}{n^2}## converges, ##\sum \frac{1}{n}## does not. So?
 
  • #8
It's just not that simple. Clearly the terms have to go to 0 or the series will not converge. But that is not enough. The terms have to go to zero fast enough. A simple example to consider is:
##\Sigma a_n##, where
##a_1 = 1##
##a_2 =a_3 = 1/2##
##a_4 = a_5 = a_6 = 1/3##
...
And if you don't like that so many are equal, you can have them decrease by tiny amounts.
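FactChecker's example can be checked numerically (a sketch; the function name is my own): the series goes one 1, two 1/2's, three 1/3's, and so on, so each block contributes exactly 1 and the partial sums pass every bound, even though the terms tend to 0.

```python
# FactChecker's series: a_1 = 1, a_2 = a_3 = 1/2, a_4 = a_5 = a_6 = 1/3, ...
# Block m consists of m copies of 1/m, so every block adds 1 to the sum.

def partial_sum(num_blocks):
    """Sum the series through the first `num_blocks` blocks, term by term."""
    total = 0.0
    for m in range(1, num_blocks + 1):
        for _ in range(m):       # m copies of the term 1/m
            total += 1.0 / m     # each block adds (up to rounding) exactly 1
    return total

print(partial_sum(50))  # ≈ 50: fifty blocks, one unit each
```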
 
  • #9
FactChecker said:
It's just not that simple. Clearly the terms have to go to 0 or the series will not converge. But that is not enough. The terms have to go to zero fast enough. A simple example to consider is:
##\Sigma a_n##, where
##a_1 = 1##
##a_2 =a_3 = 1/2##
##a_4 = a_5 = a_6 = 1/3##
...
And if you don't like that so many are equal, you can have them decrease by tiny amounts.
That's what I was asking earlier: the rate of decrease must be fast enough. For example, the p-series converges for p>1, since the terms decrease fast enough for the partial sums to approach a limit, but for p≤1 the terms do not decrease fast enough. The change occurs at p=1, where the terms just lose the decay rate required for convergence.
Am I thinking right?
 
  • #10
I have to assume that "p series" means the geometric series ##\Sigma \frac1 {p^n}##. Then you are correct: it converges for ##p>1## and diverges for ##p<1##.

It can be shown using the fact that the partial sum of a geometric series is ##S_n=\sum_{k=0}^n r^k=\frac{1-r^{n+1}}{1-r}##.
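The closed form is easy to check numerically (a sketch; the function name is my own). For ##|r|<1## the term ##r^{n+1}## dies off, so the partial sums approach ##1/(1-r)##.

```python
# Check the geometric partial-sum formula S_n = (1 - r^(n+1)) / (1 - r)
# against a direct term-by-term sum, for r != 1.

def geometric_partial_sum(r, n):
    """Closed form for sum_{k=0}^{n} r^k."""
    return (1 - r ** (n + 1)) / (1 - r)

r, n = 0.5, 20
direct = sum(r ** k for k in range(n + 1))
print(direct, geometric_partial_sum(r, n))  # the two agree
# For |r| < 1, r^(n+1) -> 0 as n grows, so S_n -> 1/(1 - r); here 1/(1 - 0.5) = 2.
print(geometric_partial_sum(r, 500))
```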
 
  • #11
Here's maybe a more intuitive way of looking at it: let [itex]a_n[/itex] be any sequence that tends to infinity, but for which [itex]a_{n+1}-a_n[/itex] tends to zero (for example, [itex]a_n=\sqrt{n}[/itex] or [itex]a_n=\log(n) [/itex] work). Then [itex]\sum_{n=1}^\infty (a_{n+1}-a_n)[/itex] diverges to infinity but its terms tend to zero.
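This series telescopes, which makes the divergence transparent: the partial sum through ##N## terms is ##a_{N+1}-a_1##, which is unbounded whenever ##a_n\to\infty##. A small numeric sketch (function name is my own) with ##a_n=\sqrt{n}##:

```python
import math

# sum_{n=1}^{N} (a_{n+1} - a_n) telescopes to a_{N+1} - a_1.
# With a_n = sqrt(n), the terms tend to 0 but the partial sums are
# sqrt(N+1) - 1, which grows without bound.

def telescoping_partial_sum(N, a=math.sqrt):
    """Sum the first N terms of sum (a(n+1) - a(n))."""
    return sum(a(n + 1) - a(n) for n in range(1, N + 1))

N = 10_000
print(telescoping_partial_sum(N), math.sqrt(N + 1) - 1)  # agree; unbounded in N
print(math.sqrt(N + 1) - math.sqrt(N))                   # the N-th term, already tiny
```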
 
  • #12
Jazzyrohan said:
I am aware of the example but what is the reason of its sum being infinite?Since it's terms are getting smaller,shouldn't the sum approach a finite value as only smaller and smaller values are getting added?

For any ##n\in\mathbb{N}##, no matter how large, you can find a number ##k\in\mathbb{N}## such that ##\sum\limits_{m=0}^{k}\frac{1}{n+m} > 1##. From this it should be easy to see that the harmonic series approaches infinity.
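This can be made concrete (a sketch; the function name is my own): starting at any index ##n##, a finite run of consecutive terms eventually adds up past 1, so the harmonic series contains infinitely many disjoint chunks each contributing more than 1.

```python
# For any starting index n, find the smallest k such that
# 1/n + 1/(n+1) + ... + 1/(n+k) exceeds 1.

def block_length(n):
    """Smallest k with sum_{m=0}^{k} 1/(n+m) > 1."""
    total, k = 0.0, 0
    while True:
        total += 1.0 / (n + k)
        if total > 1:
            return k
        k += 1

for n in (1, 10, 1000):
    print(n, block_length(n))  # a finite k always exists, roughly (e-1)*n for large n
```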
 
  • #13
look at the terms of the harmonic series and group them in sets of 1, 2, 4, 8, 16, ... terms, i.e.

1/2 + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) +...

you can see that the terms in each grouping add up to at least 1/2.

i.e. 1/3 + 1/4 ≥ 1/4 + 1/4 = 1/2, and (1/5 + 1/6 + 1/7 + 1/8) ≥ (1/8 + 1/8 + 1/8 + 1/8) = 1/2, ...etc

since there are infinitely many of these groupings, the full sum is infinite.
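The grouping argument above checks out numerically (a sketch; the function name is my own): group ##g## runs over ##n=2^g+1,\dots,2^{g+1}## and always sums to at least 1/2.

```python
# The g-th group of the harmonic series: terms 1/n for n = 2^g + 1 .. 2^(g+1).
# It has 2^g terms, each at least 1/2^(g+1), so it sums to at least 1/2.

def group_sum(g):
    """Sum of the g-th dyadic group of the harmonic series."""
    return sum(1.0 / n for n in range(2 ** g + 1, 2 ** (g + 1) + 1))

for g in range(6):
    print(g, group_sum(g))  # every value is >= 0.5
```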
 
  • #14
On another note: The Riemann integral [itex] \int_{t=1}^{N}\frac{1}{t}dt=\ln(N)[/itex] can be approximated pretty well by the series [itex]\sum_{n=1}^{N-1}\frac{1}{n} [/itex]. Thus [itex]\sum_{n=1}^{N-1}\frac{1}{n}\approx \ln(N) [/itex]. It can be proved that [itex]\lim_{N\rightarrow \infty}(\sum_{n=1}^{N-1}\frac{1}{n} - \ln(N))=\gamma [/itex] where γ (the Euler-Mascheroni constant) ≈ 0.57721566490153286060651209008240243104215933593992... (https://en.wikipedia.org/wiki/Euler–Mascheroni_constant).
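This limit is easy to watch converge numerically (a sketch; the function name is my own):

```python
import math

# H_{N-1} - ln(N) tends to the Euler-Mascheroni constant gamma ≈ 0.5772156649...

def gamma_estimate(N):
    """Return sum_{n=1}^{N-1} 1/n - ln(N)."""
    return sum(1.0 / n for n in range(1, N)) - math.log(N)

for N in (10, 1000, 100_000):
    print(N, gamma_estimate(N))  # approaches gamma as N grows
```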
 
  • #15
indeed since the function 1/x is decreasing, it follows that the integral of 1/x from n-1 to n is less than 1/(n-1) and greater than 1/n. Hence the integral from 1 to n, i.e. ln(n), is less than 1 + 1/2 + 1/3 + ... + 1/(n-1) and greater than 1/2 + 1/3 + ... + 1/n. Thus since the integral approaches +infinity as n does, so does the series. And the difference 1 + 1/2 + ... + 1/n - ln(n) is increasing and always ≤ 1 (just look at the graphs of these functions). Hence this difference converges to some limit less than 1, and that limit is the definition of the Euler constant. It was given as a homework problem in my first year calculus class in 1960 to decide whether this limit is rational, a question I learned was (and still is) an unsolved problem. Of course it was remarked that "undying fame" awaited the solver of that homework problem.
 
  • #16
I did give this in my thread on summing divergent series, but there is a method that can sometimes be used to tell if a series is divergent and tell us a bit more about the divergence - Ramanujan Summation. It's based on the Euler-Maclaurin formula; a derivation that is short, though not what I would call rigorous, follows. Define the linear operator Ef(x) = f(x+1). Define Df(x) = f'(x). Also f(x+1) = f(x) + f'(x) + f''(x)/2! + f'''(x)/3! ... (the Taylor expansion), or Ef(x) = f(x) + Df(x) + D^2f(x)/2! + D^3f(x)/3! ... = e^D f(x). We also need the Bernoulli numbers, which are defined by the expansion x/(e^x - 1) = ∑B(k)*x^k/k! - you can look up the values with a simple internet search.

f(0) + f(1) + f(2) + ... + f(n-1) = (1 + E + E^2 + ... + E^(n-1))f(0) = ((E^n - 1)/(E - 1))f(0) = (E^n - 1)*(1/(e^D - 1))f(0) = D^(-1)*(D/(e^D - 1))f(x)|0 to n = D^(-1)f(x)|0 to n + ∑B(k)*D^(k-1)f(x)/k!|0 to n = (0 to n)∫f(x) + ∑B(k)*D^(k-1)f(n)/k! - ∑B(k)*D^(k-1)f(0)/k!, where the sums now run from k = 1. Let n → ∞; most of the time ∑B(k)*D^(k-1)f(n)/k! → 0, so we will assume that - it is certainly the case for most convergent series since, usually, every derivative of f tends to 0. So you end up with f(0) + f(1) + f(2) + f(3) ... = (0 to ∞)∫f(x) - ∑B(k)*D^(k-1)f(0)/k!. Notice that -∑B(k)*D^(k-1)f(0)/k! does not depend on n; it is called the Ramanujan sum (again, the sum is from k = 1). We would like the Ramanujan sum of a convergent series to agree with its ordinary sum. This is done by defining the sum of the series as ∫f(x) + C, where C is the Ramanujan sum: if the integral is finite the sum is ∫f(x) + C, and if it is infinite the sum is defined as C.

It makes mincemeat of summing the harmonic series. 1/n → 0 as n → ∞, and the same holds for the derivatives, so that part is satisfied - i.e. you can neglect the middle bit and get Σ1/n = (0 to ∞)∫dx/(x+1) + R, where R (the Ramanujan sum) is the Euler-Mascheroni constant. After the change of variable x' = x + 1 you see immediately this is (1 to ∞)∫dx/x + R, i.e. ln(∞) + R.
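Truncating the Euler-Maclaurin machinery sketched above at the first few Bernoulli terms gives the standard asymptotic H_n ≈ ln(n) + γ + 1/(2n) - 1/(12n²), which can be checked numerically (a sketch; function names are my own):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def H(n):
    """n-th harmonic number, summed directly."""
    return sum(1.0 / k for k in range(1, n + 1))

def H_euler_maclaurin(n):
    """First few terms of the Euler-Maclaurin expansion of H_n."""
    return math.log(n) + GAMMA + 1 / (2 * n) - 1 / (12 * n ** 2)

n = 100
print(H(n), H_euler_maclaurin(n))  # agree to roughly O(1/n^4)
```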

You get a deeper appreciation of what's going on if you use Ramanujan summation on the zeta function ζ(s):
https://en.wikipedia.org/wiki/Riemann_zeta_function

If s > 1 it is convergent (integral test) and you get ζ(s) = (0 to ∞)∫(x+1)^(-s) dx + R, where R is the Ramanujan sum. A little work with the integral shows this is 1/(s-1) + R, true for any s > 1. So one gets ζ(s) - 1/(s-1) = R. Now the trick: R is holomorphic in the complex plane, so it can be extended to the entire plane. So we have ζ(s) = 1/(s-1) + R, valid for the entire plane except for a non-removable singularity at s = 1, i.e. the harmonic series. By analytic continuation, do things like 1+1+1... become finite? I will leave that for you to ponder - there is an ∞ - ∞ hidden deep in there. There is no trick I am aware of to get rid of that singularity, but for some divergent series you can use tricks (a bit more of that later) to get sums for things like 1 + 1 + 1 + 1..., which is obviously not summable - or is it? See later.

There is another function, called the eta function, defined by η(s) = 1 - 1/2^s + 1/3^s - 1/4^s ... (note that by the alternating series test this is convergent for s > 0). Now one can easily derive a simple relation between the two:
https://proofwiki.org/wiki/Riemann_Zeta_Function_in_terms_of_Dirichlet_Eta_Function

We have ζ(s) = 1/(1 - 2^(1-s)) * η(s), which makes ζ(s) convergent for s > 0 except for s = 1. What happened? Well, in the derivation you started with ζ(s) - η(s), so divergences may cancel and you are left with a finite number. Of course that finite number must be the number from analytic continuation - so really you have a hidden infinity cancellation - somehow.
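For s > 1, where both series converge in the ordinary sense, the relation can be sanity-checked with direct partial sums (a sketch; function names and the truncation level are my own):

```python
# Check zeta(s) = eta(s) / (1 - 2^(1-s)) numerically for s = 2,
# where zeta(2) = pi^2/6 ≈ 1.6449340668...

def zeta_partial(s, terms=200_000):
    """Partial sum of sum 1/n^s."""
    return sum(1.0 / n ** s for n in range(1, terms + 1))

def eta_partial(s, terms=200_000):
    """Partial sum of the alternating series sum (-1)^(n+1)/n^s."""
    return sum((-1.0) ** (n + 1) / n ** s for n in range(1, terms + 1))

s = 2.0
lhs = zeta_partial(s)
rhs = eta_partial(s) / (1 - 2 ** (1 - s))
print(lhs, rhs)  # both close to pi^2/6; the alternating side converges much faster
```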

This is very easily seen in 1 + 1 + 1 ... It is obviously ∞. But wait: using the above equation at s = 0 you have 1 + 1 + 1 ... = (1 - 1 + 1 - 1...)/(-1)

Now 1 - 1 + 1 - 1... is called Grandi's series and is in fact 1/2. Some simply can't see this, so I will explain it very carefully. 1 - x + x^2 - x^3... is convergent for |x| < 1 (ratio test) and equals 1/(1+x). It is not convergent for x = 1. But how about x = 1 - 1/n? Then it is convergent for all n. If n is very large (you know, typical non-rigorous calculus stuff), for all practical purposes 1/n is zero - in fact, if you want, you can take the limit n → ∞. So intuitively, and for all practical purposes, 1 - 1 + 1 - 1 + 1... = 1/2. Rigorously, we can generalize Abel's theorem and take the limit x → 1⁻:
https://en.wikipedia.org/wiki/Abel's_theorem
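The Abel limit above can be watched numerically (a sketch; the function name and truncation level are my own): the power series 1 - x + x² - ... converges to 1/(1+x) for |x| < 1, and its value creeps toward 1/2 as x approaches 1 from below.

```python
# Abel's approach to Grandi's series: sum (-x)^k = 1/(1+x) for |x| < 1;
# letting x -> 1 from below gives the Abel sum 1/2.

def abel_value(x, terms=10_000):
    """Truncated sum of (-x)^k, k = 0 .. terms-1; converges to 1/(1+x) for |x| < 1."""
    return sum((-x) ** k for k in range(terms))

for x in (0.9, 0.99, 0.999):
    print(x, abel_value(x))  # values approach 0.5 as x -> 1
```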

Some, however, say things like: S = 1 - 1 + 1 - 1... can be regrouped to give other values such as 0, so it is inconsistent. Well, the answer lies in the definition of an infinite sum, which says you can't rearrange like that except in some cases - but that would take us off topic. I simply wanted to show it is very reasonable that it is 1/2 - in fact, so reasonable it can't really be anything else.

In fact it is an example of a summation method called Abel summation - but that would take us too far afield.

The thing to note is that 1 - 1 + 1 - 1... = 1/2, so 1 + 1 + 1 + 1... = -1/2. The infinity in 1 + 1 + 1... must be canceled by something - it's in the eta function; somehow it cancels out.

Have a bit of a muck around using Ramanujan summation and look at ζ(s) - η(s). I haven't done it, so see what you find out.

Thanks
Bill
 

What is the definition of "convergence of an infinite series"?

The convergence of an infinite series is a mathematical concept that refers to the behavior of a series as the number of terms in the series approaches infinity. It is the process of determining whether the partial sums approach a finite value (converge) or fail to settle on one (diverge).

How is the convergence of an infinite series determined?

The convergence of an infinite series is determined using various tests, such as the ratio test, the root test, and the integral test. These tests evaluate the behavior of the terms in the series to decide whether the series converges or diverges.

What are the different types of convergence for an infinite series?

There are three main possibilities for an infinite series: absolute convergence, conditional convergence, and divergence. Absolute convergence occurs when the series of absolute values converges, so the series converges regardless of the order of the terms; conditional convergence occurs when the series converges but the series of absolute values does not; and divergence occurs when the series does not converge at all.

What is the importance of studying the convergence of an infinite series?

Studying the convergence of an infinite series is important in many areas of mathematics, including calculus, analysis, and differential equations. It allows us to determine the behavior of a series and make predictions about its future values. It also has practical applications in fields such as physics, engineering, and economics.

Are there any real-life examples of the convergence of an infinite series?

Yes, there are many real-life examples of the convergence of an infinite series. For instance, the calculation of compound interest, the estimation of the value of pi, and the determination of the sum of an infinite geometric sequence all involve the convergence of an infinite series. These examples demonstrate the practical applications and importance of studying the convergence of an infinite series.
