
Convergent or divergent, p-series.

  1. Jan 13, 2008 #1
    1. The problem statement, all variables and given/known data

    Problem is to determine if this is convergent or divergent:
    [tex]\sum_{n=1}^{\infty} \frac{27 + \pi}{\sqrt{n}}[/tex]

    2. Relevant equations

    p-series test?

    3. The attempt at a solution

    I was looking at this problem, and it looks as if the p-series test may apply: the terms are continuous, decreasing, and positive.

    Could we change the sqrt(n) => n^(1/2) ?

    then p = 1/2 <= 1, so by the p-series rules, it diverges?

    thanks for any help.
     
  3. Jan 13, 2008 #2

    olgranpappy

    Homework Helper

    that's not "changing" anything--that is exactly what sqrt(n) means...
     
  4. Jan 14, 2008 #3
    Of course, so does this diverge then?
     
  5. Jan 14, 2008 #4

    Gib Z

    Homework Helper

    What exactly is your question? I suspect it is to find [tex] \lim_{n\to \infty} a_n[/tex] where [tex]a_n = (27+ \pi) \left(\frac{1}{\sqrt{1}} + \frac{1}{\sqrt{2}} + ... + \frac{1}{\sqrt{n}}\right)[/tex].

    If so, there are several ways of doing it; I will show you two, the first being elegant, the second more general (it proves your p-series rules as well). I will also leave out the factor of [itex]( 27 + \pi)[/itex], as multiplication by a constant is of no concern to us.

    ---------------

    The first is to compare it to a sequence which is term by term smaller (except for the last term, where the two are equal). Let this sequence be [tex]b_n = \sqrt{n}= \frac{n}{\sqrt{n}} = \underbrace{\frac{1}{\sqrt{n}} + \frac{1}{\sqrt{n}} + \cdots + \frac{1}{\sqrt{n}}}_{n \mbox{ terms}} [/tex].

    We can see that since the denominator of each term in b_n (except for the last one) is larger than the denominator of the corresponding term in a_n, [itex]a_n > b_n[/itex].

    But [tex]\lim_{n\to \infty} b_n = \lim_{n\to \infty} \sqrt{n}[/tex], which diverges. Hence, a_n also diverges.
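    If you want a quick numerical sanity check of this comparison (purely illustrative and not part of the proof; the short Python script and its names are mine), the partial sums do indeed stay above [itex]\sqrt{n}[/itex] and keep growing:

[code]
import math

def a(n):
    """Partial sum 1/sqrt(1) + 1/sqrt(2) + ... + 1/sqrt(n)."""
    return sum(1.0 / math.sqrt(k) for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    # a(n) is never below sqrt(n), and both columns keep growing
    print(n, round(a(n), 3), round(math.sqrt(n), 3))
[/code]

    Of course a table of numbers proves nothing on its own; the comparison above is what does the work.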

    -----------

    The more general solution is to compare the series to its integral.

    Take the more general problem of finding the convergence of the series [tex]\sum_{t=1}^{\infty} \frac{1}{t^a} = \zeta (a) [/tex]. It is obvious that if [itex]a \leq 0[/itex] the terms do not even tend to zero, so [itex]\zeta (a)[/itex] diverges. So let's restrict this problem to a > 0.

    If we construct a graph of [itex] y = 1/x^a[/itex], we see that it is a monotonic decreasing function, the general shape being similar to that of a rectangular hyperbola. Now, on the graph, construct vertical lines at x = n-1, x = n, and x = n+1, where n is an integer greater than 1.

    Now construct a rectangle with vertices [tex] (n-1,0), (n,0), (n-1, \frac{1}{n^a}) , (n, \frac{1}{n^a}) [/tex]. Geometrically, the area of this rectangle is less than the area under the curve between x = n-1 and x = n, and more than the area under the curve between x = n and x = n+1. Moreover, the area of the rectangle is [itex] 1/n^a[/itex]. Algebraically, this follows from the fact that the function is monotonically decreasing. Either way, interpreting the area under the curve as an integral, we arrive at the following inequality:

    [tex]\int^{n+1}_n \frac{dx}{x^a} < \frac{1}{n^a} < \int^n_{n-1} \frac{dx}{x^a}[/tex].

    Summing this inequality for n = 2, 3, ..., m, then adding 1 to each expression, we obtain*:
    [tex]1+ \int^{m+1}_2 \frac{dx}{x^a} < \sum_{k=1}^m \frac{1}{k^a} < 1 + \int^m_1 \frac{dx}{x^a}[/tex].
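    A quick numerical check of that sandwich, before we push on: the short Python snippet below (purely illustrative, not part of the argument; the function names are mine) evaluates both integrals in closed form for [itex]a \neq 1[/itex] and compares them with the partial sum.

[code]
def F(x, a):
    """Anti-derivative of x**(-a), valid for a != 1."""
    return x**(1 - a) / (1 - a)

def check(a, m):
    s = sum(k**(-a) for k in range(1, m + 1))
    lower = 1 + (F(m + 1, a) - F(2, a))   # 1 + integral from 2 to m+1
    upper = 1 + (F(m, a) - F(1, a))       # 1 + integral from 1 to m
    print(a, m, lower, s, upper)          # lower < s < upper every time

for a in (0.5, 2.0):
    check(a, 1000)
[/code]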

    If we let m increase beyond all bounds, the summation term approaches [tex]\zeta (a)[/tex], the original series in question. We must consider three separate cases when evaluating the integrals directly with the Fundamental Theorem of Calculus: [itex]0 < a < 1[/itex], [itex] a= 1[/itex] and [itex] a> 1[/itex].

    Considering each case separately;

    If [itex] 0 < a < 1[/itex], the anti-derivative is [itex]\frac{x^{1-a}}{1-a}[/itex] and the exponent 1 - a is positive, so the limits of integration are substituted into the numerator. Since m increases beyond all bounds, so does the numerator, and hence so do both integrals.

    So [itex] \zeta (a)[/itex] does not converge to any finite limit if [itex] 0 < a < 1[/itex].
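    To make this concrete with the [itex]a = \frac{1}{2}[/itex] of the original problem, the inequality above becomes

    [tex]1 + 2\left(\sqrt{m+1} - \sqrt{2}\right) < \sum_{k=1}^m \frac{1}{\sqrt{k}} < 1 + 2\left(\sqrt{m} - 1\right),[/tex]

    and the left-hand side alone already grows beyond all bounds as m increases.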

    If [itex]a=1[/itex], then the integral on the left evaluates to [itex]\log_e \left( \frac{m+1}{2} \right) [/itex] whilst the integral on the right evaluates** to [itex]\log_e m[/itex]; both increase beyond any finite bound as m increases.

    If [itex] a > 1[/itex], then the limits of integration are substituted into the denominator of the anti-derivative, and as the upper limit increases, the integrals tend to definite limits, which we substitute into the inequality:

    [tex] 1 + \frac{1}{2^{a-1}\cdot (a-1)} < \zeta (a) < \frac{a}{a-1} [/tex], for [itex] a > 1[/itex].***
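    In detail, letting the upper limits tend to infinity,

    [tex]\lim_{m\to \infty} \int^{m+1}_2 \frac{dx}{x^a} = \frac{2^{1-a}}{a-1} = \frac{1}{2^{a-1}(a-1)}, \qquad \lim_{m\to \infty} \int^m_1 \frac{dx}{x^a} = \frac{1}{a-1},[/tex]

    and [itex]1 + \frac{1}{a-1} = \frac{a}{a-1}[/itex], which is where the two bounds come from.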

    We still need to rule out the possibility that the sum fails to tend to a limit: even though it cannot diverge to positive or negative infinity, it might conceivably oscillate between the two sides of the inequality. However, since every term is positive, the partial sums form an increasing sequence, and an increasing sequence that is bounded above must converge, so this is not possible.

    We may finally conclude that [tex]\zeta (a) [/tex] tends to a limit if and only if [itex]a > 1[/itex].
    -----------------

    *To prove this result rigorously rather than relying on geometric intuition, we can evaluate those integrals directly and prove the inequality that way, but with that method there is no obvious motivation for the step; it looks as though one is only doing it because one already knows the result beforehand. Geometrically we can see a motivation, hence it is best to find the inequality geometrically and then prove it rigorously afterwards.

    ---------------------
    ** I specifically use the word "evaluates" rather than my usual preference of "is defined to be". Normally I define the natural logarithm by [itex]\log_e x= \int^x_1 \frac{dt}{t}[/itex]; however, in this proof we need the property that as x goes to infinity, so does its logarithm, and using my usual definition would make that argument circular. So instead, in this proof I define the natural logarithm to be the inverse function of the exponential function, which I define as [tex]e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}[/tex], from which we can easily see that it is a monotonically increasing function with no upper bound, and hence so is its inverse.

    -------------

    ***From this, we do not merely obtain the conditions for convergence of such series; we can also estimate the value of the series, with bounds on the error. A crude estimate would be the arithmetic mean of the left and right sides of the inequality. We get a pretty good estimate from the weighted geometric mean [itex]\sqrt[3]{p^2 q}[/itex], where p is the lower bound and q is the upper, but justifying this estimate requires some more in-depth numerical analysis.

    An easy application of this is to substitute the case a = 2. Then we can see that [tex]\frac{3}{2} < \zeta (2) < 2[/tex]. Indeed, if we were to take the Fourier series of the second Bernoulli polynomial [itex]B_2(t)[/itex] and substitute t=0, we would arrive at the result [tex]\zeta (2) = \frac{\pi^2}{6}[/tex], which one can verify satisfies the inequality. The error when using the arithmetic mean of the left and right sides in this case is approximately 0.105, which I personally think is quite good considering all the more precise calculations we could have done but omitted for the sake of what little brevity this post has left. The error also decreases quite rapidly as a increases, thanks to the exponential in the denominator of the lower bound.
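    For anyone who wants to reproduce those numbers, here is a short Python check (again purely illustrative; I am reading the weighted geometric mean above as the cube root of [itex]p^2 q[/itex]):

[code]
import math

a = 2.0
lower = 1 + 1 / (2**(a - 1) * (a - 1))   # = 3/2
upper = a / (a - 1)                      # = 2
exact = math.pi**2 / 6                   # zeta(2), approximately 1.6449

arith = (lower + upper) / 2              # arithmetic mean of the bounds
geom = (lower**2 * upper) ** (1 / 3)     # weighted geometric mean, cube root of p^2 q

print("AM error:", abs(arith - exact))   # about 0.105
print("GM error:", abs(geom - exact))    # noticeably smaller
[/code]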

    ------------------------

    OK! Out of textbook-writer mode, though it has been fun =]
    Maybe I should write a textbook :biggrin: Or we could put this in the tutorial section as a p-series tutorial! :biggrin: Anyway, I just hope someone learns something from this post, or else I have wasted a whole heap of time! To everyone else, ratings of rigor are welcome =]
     
    Last edited: Jan 14, 2008
  6. Jan 14, 2008 #5

    HallsofIvy

    Staff Emeritus
    Science Advisor

    Gib Z, sometimes you are hilarious.

    rcmango, yes, the "p-test" says that [itex]\sum n^{-p}[/itex] converges if and only if p > 1.
    For your sum you have [itex]\sum A/\sqrt{n}= A \sum n^{-1/2}[/itex]. Yes, that is of the form [itex]n^{-p}[/itex] with p = 1/2. What do you think?
     
  7. Jan 14, 2008 #6

    Gib Z

    Homework Helper

    We can make that condition even more strict. I'm always hilarious :cool:
     
  8. Jan 14, 2008 #7
    Hey there Gib Z, wow, great post! I don't think I've ever had that much help online on a single problem before. I'm going to have to read this several times to completely absorb the concept you have explained here; I'm sure I will have questions, since I am much slower at this stuff and it's all very new to me. However, I really appreciate the amount of effort you put into your explanation. Thank you!

    So then, HallsofIvy, to answer your question: it diverges because of the value p takes (p = 1/2 <= 1).
     