Sum of series involving natural log

Summary
The series sum of ln(n/(n+1)) diverges even though its terms approach zero as n increases. The divergence becomes clear from the telescoping nature of the series: the partial sums equal -ln(n+1) and do not approach a finite limit. The logarithmic terms, while small, remain negative and accumulate without bound. The discussion emphasizes that terms converging to zero is a necessary but not sufficient condition for convergence, and that the behavior of the partial sums is what determines whether a series converges.
icosane

Homework Statement



Find the sum of the series, if it converges, of

n=1 to infinity of ln(n/(n+1))


The Attempt at a Solution



My intuition tells me the series converges because as n goes to infinity the terms approach the log of 1, which is 0. How to sum it, though, I have no clue. How should I go about solving this?
 
Rewrite ln(n/(n+1)) as ln(n)-ln(n+1)
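In other words, apply the quotient rule for logarithms:

\[ \ln\frac{n}{n+1} = \ln n - \ln(n+1), \quad \text{since } e^{\ln n - \ln(n+1)} = \frac{e^{\ln n}}{e^{\ln(n+1)}} = \frac{n}{n+1} . \]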
 
Use Office Shredder's advice to find the partial sums. Then ask yourself again whether it converges.
 
I always forget about useful log manipulations, thanks guys. So it diverges because I can't say infinity - infinity = 0, correct? This really goes against my intuition on series. As n goes to infinity, how does the natural log of a number infinitesimally smaller than 1 not go to 0? Is there a way to think about this to make it make sense, or should I shrug it off as a strange property of the mysterious natural logarithm?
 
I also see that its a telescoping series, with a final term ln(n) which goes to infinity... but my question remains.
 
icosane said:
I always forget about useful log manipulations, thanks guys. So it diverges because I can't say infinity - infinity = 0, correct? This really goes against my intuition on series. As n goes to infinity, how does the natural log of a number infinitesimally smaller than 1 not go to 0? Is there a way to think about this to make it make sense, or should I shrug it off as a strange property of the mysterious natural logarithm?


ln(n/(n+1)) does go to zero as n goes to infinity. But the same is true of 1/n, and yet the harmonic series diverges (I imagine you've seen that example). The terms of a series converging to zero is a necessary condition, not a sufficient one. There's nothing about ln that makes this mysterious.
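For reference, the classic grouping argument shows why the harmonic series diverges even though its terms go to zero:

\[ \sum_{n=1}^{\infty} \frac{1}{n} = 1 + \tfrac{1}{2} + \left(\tfrac{1}{3} + \tfrac{1}{4}\right) + \left(\tfrac{1}{5} + \cdots + \tfrac{1}{8}\right) + \cdots \;\ge\; 1 + \tfrac{1}{2} + \tfrac{1}{2} + \tfrac{1}{2} + \cdots , \]

since each parenthesized group is at least 1/2, so the partial sums exceed every bound.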

When you telescope the series, since you're starting at n=1, your partial sums come out to

ln(1) - ln(k+1) for the kth partial sum. You should be able to see this clearly by experimentation... there's no infinity-minus-infinity term involved.
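A quick numerical check, written here as a minimal Python sketch (the helper name partial_sum is mine, not from the thread), shows the term-by-term sums agreeing with -ln(k+1) and growing without bound in magnitude:

import math

def partial_sum(k):
    # k-th partial sum: ln(n/(n+1)) added term by term for n = 1..k
    return sum(math.log(n / (n + 1)) for n in range(1, k + 1))

for k in (10, 100, 1000, 100000):
    # the telescoped closed form predicts ln(1) - ln(k+1) = -ln(k+1)
    print(k, partial_sum(k), -math.log(k + 1))

The two printed columns agree to rounding and head toward negative infinity as k grows.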
 
Back to the definition of convergence. What are the partial sums? Can you write them as a function of n?
 
If the partial sums go off to infinity, then it diverges. This shouldn't be too hard to grasp. If the sum converges, then it's really a number, after all. I'm not sure what you mean by the natural log of a number infinitesimally smaller than 1... it is clear that when we take the partial sums, the series telescopes nicely, so we can treat the partial sums as a sequence that, as you have shown, diverges.
 
slider142 said:
Back to the definition of convergence. What are the partial sums? Can you write them as a function of n?

0 - ln(2) + ln(2) - ln(3) + ... + ln(n) - ln(n+1)

When it's written like this, it's obvious that it diverges.

But when looking at it as an infinite sum of ln(n/(n+1)), intuitively I just look at the limit as n goes to infinity, so it seems like eventually the partial sums would become ln(1) + ln(1) + ln(1), except with each argument actually a number slightly smaller than 1. So if I were to write out each term individually and ended up just adding 0 + 0 + 0 after a long enough time, I don't see why it doesn't converge.
 
  • #10
icosane said:
0 - ln(2) + ln(2) - ln(3) + ... + ln(n) - ln(n+1)

When it's written like this, it's obvious that it diverges.

But when looking at it as an infinite sum of ln(n/(n+1)), intuitively I just look at the limit as n goes to infinity, so it seems like eventually the partial sums would become ln(1) + ln(1) + ln(1), except with each argument actually a number slightly smaller than 1. So if I were to write out each term individually and ended up just adding 0 + 0 + 0 after a long enough time, I don't see why it doesn't converge.

You aren't paying enough attention to the real evidence in front of your face that it doesn't converge. 'Seems like' isn't evidence for convergence.
 
  • #11
icosane said:
0 - ln(2) + ln(2) - ln(3) + ... + ln(n) - ln(n+1)

When it's written like this, it's obvious that it diverges.

But when looking at it as an infinite sum of ln(n/(n+1)), intuitively I just look at the limit as n goes to infinity, so it seems like eventually the partial sums would become ln(1) + ln(1) + ln(1), except with each argument actually a number slightly smaller than 1. So if I were to write out each term individually and ended up just adding 0 + 0 + 0 after a long enough time, I don't see why it doesn't converge.

The important part is that each argument is slightly smaller than 1, and thus the logarithm is emphatically not 0; it is a negative number. So you are actually adding quite a lot (an infinite number) of negative numbers that, while close to 0, are still non-zero. I.e., adding infinitely many copies of 1/(a googol) still gives an infinite total, which is not the same as adding a bunch of 0's, even though each term may be indistinguishable from 0 on a number line.
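In symbols (my restatement of the googol remark, with googol = 10^100):

\[ \sum_{n=1}^{N} \frac{1}{10^{100}} = \frac{N}{10^{100}} \to \infty \quad \text{as } N \to \infty , \]

so a fixed tiny term, added infinitely often, still produces an unbounded total; the log series behaves the same way, except that its terms shrink, just not fast enough.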
 
  • #12
Office_Shredder said:
ln(n/(n+1)) does go to zero as n goes to infinity. But the same is true of 1/n, and yet the harmonic series diverges (I imagine you've seen that example). The terms of a series converging to zero is a necessary condition, not a sufficient one. There's nothing about ln that makes this mysterious.

Okay, I guess my question can now be generalized: why doesn't a series converge if its terms converge to zero? I know that the series of 1/n diverges, but only because my professor and textbook say so. I know it's a p-series, and I know how to apply the integral test... but I want to know why a bunch of finite positive numbers that eventually go to essentially 0 + 0 + 0 doesn't converge. It just doesn't make sense to me.
 
  • #13
icosane said:
Okay, I guess my question can now be generalized: why doesn't a series converge if its terms converge to zero? I know that the series of 1/n diverges, but only because my professor and textbook say so. I know it's a p-series, and I know how to apply the integral test... but I want to know why a bunch of finite positive numbers that eventually go to essentially 0 + 0 + 0 doesn't converge. It just doesn't make sense to me.

The terms, while approaching 0, are never actually 0. More to the point, the partial sums, the defining element of a series converging or not converging, never approach any number; the sequence of partial sums is unbounded. For a monotone sequence of partial sums like this one, being bounded or unbounded is what decides whether the process converges. It is fairly easy to show that there is no number M that is a lower bound for that sequence of sums, and thus it is meaningless to assign it a number (if there were at least one lower bound, there would be a greatest lower bound) in that context.
There are tests that use the actual terms in the infinite series to ascertain whether it converges or not, but these are all theorems that rest on the original definition of convergence, which compares the infinite series to the behavior of its finite partial sums, i.e., asks whether those sums are bounded. Don't confuse the tests with the definition of convergence. Also remember the 1/googol example: just because the terms "get very small", that says nothing when you are considering infinitely many of them. Keep adding 1/googol and you eventually pass googol/googol, (2·googol)/googol, and beyond.
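For reference, the definition being appealed to is the standard one: a series converges exactly when its sequence of partial sums has a finite limit.

\[ \sum_{n=1}^{\infty} a_n = L \;\Longleftrightarrow\; \lim_{N\to\infty} S_N = L, \qquad S_N = \sum_{n=1}^{N} a_n . \]

For this thread's series, S_N = ln(1) - ln(N+1) = -ln(N+1), which has no finite limit, so no sum can be assigned.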
 
  • #14
Dick said:
You aren't paying enough attention to the real evidence in front of your face that it doesn't converge. 'Seems like' isn't evidence for convergence.

I see the evidence and I'm not trying to make an argument for convergence. I'm just looking for an intuitive explanation, Dick.
 
  • #15
Here is another way to think about it. ln(1) + ln(1) + ln(1) + ... is certainly 0; nothing could be simpler. This is because e^0 = 1, so ln(1) = 0.

But ln(1/2) + ln(2/3) + ln(3/4) + ... + ln(99/100) is just a bunch of exponents, each term being the exponent to which you must raise e to get 1/2, 2/3, etc. These exponents are small, but they are not zero. The property ln(a) + ln(b) = ln(ab) for real a, b > 0 simply says that when we multiply two powers of the same base, we add their exponents. So the exponent to which you must raise e to get (1/2)(2/3)(3/4)*...*(99/100) = 1/100 is the sum of all those small exponents, and it is smaller (more negative) than any one of them. That exponent is exactly the 99th partial sum.
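Working the arithmetic out explicitly:

\[ \sum_{n=1}^{99} \ln\frac{n}{n+1} = \ln\!\left(\frac{1}{2}\cdot\frac{2}{3}\cdots\frac{99}{100}\right) = \ln\frac{1}{100} \approx -4.605 , \]

and replacing 100 by N+1 gives the general partial sum -ln(N+1), which tends to negative infinity.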
 
  • #16
slider142 said:
The terms, while approaching 0, are never actually 0. More to the point, the partial sums, the defining element of a series converging or not converging, never approach any number; the sequence of partial sums is unbounded. For a monotone sequence of partial sums like this one, being bounded or unbounded is what decides whether the process converges. It is fairly easy to show that there is no number M that is a lower bound for that sequence of sums, and thus it is meaningless to assign it a number in that context.

Okay, it's starting to sink in now the more I think about it. Thanks.
 
  • #17
icosane said:
I see the evidence and I'm not trying to make an argument for convergence. I'm just looking for an intuitive explanation, Dick.

Pay attention to the evidence. Apparently a whole lot of really small numbers can add up to a really large number. That shouldn't seem too crazy.
 
  • #18
snipez90 said:
Here is another way to think about it. ln(1) + ln(1) + ln(1) + ... is certainly 0; nothing could be simpler. This is because e^0 = 1, so ln(1) = 0.

But ln(1/2) + ln(2/3) + ln(3/4) + ... + ln(99/100) is just a bunch of exponents, each term being the exponent to which you must raise e to get 1/2, 2/3, etc. These exponents are small, but they are not zero. The property ln(a) + ln(b) = ln(ab) for real a, b > 0 simply says that when we multiply two powers of the same base, we add their exponents. So the exponent to which you must raise e to get (1/2)(2/3)(3/4)*...*(99/100) = 1/100 is the sum of all those small exponents, and it is smaller (more negative) than any one of them. That exponent is exactly the 99th partial sum.

Thanks for the reply. I can see now that the series of a log function has some interesting properties.
 
  • #19
Dick said:
Pay attention to the evidence. Apparently a whole lot of really small numbers can add up to a really large number. That shouldn't seem too crazy.

But .1 + .01 + .001 + ... doesn't add up to a really large number. How much does each successive term need to decrease for the series to converge?
 
  • #20
icosane said:
But .1 + .01 + .001 + ... doesn't add up to a really large number. How much does each successive term need to decrease for the series to converge?

That's exactly what convergence tests were created to determine, and there are a good number of them. Some series decrease fast enough to have a finite sum, and some don't. ln((n+1)/n) doesn't, as you've proved. (1/10)^n does.
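For contrast, the two partial sums behave very differently:

\[ \sum_{n=1}^{N} \left(\frac{1}{10}\right)^{n} = \frac{(1/10)\left(1 - (1/10)^{N}\right)}{1 - 1/10} \;\to\; \frac{1}{9}, \qquad \sum_{n=1}^{N} \ln\frac{n+1}{n} = \ln(N+1) \;\to\; \infty . \]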
 
