Does the Convergence of \(\sum \frac{f(x)}{x}\) Depend on \(\sum_{x=1}^{a} f(x) = 0\)?

CalTech>MIT

Homework Statement


Let f:\mathbb{Z}\rightarrow\mathbb{R} be periodic, i.e. f(x+a) = f(x) for all x and some fixed integer a\geq 1.

Prove that \sum_{x=1}^{\infty} \frac{f(x)}{x} converges if and only if \sum_{x=1}^{a} f(x) = 0.
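(For a concrete instance, take a = 2 with f(1) = 1 and f(2) = -1: the series is then the alternating harmonic series \sum_{x=1}^{\infty} \frac{(-1)^{x+1}}{x} = \ln 2, which converges, and indeed f(1) + f(2) = 0.)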


Homework Equations


n/a


The Attempt at a Solution


Ok, so I have a general idea of how to write the proof. We can do this by contradiction and assume that the period sum \sum_{x=1}^{a} f(x) is nonzero. As a result, the partial sums of the first series grow like a multiple of the harmonic series, and thus the series doesn't converge?
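To test that intuition numerically, here is a minimal sketch (the period a = 3 and the sample values below are made up purely for illustration): with a nonzero period sum s, the partial sums should track (s/a)\ln N, while a zero period sum should give partial sums that settle down.

import math

def partial_sum(period, N):
    # Partial sum of f(x)/x for x = 1..N, where f cycles through the given period values.
    a = len(period)
    return sum(period[(x - 1) % a] / x for x in range(1, N + 1))

zero_sum = [1.0, 1.0, -2.0]  # period sums to 0: partial sums should stabilize
nonzero = [1.0, 1.0, -1.0]   # period sums to s = 1: should grow like (1/3) ln N

for N in (10**3, 10**4, 10**5):
    print(N, partial_sum(zero_sum, N), partial_sum(nonzero, N), math.log(N) / 3)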
 
First show that the converse holds: if \sum_{x=1}^{a}{f(x)}=0, then the series converges.

Then, by contradiction, suppose that \sum_{x=1}^{a-1}{f(x)}=-f(a)-b for some b\neq 0. Then we put g(a)=f(a)+b and g(x)=f(x) for the other values in the period (extending g so that it is still periodic). Then we get (since both series converge) that

\sum_{x=1}^{+\infty}{\frac{f(x)}{x}}-\sum_{x=1}^{+\infty}{\frac{g(x)}{x}}

converges. But this difference diverges, since it looks very much like a harmonic series.
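To spell out that last step: f(x)-g(x) equals -b at every multiple of a and 0 elsewhere, so

\sum_{x=1}^{+\infty}{\frac{f(x)-g(x)}{x}}=\sum_{k=1}^{+\infty}{\frac{-b}{ka}}=-\frac{b}{a}\sum_{k=1}^{+\infty}{\frac{1}{k}},

which is a nonzero multiple of the harmonic series and therefore diverges.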
 
micromass said:
First show that the converse holds: if \sum_{x=1}^{a}{f(x)}=0, then the series converges.

Then, by contradiction, suppose that \sum_{x=1}^{a-1}{f(x)}=-f(a)-b for some b\neq 0. Then we put g(a)=f(a)+b and g(x)=f(x) for the other values in the period (extending g so that it is still periodic). Then we get (since both series converge) that

\sum_{x=1}^{+\infty}{\frac{f(x)}{x}}-\sum_{x=1}^{+\infty}{\frac{g(x)}{x}}

converges. But this difference diverges, since it looks very much like a harmonic series.

It's been a while since I took analysis, but I would be careful with this approach. It seems to me that you are trying to use two ideas here.
First, you use the convergence of both series to conclude that the difference of the sums is the sum of the term-by-term differences. In this case we do not have absolute convergence, so we are not free to pair up terms however we want, even if we know that both series converge. That is, although the difference of the sums does equal the limit of the differences of the partial sums here, that identity is about partial sums taken in order, not about arbitrary regroupings of the terms.
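In symbols: if A_N=\sum_{x=1}^{N}{a_x}\rightarrow A and B_N=\sum_{x=1}^{N}{b_x}\rightarrow B, then

\sum_{x=1}^{N}{(a_x-b_x)}=A_N-B_N\rightarrow A-B,

so subtracting the two series term by term, in the same order, is legitimate; it is rearranging or regrouping terms across different positions that requires absolute convergence.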

Second, you are trying to conclude that the second series converges by virtue of the convergence of the first (as I read it), but in this case you have changed the sign of an entire subsequence of the terms, and the convergence of the first series no longer gives you convergence of the second because of this (at least I do not think so, not without absolute convergence; for instance, \sum_{x=1}^{\infty}{\frac{(-1)^{x+1}}{x}} converges, yet flipping the signs of its negative terms produces the divergent harmonic series).

Have you guys tried induction on a? I am not 100% positive that it works (again because you are not guaranteed absolute convergence), but it might be worth a shot.
 
I know that the series is not absolutely convergent. And I've been very careful not to use this...

Secondly, the convergence of \sum{\frac{g(x)}{x}} does not follow from the convergence of the first series. It follows from the reverse implication of what he's trying to prove (or, alternatively, from Dirichlet's criterion).
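For reference, here is how Dirichlet's criterion gives that reverse implication: writing S_N=\sum_{x=1}^{N}{g(x)}, the condition \sum_{x=1}^{a}{g(x)}=0 forces S_{N+a}=S_N, so the partial sums S_N are bounded, while \frac{1}{x} decreases monotonically to 0; hence

\sum_{x=1}^{+\infty}{\frac{g(x)}{x}}

converges.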
 
Ok, well, I wrote up a reply and then PF timed me out, so that reply is gone. In a nutshell:

1) I apologize if I misunderstood your argument, in hindsight I had no good reason to assume that you were using absolute convergence.

2) I am convinced that, if argued carefully, your approach works just fine. I had suspected that it did (except for point 4 below); my point was not to argue that it didn't, but simply that this approach needs to be done carefully.

3) "It follows from the reverse implication of what he's trying to prove" - agreed
"or alternatively from Dirichlets criterion" -Nice!

4) "but in this case you have changed the sign of an entire subsequence of the sequence of terms" - this was wrong on my part. I had misread your construction.
 