Analysis (sequences) proof: multiplying an infinite limit at infinity by 0

K29

Homework Statement


Let ##\lim_{n \rightarrow \infty} a_n = \infty##.
Let ##c \in \mathbb{R}##.
Prove that

##\lim_{n \rightarrow \infty} ca_n = \infty## for ##c > 0## (i)

##\lim_{n \rightarrow \infty} ca_n = -\infty## for ##c < 0## (ii)

##\lim_{n \rightarrow \infty} ca_n = 0## for ##c = 0## (iii)

Homework Equations


Definition of divergence to infinity (infinite limit at infinity):
##\forall A \in \mathbb{R}\ \exists K \in \mathbb{R}## such that ##a_n \geq A## for all ##n \geq K##.

The Attempt at a Solution


For the first two cases I just used the above definition and multiplied the inequality through by ##c## (flipping it when ##c < 0##).
For the ##c = 0## case I used the definition of a finite limit:
##\forall \epsilon > 0\ \exists K_{\epsilon} \in \mathbb{R}## such that ##\forall n \in \mathbb{N}##, ##n \geq K_{\epsilon}##, ##|a_n - L| < \epsilon##.
Now if I can squeeze ##0 \leq |ca_n - 0| \leq\ ? = 0## then I'm done.
But I can't see an upper bound for the inequality. Stuck there.
Or is there a way to prove this by contradiction instead of the way I've chosen?
Help?
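For reference, a minimal write-up of case (i) along the lines described above (multiplying the defining inequality by ##c##; case (ii) is symmetric, with the inequality reversed):

```latex
% Case (i), c > 0: given A, apply the divergence definition to A/c.
\text{Let } A \in \mathbb{R}. \text{ Since } a_n \to \infty,\
  \exists K \in \mathbb{R} \text{ such that } a_n \geq \tfrac{A}{c}
  \ \forall n \geq K.
\text{Then, since } c > 0,\quad
  c\,a_n \;\geq\; c \cdot \tfrac{A}{c} \;=\; A \quad \forall n \geq K,
\text{which is exactly the definition of } \lim_{n \to \infty} c\,a_n = \infty.
```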
 
But in the case where ##c = 0##, what is ##ca_n##?
 
It's 0. So sure, the sandwich theorem can't work. So could it be as simple as using the definition of divergence to get a contradiction?
I must have ##ca_n \geq A## for all ##A##, but by fixing ##n## I get ##ca_n = 0 < A##. Contradiction.
But is it enough to say that it does not diverge to infinity or minus infinity, and therefore it must converge to zero? Surely not?
 
Oh wait, I think I see something else.
Using the theorem that says:
If ##|a_n - L| = 0## for all ##n##, then ##|a_n - L| < \epsilon## etc. etc. (definition of limit).
So I could prove by induction that ##|ca_n - 0| = 0##, or more simply that
##ca_n = 0## for all ##n##. But then I get stuck on what to do with
##ca_{n+1}##.
 
LCKurtz said:
But in the case where ##c = 0##, what is ##ca_n##?

K29 said:
It's 0. So sure, the sandwich theorem can't work.

You are trying to make ##|ca_n - 0| < \epsilon##.

If ##ca_n = 0##, how hard is that?
 
I thought I'd have to prove ##ca_n = 0## for all ##n## by induction, but I've thought about it and I see that I shouldn't need to.
Thanks
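Spelled out, the ##c = 0## case the thread converges on needs no induction at all, since ##ca_n## is the constant zero sequence:

```latex
% Case (iii), c = 0: ca_n = 0 for every n by the field axiom 0·x = 0.
c\,a_n = 0 \cdot a_n = 0 \quad \forall n \in \mathbb{N},
\text{so } |c\,a_n - 0| = 0 < \epsilon
  \quad \forall \epsilon > 0 \text{ and } \forall n,
\text{hence any choice of } K_\epsilon \text{ works, and }
  \lim_{n \to \infty} c\,a_n = 0.
```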
 