# Proof of a Limit Law

1. Oct 25, 2007

### bjgawp

Edit: Whoops. Was intending to post this in the homework forum but accidentally didn't...

Question: If $$\lim_{x \to a} f(x) = L$$ and $$\lim_{x \to a} g(x) = M$$, then $$\lim_{x \to a} (f(x)g(x)) = LM$$.

Proof from James Stewart's text:

Problem: I don't see exactly how they arrived at the statements with the *s. Where did these inequalities come from, and how can they be asserted? Also, I'm kind of iffy on why we need three $$\delta$$s ... Any help would be appreciated!

2. Oct 25, 2007

### EnumaElish

The statements with * are certainly true whenever |# - #'| < e*, for (#, #') = (g, M) or (f, L) and arbitrary e* > 0.

Now define e = e*/denominator, where "denominator" is any positive quantity, and the statements are still true!
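Concretely (my own example of such a choice, not from Stewart's page): since $$\lim_{x \to a} g(x) = M$$, the definition lets us demand, for x close enough to a,

$$|g(x) - M| < \frac{\epsilon}{2(|L| + 1)}$$

because the right-hand side is just some particular positive number; that is exactly the shape of the starred statements.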

3. Oct 26, 2007

### Kummer

If $$f(x)$$ is defined in some open interval containing $$a$$ except possibly at $$a$$, and $$g(x)$$ is as well, then the product function $$f(x)g(x)$$ is defined on some open interval containing $$a$$ except possibly at $$a$$.
So it makes sense to talk about $$\lim_{x\to a}f(x)g(x)$$.

Instead of doing it your way I will do it a different way because it is more elegant. First we make an important observation: $$f(x)$$ is bounded on some open interval containing $$a$$ (except possibly at $$a$$). Since $$\lim_{x\to a}f(x) = L$$, applying the definition with $$\epsilon = 1$$ gives a $$\delta_1$$ such that $$|f(x) - L| < 1$$ for $$0<|x-a|<\delta_1$$. So $$||f(x)|- |L|| \leq |f(x)-L| < 1$$, so $$|f(x)|-|L| < 1$$, so $$|f(x)| < 1 + |L|$$ for $$0<|x-a|<\delta_1$$. This establishes the claim.

Now $$|f(x)g(x) - LM| = |f(x)g(x) - Mf(x)+Mf(x) - LM| \leq |f(x)||g(x)-M|+|M||f(x)-L|$$.
There exist $$\delta_2$$ and $$\delta_3$$ such that $$|g(x) - M|< \epsilon$$ for $$0<|x-a|<\delta_2$$ and $$|f(x) - L|<\epsilon$$ for $$0<|x-a|<\delta_3$$. If $$\delta_4 = \min (\delta_2,\delta_3)$$ then $$|f(x)||g(x)-M|+|M||f(x)-L| < |f(x)|\epsilon + |M|\epsilon$$ for $$0<|x-a|<\delta_4$$, and if $$\delta_5 = \min (\delta_1,\delta_4)$$ then $$|f(x)|\epsilon + |M|\epsilon \leq (1+|L|)\epsilon + |M|\epsilon$$ for $$0<|x-a|<\delta_5$$.
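To tie this off in the usual $$\epsilon$$-form (a closing step I am adding, not part of the post above): combining the bounds gives

$$|f(x)g(x) - LM| < (1+|L|)\epsilon + |M|\epsilon = (1+|L|+|M|)\epsilon$$ for $$0<|x-a|<\delta_5$$,

and since $$\epsilon > 0$$ was arbitrary, running the same argument with $$\epsilon/(1+|L|+|M|)$$ in place of $$\epsilon$$ yields $$|f(x)g(x) - LM| < \epsilon$$.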

4. Oct 28, 2007

### Howers

Good question.
I actually remember not getting this either.

I THINK what he's doing is selecting a certain epsilon less than e/2 on purpose, and obviously the |L| + 1 or |M| + 1 in the denominator will make the terms less than e/2, because he knows the end result. Just like he chose epsilon so that |g - M| <= 1.

However, I would very much like to know if this is correct, because this proof sounds incomplete.

5. Mar 27, 2010

### FoxBox

Maybe a late reaction but if you're really interested in maths it doesn't matter :)

I don't see any errors in this proof. Indeed, he selects a certain epsilon on purpose to obtain epsilon/2 in the end to make the proof more elegant.

Greetings, Foxbox

6. Feb 19, 2011

I always skipped this proof precisely because of the $\frac{ \epsilon }{2(|M| + \epsilon) }$
terms in the proofs, I mean they are too strange to be intuited or just
thought up. Well I searched for ages today to find an explanation but
nowhere is it to be found, not one book I checked mentioned it, including
all of the classics! Well I thought about it & this is what I came up with,
I think this is correct - please correct me if I'm wrong.

Theorem: If $\lim_{x \to a} f(x) \ = \ L$ & $\lim_{x \to a} g(x) \ = \ M$ then $\lim_{x \to a} f(x)g(x) \ = \ L M$.

Proof: If $\lim_{x \to a} f(x) \ = \ L$

then $0 \ < \ |x \ - \ a| \ < \ \delta_1 \ \Rightarrow \ |f(x) \ - \ L| < \ \epsilon_1$

so, by the triangle inequality,

$|f(x)| \ = \ |(f(x) \ - \ L) \ + \ L| \ \le \ |f(x) \ - \ L| \ + \ |L|$

which leads to $|f(x)| \ < \ |L| \ + \ \epsilon_1$.

The same process for $0 \ < \ |x \ - \ a| \ < \ \delta_2 \ \Rightarrow \ |g(x) \ - \ M| \ < \ \epsilon_2$

derives $|g(x)| \ < \ |M| \ + \ \epsilon_2$

Now, using $|f(x) \ - \ L| \ < \ \epsilon_1$, we multiply through by $|g(x)|$

to get $|g(x)||f(x) \ - \ L| \ < \ |g(x)| \epsilon_1$

and I think you see that:

$|g(x)||f(x) \ - \ L| < \ (|M| \ + \ \epsilon_2)|f(x) \ - \ L| < \ (|M| \ + \ \epsilon_2) \epsilon_1$

Just to clean things up let's pick $\epsilon_1$ such that

$(|M| \ + \ \epsilon_2) \epsilon_1 \ < \ \frac{ \epsilon}{2}$

so we have $(|M| \ + \ \epsilon_2)|f(x) \ - \ L| < \ \frac{ \epsilon}{2}$

from which comes $| f(x) \ - \ L| < \ \frac{ \epsilon}{2(|M| \ + \ \epsilon_2)}$.

The same process, only multiplying $|g(x) \ - \ M|$ by $|f(x)|$, derives $|g(x) \ - \ M| \ < \ \frac{ \epsilon}{2(|L| \ + \ \epsilon_1)}$.

So that's how you derive those strange terms in the proofs!

When the proof tells you to set δ = min{δ₁,δ₂,δ₃} it's just convenient
notation telling you to pick the smallest δ in that set so that all of the
|x - a| < δ ⇒ ... things are satisfied.

|(fg)(x) - LM| = |f(x)g(x) - Lg(x) + Lg(x) - LM| ≤ |f(x) - L||g(x)| + |g(x) - M||L|

so
$|(fg)(x) \ - \ LM|\ \le \ |f(x) \ - \ L||g(x)| \ + \ |g(x) \ - \ M||L| \ < \ \frac{ \epsilon}{2(|M| \ + \ \epsilon_2)} \ \cdot \ (|M| \ + \ \epsilon_2) \ + \ \frac{ \epsilon}{2(|L| \ + \ \epsilon_1)} \ \cdot \ |L|$

Notice the (|L| + ε₁) in the denominator and the |L| in the numerator!

Certainly $\frac{ \epsilon}{2(|L| \ + \ \epsilon_1)} \ \cdot \ |L|$ is less than $\frac{ \epsilon}{2(|L| \ + \ \epsilon_1)} \ \cdot \ (|L| \ + \ \epsilon_1)$

since $\epsilon_1 > 0$, so we can just use that larger term, in which the factors cancel
while also encompassing all the evil that came before it, thereby giving a
final answer of ε and showing that our limit holds! Note that we could have
relied on the theorem that a function with a limit at a point is bounded near
that point to make things even prettier & quicker.
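To convince myself that these fractions really do the job, here is a quick numerical sanity check (an illustration, not a proof; the concrete functions f(x) = 2x + 1, g(x) = x² and the point a = 1 are my own choices, and I use the common |M| + 1, |L| + 1 variant of the denominators to avoid the circularity between ε₁ and ε₂):

```python
# Numerical sanity check of the product limit law (illustration, not a proof).
# f(x) = 2x + 1, g(x) = x**2, a = 1, so L = 3, M = 1 and LM = 3.
# Forcing |f(x) - L| < eps/(2(|M|+1)) and |g(x) - M| < min(1, eps/(2(|L|+1)))
# should force |f(x)g(x) - LM| < eps, exactly as in the proof.

def check(eps, a=1.0, L=3.0, M=1.0):
    f = lambda x: 2 * x + 1
    g = lambda x: x ** 2
    bound_f = eps / (2 * (abs(M) + 1))            # target for |f(x) - L|
    bound_g = min(1.0, eps / (2 * (abs(L) + 1)))  # target for |g(x) - M|
    # For these particular f and g, a delta this small certainly achieves
    # both bounds (a crude but safe numeric choice, not the optimal delta).
    delta = min(bound_f, bound_g) / 10
    n = 1000
    for i in range(1, n + 1):
        for sign in (-1, 1):
            x = a + sign * delta * i / n
            assert abs(f(x) - L) < bound_f
            assert abs(g(x) - M) < bound_g
            assert abs(f(x) * g(x) - L * M) < eps  # the conclusion
    return True

print(check(0.5), check(0.01))
```

Every sampled point within δ of a satisfies the final inequality, which is exactly what the min-of-deltas bookkeeping guarantees.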

7. Feb 19, 2011

### Robert1986

In my limited experience, things like this are reverse engineered.

8. Feb 20, 2011

### Landau

It's exactly like Robert says. That's why Spivak is such a great author: he explains things like these. Exercise 1.21 in the very first chapter is about exactly this, while chapter 5 is about limits. You want to prove that if x is close to x0 and y is close to y0, then xy is close to x0y0. Try to do this yourself, i.e. for every epsilon>0 try to find delta1>0 and delta2>0 such that if |x-x0|<delta1 and |y-y0|<delta2 then |xy-x0y0|<epsilon.
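If it helps, here is a small numeric experiment with that exercise (the particular δ₁, δ₂ below are my own choices, one standard solution among several):

```python
# Spivak-style exercise, numerically: if |x - x0| < delta1 and |y - y0| < delta2
# with  delta1 = min(1, eps / (2*(abs(y0) + 1)))  and
#       delta2 = eps / (2*(abs(x0) + 1)),
# then |xy - x0*y0| <= |x||y - y0| + |y0||x - x0| < eps.
import random

def deltas(eps, x0, y0):
    return min(1.0, eps / (2 * (abs(y0) + 1))), eps / (2 * (abs(x0) + 1))

def experiment(eps=0.1, x0=3.0, y0=-2.0, trials=10000, seed=0):
    rng = random.Random(seed)
    d1, d2 = deltas(eps, x0, y0)
    for _ in range(trials):
        x = x0 + rng.uniform(-d1, d1)
        y = y0 + rng.uniform(-d2, d2)
        if abs(x * y - x0 * y0) >= eps:
            return False  # would be a counterexample to the delta choice
    return True

print(experiment())
```

The key is that δ₁ caps |x| by |x₀| + 1, so each of the two triangle-inequality terms stays below ε/2.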

9. Feb 20, 2011

I don't see what's any different about Spivak compared to other authors, he also gives,
without explanation, that crazy fraction that was bothering me & the OP.

If we're going to take it on faith that this magic fraction works why not just bypass this
strange thing & just use any epsilon? Courant does this and ends up with (M + |L|)ε &
does not give any crazy requirements to be memorized. Since it can be proven that
|f(x) - L| < Cε satisfies the limit requirements I see no reason for nearly every author to
give the same crazy fraction in the proofs in their books unless they are going to motivate
it - especially if it's used in something like Stewart! Also Rudin's proof of this is far superior
to being given some crazy fraction and told it just works, similarly Apostol's is also more
instructive (if not very good in my opinion).

It just seems to me to be very poor pedagogy to give students new to a
subject proofs that rely on heavy machinery not adequately developed in the text -
especially when there are easier & far more instructive proofs that could have been used.
Spivak can be forgiven because I think you are expected to rederive that term yourself
(though I seriously wonder how many people actually do from reading all of the online
forums & pages discussing this question) but certainly not Stewart or Thomas etc...

10. Feb 20, 2011

### Landau

Spivak singles out the issue and lets you think about it. Then later it is used in a proof, and it won't come as a surprise anymore.
Yes, this might be more instructive. In fact, I agree it is preferable, at least pedagogically. Many people do some estimate and end up with Cε. Then they replace ε everywhere by ε/C only to get ε at the end. I'm not sure why they prefer this over just ending up with Cε at the end.

11. Feb 20, 2011

### disregardthat

Such proofs usually start out with "choose an $$\epsilon > 0$$", so they usually end with "so $$|...|<\epsilon$$". There is nothing fishy about choosing your other epsilons wisely during the proof to finally arrive at this conclusion. If you end with $$|...|<C \epsilon$$ you haven't proven the condition for convergence! ...though you could reconstruct your proof in order to do so. To understand why they make these "crazy" choices of epsilons you merely have to look at how they help you arrive at the wanted conclusion. Proofs in mathematics usually are "reverse engineered", as proofs in Euclidean geometry classically were. It will be instructive to do the proofs independently yourself, to see why the seemingly peculiar choices are completely natural.

Last edited: Feb 20, 2011
12. Feb 20, 2011

### Landau

If that was a reply to me: of course there is nothing fishy, the approaches are trivially equivalent. My point is: why would you desperately want to arrive at <e instead of <Ce? Beauty? Because everyone does? Because it looks more like what the definition says?

13. Feb 20, 2011

### disregardthat

I happened to edit my post giving an answer to this before I saw your reply, but I can repeat. The condition for convergence is that "for any $$\epsilon>0$$ there is a delta such that $$|...|<\epsilon$$ for etc.". If you arrive at $$|...|<C \epsilon$$ you have simply not proven the necessary condition, even though it would be easy to reconstruct your proof in order to do so.

14. Feb 20, 2011

### Landau

That's dull. Just prove the following lemma directly after the definition of limit:

Definition. We say $\lim_{x\to a}f(x)=b$ iff for all $\epsilon>0$ there exists $\delta>0$ such that for all x the implication $|x-a|<\delta\Rightarrow |f(x)-b|<\epsilon$ holds.

Lemma. For any C>0 the following are equivalent:
(i) $\lim_{x\to a}f(x)=b$
(ii) For all $\epsilon>0$ there exists $\delta>0$ such that for all x the implication $|x-a|<\delta\Rightarrow |f(x)-b|<C\epsilon$ holds.

And now we proceed as usual.
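For completeness, a proof sketch of the lemma (my own wording): (i) $\Rightarrow$ (ii): given $\epsilon>0$, apply (i) with $C\epsilon>0$ in place of $\epsilon$. (ii) $\Rightarrow$ (i): given $\epsilon>0$, apply (ii) with $\epsilon/C>0$ in place of $\epsilon$, which yields $|f(x)-b|<C(\epsilon/C)=\epsilon$.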

15. Feb 20, 2011

### disregardthat

Sure, but it hardly seems necessary to include such a trivial lemma in an introductory course in calculus/analysis which already consists of a long obligatory series of theorems. In my opinion it is pedagogically, economically, and aesthetically better to do it in the usual fashion. This is of course, as you surely understand, a matter of taste and opinion, not mathematics.

16. Feb 20, 2011

### Landau

Wait. Just now you said that ending up with Ce instead of e is not valid, because it is not what the definition says. But when I state this explicitly as a lemma, it is trivial after all?
I believe this thread is purely about pedagogy (so indeed not about mathematics). And pedagogy is exactly the reason for singling out this lemma: the student is made aware of the fact that it doesn't matter whether you end up with e or Ce (as the e was arbitrary to begin with), instead of having to find this out for himself.
In practice, everyone who does these proofs ends up with Ce, and then artificially replaces all e's with e/C. One could argue that this is aesthetically preferable. Economically it is pretty much the same. But I don't see how such an artificial replacement, done silently by the author, could be better pedagogically.

17. Feb 20, 2011

### disregardthat

You must agree that it is invalid without a reference to such an explanation or lemma. But as I also said, it can easily be reconstructed to do so, e.g. by reference to such a lemma or an indication of how the reconstruction would go. I never said it was non-trivial.

I consider this a non-issue actually. In the end it's up to the reader to decide what suits him best. We merely differ in personal preferences.

18. Feb 20, 2011

I agree