What Is the Limit Product Rule and Why Is It Confusing?

The discussion centers on the confusion surrounding the limit product rule in calculus, particularly its proof from the epsilon-delta definition. The rule states that if the limits of two functions f(x) and g(x) exist as x approaches c, then the limit of their product is the product of their limits. Participants have difficulty following the steps involving the various epsilon and delta values, especially in proofs from textbooks such as Thomas' Calculus. The conversation highlights the challenge of seeing why specific epsilon values are chosen and how the resulting estimates lead to the conclusion of the proof; it is this epsilon-delta machinery that makes the rule seem confusing.
sponsoredwalk
In trying to prove the limit product rule, I've found that all explanations hit a point where I lose understanding.

1: If \lim_{x \to c} f(x) = L and \lim_{x \to c} g(x) = M,

we define the limit \lim_{x \to c} f(x)g(x) = LM as:

\forall \epsilon > 0 \ \exists \delta > 0 : \forall x, \ 0 < |x - c| < \delta \Rightarrow |f(x)g(x) - LM| < \epsilon



2: Rewrite f(x) = L + (f(x) - L) and g(x) = M + (g(x) - M)

3: Rewrite f(x)g(x) - LM as

[L + (f(x) - L)][M + (g(x) - M)] - LM

= LM + L(g(x) - M) + M(f(x) - L) + (f(x) - L)(g(x) - M) - LM

= L(g(x) - M) + M(f(x) - L) + (f(x) - L)(g(x) - M)

All of this I'm fine with, but the next step in every source I've read confuses me. I'll give the version from Thomas' Calculus.

"Since f & g have limits L & M as x-->c, ∃ positive numbers δ_1, δ_2, δ_3, δ_4 such that ∀ x;

0 \ < \ |x \ - \ c| \ < \delta_1 \Rightarrow \ |f(x) \ - \ L| \ < \ \sqrt{ \frac{ \epsilon }{3} }

0 \ < \ |x \ - \ c| \ < \delta_2 \Rightarrow \ |g(x) \ - \ M| \ < \ \sqrt{ \frac{ \epsilon }{3} }

0 \ < \ |x \ - \ c| \ < \delta_3 \Rightarrow \ |f(x) \ - \ L| \ < \ \sqrt{ \frac{ \epsilon }{3(1 \ + \ |M|} }

0 \ < \ |x \ - \ c| \ < \delta_4 \Rightarrow \ |g(x) \ - \ M| \ < \ \sqrt{ \frac{ \epsilon }{3(1 \ + \ |L|} }

What does this even mean and where does it come from?
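(For orientation, a sketch of what the four conditions above are aimed at: each of the three terms from the split in step 3 will be forced below \epsilon / 3, so that

|L(g(x) - M)| + |M(f(x) - L)| + |(f(x) - L)(g(x) - M)| < \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon,

and the square roots and the 3(1 + |M|), 3(1 + |L|) denominators are chosen precisely to make that work.)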
 
How you find these numbers is perhaps a bit unclear to you. However, if you follow the proof through to the end, you will hopefully see what it does.

So apart from all this magic with the epsilons and deltas, do you agree with the statement? When you look at the definition of the limit, you probably will. For example, since f(x) has limit L, I can always make |f(x) - L| as small as I want. In symbols,
<br /> \ \forall \ \epsilon&#039; \ &gt;\ 0 \ \exists \ \delta&#039; &gt; 0 \ : \ \forall \ x \ \rightarrow \ 0\ &lt; \ | \ x \ - \ c \ | &lt; \delta&#039; \ \Rightarrow \ 0 \ &lt; \ | \ f(x) \ - \ L \ | \ &lt; \ \epsilon&#039; <br /> (*)
Now what the proof does is simply pick two such \epsilon' (namely \sqrt{\epsilon / 3} and \epsilon / (3(1 + |M|)), where \epsilon and M are given numbers); then the existence of the limit ensures that I can find values of \delta' for which (*) is true.

The proof then probably goes on to pick the smallest delta of the four, such that all four estimates hold simultaneously.
Then you can plug all those estimates into
<br /> f(x) g(x) - L M = L(g(x) \ - \ M) \ + M(f(x) \ - \ L) \ + \ (f(x) \ - \ L)( g(x) \ - \ M) <br />
and show that it is smaller than \epsilon.
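In symbols, a sketch of that last step: with

\delta = \min\{\delta_1, \delta_2, \delta_3, \delta_4\},

every x with 0 < |x - c| < \delta satisfies all four of the quoted estimates at once, so each of the three terms on the right can be bounded separately and the bounds added up.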
 
Hey, thanks for the reply. Well, I did try to follow the proof forwards, but not only does that crazy value for epsilon scare the **** out of me, I just get confused. I'll show you where:

I should have written a bit more; the absolute value in the original equation can be bounded as follows:

|f(x)g(x) - LM| \le |L(g(x) - M) + M(f(x) - L) + (f(x) - L)(g(x) - M)|

\le |L|\,|g(x) - M| + |M|\,|f(x) - L| + |f(x) - L|\,|g(x) - M|

but then my book goes off and writes the following, and I have no idea where it came from, why you'd do it, or how you'd figure out that this is what to do.

\le (1 + |L|)\,|g(x) - M| + (1 + |M|)\,|f(x) - L| + |f(x) - L|\,|g(x) - M|

Then this becomes < \frac{\epsilon}{3} + \frac{\epsilon}{3} + \sqrt{\frac{\epsilon}{3}} \sqrt{\frac{\epsilon}{3}}

And I'm lost
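For reference, here is the chain the book is building, written out as a sketch in its own notation. Since |L| \le 1 + |L| and |M| \le 1 + |M|, replacing |L| and |M| by 1 + |L| and 1 + |M| can only make the right-hand side larger, and it guarantees the denominators 3(1 + |M|) and 3(1 + |L|) are never zero even when L = 0 or M = 0. Then, for x within the smallest of the four deltas,

(1 + |L|)\,|g(x) - M| < (1 + |L|)\,\frac{\epsilon}{3(1 + |L|)} = \frac{\epsilon}{3}

(1 + |M|)\,|f(x) - L| < (1 + |M|)\,\frac{\epsilon}{3(1 + |M|)} = \frac{\epsilon}{3}

|f(x) - L|\,|g(x) - M| < \sqrt{\frac{\epsilon}{3}}\,\sqrt{\frac{\epsilon}{3}} = \frac{\epsilon}{3}

so the whole sum is < \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon, which is exactly the line quoted above.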
 
