Proof of Limit Law: Proving LM = \lim_{x \to a}f(x)g(x)

In summary, the proof shows that if the limits of two functions f(x) and g(x) as x approaches a are L and M respectively, then the limit of their product is LM. The argument combines the triangle inequality with carefully chosen tolerances (the starred inequalities below) so that the two resulting terms each come out below ε/2. The stars merely flag these choices: the limit definition holds for every positive tolerance, so any particular positive value may be substituted.
  • #1
bjgawp
Edit: Whoops. Was intending to post this in the homework forum but accidentally didn't...

Question: If [tex]\lim_{x \to a} f(x) = L[/tex] and [tex]\lim_{x \to a} g(x) = M[/tex], then [tex]\lim_{x \to a} (f(x)g(x)) = LM[/tex].

Proof from James Stewart's text:

Let [tex]\epsilon > 0[/tex]. We want to find [tex]\delta > 0[/tex] such that [tex]|f(x)g(x) - LM| < \epsilon[/tex] whenever [tex]0 < |x - a| < \delta[/tex].

[tex]\left| f(x)g(x) - LM \right|[/tex]

[tex]= \left| f(x)g(x) - Lg(x) + Lg(x) - LM \right|[/tex]

[tex]= \left|\left[f(x) - L\right]g(x) + L\left[g(x) - M\right]\right|[/tex]

[tex]\leq \left|\left[f(x) - L\right]\left(g(x)\right)\right| + \left|\left(L\right)\left[g(x) - M\right]\right|[/tex] (Triangle inequality)

[tex]= \left|f(x) - L\right|\left|g(x)\right| + \left|L\right|\left|g(x) - M\right|[/tex]

We want to make each of these terms less than [tex]\frac{\epsilon}{2}[/tex].

Since [tex]\lim_{x \to a} g(x) = M[/tex], there is a number [tex]\delta_{1} > 0[/tex] such that: [tex]|g(x) - M| < \frac{\epsilon}{2\left(1 + |L|\right)}[/tex]* whenever [tex]0 < |x - a| < \delta_{1}[/tex].

Also, since [tex]\lim_{x \to a} g(x) = M[/tex] (now applied with tolerance 1), there is [tex]\delta_{2}>0[/tex] such that if [tex]0 < |x - a| < \delta_{2}[/tex], then [tex]|g(x) - M| < 1[/tex] and therefore:

[tex]\left|g(x)\right| = \left|g(x) - M + M\right| \hspace{4mm} \leq \hspace{4mm} \left|g(x) - M\right| + \left|M\right| \hspace{4mm}< \hspace{4mm} 1 + \left|M\right|[/tex]

Since [tex]\lim_{x \to a}f(x) = L[/tex], there is a number [tex]\delta_{3}>0[/tex] such that: [tex]\left|f(x) - L\right|<\frac{\epsilon}{2\left(1+|M|\right)}[/tex]* whenever [tex]0 < |x - a| < \delta_{3}[/tex].

Let [tex]\delta = \min\left\{\delta_{1},\delta_{2},\delta_{3}\right\}[/tex]. If [tex]0 < |x - a| < \delta[/tex],

then we have [tex]0 < |x-a| <\delta_{1}, \hspace{4mm} 0 < | x-a| < \delta_{2}, \hspace{4mm} 0 <|x-a|<\delta_{3}[/tex]

so we can combine the inequalities to obtain:

[tex]\left|f(x)g(x) - LM\right|[/tex]

[tex] \leq \hspace{4mm}\left|f(x)-L\right| \left|g(x)\right| \hspace{4mm}+ \hspace{4mm}\left|L\right|\left|g(x)-M\right|[/tex]

[tex]< \frac{\epsilon}{2\left(1 + |M|\right)} \left(1 + |M|\right) + |L| \frac{\epsilon}{2\left(1 + |L|\right)}[/tex]

[tex]< \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon[/tex]

Therefore, [tex]\lim_{x \to a} f(x)g(x) = LM[/tex].
Problem: I don't understand exactly how they made the statements with the *s. Where did these inequalities come from, and how can they be asserted? Also, I'm kind of iffy on why we need three [tex]\delta[/tex]s ... Any help would be appreciated!
 
  • #2
The statements with * are certainly true in the form |g(x) - M| < e* (or |f(x) - L| < e*) for arbitrary e* > 0: the limit definition hands you a suitable delta for *every* positive tolerance.

Now take e* = e/denominator, where "denominator" is any positive quantity, and the statement is still true!
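
Concretely (spelling out the idea above in the thread's notation; the wording is mine): to get the first starred inequality, apply the definition of [tex]\lim_{x \to a} g(x) = M[/tex] with the tolerance

[tex]\epsilon^* = \frac{\epsilon}{2\left(1 + |L|\right)} > 0[/tex]

This is always legitimate because [tex]1 + |L| > 0[/tex] even when [tex]L = 0[/tex]; that is exactly why the denominator is [tex]2(1 + |L|)[/tex] rather than [tex]2|L|[/tex]. The definition then returns the [tex]\delta_1[/tex] used in Stewart's proof, and the second starred inequality arises the same way with [tex]\epsilon^* = \frac{\epsilon}{2(1 + |M|)}[/tex].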
 
  • #3
If [tex]f(x)[/tex] is defined on some open interval containing [tex]a[/tex], except possibly at [tex]a[/tex], and likewise [tex]g(x)[/tex], then the product function [tex]f(x)g(x)[/tex] is defined on some open interval containing [tex]a[/tex], except possibly at [tex]a[/tex].
So it makes sense to talk about [tex]\lim_{x\to a}f(x)g(x)[/tex].

Instead of doing it your way I will do it a different way because it is more elegant. First we make an important observation: [tex]f(x)[/tex] is bounded on some open interval containing [tex]a[/tex] (except possibly at [tex]a[/tex]). Since [tex]\lim_{x\to a}f(x) = L[/tex], there is [tex]\delta_1 > 0[/tex] with [tex]|f(x) - L| < 1[/tex] for [tex]0<|x-a|<\delta_1[/tex]. So [tex]\left||f(x)| - |L|\right| \leq |f(x)-L| < 1[/tex], so [tex]|f(x)|-|L| < 1[/tex], so [tex]|f(x)| < 1 + |L|[/tex] for [tex]0<|x-a|<\delta_1[/tex]. This establishes the claim.

Now [tex]|f(x)g(x) - LM| = |f(x)g(x) - Mf(x)+Mf(x) - LM| \leq |f(x)||g(x)-M|+|M||f(x)-L|[/tex].
There exist [tex]\delta_2 > 0[/tex] and [tex]\delta_3 > 0[/tex] so that [tex]|g(x) - M|< \epsilon[/tex] for [tex]0<|x-a|<\delta_2[/tex] and [tex]|f(x) - L|<\epsilon[/tex] for [tex]0<|x-a|<\delta_3[/tex]. If [tex]\delta_4 = \min (\delta_2,\delta_3)[/tex] then [tex]|f(x)||g(x)-M|+|M||f(x)-L| < |f(x)|\epsilon + |M|\epsilon[/tex], and if [tex]\delta_5 = \min (\delta_1,\delta_4)[/tex] then [tex]|f(x)|\epsilon + |M|\epsilon \leq (1+|L|)\epsilon + |M|\epsilon[/tex] for [tex]0<|x-a|<\delta_5[/tex].
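
To close the argument (one way to finish; the post above leaves this step implicit): the bound [tex](1+|L|+|M|)\epsilon[/tex] with [tex]\epsilon[/tex] arbitrary already gives the limit, since for any target tolerance [tex]\epsilon_0 > 0[/tex] we may run the same argument with

[tex]\epsilon = \frac{\epsilon_0}{1+|L|+|M|}[/tex]

and conclude [tex]|f(x)g(x) - LM| < \epsilon_0[/tex] whenever [tex]0<|x-a|<\delta_5[/tex].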
 
  • #4
Good question.
I actually remember not getting this either.

I THINK what he's doing is selecting a certain epsilon less than e/2 on purpose, and obviously putting |L| + 1 or |M| + 1 in the denominator will make the terms less than e/2 because he knows the end result. Just like he chose the tolerance of 1 to get |g - M| < 1.

However, I would very much like to know if this is correct, because this proof sounds incomplete to me.
 
  • #5
Maybe a late reaction but if you're really interested in maths it doesn't matter :)

I don't see any errors in this proof. Indeed, he selects a certain epsilon on purpose to obtain epsilon/2 in the end to make the proof more elegant.

Greetings, Foxbox
 
  • #6
I always skipped this proof precisely because of the [itex]\frac{ \epsilon }{2(|M| + \epsilon) } [/itex]
terms in the proofs, I mean they are too strange to be intuited or just
thought up. Well I searched for ages today to find an explanation but
nowhere is it to be found, not one book I checked mentioned it, including
all of the classics! Well I thought about it & this is what I came up with,
I think this is correct - please correct me if I'm wrong.

Theorem: If [itex] \lim_{x \to a} f(x) \ = \ L[/itex] & [itex] \lim_{x \to a} g(x) \ = \ M[/itex] then [itex] \lim_{x \to a} f(x)g(x) \ = \ L M[/itex].

Proof: If [itex] \lim_{x \to a} f(x) \ = \ L[/itex]

then [itex] 0 \ < \ |x \ - \ a| \ < \ \delta_1 \ \Rightarrow \ |f(x) \ - \ L| < \ \epsilon_1[/itex]

so by the triangle inequality

[itex] |f(x)| \ = \ |f(x) \ - \ L \ + \ L| \ \le \ |f(x) \ - \ L| \ + \ |L| \ < \ \epsilon_1 \ + \ |L|[/itex]

which leads to [itex] |f(x)| \ < \ |L| \ + \ \epsilon_1[/itex].

The same process for [itex] 0 \ < \ |x \ - \ a| \ < \ \delta_2 \ \Rightarrow \ |g(x) \ - \ M| \ < \ \epsilon_2[/itex]

derives [itex] |g(x)| \ < \ |M| \ + \ \epsilon_2[/itex]

Now, using [itex] |f(x) \ - \ L| \ < \ \epsilon_1[/itex] we multiply through by [itex]|g(x)|[/itex]

to get [itex] |g(x)||f(x) \ - \ L| \ < \ |g(x)| \epsilon_1[/itex]

and I think you see that:

[itex] |g(x)||f(x) \ - \ L| < \ (|M| \ + \ \epsilon_2)|f(x) \ - \ L| < \ (|M| \ + \ \epsilon_2) \epsilon_1[/itex]

Just to clean things up let's pick our [itex]\epsilon_1[/itex] such that

[itex] (|M| \ + \ \epsilon_2) \epsilon_1 \ < \ \frac{ \epsilon}{2}[/itex]

so we have [itex] (|M| \ + \ \epsilon_2)|f(x) \ - \ L| < \ \frac{ \epsilon}{2}[/itex]

from which comes [itex] | f(x) \ - \ L| < \ \frac{ \epsilon}{2(|M| \ + \ \epsilon_2)}[/itex].

The same process, only multiplying [itex]|g(x) \ - \ M|[/itex] by [itex]|f(x)|[/itex], derives [itex] |g(x) \ - \ M| \ < \ \frac{ \epsilon}{2(|L| \ + \ \epsilon_1)}[/itex].

So that's how you derive those strange terms in the proofs!

When the proof tells you to set δ = min{δ₁,δ₂,δ₃} it's just convenient
notation telling you to pick the smallest δ in that set so that all of the
|x - a| < δ ⇒ ... things are satisfied.
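
A concrete illustration (numbers invented for this example): if the three requirements produced δ₁ = 0.1, δ₂ = 0.05 and δ₃ = 0.2, then δ = min{0.1, 0.05, 0.2} = 0.05, and every x with 0 < |x - a| < 0.05 automatically satisfies all three earlier conditions at once. That is the only reason three deltas appear: each limit hypothesis supplies its own delta, and taking the minimum makes all of the bounds hold simultaneously.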

|(fg)(x) - LM| = |f(x)g(x) - Lg(x) + Lg(x) - LM| ≤ |f(x) - L||g(x)| + |g(x) - M||L|

so
[itex] |(fg)(x) \ - \ LM| \ \le \ |f(x) \ - \ L||g(x)| \ + \ |g(x) \ - \ M||L| \ < \ \frac{ \epsilon}{2(|M| \ + \ \epsilon_2)} \cdot (|M| \ + \ \epsilon_2) \ + \ \frac{ \epsilon}{2(|L| \ + \ \epsilon_1)} \cdot |L| [/itex]

Notice the (|L| + ε₁) in the denominator and the |L| in the numerator!

Certainly [itex] \frac{ \epsilon}{2(|L| \ + \ \epsilon_1)} \ \cdot \ |L| [/itex] is less than [itex] \frac{ \epsilon}{2(|L| \ + \ \epsilon_1)} \ \cdot \ (|L| \ + \ \epsilon_1)[/itex]

by hypothesis, so we can just use that larger term so that the factors cancel,
while also encompassing all the evil that came before it, thereby giving a
final answer of ε and showing that our limit holds! Note that we could have
relied on the theorem that shows a convergent sequence is always
bounded to make things even prettier & quicker.
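
For reference (stating the alluded-to fact in function-limit form, which is what is actually used here): if [itex] \lim_{x \to a} f(x) \ = \ L[/itex] exists, then f is bounded near a; that is, there is a [itex]\delta > 0[/itex] with [itex]|f(x)| \ < \ |L| \ + \ 1[/itex] for [itex]0 \ < \ |x \ - \ a| \ < \ \delta[/itex]. This is exactly the boundedness established in post #3 above.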
 
  • #7
sponsoredwalk said:
I always skipped this proof precisely because of the [itex]\frac{ \epsilon }{2(|M| + \epsilon) } [/itex]
terms in the proofs, I mean they are too strange to be intuited or just
thought up. Well I searched for ages today to find an explanation but
nowhere is it to be found, not one book I checked mentioned it, including
all of the classics!

In my limited experience, things like this are reverse engineered.
 
  • #8
It's exactly like Robert says. That's why Spivak is such a great author: he explains things like these. Exercise 1.21 in the very first chapter (while chapter 5 is about limits) is about this. You want to prove that if x is close to x0 and y is close to y0, then xy is close to x0y0. Try to do this yourself, i.e. for every epsilon>0 try to find delta1>0 and delta2>0 such that if |x-x0|<delta1 and |y-y0|<delta2 then |xy-x0y0|<epsilon.
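
One standard opening move for that exercise (a hint in my words, not Spivak's): split and bound,

[tex]|xy - x_0y_0| = |x(y - y_0) + y_0(x - x_0)| \leq |x||y - y_0| + |y_0||x - x_0|[/tex]

then secure [tex]|x| < |x_0| + 1[/tex] by insisting [tex]\delta_1 \leq 1[/tex], and finally make each of the two terms smaller than [tex]\epsilon/2[/tex], exactly as in the proofs above.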
 
  • #9
I don't see what's any different about Spivak compared to other authors, he also gives,
without explanation, that crazy fraction that was bothering me & the OP.

If we're going to take it on faith that this magic fraction works why not just bypass this
strange thing & just use any epsilon? Courant does this and ends up with (M + |L|)ε &
does not give any crazy requirements to be memorized. Since it can be proven that
|f(x) - L| < Cε satisfies the limit requirements I see no reason for nearly every author to
give the same crazy fraction in the proofs in their books unless they are going to motivate
it - especially if it's used in something like Stewart! Also Rudin's proof of this is far superior
to being given some crazy fraction and told it just works, similarly Apostol's is also more
instructive (if not very good in my opinion).

It just seems to me the mark of a very poor pedagogue to give students new to a
subject proofs that rely on heavy machinery not adequately developed in the text -
especially when there are easier & far more instructive proofs that could have been used.
Spivak can be forgiven because I think you are expected to rederive that term yourself
(though I seriously wonder how many people actually do from reading all of the online
forums & pages discussing this question) but certainly not Stewart or Thomas etc...
 
  • #10
sponsoredwalk said:
I don't see what's any different about Spivak compared to other authors, he also gives,
without explanation, that crazy fraction that was bothering me & the OP.
Spivak singles out the issue and lets you think about it. Then later it is used in a proof, and it won't come as a surprise anymore.
If we're going to take it on faith that this magic fraction works why not just bypass this
strange thing & just use any epsilon? Courant does this and ends up with (M + |L|)ε &
does not give any crazy requirements to be memorized. Since it can be proven that
|f(x) - L| < Cε satisfies the limit requirements I see no reason for nearly every author to
give the same crazy fraction in the proofs in their books
Yes, this might be more instructive. In fact, I agree it is preferable, at least pedagogically. Many people do some estimate and end up with Cε; then they go back and replace ε everywhere by ε/C, only to get exactly ε at the end. I'm not sure why they prefer this over just ending up with Cε.
 
  • #11
Such proofs usually start out with "choose an [tex]\epsilon > 0[/tex]", so they usually end with "so [tex]|...|<\epsilon[/tex]". There is nothing fishy about choosing your other epsilons wisely during the proof to finally arrive at this conclusion. If you end with [tex]|...|<C \epsilon[/tex] you haven't proven the condition for convergence! ..though you could reconstruct your proof in order to do so. To understand why they make these "crazy" choices of epsilons, you must merely look at how they help you arrive at the wanted conclusion. Proofs in mathematics usually are "reverse engineered", as proofs in Euclidean geometry classically were. It will be instructive to do the proofs independently yourself to see why the seemingly peculiar choices are completely natural.
 
  • #12
If that was a reply to me: of course there is nothing fishy, the approaches are trivially equivalent. My point is: why would you desperately want to arrive at <e instead of <Ce? Beauty? Because everyone does? Because it looks more like what the definition says?
 
  • #13
Landau said:
If that was a reply to me: of course there is nothing fishy, the approaches are trivially equivalent. My point is: why would you desperately want to arrive at <e instead of <Ce? Beauty? Because everyone does? Because it looks more like what the definition says?

I happened to edit my post giving an answer to this before I saw your reply, but I can repeat. The condition for convergence is that "for any [tex]\epsilon>0[/tex] there is a delta such that [tex]|...|<\epsilon[/tex]" for etc. If you arrive at [tex]|...|<C \epsilon[/tex] you have simply not proven the necessary condition, even though it would be easy to reconstruct your proof in order to do so.
 
  • #14
That's dull. Just prove the following lemma directly after the definition of limit:

Definition. We say [itex]\lim_{x\to a}f(x)=b[/itex] iff for all [itex]\epsilon>0[/itex] there exists [itex]\delta>0[/itex] such that for all x the implication [itex]0<|x-a|<\delta\Rightarrow |f(x)-b|<\epsilon[/itex] holds.

Lemma. For any C>0 the following are equivalent:
(i) [itex]\lim_{x\to a}f(x)=b[/itex]
(ii) For all [itex]\epsilon>0[/itex] there exists [itex]\delta>0[/itex] such that for all x the implication [itex]0<|x-a|<\delta\Rightarrow |f(x)-b|<C\epsilon[/itex] holds.
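
(Proof sketch, which the post leaves implicit: (i) [itex]\Rightarrow[/itex] (ii) is immediate since [itex]C\epsilon > 0[/itex] whenever [itex]\epsilon > 0[/itex]; for (ii) [itex]\Rightarrow[/itex] (i), given [itex]\epsilon > 0[/itex] apply (ii) with [itex]\epsilon/C[/itex] in place of [itex]\epsilon[/itex] to get [itex]|f(x)-b| < C(\epsilon/C) = \epsilon[/itex].)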

And now we proceed as usual.
 
  • #15
Sure, but it hardly seems necessary to implement such a trivial lemma in an introductory course in calculus/analysis which already consists of a long obligatory series of theorems. In my opinion it is both pedagogically, economically and aesthetically better to do it in its usual fashion. This is of course as you surely understand a matter of taste and opinion, not mathematics.
 
  • #16
Jarle said:
Sure, but it hardly seems necessary to implement such a trivial lemma
Wait. Just now you said that ending up with Ce instead of e is not valid because that is not what the definition says. But when I state this explicitly as a lemma, it is trivial after all?
In my opinion it is both pedagogically, economically and aesthetically better to do it in its usual fashion. This is of course as you surely understand a matter of taste and opinion, not mathematics.
I believe this thread is purely about pedagogy (so indeed not about mathematics). And pedagogy is exactly the reason for singling out this lemma: the student is made aware of the fact that it doesn't matter whether you end up with e or Ce (as the e was arbitrary to begin with), instead of having to find this out for himself.
In practice, everyone who does these proofs ends up with Ce, and then artificially replaces all e's with e/C. One could argue that this is aesthetically preferable. Economically it is pretty much the same. But I don't see how such an artificial replacement secretly done by the author could be better pedagogically.
 
  • #17
Landau said:
Wait. Just now you said that ending up with Ce instead of e is not valid because that is what the definition says. But when I state this explicitly as a lemma, it is trivial after all?

You must agree that it is invalid without a reference to such an explanation or lemma. But as I also said, it can easily be reconstructed to do so, e.g. by reference to such a lemma, or by indicating how it can be reconstructed. I never said it was non-trivial.

I consider this a non-issue actually. In the end it's up to the reader to decide what suits him best. We merely differ in personal preferences.
 
  • #18
Jarle said:
I consider this a non-issue actually. In the end it's up to the reader to decide what suits him best. We merely differ in personal preferences.
I agree :cool:
 

What is the definition of the limit law?

The product limit law states that the limit of a product is equal to the product of the limits, provided both individual limits exist.
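
In symbols, matching the statement at the top of the thread:

[tex]\lim_{x \to a} \left(f(x)g(x)\right) = \left(\lim_{x \to a} f(x)\right)\left(\lim_{x \to a} g(x)\right) = LM[/tex]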

How is the limit law used in proving LM = \lim_{x \to a}f(x)g(x)?

The statement LM = \lim_{x \to a}f(x)g(x) is itself the product limit law, so the proof cannot assume it. Instead, the epsilon-delta argument above derives it directly from the two hypotheses \lim_{x \to a} f(x) = L and \lim_{x \to a} g(x) = M. Once proven, the law lets you evaluate the limit of a product by evaluating the limits of the two factors separately.
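
For example (a routine application, not taken from the thread): [tex]\lim_{x \to 2} \left(x \cdot x^2\right) = \left(\lim_{x \to 2} x\right)\left(\lim_{x \to 2} x^2\right) = 2 \cdot 4 = 8[/tex].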

What are the conditions for using the limit law?

Both individual limits, \lim_{x \to a} f(x) = L and \lim_{x \to a} g(x) = M, must exist as finite numbers; when they do, no indeterminate form such as 0 × ∞ can arise. This particular law covers products; sums, differences, and quotients have their own analogous laws (the quotient law additionally requires M ≠ 0).

Can the limit law be used for functions that are not continuous?

Yes, the limit law can still be used as long as the individual limits exist. Continuity is not a requirement for using the limit law.
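
For instance (an illustration of my own choosing): with [tex]f(x) = g(x) = \frac{x^2 - 1}{x - 1}[/tex] and [tex]a = 1[/tex], neither function is even defined at 1, yet both limits equal 2, so the product law gives [tex]\lim_{x \to 1} f(x)g(x) = 2 \cdot 2 = 4[/tex].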

Can the limit law be used for more than two functions?

Yes, the limit law can be extended to any finite number of functions by applying the two-function case repeatedly, as long as all of the individual limits exist.
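
For example (a standard consequence): repeated application gives [tex]\lim_{x \to a} x^n = \left(\lim_{x \to a} x\right)^n = a^n[/tex] for every positive integer n.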
