# Epsilon-Delta Proof Of Limits

1. Jul 19, 2010

### razegfx

1. The problem statement, all variables and given/known data
Prove the product rule for limits in terms of epsilon-delta: if lim f(x) = L and lim g(x) = M as x -> a, then lim f(x)g(x) = LM.

2. Relevant equations
See below.

3. The attempt at a solution
I am familiar with a slightly different proof, and was wondering if this was also valid, even if not as eloquent. My thoughts were that because I could express epsilon in terms of only other constants, it could work. But alas, I am new to the idea of these proofs and have doubts about what I did. My work is pictured below.

http://img833.imageshack.us/img833/4452/img7308.jpg [Broken]
^^ just click on the link. I apologize, but I couldn't resize and if I posted the actual image it would've distorted the thread appearance. Thank you in advance for any help.

Last edited by a moderator: May 4, 2017
2. Jul 19, 2010

### Raskolnikov

I'm not sure your last two lines work as a proof, i.e., I don't think |f(x)g(x) - LM| < E( E + M + L ) is sufficient proof for |f(x)g(x) - LM| < E.

3. Jul 19, 2010

### razegfx

Thank you for your input - that was certainly the part I was unsure about. If you could elaborate on why exactly you think it isn't sufficient I'd be even more grateful.

My thoughts were this: because it was given that |f(x) - L| and |g(x) - M| were individually bounded by some E, and I was able to express their product using the triangle inequality, it followed that I could bound |f(x)g(x) - LM| accordingly. The one major difference from the other proof I had seen was its use of 4 separate delta terms, as opposed to my 2. I figured that having 2 would be sufficient to express the solution, but what do I know? haha

Again, thanks!

4. Jul 19, 2010

### snipez90

Well I think Raskolnikov is simply stating the fact that in the original proof you can't just say |f(x)g(x) - LM| < E when you have |f(x)g(x) - LM| < E( E + M + L ). Hypothetically, if M = L = 1 and E = 1, then you don't necessarily have |f(x)g(x) - LM| < E.

Fortunately, this is really not a big issue. In basic analysis it's nice to have the < E bound at the end, but it's certainly not necessary to do that. People who are good at working with the constants in an analysis problem can do that, but it usually doesn't matter as long as you can get the epsilon factor in the final bound.

In this case, simply letting E be arbitrarily close to 0 will do the trick, since E(E + M + L) can be made arbitrarily close to 0, so that f(x)g(x) and ML are arbitrarily close. However, to be more rigorous about this requires another simple limit property, so I will dispense with this and try to fix up your proof to get the < E.

One thing you can try without using a different approach is to start modifying at the line = |g(x) - M|*|f(x) - L| + |M|*|f(x) - L| + |L|*|g(x) - M|. We want this to be less than epsilon, but we have a sum of 3 terms, the first of which will get us an E^2 factor, the second an E, and the third an E. Since E can be any positive number, we can replace it with a fraction of E (e.g. E/4) or the square root of some positive multiple of E. This should give you some indication of how to make the sum < E.

The larger issue is that the first term in the sum does not have a factor of |M| or |L|, while the subsequent terms have one of each. One thing to consider here is requiring, say, |f(x) - L| to be less than the minimum of multiple numbers, to ensure that multiple inequalities are satisfied at once. I don't want to give too much away, but I'd be happy to clarify provided you have made further attempts.
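As a sketch of one possible way the splitting could be arranged (the constants here are illustrative, my own choice rather than the only one; the +1 in the denominators is there to avoid dividing by zero when M or L vanishes): require, for 0 < |x-a| < delta,

```latex
\begin{aligned}
&|f(x)-L| < \min\Big\{\sqrt{\tfrac{E}{3}},\; \tfrac{E}{3(|M|+1)}\Big\},
\qquad
|g(x)-M| < \min\Big\{\sqrt{\tfrac{E}{3}},\; \tfrac{E}{3(|L|+1)}\Big\}, \\[4pt]
&\text{so that}\quad
|g(x)-M|\,|f(x)-L| \;+\; |M|\,|f(x)-L| \;+\; |L|\,|g(x)-M|
\;<\; \tfrac{E}{3} + \tfrac{E}{3} + \tfrac{E}{3} \;=\; E.
\end{aligned}
```

Any positive fractions of E that sum to at most E would work equally well; E/3 three times is just one tidy choice.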

5. Jul 19, 2010

### Raskolnikov

If (E + L + M) > 1, then E < E( E + M + L ). So just because |f(x)g(x) - LM| < E( E + M + L ) doesn't necessarily mean it's also less than E itself. I'm just reading line by line. Of course your end result is right, but idk if the end of your proof is sound.

There's a somewhat pretty proof I remember that only needs 2 deltas as well. It's quite similar to yours, but it has a few nifty parts.

Note that lim[f(x) - L] = lim[f(x)] - lim[L] = L - L = 0. A similar observation holds for g(x) and its limit K.

Let E > 0. Then there exists d1>0 and d2>0 with:
|f(x) - L - 0| < sqrt(E)...whenever 0 < |x-a| < d1.
|g(x) - K - 0| < sqrt(E)...whenever 0 < |x-a| < d2.

Choose d = min{d1,d2}. If 0 < |x-a| < d, then

| [f(x) - L - 0] * [g(x) - K - 0] |
= |[f(x) - L][g(x) - K]|
= |f(x) - L| * |g(x) - K|
< sqrt(E) * sqrt(E)
= E.

In other words, we've proved

lim( [f(x) - L]*[g(x) - K] ) = 0. We'll use this in a bit.

But we also have

[f(x) - L]*[g(x) - K] = fg - Lg - Kf + LK

EDIT: I realized I gave away too much. I'll cut it off here. Hopefully it's helpful ^^

Last edited: Jul 19, 2010
6. Jul 19, 2010

### snipez90

If you really wanted to avoid the L, M business, it is fine to just break off the rest of the proof after writing |f(x)g(x) - LM| = |g(x)[f(x) - L] + L[g(x) - M]|. You still need to use an estimate involving the minimum of two numbers, but this is probably less work overall than the approach you took. To reiterate, requiring say |f(x) - L| < min(a,b), where a > 0 and b > 0, is allowed because the min is positive, and epsilon can be any positive number. It's profitable because you basically get two bounds for the price of one, since |f(x) - L| < min(a,b) simply means |f(x) - L| < a and |f(x) - L| < b, and a and b can be arbitrary positive numbers.
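Here is a sketch of how that estimate typically plays out, with one standard (illustrative) choice of constants; the +1 in the denominators again sidesteps division by zero:

```latex
\begin{aligned}
&\text{Pick } \delta_1 \text{ so that } 0<|x-a|<\delta_1 \implies
|g(x)-M| < \min\Big\{1,\; \tfrac{E}{2(|L|+1)}\Big\},
\text{ which also gives } |g(x)| < |M|+1. \\[4pt]
&\text{Pick } \delta_2 \text{ so that } |f(x)-L| < \tfrac{E}{2(|M|+1)}.
\text{ Then for } 0<|x-a|<\min\{\delta_1,\delta_2\}: \\[4pt]
&\quad |f(x)g(x)-LM| \;\le\; |g(x)|\,|f(x)-L| + |L|\,|g(x)-M|
\;<\; (|M|+1)\cdot\tfrac{E}{2(|M|+1)} + |L|\cdot\tfrac{E}{2(|L|+1)}
\;<\; \tfrac{E}{2} + \tfrac{E}{2} \;=\; E.
\end{aligned}
```

The min with 1 is the "two bounds for the price of one" idea: the same delta_1 both bounds |g(x)| near M and makes |g(x) - M| small.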

However, it's still instructive to finish the proof as you wrote it and look at Raskolnikov's argument. Knowing what type of estimates are available to you at any stage of analysis is obviously important.

7. Jul 20, 2010

### razegfx

Thank you both very much! These were very helpful posts, and I can certainly see the flaw in my final conclusion now. I'll try to shore up my explanation and make sure I understand everything perfectly. Raskolnikov, I'll be sure to look at that eloquent argument you posted up there. Your patience is appreciated; I have yet to take any calculus courses (Calc-1 in the fall, though!) so all the help is great!

8. Jul 20, 2010

### Raskolnikov

No problemo. And it's not all that elegant. I just liked it because it works and I actually remember it.

You're well on your way then. I remember my teacher in HS skipped over the epsilon-delta definition of a limit because it was too complicated for him haha. Instead, he just told us to trust him when he says it works. Needless to say, I didn't pay much attention in his classes anymore.

9. Jul 20, 2010

### razegfx

haha. I hope that my calc professor at least attempts to go over this stuff.

http://i25.tinypic.com/302xpj8.jpg
^^ would something like this work? and is this what you were referring to in the quoted passage?

I had another related question: (E^2 + |M|E + |L|E) can be factored as E(E + |M| + |L|). Would it be valid to interpret the terms within the parentheses as constants, so that everything within the parentheses constitutes one fixed term, say k = (E + |M| + |L|)? In other words, could I treat k as a constant such that |f(x)g(x) - LM| < kE? Thanks again!

Last edited by a moderator: Apr 25, 2017
10. Jul 20, 2010

### snipez90

Very good. Now you can simply choose delta to be the minimum of those 4 subscripted deltas, so that all 4 of those inequalities are satisfied if 0 < |x-a| < delta. The way I had it in mind was that given E > 0, min{E/(4|M|), sqrt(E/2)} is also > 0, so we can find a delta_1 such that 0 < |x-a| < delta_1 implies |f(x) - L| < min{E/(4|M|), sqrt(E/2)}, so the first two inequalities are satisfied if 0 < |x-a| < delta_1. Of course both methods achieve the same end in keeping with the definition of the limit.

*EDIT* We should be careful about the case where M = 0 or L = 0, since then we would have division by 0. This consideration comes up a lot in analysis, but each time the fix is to consider a much simpler separate case to patch the proof.
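Spelled out with those constants, together with the symmetric requirement |g(x) - M| < min{E/(4|L|), sqrt(E/2)} (and assuming M and L are nonzero, per the edit above), the three terms add up as:

```latex
\begin{aligned}
|g(x)-M|\,|f(x)-L| &< \sqrt{\tfrac{E}{2}}\cdot\sqrt{\tfrac{E}{2}} = \tfrac{E}{2},\\
|M|\,|f(x)-L| &< |M|\cdot\tfrac{E}{4|M|} = \tfrac{E}{4},\\
|L|\,|g(x)-M| &< |L|\cdot\tfrac{E}{4|L|} = \tfrac{E}{4},
\end{aligned}
\qquad\text{so the sum is } < \tfrac{E}{2}+\tfrac{E}{4}+\tfrac{E}{4} = E.
```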

Good question. If you set k = (E + |M| + |L|), then k is really a function of epsilon, taking on the value k(E) for a given epsilon, so it's not really a constant. Since a full solution has already been obtained, I'll elaborate on how you might finish without requiring the < E bound.

I should mention first that my original thought had an element of circular reasoning. I argued that if E is arbitrarily close to 0 then E(E + |M| + |L|) could be made arbitrarily close to 0, but since we have a product, this kind of reasoning sort of depends on the very theorem we're attempting to prove.

One way to resolve this is to use the fact that if two functions f and g satisfy $f(x) \leq g(x)$ for x near a, and both limits at a exist, then lim f(x) $\leq$ lim g(x) (note that the conclusion requires a weak, i.e. non-strict, inequality). The proof of this is arguably of the same difficulty as the product theorem, but I think it's easier. We can then apply this by taking the limit as x -> a of both sides of |f(x)g(x) - LM| < E(E + |M| + |L|), replacing the strict inequality with its weak version. But the right-hand side does not depend on x (just E), so we have $\lim_{x \rightarrow a} |f(x)g(x) - LM| \leq E(E + |M| + |L|)$. Finally, one can argue that the only nonnegative number that is less than or equal to E(E + |M| + |L|) for every epsilon is 0 (a proof by contradiction works). Another way would be to take yet another limit of both sides of the last inequality, this time letting E -> 0, but one needs to be careful to avoid the circular reasoning I mentioned, instead relying on the related sum property and some basic limits.
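The contradiction argument at the last step can be made explicit (a sketch): if c >= 0 and c <= E(E + |M| + |L|) for every E > 0, then c must be 0.

```latex
\begin{aligned}
&\text{Suppose } c > 0. \text{ Take } E = \min\Big\{1,\; \tfrac{c}{2(1+|M|+|L|)}\Big\} > 0. \text{ Then}\\
&\qquad E(E+|M|+|L|) \;\le\; E(1+|M|+|L|) \;\le\; \tfrac{c}{2} \;<\; c,
\end{aligned}
```

contradicting c <= E(E + |M| + |L|). Hence c = 0, i.e. lim |f(x)g(x) - LM| = 0.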

I think the min approach is the easiest way to finish your original proof, but if I think of a better way I'll let you know. But yeah, as you move on to more advanced analysis, an E*k(E) bound, where k(E) stays bounded as E -> 0, would suffice.

Last edited: Jul 20, 2010