Epsilon-delta definition of limit

Inhsdehkc
While solving problems on the epsilon-delta definition of the limit from my textbook, I found that every answer had the form ε = a·δ, where a was some constant. Is it necessary that ε always be directly proportional to δ for the limit to exist? Can't they be inversely proportional? If they can, please give me an example of a function of that kind (I didn't find any such function in my textbook).
Please help!
 
Inhsdehkc said:
While solving problems on the epsilon-delta definition of the limit from my textbook, I found that every answer had the form ε = a·δ, where a was some constant. Is it necessary that ε always be directly proportional to δ for the limit to exist? Can't they be inversely proportional? If they can, please give me an example of a function of that kind (I didn't find any such function in my textbook).
Please help!

Hey Inhsdehkc and welcome to the forums.

One thing that might help you answer your own question: if one is inversely proportional to the other, then what happens as one goes to zero? Will the other also shrink toward zero, or will it do something else?
 
Actually, very few examples fit the pattern you described.

Try some simple examples, like powers of x.
 
Inhsdehkc said:
While solving problems on the epsilon-delta definition of the limit from my textbook, I found that every answer had the form ε = a·δ, where a was some constant. Is it necessary that ε always be directly proportional to δ for the limit to exist? Can't they be inversely proportional? If they can, please give me an example of a function of that kind (I didn't find any such function in my textbook).
Please help!

Epsilon is arbitrarily small (though always greater than zero) but a is a constant. So you can make epsilon * a as small as you like just by making epsilon small and keeping the constant ... constant.

Reminds me of a grad school algebra prof who would always say, "Fixed but arbitrary" in a sort of sarcastic or humorous tone of voice when describing a situation like this. I always thought he was making some kind of commentary on the funny contortions you have to get used to. Constants vary, but they don't vary the same way variables do. I always thought this prof was calling attention to the kinds of things that we never think about ... but if we did think about them, they'd become complicated!

So yeah, epsilon varies and a doesn't vary; except that a could still be anything.

That's math!
 
chiro said:
Hey Inhsdehkc and welcome to the forums.

One thing that might help you answer your own question: if one is inversely proportional to the other, then what happens as one goes to zero? Will the other also shrink toward zero, or will it do something else?

Thanks chiro!

Yeah, if they are inversely proportional to each other, then as ε → 0 we get δ → ∞ (because δ = a/ε). So rather than shrinking toward the point (say c), the interval around c, where we want the limit (if it exists), will grow as ε is decreased. So I guess ε and δ can never be inversely proportional if the limit is to exist! Am I right?
 
Inhsdehkc said:
Thanks chiro!

Yeah, if they are inversely proportional to each other, then as ε → 0 we get δ → ∞ (because δ = a/ε). So rather than shrinking toward the point (say c), the interval around c, where we want the limit (if it exists), will grow as ε is decreased. So I guess ε and δ can never be inversely proportional if the limit is to exist! Am I right?

Yeah I think you're right. Basically what we want is for both to get smaller and not have one get smaller while the other gets larger.

When you get proportionality like this, you get the kind of behaviour you find in a diverging situation, analogous to what happens with, say, tan(x) at x = π/2 + nπ for integer n.
 
Inhsdehkc said:
While solving problems on the epsilon-delta definition of the limit from my textbook, I found that every answer had the form ε = a·δ, where a was some constant.

What do you mean by "answer"? Are you talking about proofs involving limits that begin with something like "Let ε be given," then do some work with inequalities, and conclude "So let δ be a number < ε/6"?

If that's what you're talking about, then it is not true that such proofs always end by letting δ be proportional to ε. The method of specifying δ must be quite elaborate in some proofs. The proof might end with a statement like "Let δ be the smaller of √(ε² + 2ε) and √(ε³ + ε²)".

In doing a proof that lim_{x→a} f(x) = L, how hard it is to specify δ depends on how complicated the function f is. If f is a linear function like f(x) = 4x − 3, then you will be able to specify δ as some constant times ε. If f(x) is a more complicated function, then you probably won't.
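As a small numerical sanity check of the linear case (not a proof; the choice c = 2, L = 5, and the sample points are my own for illustration): for f(x) = 4x − 3 we have |f(x) − 5| = 4|x − 2|, so δ = ε/4 works.

```python
# Numerical sanity check (not a proof): for the linear function
# f(x) = 4x - 3, with c = 2 and L = f(2) = 5, the choice delta = epsilon/4
# guarantees |f(x) - L| < epsilon whenever 0 < |x - c| < delta,
# since |f(x) - L| = 4*|x - c| < 4*delta = epsilon.
def f(x):
    return 4 * x - 3

c, L = 2.0, 5.0

for epsilon in (1.0, 0.1, 0.001):
    delta = epsilon / 4  # delta directly proportional to epsilon
    # Sample points inside the punctured delta-neighbourhood of c.
    xs = [c + t * delta for t in (-0.99, -0.5, 0.5, 0.99)]
    assert all(abs(f(x) - L) < epsilon for x in xs)
print("delta = epsilon/4 passed all checks")
```

Shrinking ε here forces δ to shrink with it, which is the "directly proportional" pattern the textbook answers exhibit.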
 
SteveL27 said:
Epsilon is arbitrarily small (though always greater than zero) but a is a constant. So you can make epsilon * a as small as you like just by making epsilon small and keeping the constant ... constant.

Reminds me of a grad school algebra prof who would always say, "Fixed but arbitrary" in a sort of sarcastic or humorous tone of voice when describing a situation like this. I always thought he was making some kind of commentary on the funny contortions you have to get used to. Constants vary, but they don't vary the same way variables do. I always thought this prof was calling attention to the kinds of things that we never think about ... but if we did think about them, they'd become complicated!

So yeah, epsilon varies and a doesn't vary; except that a could still be anything.

That's math!

Formal logic and proof theory clear up this kind of thing. The logical content of an epsilon-delta proof is the construction of a statement containing the quantifiers "for every ε" and "there exists δ". The "fixedness" of a quantity is a relative concept, determined by the scope of the logical quantifiers in the sentence.
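For reference, the standard quantified statement being described can be written out as:

```latex
\lim_{x \to c} f(x) = L
\quad\iff\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :
\bigl( 0 < |x - c| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon \bigr)
```

Here δ is "fixed" only inside the scope of the ∃δ quantifier, which itself sits inside the scope of ∀ε; that nesting is exactly the relative fixedness being described.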
 
Stephen Tashi said:
What do you mean by "answer"? Are you talking about proofs involving limits that begin with something like "Let ε be given," then do some work with inequalities, and conclude "So let δ be a number < ε/6"?

Yeah, you got my question!

Stephen Tashi said:
If that's what you're talking about, then it is not true that such proofs always end by letting δ be proportional to ε. The method of specifying δ must be quite elaborate in some proofs. The proof might end with a statement like "Let δ be the smaller of √(ε² + 2ε) and √(ε³ + ε²)".

Yes, we may get a complicated function for δ in terms of ε, as in the examples you stated above, but in those examples too δ decreases by some amount as ε is decreased by some other amount, doesn't it? Logically, too: if we want the limit of a function f(x) at a point c (and say the limiting value exists and equals L), then as we decrease ε (the interval around L) we also want δ (the interval around c) to shrink toward c, not grow away from it. So we can say that, for the limit of f(x) to exist at c, δ should decrease as ε decreases (though they may decrease by different amounts). Hence they can never be inversely proportional to each other. Is that right?
 
chiro said:
Yeah I think you're right. Basically what we want is for both to get smaller and not have one get smaller while the other gets larger.

When you get proportionality like this, you get the kind of behaviour you find in a diverging situation, analogous to what happens with, say, tan(x) at x = π/2 + nπ for integer n.

Yeah, I think the same, but I'm still not sure about it!
 
Inhsdehkc said:
Hence they can never be inversely proportional to each other. Is that right?

No. Define the map f : ℝ → ℝ by setting f(x) = 0 for all x ∈ ℝ. Now for every ε > 0 take δ = 1/ε. Then |f(x)| < ε whenever |x| < δ.

In some sense this is the only counterexample. To prove this, let f : ℝ → ℝ be a function "continuous" at x₀ ∈ ℝ in the following way: for every ε > 0, the inequality |x − x₀| < 1/ε implies |f(x) − f(x₀)| < ε. Suppose there exists y₀ ∈ ℝ such that f(y₀) ≠ f(x₀), and set c = ½·min{|f(y₀) − f(x₀)|, 1/|y₀ − x₀|} > 0. Then |y₀ − x₀| < 1/c, so the hypothesis gives |f(y₀) − f(x₀)| < c ≤ ½|f(y₀) − f(x₀)|, which is a contradiction. Therefore f(x) = f(x₀) for all x ∈ ℝ.
 
Inhsdehkc said:
as we decrease ε (the interval around L) we also want δ (the interval around c) to shrink toward c, not grow away from it. So we can say that, for the limit of f(x) to exist at c, δ should decrease as ε decreases (though they may decrease by different amounts). Hence they can never be inversely proportional to each other. Is that right?

No. Let f(x) be the constant function f(x) = 6. Look at the proof that lim_{x→1} f(x) = 6. Given ε > 0, you could pick δ = 1/ε. Of course, you don't have to make such a choice, but such a choice would work.

You are attempting to think about limits in an old-fashioned "dynamic" way. You think of ε changing or "flowing" toward zero and δ flowing toward zero along with it. That may be a useful intuition, but it is not how limits are defined.
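The constant-function example above can be checked numerically as well (a sketch, not a proof; the sample points are my own for illustration): with f(x) = 6, the unusual choice δ = 1/ε, which grows as ε shrinks, still satisfies the definition, because |f(x) − 6| = 0 < ε for every x.

```python
# Sketch of the counterexample: for the constant function f(x) = 6,
# the choice delta = 1/epsilon (delta GROWS as epsilon shrinks) still
# satisfies the definition of lim_{x -> 1} f(x) = 6, because
# |f(x) - 6| = 0 < epsilon no matter where x lies.
def f(x):
    return 6.0

c, L = 1.0, 6.0

for epsilon in (2.0, 0.5, 0.01):
    delta = 1.0 / epsilon  # inversely proportional to epsilon
    # Sample points inside the punctured delta-neighbourhood of c.
    xs = [c + t * delta for t in (-0.9, -0.1, 0.1, 0.9)]
    assert all(abs(f(x) - L) < epsilon for x in xs)
print("delta = 1/epsilon passed all checks")
```

The definition only demands that *some* δ exist for each ε; nothing forces the chosen δ to shrink along with ε.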
 
Stephen Tashi said:
You are attempting to think about limits in an old-fashioned "dynamic" way. You think of ε changing or "flowing" toward zero and δ flowing toward zero along with it. That may be a useful intuition, but it is not how limits are defined.

Then can you please tell me (briefly) how limits are actually defined? (Because I only know that old-fashioned definition of a limit.) Please!
 
Inhsdehkc said:
Then can you please tell me (briefly) how limits are actually defined? (Because I only know that old-fashioned definition of a limit.) Please!

If you did problems from a book involving proofs that use epsilons and deltas, the book will have the definition of limit in it. So I'm not going to tell you the definition because you can look it up. Instead, I'm going to tell you how to regard the definition. Regard it as a legalistic document. Don't try to "restate it in your own words", as liberal arts students are advised to do. It means what it says. There is nothing in it that says "if we make epsilon smaller, we must make delta smaller". That kind of statement is a rule of thumb or empirical observation, which is often true in many cases, but it is not in the definition.
 
Stephen Tashi said:
No. Let f(x) be the constant function f(x) = 6. Look at the proof that lim_{x→1} f(x) = 6. Given ε > 0, you could pick δ = 1/ε. Of course, you don't have to make such a choice, but such a choice would work.

Thanks a lot, Stephen! The example you gave here is enough to clear up my confusion!
Stephen Tashi said:
Regard it as a legalistic document. Don't try to "restate it in your own words", as liberal arts students are advised to do. It means what it says. There is nothing in it that says "if we make epsilon smaller, we must make delta smaller". That kind of statement is a rule of thumb or empirical observation, which is often true in many cases, but it is not in the definition.

I will consider your advice from now on!
 
jgens said:
No. Define the map f : ℝ → ℝ by setting f(x) = 0 for all x ∈ ℝ. Now for every ε > 0 take δ = 1/ε. Then |f(x)| < ε whenever |x| < δ.

In some sense this is the only counterexample. To prove this, let f : ℝ → ℝ be a function "continuous" at x₀ ∈ ℝ in the following way: for every ε > 0, the inequality |x − x₀| < 1/ε implies |f(x) − f(x₀)| < ε. Suppose there exists y₀ ∈ ℝ such that f(y₀) ≠ f(x₀), and set c = ½·min{|f(y₀) − f(x₀)|, 1/|y₀ − x₀|} > 0. Then |y₀ − x₀| < 1/c, so the hypothesis gives |f(y₀) − f(x₀)| < c ≤ ½|f(y₀) − f(x₀)|, which is a contradiction. Therefore f(x) = f(x₀) for all x ∈ ℝ.

Thanks, jgens! But the way you explained it is a little hard for me to understand (as I'm not used to all the mathematics you used here). It's my weakness, not yours! :(
 