# Epsilon delta definition of limit

1. Nov 15, 2015

### Avatrin

I am struggling to properly understand the $\varepsilon$-$\delta$ definition of limits.

So, f(x) gets closer to L as x approaches a. That is okay. However, taking the leap from there to the $\varepsilon$-$\delta$ definition is something I have never really been able to do.

Why is the formulation we use that we can make $|f(x) - L|$ as small as we want by making $|x - a|$ sufficiently small? How is this equivalent to the first sentence in the previous paragraph?

I could understand something like if $|x - a|$ approaches zero, so does $|f(x) - L|$. Of course, this may be harder to show algebraically. However, the $\varepsilon$-$\delta$ definition is something I simply do not understand. It may even be equivalent to the first sentence in this paragraph. I feel like it must be, but how?

2. Nov 15, 2015

### Staff: Mentor

It may help to consider a function with a gap. Why does $f(x) = 2x$ for $x \le 1$ and $f(x) = 1$ for $x > 1$ fail to be continuous at $x = 1$? And later on you will want to define not only pointwise continuity but uniform or Lipschitz continuity as well. Therefore it is necessary to be more precise than just saying 'approaches'.
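To see the failure concretely, here is a small numeric sketch (Python chosen just for illustration) of the piecewise function above:

```python
# Sketch of the gap: f(x) = 2x for x <= 1, f(x) = 1 for x > 1.
def f(x):
    return 2 * x if x <= 1 else 1

# Approaching 1 from the left, f(x) heads toward 2; from the right it stays at 1.
left = [f(1 - 10**-k) for k in range(1, 5)]   # values approaching 2
right = [f(1 + 10**-k) for k in range(1, 5)]  # all equal to 1
print(left, right)
```

Since the two one-sided values disagree, no single candidate $L$ can satisfy the $\varepsilon$-$\delta$ condition once $\varepsilon$ is small enough (smaller than half the jump).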

3. Nov 15, 2015

### Staff: Mentor

We (or you) can make |f(x) - L| as small as someone else specifies. The "smallness" in each expression is specifically quantified: the $\epsilon$ they specify and the $\delta$ that you supply. If the other person is satisfied, you're all done. OTOH, if they come back with a smaller $\epsilon$, your task is to find a smaller $\delta$ that again gets f(x) and L "close enough." The process is repeated with ever smaller values of $\epsilon$ and $\delta$ until they are satisfied.
As already mentioned, the value of $\epsilon$ denotes how close f(x) and L are, and the value of $\delta$ denotes how close x is to a. Note that the function f does not have to be defined at x = a. The definition of the limit doesn't require that x = a at any time.
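The back-and-forth above can be phrased as a small sketch (my own illustrative example, $f(x) = 2x + 1$ at $a = 1$ with $L = 3$, not taken from the thread):

```python
# The epsilon-delta "game": the challenger names eps, we answer with delta.
def f(x):
    return 2 * x + 1

def delta_for(eps):
    # Since |f(x) - 3| = 2|x - 1|, answering with delta = eps/2 always works.
    return eps / 2

# However small an eps the challenger picks, the answer holds on sample points.
for eps in [1.0, 0.1, 0.001]:
    delta = delta_for(eps)
    for x in [1 - 0.9 * delta, 1 + 0.5 * delta]:  # points with 0 < |x-1| < delta
        assert abs(f(x) - 3) < eps
```

The point of the definition is that `delta_for` answers *every* challenge at once, not just the finitely many sampled here.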

4. Nov 15, 2015

### Avatrin

Reread my post carefully. It's not about continuity; it concerns the formal definition of limits.
Reiterating the definition does not really answer my question. I know what the definition says. I also know how to use it. However, I can't wrap my head around why it is equivalent to the first sentences in my second and fourth paragraphs.

5. Nov 15, 2015

### Krylov

These sentences are not mathematically rigorous. The $\varepsilon$-$\delta$ definition makes them rigorous. Practise applying the definition to a variety of problems and you will gain an intuition, which seems to be what you are looking for.

6. Nov 15, 2015

### Staff: Mentor

Which is what I was trying to get at in post #3 -- i.e., showing that the two descriptions are roughly equivalent. The difference is that with $\epsilon$ and $\delta$ the definition is made rigorous.

7. Nov 15, 2015

### Staff: Mentor

I found this

quite appropriate for illustrating what it's all about. And, if I may say so, the imagination of the characters is somehow funny as well.

8. Nov 16, 2015

### Avatrin

After thinking about it for a while, I think I have been reading the definition wrong this entire time. So, let me try rephrasing the definition to see if I have understood it correctly.

For all $\epsilon > 0$, there is a $\delta > 0$ such that for all $x \in (a- \delta, a+ \delta)\backslash \{a\}$, $|f(x) - L| < \epsilon$.

Is this correct?

Last edited: Nov 16, 2015
9. Nov 16, 2015

### Staff: Mentor

Except that you don't need to exclude $a$, yes. By the way, my textbook first defines it as $L = f(a) = \lim_{x \to a} f(x)$.

10. Nov 16, 2015

### Staff: Mentor

Looks perfectly fine to me. Also, excluding a is necessary for functions that aren't defined at a.

You're confusing the limit definition with the definition of continuity at a point. The limit can exist even if the function doesn't exist at x = a.
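To illustrate that last point, a small sketch (my own example, not from the thread): $f(x) = (x^2-1)/(x-1)$ is undefined at $x = 1$, yet $\lim_{x\to 1} f(x) = 2$.

```python
# f is undefined at x = 1 (division by zero), but its limit there is 2,
# since f(x) = x + 1 everywhere else.
def f(x):
    return (x**2 - 1) / (x - 1)

print(f(0.999), f(1.001))   # both values are close to 2
# f(1) would raise ZeroDivisionError: the limit does not need f(1) to exist.
```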

11. Nov 16, 2015

### Staff: Mentor

How can $f$ be continuous at $a$ if $f$ is not defined at $a$? (And please don't mention the Lebesgue measure, we're not integrating.)

I was just quoting. The limit of a function $f$ is defined there as follows: provided there is at least one sequence $(a_n)$ in the domain converging to $a$ (e.g. the constant sequence $a$, if $a$ belongs to the domain of $f$), we say $\lim_{x \to a} f(x) = c$ if for every sequence $(x_n)$ converging to $a$ we have $\lim_{n \to \infty} f(x_n) = c$.

I haven't checked the author's equivalence proof, but I'm sure it's true, since thousands of students have gone over it and I would have heard if it were wrong. This definition allows for the possibility of $f$ not being defined at $a$ without explicitly ruling the point out. However, in his definition of continuity via limits, the author explicitly requires $f(a)$ to be defined.

12. Nov 16, 2015

### pwsnafu

13. Nov 16, 2015

### Staff: Mentor

Thank you. And my apologies!

14. Nov 17, 2015

### mathwonk

I may be an outlier here, but to me continuity and limits are essentially the same. I.e. f(x) has limit L as x approaches a if and only if defining f(a) = L makes f continuous at a. To me continuity is the more basic concept, and if you understand that you can also understand limits. I.e. f has a limit at x=a if and only if f can be made continuous by defining f(a) appropriately. Indeed almost all "find the limit" problems in books, such as finding the limit of f(x) = (1-x^2)/(1-x) as x approaches 1, proceed by first manipulating f and replacing it by g(x) = 1+x. Then, having found a continuous function g that equals f everywhere except at 1, we get the limit of f by evaluating g at x=1: the limit of f as x approaches 1 equals the value of g at x=1, which is 2.

The first example of finding a limit in books more complicated than this is the limit of sin(x)/x, which uses the squeeze principle, but this is merely a generalization of the same idea. I.e. that principle says essentially that if g and h are continuous at x=a, and have the same value there, and if g(x) ≤ f(x) ≤ h(x) for all x near a except possibly at x=a, then f has a limit at x=a and that limit equals the common value g(a) = h(a).
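For what it's worth, the squeeze for sin(x)/x can be spot-checked numerically (a sketch only, using the standard bounds cos x ≤ sin(x)/x ≤ 1 near 0):

```python
import math

# Near 0 (but not at 0), sin(x)/x is squeezed between cos(x) and 1,
# and both bounds are continuous with common value 1 at x = 0.
for x in [0.5, 0.1, 0.01, 0.001]:
    for s in (x, -x):
        r = math.sin(s) / s
        assert math.cos(s) <= r <= 1.0
```

A passing check is of course only evidence; the squeeze principle is what turns the two bounds into a proof that the limit is 1.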

Thus the epsilon delta definitions of the two concepts look almost the same.

In the original post the language was about f(x) getting closer to L as x approaches a. That is actually not quite correct, since f(x) could get continually closer to L without ever getting within, say, a distance 6 of it. You want, as I believe others have pointed out, to say f(x) can be made as close to L as you like, by choosing x sufficiently close to a. Then e and d measure "as close as you like" and "sufficiently close". I.e. to translate that into e,d language just ask: how close? I.e. the point is that to make this precise you just give a name (e) to how close to L you want f(x) to be, and then give a name (d) to how close x should be to a to make this happen.

It comes out as: "if e is any positive distance, then f(x) will be closer to L than e, as long as x is close enough to a". Or more precisely, "given any interval J of radius e centered at L, there is a corresponding interval I of radius d centered at a, such that for every x≠a in I, we will have f(x) in J."

Well, it's hard to make this both very precise and very easy to grasp, but the reason we are so precise about the numbers e and d is that this gives a way to actually check whether a limit exists by calculating with d and e.

E.g. to show that (x+1)^2 --> 1 as x --> 0, let the distance from x to zero be called d, so that x = d or x = -d. Then look and see how close (x+1)^2 is to 1. I.e. then (x+1)^2 = (1±d)^2 = 1 ±2d + d^2, and we can see that as d --> 0, both terms ±2d and d^2 --> 0 also, hence (x+1)^2 --> 1.

It is more trouble to get the e in there, but we want d so small that |d^2 ±2d| < e. So we would be OK if d^2 < e/2 and also |2d| < e/2. So let's see: given e>0, I guess we could take d smaller than both e/4 and 1; then |2d| < e/2, and also d^2 < d < e/2.

Whew! We have at last shown that no matter how small e is, we can choose d small enough that whenever |x-0| < d, we get |(x+1)^2 - 1| < e.
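That choice can be spot-checked numerically. A sketch (not a proof) of the bound d = min(e/4, 1) arrived at above:

```python
# For f(x) = (x+1)**2 with limit 1 at x = 0: given e > 0, the choice
# d = min(e/4, 1) should force |(x+1)**2 - 1| < e whenever 0 < |x| < d.
def d_of(e):
    return min(e / 4, 1)

for e in [10.0, 1.0, 0.01, 1e-6]:
    d = d_of(e)
    for x in [-0.99 * d, 0.5 * d, 0.99 * d]:  # sample points with 0 < |x| < d
        assert abs((x + 1)**2 - 1) < e
```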

Last edited: Nov 17, 2015
15. Nov 17, 2015

### Staff: Mentor

@mathwonk Caution: I have been reprimanded twice for talking about continuity and showing the equivalence of both concepts, although I just quoted my textbook. Maybe comments are not evaluated context-free in this forum ...

16. Nov 17, 2015

### mathwonk

I am just trying to help the OP understand them, and I always teach continuity as the more basic idea and let limits grow out of that. I.e. limits are what we try to define in order to make a function continuous that is not originally so. Continuity is an easy intuitive idea and limits are not, so that's my approach. I.e. if we have a function that is continuous at a point, then its value there should be determined by its values nearby, but how? I.e. how do we describe the value at that point in terms of the nearby values? That is the origin of the definition of a limit. E.g. the tangent line to a curve should be the continuous extension of the concept of a secant line.

The limit definition of a tangent line to a circle is first given by Euclid in his chapter on circles, in Prop. 16, Book 3. There he says that a certain line is tangent to a circle in the sense that it touches the circle only once and that no other line can be interposed between the circle and the tangent line. I.e. given any other line, making any angle e with the tangent line, there will be some other point of the circle lying between the two lines at some distance d from the first point.

Equivalently, given any e>0, there is a distance d>0 such that the secant drawn from the first point p to any point q of the circle closer to p than d, will lie between the tangent line and the line making angle e with it. This is exactly the epsilon delta definition of a limit.

17. Nov 27, 2015

### Erland

I wrote a little (well, quite lengthy) pedagogical dialogue, inspired by Plato and Galileo, where two persons, Epsilon and Delta, discuss what a limit is and work out the epsilon-delta definition. It is based upon a misconception in Avatrin's original post. Now, from later posts, it seems that Avatrin grasped the epsilon-delta-definition, but still, this might have a pedagogical value for students.

Admittedly, the dialogue is long. The moderators might find it suitable to move it to some other forum. Anyway, here it comes:

Epsilon: That $\lim_{x\to a} f(x)=L$ means that $f(x)$ approaches $L$ as $x$ approaches $a$.

Delta: What do you mean by "approaches"?

Epsilon: I mean that when $x$ gets close to $a$, then $f(x)$ gets close to $L$.

Delta: Close? How close?

Epsilon: The closer $x$ gets to $a$, the closer $f(x)$ gets to $L$.

Delta: Hmm, when you say that $x$ is close to $a$, you mean that the distance between $x$ and $a$, that is, $|x-a|$, is small, right?

Epsilon: Right.

Delta: And likewise, by "$f(x)$ is close to $L$", you mean that $|f(x)-L|$ is small.

Epsilon: Of course.

Delta: So, you say that $\lim_{x\to a} f(x)=L$ means that $|f(x)-L|$ gets smaller as $|x-a|$ gets smaller?

Epsilon: Yes.

Delta: Hmm. Consider $f(x)=x^2+1$. This function attains its minimum value $1$ at $x=0$. Right?

Epsilon: Right, so..?

Delta: So with $a=0$ and $L=0$, $|f(x)-L|=f(x)$ gets smaller as $|x-a|=|x|$ gets smaller...?

Epsilon: So it seems, hmm...

Delta: And this means that $\lim_{x\to a} f(x)=L$ in this case, that is $\lim_{x\to 0} x^2+1=0$.

Epsilon: Something is wrong here, because obviously, $\lim_{x\to 0} x^2+1=1$.

Delta: But I see nothing wrong in my argument.

Epsilon: Neither do I. So there must be something wrong with my definition.

Delta: Yes. Although in my example, $|f(x)-L|$ got smaller when $|x-a|$ got smaller, $|f(x)-L|$ didn't even get close to $0$, it never even got within the distance $1$ from $0$. This is what must be fixed, I think...

Epsilon: Right! Hmm... what about this: $\lim_{x\to a} f(x)=L$ means that we can make $|f(x)-L|$ as small as we want by making $|x-a|$ sufficiently small?

Delta: Hmm, does this rule out my example?

Epsilon: Yes, it does, because in the example, we can never make $|f(x)-L|$ smaller than $1$, whereas it is stipulated in the definition that it should be possible to make $|f(x)-L|$ smaller than $1$, or smaller than $0.1$, or than $0.01$, well, smaller than any given positive number, by making $|x-a|$ sufficiently small.

Delta: Yes, you are right. But still... What about $f(x)=\frac{x^3-1} { x-1}$? What is $\lim_{x\to 1} f(x)$? Does it even exist?

Epsilon: Certainly. Its value is $3$. I can show you the calculation...

Delta: Not right now, please! But if what you say is true, then we can make $|f(x)-3|$ as small as we want by making $|x-1|$ sufficiently small.

Epsilon: Yes.

Delta: So if we put $x=1$, then $|x-1|=0$. Then, certainly, $|x-1|$ must be sufficiently small. It cannot be smaller! But then, $|f(x)-3|$ is not even defined, since the denominator in $f(x)$ is $0$.

Epsilon: Yes. OK. We must add that we should make $|x-a|$ sufficiently small, but not $0$.

Delta: OK, this seems reasonable. But still it's quite vague to talk about "as small as we want" and "sufficiently small".

Epsilon: I think it's quite clear.

Delta: Really? Well, let's return to the example with $f(x)=\frac{x^3-1} { x-1}$. Let's say I want $|f(x)-3|$ to be smaller than $0.01$. You say that this is true if $|x-1|$ is sufficiently small, but not $0$?

Epsilon: Yes.

Delta: Then you should be able to tell me how small $|x-1|$ must be to ensure that $|f(x)-3|<0.01$, right?

Epsilon: Right, let's see...

(Epsilon does some calculations at a paper. Delta agrees that they are correct.)

Epsilon: So, we see that $|f(x)-3|=|x-1||x+2|$. We want this to be smaller than $0.01$. Hmm... if $|x-1|<1$, then $0<x<2$, and then $|x+2|<4$.

Delta: Yes, that is in itself true, but I don't see the point. It is certainly not sufficient to make $|x-1|<1$. $|x-1|$ must be much smaller than $1$ to ensure that $|f(x)-3|<0.01$.

Epsilon: Of course. We will undoubtedly find that $|x-1|$ must be much smaller than $1$. But then it is also less than $1$, so assuming this will not lead us wrong.

Delta: Ok, I suppose so. But I still don't see the point.

Epsilon: You'll see. If $0<|x-1|<1$, then $|x+2|<4$, so $|f(x)-3|=|x-1||x+2|<4|x-1|$. If we choose $x$ so that $4|x-1|<0.01$, then $|f(x)-3|<0.01$.

Delta: Yes, provided that $0<|x-1|<1$.

Epsilon: And $4|x-1|<0.01$ is equivalent to $|x-1|<0.0025$. And of course, $0.0025$ is much smaller than $1$, so we have a solution. If we choose $x$ so that $0<|x-1|<0.0025$, that is, if $x\in (0.9975,1.0025)$, then $|f(x)-3|<0.01$, or $f(x)\in (2.99,3.01)$.

Delta: Indeed! But... it is not really necessary to make $|x-1|<0.0025$. If we stipulate, say, that $|x-1|<0.5$ instead of $|x-1|<1$ as you did, then we obtain, after an argument similar to yours, that $3.5|x-1|<0.01$ suffices to ensure that $|f(x)-3|<0.01$, and then $|x-1|$ can be slightly greater than $0.0025$.

Epsilon: And if we stipulate, say, $|x-1|<0.1$, then $|x-1|$ can be allowed to be even greater, etc. But so what? If $0<|x-1|<0.0025$, then $x$ is sufficiently close to $1$ to make $|f(x)-3|<0.01$, and we just wanted such a sufficient condition for $x$. That we can find a "better" upper bound than $0.0025$ doesn't matter.

Delta: Ok, I agree. We have established that if $|x-1|$ is sufficiently small but not $0$, then $|f(x)-3|<0.01$. But what if we want $|f(x)-3|$ to be even smaller, say smaller than $0.001$ instead of $0.01$?

Epsilon: Well, since we still have $|f(x)-3|<4|x-1|$, provided that $0<|x-1|<1$, we obtain, in a similar way as before, that $|f(x)-3|<0.001$ (that is: $f(x)\in (2.999, 3.001)$) if $0<|x-1|<0.00025$ (or $x\in (0.99975, 1.00025)$).

Delta: Right, and we may improve the upper bound $0.00025$ slightly if we want, just like before. But as you said, it would be pointless to insist on that. So, what if we continue and want, say, $|f(x)-3|<0.000001$?

Epsilon: Then again, a similar calculation gives that we can choose $0<|x-1|<0.00000025$. The pattern becomes clear here.

Delta: Yes, I see it too. And in fact, it is not hard to show that if we take any number $\epsilon>0$...

Epsilon: I like that letter!

Delta: ...which can be arbitrarily small, or as small as we want, as long as it is greater than $0$, then we can get $|f(x)-3|<\epsilon$ if we choose $x$ so that $0<|x-1|<\epsilon/4$.

Epsilon: Right. So, for any $\epsilon >0$: if $0<|x-1|<\epsilon/4$, then $|f(x)-3|<\epsilon$.
This is exactly what I mean when I say that $\lim_{x\to 1} f(x)=3$.

Delta: Very good! But wait! This might not work if we happen to choose a big $\epsilon$, say $\epsilon > 4$. Because then, $\epsilon/4 > 1$, and then $0<|x-1|<\epsilon/4$ is not sufficient to ensure that $0<|x-1|<1$, which you assumed in your calculation.

Epsilon: Come on! We are interested in getting $f(x)$ close to $3$, and then it is of no interest to look at large distances greater than $4$!

Delta: Perhaps. But in the formal condition you gave, you didn't explicitly state that $\epsilon$ cannot be too great. You should do that if you want it to be formally correct.

Epsilon: OK, you are right. But instead of inserting a restriction of this kind on $\epsilon$, I think it is better to make it explicit that we must have $|x-1|<1$. So to ensure that $|f(x)-3|<\epsilon$, we may choose $0<|x-1|<\min(\epsilon/4,1)$.

Delta: Right, that takes care of this problem. But this only works for this function $f(x)=\frac{x^3-1} { x-1}$, with $a=1$ and $L=3$. What if we try another function, say $f(x)=\frac{x^4-1} { x-1}$? Does it also have a limit as $x\to 1$?

Epsilon: Yes. This limit is $4$.

Delta: So, here too we want $|x-1|$ to be so small that $|f(x)-4|<\epsilon$, for an arbitrary $\epsilon >0$. But I suppose that $0<|x-1|<\min(\epsilon/4,1)$ will not work in this case?

Epsilon: No, $\epsilon/4$ was of course a choice specific to the previous example. In this case...

(Epsilon does some more calculations. Delta agrees that they are correct.)

Epsilon: If we assume that $0<|x-1|<1$ here too, then $|f(x)-4|=|x-1||x^2+2x+3|<11|x-1|$.

Delta: Ah, so for every $\epsilon>0$, if $0<|x-1|<\epsilon/11$, then $|f(x)-4|<\epsilon$.

Epsilon: Indeed. And this means that $\lim_{x\to 1} f(x)=4$... well, just as in the previous example, we should have a slightly stronger condition on $x$: $0<|x-1|<\min(\epsilon/11,1)$ ensures that $|f(x)-4|<\epsilon$.

Delta: Right. So what does it mean to say that $\lim_{x\to a} f(x)=L$, for an arbitrary function $f$ and arbitrary numbers $a$ and $L$? Can we make our ideas here general?

Epsilon: It should certainly be possible. We start from an arbitrary $\epsilon>0$ and try to find a bound of $|x-a|$ such that if $|x-a|$ is smaller than this bound, then $|f(x)-L|<\epsilon$.

Delta: Yes, and this bound may depend upon $\epsilon$, such as $\min(\epsilon/4,1)$ and $\min(\epsilon/11,1)$ in our two examples. We would expect the bound to be smaller, the smaller $\epsilon$ is...

Epsilon: Let us call the bound $\delta$...

Delta: I like that letter!

Epsilon: So, what about this: For every $\epsilon>0$, there is a $\delta>0$, which may depend upon $\epsilon$, such that if $0<|x-a|<\delta$, then $|f(x)-L|<\epsilon$.

Delta: Very good! So, in our examples, we choose $\delta=\min(\epsilon/4,1)$ and $\delta=\min(\epsilon/11,1)$, respectively. In showing that a function has a certain limit at a point, one typically finds such a $\delta$ for each $\epsilon>0$, just as we did before.

Epsilon: Right! But... there is a slight loophole. Not all functions are defined for all real numbers. We must ensure that $x$ lies in the domain of the function $f$. But we just add this condition.

Delta: Right, so our formal definition of limit goes like this: The function $f$ has the limit $L$ at the point $a$, and we write $\lim_{x\to a} f(x)=L$ if: for every $\epsilon>0$ there is a $\delta >0$, which may depend upon $\epsilon$, such that $|f(x)-L|<\epsilon$ whenever $x\in D_f$ and $0<|x-a|<\delta$.

Epsilon: Perfect! This definition is so good that it should have a name. Why not name it after ourselves?

Epsilon and Delta (in chorus): THE EPSILON-DELTA DEFINITION OF LIMIT!

Ghost of Weierstrass: Ha, don't think you are first!
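The two bounds derived in the dialogue, $\delta=\min(\epsilon/4,1)$ and $\delta=\min(\epsilon/11,1)$, can also be spot-checked numerically. A sketch (a passing check is evidence, not a proof):

```python
# Test a candidate delta(eps) at sample points with 0 < |x - a| < delta.
def check(f, a, L, delta_of, epsilons):
    for eps in epsilons:
        d = delta_of(eps)
        for t in (0.1, 0.5, 0.99):
            for x in (a - t * d, a + t * d):
                if not abs(f(x) - L) < eps:
                    return False
    return True

eps_list = [5.0, 0.01, 1e-6]
ok3 = check(lambda x: (x**3 - 1) / (x - 1), 1, 3,
            lambda e: min(e / 4, 1), eps_list)   # first example, L = 3
ok4 = check(lambda x: (x**4 - 1) / (x - 1), 1, 4,
            lambda e: min(e / 11, 1), eps_list)  # second example, L = 4
print(ok3, ok4)  # both True
```

The same checker rejects Delta's early counterexample: for $f(x)=x^2+1$ with the wrong claim $L=0$ at $a=0$, no sampled point comes within $\epsilon=0.5$ of $0$.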

Last edited: Nov 27, 2015
18. Nov 29, 2015

### Stephen Tashi

Ideas involving the common language meaning of "approaches" deal with some phenomenon taking place in time or taking place in a series of steps. The epsilon-delta definition does not define any process that takes place in time or in a series of steps. So you shouldn't expect to understand the epsilon-delta definition as being the same idea as the common language description of a limit.

For real valued functions of a real variable, you can define a type of limit that involves sequences, and this introduces the concept of a process that takes place in a series of steps. You can show that such a sequentially defined limit is equivalent to the epsilon-delta definition of limit, but you must use some properties of the real numbers to do this. The two definitions are not "the same idea expressed in different language".
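A numeric sketch of the sequential version, reusing the $f(x)=\frac{x^3-1}{x-1}$ example from earlier in the thread: different sequences $x_n \to 1$ all drive $f(x_n)$ toward $3$.

```python
# Sequential limits: for any sequence x_n -> 1 (with x_n != 1),
# f(x_n) -> 3, matching the epsilon-delta limit.
def f(x):
    return (x**3 - 1) / (x - 1)

for seq in ([1 + 1/n for n in range(1, 60)],       # approaching from the right
            [1 - 1/n**2 for n in range(2, 60)]):   # from the left, faster
    tail = f(seq[-1])
    print(abs(tail - 3))   # small, and shrinking as the sequence runs on
```

The equivalence proof has to show this for *every* such sequence at once, which is where the completeness-style properties of the reals come in.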

It's clearer to say "for each" than "for all" (in spite of the fact that the LaTeX symbol for the quantifier is named "forall"). For example, it is true that "For each real number n, there exists a number m that is greater than n". However, it is not true that "For all real numbers, there exists a number m that is greater than those numbers".

19. Nov 29, 2015

### Erland

Interesting. So you mean that if one writes "for all $\epsilon$, there is a $\delta$", this might be interpreted as $\exists\delta\forall \epsilon\dots$ instead of $\forall\epsilon\exists\delta\dots$ (which is the correct interpretation in this case)?

20. Nov 29, 2015

### Stephen Tashi

The correct interpretation is " for each epsilon... there exists a delta ... such that for each x ....", which is what your latter version of notation suggests.

Whether people can interpret "for all epsilon" correctly depends on whether they distinguish between phrases like "For all integers" and "For all integers k". If the appearance of "k" tips them off that "For all integers k" is not a statement concerning the entire set of integers, then they won't get confused. However, "For all integers k" must be distinguished from the so-called "set-builder notation", where a variable "k" can appear, as in $S = \{k : k \text{ is an integer}\}$.