# What are epsilon-delta proofs?

1. Aug 2, 2004

### rcg

Thanks in advance for any help,

I'm trying to understand epsilon-delta proofs, and the various sites I've found so far aren't helping that well. I know that epsilon is referring to a small number >0, and delta traditionally refers to a number > epsilon, but I'm not quite sure of why this extra variable is required.

Much appreciated,

Ryan

2. Aug 2, 2004

### AKG

Epsilon-delta proofs are proofs of limits. You might see something like $\lim _{x \rightarrow c} f(x) = L$. Now, you want to prove that the limit as x approaches c of the function f is indeed L. You have certain laws and rules that you use for evaluating limits, just as you have different rules for differentiating, but proving limits with epsilon-delta proofs is something like proving derivatives from first principles.

I think your understanding of what epsilon and delta represent is wrong. If a function has a limit at some point, then you're essentially saying that the closer you get to the point, the closer you'll get to the limit, and that you can get as close to the limit as you want. That's what epsilon is for. For any number epsilon greater than zero, you can find a value for delta such that every x within distance delta of c has f(x) less than epsilon away from L.

I'll try to clarify. Say you have the function y = x², and you say that the limit as x approaches 2 is 4. Now, let's say I choose epsilon to be 1. That means you have to find a range of x values around 2 such that all the y values are within 1 of 4, or in other words, all the y values are between 3 and 5. If you choose delta to be (sqrt(5) - 2), then all x values within delta of 2, that is, all x values from 4 - sqrt(5) to sqrt(5), will have y values that are within epsilon of 4.

Epsilon-delta proofs prove that for any arbitrarily small epsilon, you can always find some delta such that for all x with 0 < |x-c| < $\delta$, f(x) is within epsilon of the limit, i.e. |f(x) - L| < $\epsilon$.
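As a quick numerical sanity check (a sketch, not a proof, and the sampling scheme is my own), one can sample points near 2 and verify that a given delta works; here I use delta = sqrt(5) - 2, which is one valid choice for epsilon = 1:

```python
import math

# f(x) = x^2, c = 2, L = 4, epsilon = 1, and delta = sqrt(5) - 2 (one valid choice).
eps = 1.0
delta = math.sqrt(5) - 2

# Sample x values with 0 < |x - 2| < delta on both sides of 2
# and confirm that |x^2 - 4| < eps for every one of them.
n = 100_000
ok = all(
    abs(x * x - 4) < eps
    for i in range(1, n)
    for x in (2 - delta * i / n, 2 + delta * i / n)
)
print(ok)  # True
```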

3. Aug 3, 2004

### Galileo

An epsilon-delta proof means a proof that stems from the definition of a limit.
$$\lim_{x\rightarrow a}f(x)=L$$ is shorthand notation. What it means is:

For all $\epsilon>0$, there exists a $\delta>0$ such that
$|f(x)-L|<\epsilon$, whenever $0<|x-a|<\delta$.

Notice that: |f(x)-L| is the distance from the value of the function to the limit value and |x-a| is the distance from x to the point a.
In words then, the above says that we can make the value of the function f(x) arbitrarily close to L (as close as we want) by making x close enough (but not equal) to a.

You could prove the value of certain limits by plugging in x=a, and if that doesn't work you could see if you can cancel some factors. But another way to prove it is to show that there is indeed a number delta>0 which does the above. This is an epsilon-delta proof, and it uses the definition of a limit directly.

Most of the tricks you use when evaluating limits must be proven using the above definition. Like the limit of a sum is the sum of the limits (provided the individual limits exist).
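As an illustration, here is a sketch of that sum-rule proof using the standard $\epsilon/2$ argument. Suppose $\lim_{x\rightarrow a}f(x)=L_1$ and $\lim_{x\rightarrow a}g(x)=L_2$. Given $\epsilon>0$, choose $\delta_1$ so that $0<|x-a|<\delta_1$ implies $|f(x)-L_1|<\epsilon/2$, and $\delta_2$ likewise for $g$. With $\delta=\min(\delta_1,\delta_2)$, whenever $0<|x-a|<\delta$ the triangle inequality gives
$$|f(x)+g(x)-(L_1+L_2)|\leq|f(x)-L_1|+|g(x)-L_2|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon.$$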

4. Aug 3, 2004

### gravenewworld

Epsilon-delta proofs are any undergraduate math major's nightmare. From my experience with them, you have to be extremely clever to see some of the tricks to solving them.

5. Aug 3, 2004

### matt grime

I would suggest the exact opposite: there are few tricks needed, and the same method solves almost any epsilon delta proof, which is why more experienced mathematicians are able to do them in their sleep. They certainly are confusing the first time you see them.

A rough way of solving any e-d proof is to say:

let e be arbitrary, now just suppose |x-y|<d, where we do not say what d is. Calculate |f(x)-f(y)| in terms of d and x (if applicable), and show that a suitable choice of d at the beginning would have made this less than e.

example 1.

show that x^2 is (uniformly) continuous on [0,1]

if |x-y| < d, then |x^2-y^2| = |x-y||x+y| < 2d by hypothesis (since |x-y| < d and |x+y| is at most 2 on [0,1]); hence if we let d be any number less than e/2, then |x-y| < d => |x^2 - y^2| < e. e was arbitrary and we are done.

uniform means there is no x dependence in our choice of d.

if we didn't have that x, y were in the interval [0,1] (and hence that their sum was at most 2), then we would need some x dependence.

in the example: |x^2-y^2| =|x-y||x+y| < d(2|x|+d)

now given some e let d be such that d(2|x|+d) < e, which can be done.
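matt grime's first example can be spot-checked numerically (again a sketch, not a proof; the random sampling scheme is my own):

```python
import random

# Spot check for f(x) = x^2 on [0, 1]: if |x - y| < d with d = e/2,
# then |x^2 - y^2| = |x - y| * |x + y| < 2d = e, since |x + y| <= 2.
random.seed(0)
e = 0.1
d = e / 2

ok = True
for _ in range(10_000):
    x = random.random()                                 # x in [0, 1]
    y = min(1.0, max(0.0, x + random.uniform(-d, d)))   # y in [0, 1], near x
    if abs(x - y) < d and abs(x * x - y * y) >= e:
        ok = False
print(ok)  # True
```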

6. Aug 3, 2004

### pnaj

Hi everyone,

Analysis was difficult for me to start with, but it does get easier.

Personally, I think sometimes the best way to learn tricky new concepts like this is to learn the definitions 'parrot-fashion', even if you don't completely understand them yet ... so that, when you are trying to prove something, you don't have to 'dig' the definitions out of your brain every time.

7. Aug 4, 2004

### e(ho0n3

This is troubling me. Where did 2d come from?

8. Aug 4, 2004

### arildno

Note Matt Grime's comment above.

9. Aug 4, 2004

### e(ho0n3

I'm not following. |x-y||x+y| < 2d. Since x + y is less than 2, |x-y||x+y| is less than 4, but how does 2d come into play?

10. Aug 4, 2004

### arildno

|x-y| is, by hypothesis, less than d.
|x+y| is at most 2, since x and y are in [0,1].
Hence, their product must be less than the product of their respective bounds, i.e., 2d.

11. Aug 6, 2004

### mathwonk

Here is an epsilon-delta proof for you to try. Let f(x) = 3x. I want to know how small |x| has to be so that |f(x)| will be smaller than 1/10. I.e., I am choosing epsilon to be 1/10, and I want you to find a delta such that if |x| is less than delta, then |f(x)| will be less than 1/10.

That is the basic idea. Did you get it?

If so, now try it with f(x) = x^2. If you get it you are on your way!
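For what it's worth, one answer to the first exercise (my own working, checked numerically rather than as a formal proof): since |3x| < 1/10 exactly when |x| < 1/30, delta = 1/30 works.

```python
# Exercise: f(x) = 3x, epsilon = 1/10.  Claim: delta = 1/30 works,
# because |x| < 1/30 implies |3x| = 3|x| < 3/30 = 1/10.
eps = 1 / 10
delta = 1 / 30

# Sample x values with 0 < |x| < delta on both sides of 0.
n = 100_000
ok = all(
    abs(3 * x) < eps
    for i in range(1, n)
    for x in (-delta * i / n, delta * i / n)
)
print(ok)  # True
```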

12. Aug 8, 2004

### rcg

Well, after staring at some of these replies for a few days, I think I've got the general idea (and the hang of my copy of MathType).
I'd like to express my sincere thanks to everyone for helping out.

13. Aug 12, 2007

### ratliff

Yo

The easiest thing to do when trying to complete an E-D proof is to get E in terms of D or vice versa. In your "scratch work", manipulate your |x-a|<D and |f(x)-b|<E inequalities so that you find a D in terms of E...

14. Aug 13, 2007

### mathwonk

they are about showing that the output of an experiment can be made as precise as you want, by making the inputs sufficiently precise.

For instance if you want your steak to be exactly medium rare, you have to grill it precisely 4.5 minutes on a side.

but if you only want it approximately medium rare, it suffices to cook it 4 or 5 minutes on a side, so you can be off by a little in your cooking. That's the delta.

the epsilon is how far off the steak is from medium rare.

so you decide how far from medium rare is acceptable; then I tell you how close to 4.5 you have to time it to get that.

the tolerance you give in the outcome is epsilon. the tolerance in the input is delta.

as long as you cook it within delta minutes of 4.5, I guarantee the steak will be within epsilon of medium rare.

or in a math problem, the square of 3 is 9, but if you want the square to be within 1 of 9, then epsilon is 1.

delta is how close the number must be to 3 to make its square between 8 and 10.

and it does not have to be perfect, just adequate. so let's try delta = 1/2.

then 2.5 squared is 6.25, oops, not good enough.

so let's try delta = .1. then 2.9 squared is surely ok, since 2.9 = 3 - .1, so the square is 9 - .6 + .01 = 8.41, which is bigger than 8.

we also have to do 3.1 squared, but that is 9 + .6 + .01 = 9.61 < 10. oh, how silly of me to do that twice.

ok??
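mathwonk's trial-and-error search for a delta can also be checked by brute force (a sketch; the helper name and sampling scheme are my own):

```python
# f(x) = x^2 near c = 3, L = 9, epsilon = 1: which deltas work?
def delta_works(delta, eps=1.0, n=10_000):
    """True if every sampled x with 0 < |x - 3| < delta has |x^2 - 9| < eps."""
    return all(
        abs(x * x - 9) < eps
        for i in range(1, n)
        for x in (3 - delta * i / n, 3 + delta * i / n)
    )

print(delta_works(0.5))  # False: 2.5^2 = 6.25 is more than 1 away from 9
print(delta_works(0.1))  # True:  2.9^2 = 8.41 and 3.1^2 = 9.61 lie in (8, 10)
```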

15. Aug 13, 2007

### mathwonk

a continuous operation is one where you do not have to get it perfect to have the outcome reasonably good. in a discontinuous experiment, if you are off even by a tiny amount, the outcome is totally off.

like walking along a cliff: if you want to be safe, it is not enough to be within a little bit of the edge, you have to actually be on this side of the edge, or you will go over.

i.e. where you end up is not a continuous function of where you walk. (well roughly anyway)
