Limits don't have an epsilon (ε) and a delta (δ) per se...
The idea of a limit is that it's supposed to capture our idea of "approaching" a number. For example, if I have the function:
f(x) = (x^2 - 4x + 3) / (x - 1)
I know that f(1) does not exist (because 0/0 is undefined). However, if I compute some values of f near 1:
f(1.1) = -1.9
f(1.01) = -1.99
f(1.001) = -1.999
et cetera
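If you'd like to check these values yourself, here's a quick Python sketch (just an illustration; the name f is mine, not part of anything standard):

    def f(x):
        # f(x) = (x^2 - 4x + 3) / (x - 1); note that f(1) raises ZeroDivisionError
        return (x**2 - 4*x + 3) / (x - 1)

    for x in [1.1, 1.01, 1.001, 1.0001]:
        print(x, f(x))   # the values creep toward -2 as x creeps toward 1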
It appears to me that f(x) is approaching -2 as x approaches 1. Incidentally, we can write this in symbols as:
f(x) -> -2 as x -> 1
(where -> is supposed to be an arrow, not a dash followed by a greater than sign)
So the question is, how would we prove this? How can we put into symbols the idea that as x approaches 1, f(x) approaches -2?
Well, it's difficult to describe the idea of a moving point in mathematics, so instead let's just let x be some point near 1, and require that it has to be close to 1. I haven't decided just how close to 1 we should make x, so let's use a variable for this distance.
Let the symbol δ be that variable. I shall require the distance between x and 1 to be less than δ. IOW:
|x - 1| < δ
Also, don't forget x isn't allowed to be exactly 1! Thus:
0 < |x - 1| < δ
Now, what about the value of f(x)? Let's use the same idea! Let's introduce the variable ε to be the bound on how close f(x) must be to -2. IOW, I want:
|f(x) - (-2)| < ε
Now, let us combine these two ideas. I want to say "When x is near 1, f(x) is near -2", so putting that into symbols:
I want to prove that: 0 < |x - 1| < δ implies |f(x) - (-2)| < ε
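To make the logical shape of that statement concrete, here's a rough Python check of the implication by sampling points (floating point, so it's evidence rather than a proof; implication_holds is just my own name, and it reuses the f defined above):

    def implication_holds(delta, eps, samples=1000):
        # Check |f(x) - (-2)| < eps for sampled x satisfying 0 < |x - 1| < delta.
        for i in range(1, samples + 1):
            for sign in (-1, 1):
                # x lies strictly between 1 - delta and 1 + delta, and is never exactly 1
                x = 1 + sign * delta * i / (samples + 1)
                if abs(f(x) - (-2)) >= eps:
                    return False
        return True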
But we still have to figure out what δ and ε should be!
Well, let's defer that a little bit longer and figure out how they should be related. Our end goal is to prove that f(x) is approaching (-2), so let's fix ε and see if we can find a formula for δ that makes the implication true.
(note: These next steps are typical of so-called "ε-δ" proofs)
Goal: Find δ so that:
0 < |x - 1| < δ implies |f(x) - (-2)| < ε
First, plug in what f(x) is:
|(x^2 - 4x + 3) / (x - 1) - (-2)| < ε
Multiply through by |x-1|:
|(x^2 - 4x + 3) + 2(x-1)| < ε |x-1|
|x^2 - 2x + 1| < ε |x-1|
|(x - 1)^2| < ε |x-1|
Now divide both sides by |x-1|:
|x-1| < ε
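(Incidentally, the reason the algebra collapses so neatly is that the numerator factors: for x ≠ 1 we have (x^2 - 4x + 3)/(x - 1) = (x-1)(x-3)/(x-1) = x - 3, so |f(x) - (-2)| = |(x - 3) + 2| = |x - 1| exactly. That's why δ = ε works here with no slack at all; for messier functions you usually end up with δ given by some more complicated expression in ε.)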
And now it's clear how to choose our value for δ! If we simply choose δ to be equal to ε, then we appear to be guaranteed that |x-1| < ε. However, the steps we did above are in the reverse order of what we need to prove this claim, so let's work forwards:
Choose δ to be equal to ε
0 < |x - 1| < δ
Substituting δ = ε yields: |x - 1| < ε
Multiply both sides by |x-1|: |x^2 - 2x + 1| < ε |x-1|
Add and subtract 2(x-1) inside the absolute value: |(x^2 - 4x + 3) + 2(x-1)| < ε |x-1|
Divide by |x-1| (which is positive) and substitute f: |f(x) - (-2)| < ε
So we've proven our goal! For any ε we choose, we may select δ to be equal to ε and then the following is true:
0 < |x - 1| < δ implies |f(x) - (-2)| < ε
IOW, no matter how close we want f(x) to be to (-2), we can restrict x tightly enough around 1 that f(x) is close enough to (-2) for all allowed values of x.
But notice we never had to choose an actual value for ε! And this is the key to how the definition of a limit corresponds to our intuitive idea. No matter how small we want the error to be, we can restrict x to a small enough interval around 1 so that f(x) is always within our error bounds!
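If you want to see this in action numerically, you can feed the choice δ = ε into the implication_holds sketch from above (again, floating point, so this is a sanity check rather than a proof):

    for eps in [0.5, 0.1, 0.01, 0.001]:
        print(eps, implication_holds(delta=eps, eps=eps))   # prints True each time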
Anyways, that's the motivation for the whole thing. Summing it up in the rigorous definition:
f(x) -> L as x -> a if and only if
For every ε>0 there exists a δ>0 such that for all x:
0 < |x - a| < δ implies |f(x) - L| < ε
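If you prefer it typeset, the same definition in LaTeX notation reads:

    \lim_{x \to a} f(x) = L
    \iff
    \forall \varepsilon > 0,\ \exists \delta > 0,\ \forall x:\quad
    0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon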
And to prove that a limit is a particular value using this definition, the typical strategy is what we did above; we "solve" for δ in terms of ε and then prove our solution yields the implication required by the definition.
And getting back to my original statement, δ and ε are dummy variables; we could use other letters for them if we liked... we just tend to use δ and ε by convention. These proofs are confusing to beginners, and having consistent variable names eliminates one source of confusion.