What's the Difference Between Epsilon-Delta and Epsilon-N Definitions?

j@zz!
I'm having some trouble distinguishing some definitions and could use some help, please:

Is the epsilon-N definition the one for convergence?

Is the epsilon-delta definition the one for continuity?

If this is correct, could you slightly elaborate on what each actually means? I'm quite confused and any help would do!
 
I am sure when I read the epsilon-delta definition some 3-4 years back, they were defining "limits". Did they change the definition or something now?

-- AI
 
The epsilon-N def. for convergence? If you mean for a sequence of functions, it is

f_{n}(x) \rightarrow f(x) pointwise

if, and only if

\forall \epsilon >0 , \exists N\in\mathbb{N} \mbox{ such that } n\geq N \Rightarrow \left| f_{n}(x) - f(x)\right| < \epsilon

(for each fixed x; for pointwise convergence, N may depend on both \epsilon and x).
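As a concrete sketch of that definition (my own example, not from the thread): take f_n(x) = x^n on 0 < x < 1, which converges pointwise to 0. Solving x^N < \epsilon for the smallest N shows explicitly how N depends on both \epsilon and x:

```python
import math

def smallest_N(x, eps):
    """Smallest N with x**n < eps for all n >= N, where 0 < x < 1.

    x**n < eps  <=>  n*log(x) < log(eps)  <=>  n > log(eps)/log(x)
    (dividing by log(x) < 0 flips the inequality).
    """
    return math.floor(math.log(eps) / math.log(x)) + 1

eps = 1e-3
for x in (0.5, 0.9, 0.99):
    N = smallest_N(x, eps)
    # Spot-check: the tail is within eps of the limit 0 from n = N on.
    assert x**N < eps
    print(f"x = {x}: N = {N}")
```

Notice that N grows as x approaches 1: the same \epsilon needs a larger N for x = 0.99 than for x = 0.5, which is exactly why pointwise convergence lets N vary with x.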

For continuity:

f(x) is continuous at x=a if, and only if

\forall \epsilon >0 , \exists \delta >0 \mbox{ such that } \left| x - a\right| < \delta \Rightarrow \left| f(x) - f(a)\right| < \epsilon
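To make the continuity definition concrete (an illustrative sketch of mine, not part of the definitions above): for f(x) = x^2 at a = 2, a standard choice is \delta = \min(1, \epsilon/5), because \left| x - 2\right| < 1 forces \left| x + 2\right| < 5, so \left| x^2 - 4\right| = \left| x-2\right| \left| x+2\right| < 5\delta \leq \epsilon. A quick numerical spot check:

```python
import random

def delta_for_square_at_2(eps):
    # For f(x) = x**2 at a = 2: if |x - 2| < 1 then |x + 2| < 5,
    # so |x**2 - 4| = |x - 2| * |x + 2| < 5 * |x - 2|.
    # Hence delta = min(1, eps/5) works for this epsilon.
    return min(1.0, eps / 5.0)

for eps in (1.0, 0.1, 0.001):
    d = delta_for_square_at_2(eps)
    for _ in range(1000):
        x = 2 + random.uniform(-d, d)   # any x with |x - 2| < delta
        assert abs(x**2 - 4) < eps      # lands within eps of f(2) = 4
```

This is only a sanity check, of course; the inequality chain above is the actual proof that the chosen \delta works for every such x, not just the sampled ones.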

But I have made some assumptions, so could you be more explicit about the context of these definitions? Are you dealing with real functions of a real variable? General metric spaces?
 
j@zz! said:
I'm having some trouble distinguishing some definitions and could use some help, please:
Is the epsilon-N definition the one for convergence?
Alright, let's say we have a series like:
\sum_{n=1}^{\infty}(\frac{1}{2})^n
We know that this series converges to 1. If you were to start adding up the beginning terms, you would get closer and closer to one. Taking the first four terms gives 1/2 + 1/4 + 1/8 + 1/16 = 15/16. Taking the first five terms gives 31/32. Taking six brings us even closer. But how do we know the limit to which the series converges is 1?

To answer this, we examine the sequence of partial sums (the numbers you get by adding the first term, then the first and second, and so on). Here this sequence is {1/2, 3/4, 7/8, 15/16, ...}. Now what would be a reasonable definition of convergence? Clearly we can't expect the sum to ever reach one in a finite number of terms, but it does get very close.

The definition we choose is this: if I pick any number, call it \epsilon, then no matter how tiny I pick it to be (as long as it is greater than zero), you can come up with a number, call it N, so that all the terms in the sequence from n=N onward are 'within epsilon' of the limit. So if I picked epsilon to be 0.1, you could choose N to be 4, because the fourth term in the sequence is 15/16, and 1-15/16=1/16=0.0625<0.1, and for any term after this the difference is even smaller.

But this one check is not enough to guarantee that the limit is 1. You have to show that you can find an N no matter how small I choose \epsilon, as long as \epsilon>0. To do this, remember the formula for a geometric series:
\sum_{n=1}^{N}r^n=\frac{r-r^{N+1}}{1-r}
So we want the difference between one and the sum to be less than \epsilon:
1-\sum_{n=1}^{N}(\frac{1}{2})^n = 1-\frac{.5-.5^{N+1}}{1-.5} < \epsilon
1-\frac{.5-.5^{N+1}}{.5} < \epsilon
1-(\frac{.5}{.5}-\frac{.5^{N+1}}{.5}) < \epsilon
1-1+\frac{.5^{N+1}}{.5} < \epsilon
2\times .5^{N+1} < \epsilon
.5^{N} < \epsilon
2^{-N} < \epsilon
-N < \log_{2}{\epsilon}
N > -\log_{2}{\epsilon}
So as long as we choose N to be greater than -\log_{2}{\epsilon}, we are guaranteed that the sequence is within epsilon of the limit for all n\geq N. Note that sometimes the partial sums sit above the limit, so the condition 'within epsilon of the limit' is written as |S_n-L| < \epsilon for all n \geq N, where S_n is the nth term in the sequence of partial sums and L is the limit (which in this example was 1).
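The last step above can be spot-checked numerically (a quick sketch, using Python purely as a calculator for the 1/2-geometric series):

```python
import math

def N_for(eps):
    """Smallest integer N with N > -log2(eps), per the derivation above."""
    return math.floor(-math.log2(eps)) + 1

eps = 0.1
N = N_for(eps)                               # -log2(0.1) ~ 3.32, so N = 4
S_N = sum(0.5**n for n in range(1, N + 1))   # partial sum: 15/16 = 0.9375
assert abs(S_N - 1) < eps                    # 1/16 = 0.0625 < 0.1
print(N, S_N)
```

This reproduces the worked example in the post: for \epsilon = 0.1 the recipe yields N = 4, and the fourth partial sum 15/16 is indeed within 0.1 of the limit 1.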
 
The condition n \geq N only makes sense when n is a positive integer: it can only apply to sequences and series.

The condition |x - x_0| < \delta only makes sense for continuous variables: it can only apply to limits of functions.
 