What is the Epsilon-Delta Definition of a Limit and How Does It Work?

kts123
I can't get my head around the epsilon-delta definition of a limit. Unfortunately I don't have a teacher to ask (I'm teaching this to myself out of personal interest), so this forum is my last resort -- google hasn't been kind to me.

From what I've seen, I don't really understand how the definition means much of anything (visual examples included). All it seems to say is "there is an unspecified value which is greater than the difference between a function and a given value L, where that difference is greater than zero." The problem is, I don't see any need for L or f(x) to be anywhere near one another.

For example: f(5) = 25;

\lim_{x \to 6} f(x) = 20

-1000 < |f(5) - 20| < 1000

And

0 < |5-6| < 2

That is, I see no specific reason to even bother putting in the correct values. How does the definition define anything if we require prior knowledge of what our limits and x values are? Furthermore, how are epsilon and delta even related? The visual explanations make even less sense -- why can't we choose given values of f(x) and L whose difference is large, yet select a large enough epsilon so that the centre between f(x) and L still lies between the bands of epsilon?

I'm terribly confused as everyone can probably tell... sorry.
 
All it is meant to do is provide a rigorous justification for doing limits. I mean, it's on the one hand just some silly formalism, and on the other hand it does make rigorous proofs more straightforward and feasible.

The idea is that, if a limit exists, there is an interval (say, of width delta) around the point in which the function is close (say, within a small difference epsilon) to some value, lim f(x). Doing delta-epsilon proofs is just a rigorous way of showing that this interval exists, and thus the limit exists.

It's a bit awkward, but go ahead and give it a shot. Don't beat yourself up over it.
 
Imagine a function which is monotonically decreasing and takes on all positive values less than 1 as x approaches 4.

Can you see why, or even prove, that its limit is zero as x goes to 4?
 
I can't help but get a tingly feeling on my tongue that there's something very profound behind this definition. Unfortunately I'll have to beat myself up over it for a few months before I give up, that's how it always is. =(
 
ALL positive values less than 1 as x goes to 4? I suppose the smallest positive value less than one is greater than zero by the smallest possible amount, leaving the only value "less than" it to be zero itself. That is to say, if "A" is the smallest possible value greater than zero, and f(x) > f(x+B) (where B is any positive number, since of course the function is always decreasing), then when f(x) ==> A, f(x+B) < f(x), and since the only value less than A is zero itself... Errr... I think that makes sense.
 
Actually, limits are a very intuitive concept. I think you (and any starting mathematician, even) would agree with me that
by a statement like \lim_{x \to a} f(x) = L we mean that the closer x gets to a, the closer the function value f(x) gets to L.​
In other words: if we look at the function value in a point close enough to a, we are certain to find something which is close enough to L. The epsilon-delta stuff is just a way to rigorously define what we mean by "close enough".

It works as follows:
  • Let \epsilon > 0.
  • Then there exists a \delta > 0 such that: if |x - a| < \delta, then |f(x) - L| < \epsilon.
The first point says, you can choose epsilon. This epsilon will be the maximum allowed difference between the function value and the limit value. You can make it as small or as large as you want. If you keep making it smaller and smaller, the function value will be closer and closer to the limit. Note that it does explicitly say that epsilon is positive: the statement does not say that there will be a point where epsilon is actually zero (that is, the function actually reaches the limit).
Then the second point asserts the existence of a neighborhood of size delta around a for which you will find the function value within epsilon of L. That is, whatever value of x in the interval (a - \delta, a + \delta) I plug into f, I am sure that the function value will be in the interval (L - \epsilon, L + \epsilon). Of course, the smaller the epsilon you give me (the more stringent your definition of what you call "close enough") the smaller my delta will have to be (the closer you will have to be to a to actually come "close enough").
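To make those two bullet points something you can actually play with, here is a small Python sketch (my own illustration, not part of the explanation above) that numerically spot-checks the condition for the example function f(x) = x^2 near a = 2 with L = 4, using the candidate rule \delta = \min(1, \epsilon/5). That rule works because |x^2 - 4| = |x - 2||x + 2| and |x + 2| < 5 whenever |x - 2| < 1.

Code:
import random

def f(x):
    return x * x          # example function (an assumption chosen for this sketch)

a, L = 2.0, 4.0           # point approached and claimed limit

def delta_for(eps):
    # candidate rule: delta = min(1, eps/5), since |x^2 - 4| < 5|x - 2| when |x - 2| < 1
    return min(1.0, eps / 5.0)

for eps in (10.0, 1.0, 0.1, 0.001):
    delta = delta_for(eps)
    # sample points with 0 < |x - a| < delta and check that |f(x) - L| < eps for all of them
    samples = (a + random.uniform(-delta, delta) for _ in range(10000))
    ok = all(abs(f(x) - L) < eps for x in samples if x != a)
    print(f"eps = {eps}: delta = {delta}, sampled points all within eps: {ok}")

Of course a finite sample is not a proof -- the proof is the inequality in the comment -- but running it with smaller and smaller eps shows exactly how delta has to shrink along with it.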
 
kts123 said:
How does the definition define anything if we require prior knowledge of what our limits and x values are?

The definition is not a recipe for calculating limits. However, it tells you how to check whether the number you suspect to be the limit really is the limit - or not.
 
Ohhhh, thanks CompuChip, I was expecting there to be something that implied "\lim_{\epsilon \to 0}" (that is, something which forced epsilon to squash f(x) and L together.) I have to wonder, though, isn't that definition somewhat useless for a function which oscillates extremely rapidly near the limit; for example, sin(1/x^2), where x is very small, say 2E(-10000)?

The definition is not a recipe for calculating limits. However, it tells you how to check whether the number you suspect to be the limit really is the limit - or not.

Amazing that no sites wanted to mention these things. I suppose this is a testament to textbooks being much more reliable than internet sources. =P
 
kts123 said:
I have to wonder, though, isn't that definition somewhat useless for a function which oscillates extremely rapidly near the limit; for example, sin(1/x^2), where x is very small, say 2E(-10000)?
No, in fact it is extremely useful because it allows you to prove that
\lim_{x \to 0} \sin(1 / x^2)
does not exist (which you of course already suspected by a reasoning like: "if x goes to zero, 1/x^2 goes to infinity, and the sine function keeps oscillating there, so probably there is no limit"). For suppose you claim that the limit is -1 \le L \le 1, then I can show that there is an epsilon such that for all delta, there is an x closer than delta to zero for which sin(1/x^2) will be more than epsilon from L. Which by the very definition says: "the limit is not L".
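To see concretely why no L can survive at 0, here is a quick illustration of my own (under the assumption that we test with \epsilon = 1/2): inside every \delta-interval around 0 the function still hits both +1 and -1, and no single number L is within 1/2 of both.

Code:
import math

def peak_and_trough_inside(delta):
    # pick n large enough that both sample points lie in (0, delta);
    # at x_peak we have 1/x^2 = 2*pi*n + pi/2, so sin(1/x^2) = +1,
    # and at x_trough we have 1/x^2 = 2*pi*n + 3*pi/2, so sin(1/x^2) = -1
    n = int(1.0 / (2 * math.pi * delta ** 2)) + 1
    x_peak = 1.0 / math.sqrt(2 * math.pi * n + math.pi / 2)
    x_trough = 1.0 / math.sqrt(2 * math.pi * n + 3 * math.pi / 2)
    return x_peak, x_trough

for delta in (0.1, 1e-3, 1e-5):
    xp, xt = peak_and_trough_inside(delta)
    print(f"delta = {delta}: sin(1/xp^2) = {math.sin(1/xp**2):+.4f} at xp = {xp:.3e},"
          f" sin(1/xt^2) = {math.sin(1/xt**2):+.4f} at xt = {xt:.3e}")

So whatever L you pick, at least one of those two points is more than 1/2 away from it -- which is exactly the "there is an epsilon such that for all delta..." statement above.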
 
  • #10
Amazing that no sites wanted to mention these things. I suppose this is a testament to textbooks being much more reliable than internet sources.

I am afraid textbooks will be no better here. Many math books leave lots of things unsaid - because they are "obvious". The fact that these things are obvious only to mathematicians doesn't matter - at least not to them :wink:
 
  • #11
I forgot about this part:

kts123 said:
Ohhhh, thanks CompuChip, I was expecting there to be something that implied "\lim_{\epsilon \to 0}" (that is, something which forced epsilon to squash f(x) and L together.)

Note that there is no such thing explicitly. But of course, the idea is that you will make epsilon smaller and smaller. You are allowed to choose \epsilon = 10^{10^{100}}, and the statement should still be true (and in fact, chances are it is, unless you have chosen the suspected limit really badly or you have a strange function). The point is, however, that it holds for all \epsilon > 0, no matter how small. So if there is a number for which it is not true, whether it be very large or very small, then the definition is not satisfied. In fact, as I already stated implicitly in my last post, the opposite of
\lim_{x \to a} f(x) = L, meaning: for each \epsilon > 0 there is a \delta = \delta(\epsilon) > 0 such that for all x satisfying |x - a| < \delta, |f(x) - L| < \epsilon
is
\lim_{x \to a} f(x) \neq L, meaning: there is an \epsilon > 0 (finding just one suffices) such that for all \delta = \delta(\epsilon) > 0 there is an x satisfying |x - a| < \delta for which |f(x) - L| \ge \epsilon
Which is: there is a distance \epsilon for which I cannot get f(x) closer than \epsilon to the supposed limit L, whence it cannot be the limit.
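As a toy illustration of that negation (my own example, not one discussed in this thread): take the wrong claim \lim_{x \to 2} x^2 = 5. The single witness \epsilon = 1/2 already kills it, because points near 2 have function values near 4, never within 1/2 of 5.

Code:
def f(x):
    return x * x

a, wrong_L, eps = 2.0, 5.0, 0.5   # the (false) claimed limit and one witness epsilon

for delta in (1.0, 0.5, 0.1, 1e-4, 1e-8):
    x = a + min(delta, 0.1) / 2          # a point with 0 < |x - a| < delta, kept close to a
    assert 0 < abs(x - a) < delta
    assert abs(f(x) - wrong_L) >= eps    # ...whose value is still at least eps away from 5
    print(f"delta = {delta}: x = {x}, |f(x) - 5| = {abs(f(x) - wrong_L):.4f}")

One failing epsilon is enough: since the definition demands the condition for every epsilon, the claim L = 5 is refuted, while the same game played against L = 4 never finds such a witness.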
 
  • #12
CompuChip said:
No, in fact it is extremely useful because it allows you to prove that
\lim_{x \to 0} \sin(1 / x^2)
does not exist (which you of course already suspected by a reasoning like: "if x goes to zero, 1/x^2 goes to infinity, and the sine function keeps oscillating there, so probably there is no limit"). For suppose you claim that the limit is -1 \le L \le 1, then I can show that there is an epsilon such that for all delta, there is an x closer than delta to zero for which sin(1/x^2) will be more than epsilon from L. Which by the very definition says: "the limit is not L".

I meant when x is very small, but not EXACTLY zero. For example, \lim_{x \to 10^{-10000}} \sin(1/x^2). The required values for epsilon and delta are so incredibly small that it seems rather difficult to "pick a value that seems close enough"; it would be very easy to accidentally pick a value too large or too small. I suppose I was hoping the formal definition would introduce something less... ambiguous (though I suppose it needs to be based on the nature of a limit). It is, as you say, very useful, just not the solution to all your limit needs. Anyway, many thanks for the advice, everyone.
 
  • #13
I don't really see what your problem with that definition is. There is no "required value" of epsilon, you can choose it, as I tried to explain. For any given epsilon, finding the right value of delta can indeed be a tough job, in fact, that's the hardest part about epsilon-delta proofs.

Also, the "problem" with the function you introduced only occurs at x = 0, where the function indeed has no limit (as you can actually prove from the definition). You can also prove from the definition that <br /> \lim_{x \to 10^{-10000}} \sin(1/x^2) = \left. \sin(1 / x^2) \right|_{x = 10^{-10000}}<br />
(or you can prove first that if f is continuous in a, then \lim_{x \to a} f(x) = f(a)). It is not any easier or more difficult than proving that \lim_{x \to 1} \sin(1 / x^2) = \sin(1). In general in such proofs, you start by writing down "Let \epsilon &gt; 0" and then proceed to define delta in terms of epsilon, e.g. "Choose \delta = \sqrt{\epsilon} / 4 and let x be such that |x - a| &lt; \delta." Then you go and estimate the difference f(x) - L in a series of inequalities to show that it is smaller than epsilon. Whether a is 0 or infinity, or something in between, is irrelevant for that structure of the proof and does not increase or decrease the level of difficulty.
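For concreteness, here is a standard textbook-style worked example of exactly that structure (my own choice of function, not one from this thread). Claim: \lim_{x \to 3} x^2 = 9. Proof: Let \epsilon > 0. Choose \delta = \min(1, \epsilon / 7) and let x satisfy 0 < |x - 3| < \delta. Since |x - 3| < 1, we get |x + 3| \le |x - 3| + 6 < 7, and therefore |x^2 - 9| = |x - 3| \, |x + 3| < 7\delta \le \epsilon. That's the whole pattern: name delta in terms of epsilon, then chain inequalities until epsilon appears on the right.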

Can you explain what it is that you find ambiguous about the definition?
 
  • #14
The simpler, the better...

Find the limit of f(x) = x as x goes to infinity. I give you epsilon = 17 and 3506; can you give me a delta such that the condition is satisfied?

What about f(x) = 1/x? I give you epsilon = 1849 and 0.02. Any deltas?

And lastly, f(x) = cos(x), I choose epsilon = 0.5...
 
  • #15
CompuChip said:
I don't really see what your problem with that definition is. There is no "required value" of epsilon, you can choose it, as I tried to explain. For any given epsilon, finding the right value of delta can indeed be a tough job, in fact, that's the hardest part about epsilon-delta proofs.

Also, the "problem" with the function you introduced only occurs at x = 0, where the function indeed has no limit (as you can actually prove from the definition). You can also prove from the definition that <br /> \lim_{x \to 10^{-10000}} \sin(1/x^2) = \left. \sin(1 / x^2) \right|_{x = 10^{-10000}}<br />
(or you can prove first that if f is continuous in a, then \lim_{x \to a} f(x) = f(a)). It is not any easier or more difficult than proving that \lim_{x \to 1} \sin(1 / x^2) = \sin(1). In general in such proofs, you start by writing down "Let \epsilon &gt; 0" and then proceed to define delta in terms of epsilon, e.g. "Choose \delta = \sqrt{\epsilon} / 4 and let x be such that |x - a| &lt; \delta." Then you go and estimate the difference f(x) - L in a series of inequalities to show that it is smaller than epsilon. Whether a is 0 or infinity, or something in between, is irrelevant for that structure of the proof and does not increase or decrease the level of difficulty.

Can you explain what it is that you find ambiguous about the definition?

That the definition doesn't say what value epsilon or delta should be. By the second part we say 0 < |x - a| < \delta. That's saying that no matter how small we make delta, the difference of x and a will always be smaller -- but not fully zero? It's SMALLER than ANY number you could EVER choose, but NOT zero? What the heck is the reciprocal of such a small number? BIGGER than any number you could choose? I understand what the definition is saying, it just seems weird.
 
  • #16
That the definition doesn't say what value epsilon or delta should be.

Because it doesn't matter what values they have, as long as they fulfill the condition.
 
  • #17
kts123 said:
That the definition doesn't say what value epsilon or delta should be.
That's because epsilon can be anything (as I explained), and delta depends on epsilon and the function itself. The definition works for any function, exactly because it doesn't prescribe delta.

kts123 said:
By the second part we say 0 < |x - a| < \delta. That's saying that no matter how small we make delta, the difference of x and a will always be smaller -- but not fully zero? It's SMALLER than ANY number you could EVER choose, but NOT zero?
Sorry, I missed the part where it says that |x - a| cannot be zero. The definition says: if |x - a| < delta, then |f(x) - L| < epsilon. In particular, you can do it for x = a, if the function is defined there (i.e. for a continuous function). But for functions like 1/x you cannot make |x - 0| equal to 0 because that would force you to choose x = 0 but the function is not defined there.

kts123 said:
What the heck is the reciprocal of such a small number? BIGGER than any number you could choose?
In the phrase "such a small number", which number are you talking about? Epsilon? Delta?
And the reciprocal of a small number is indeed a large number, but what does that have to do with it?

kts123 said:
I understand what the definition is saying, it just seems weird.
Actually, once you understand what the definition is trying to express, I think it's a completely logical way to do it, and one of the few ways that expresses the idea of a limit without making use of that concept. So if you don't mind me being very blunt, I don't think you fully understand it yet :wink:
 
  • #19
Yeah that's right you don't consider x = a in the definition. The limit can exist even if the function is not defined at that point (which would not be true if you needed to include x=a in the definition). For example f(x) = \frac{x^2-1}{x - 1} is not defined at x=1 but \lim_{x\rightarrow 1} f(x) = 2.
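A quick numeric illustration of that (my addition, not part of the post above): the formula is undefined at x = 1 itself (a 0/0 division), yet its values cling to 2 on both sides.

Code:
def f(x):
    return (x * x - 1.0) / (x - 1.0)   # undefined at x = 1 (division by zero)

for h in (0.1, 0.01, 1e-4, 1e-8):
    print(f"f(1 - {h}) = {f(1 - h):.10f}    f(1 + {h}) = {f(1 + h):.10f}")

try:
    f(1.0)
except ZeroDivisionError:
    print("f(1) itself is undefined")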
 
  • #20
DavidWhitbeck said:
Yeah that's right you don't consider x = a in the definition. The limit can exist even if the function is not defined at that point (which would not be true if you needed to include x=a in the definition).

In fact, the values of f only need to be considered when 0 < |x - a| < delta, for the delta that goes with each epsilon.
Not only can the function be undefined elsewhere, it may even be defined differently there; the limit checks whether the function is almost constant in every sufficiently small neighborhood of the point.
 
  • #21
DavidWhitbeck said:
For example f(x) = \frac{x^2-1}{x - 1} is not defined at x=1

I understand what you are aiming at, I just wonder if it is a good example, as obviously

f(x) = \frac{x^2-1}{x - 1} = \frac{(x+1)(x-1)}{x - 1} = x + 1

So the question is, is multiplying by 0/0 enough to make the function undefined at a point?



 
  • #22
DavidWhitbeck said:
Yeah that's right you don't consider x = a in the definition. The limit can exist even if the function is not defined at that point (which would not be true if you needed to include x=a in the definition). For example f(x) = \frac{x^2-1}{x - 1} is not defined at x=1 but \lim_{x\rightarrow 1} f(x) = 2.


I know what you're getting at, I've dealt with a lot of limits despite not knowing the formal definition. What I don't understand is that it says, in other words, 0 < |x-a| < \delta, where delta is ANY real number greater than zero. No matter how small we make delta, |x-a| is still smaller -- but not zero. I don't see anywhere that says "the definition only counts if you actually choose delta"; by all rights, delta is always defined for ANY value greater than zero, even if we don't "select" delta. Essentially, it's saying |x-a| is positive, yet smaller than any positive number that exists.
 
  • #24
Borek said:
I understand what you are aiming at, I just wonder if it is a good example, as obviously

f(x) = \frac{x^2-1}{x - 1} = \frac{(x+1)(x-1)}{x - 1} = x + 1

So the question is, is multiplying by 0/0 enough to make function undefined in a point?

Borek

Yes, dividing by 0 is all you need to do to make a function undefined at a point. Removable singularities are good examples because they illustrate the point without being overly pathological.

Your "obvious" reasoning is wrong, f(x) \neq x + 1 in order to make your simplification you implicitly assumed that x\neq 1, and that's the entire point isn't it! Of course you lose the subtlety if you do bad math! lol
 
  • #25
kts123 said:
I know what you're getting at, I've dealt with a lot of limits despite not knowing the formal definition. What I don't understand is that it says, in other words, 0 < |x-a| < \delta, where delta is ANY real number greater than zero. No matter how small we make delta, |x-a| is still smaller -- but not zero. I don't see anywhere that says "the definition only counts if you actually choose delta"; by all rights, delta is always defined for ANY value greater than zero, even if we don't "select" delta. Essentially, it's saying |x-a| is positive, yet smaller than any positive number that exists.

You don't actually have to choose delta, but you must show that it exists. Let's go over this as a recipe. Let's suppose you think the limit exists and equals the real number L.

(1) Choose \epsilon > 0.
(2) We need to find a real number \delta such that whenever |x - a| < \delta, the following inequality is also satisfied: |f(x) - L| < \epsilon.
(3) If we can repeat (1)-(2) successfully for all positive values of \epsilon, then we can conclude that the limit exists.

Now look at that procedure. It is very easy to do for all those cases at once if you find delta as a function of epsilon, \delta = g(\epsilon). You should think of it that way-- from the epsilon you find delta, which in turn gives you the inequality for delta that must imply the inequality for epsilon. It's like a big circle.

By saying that you think delta is always defined, you clearly don't understand the definition yet. You, yourself must find delta to show that the limit exists. It's not a priori defined.
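Here is that recipe written out as code (my illustration; the function and the rule g are assumptions chosen for the example). Take f(x) = 1/x, a = 2, L = 1/2. The rule g(\epsilon) = \min(1, \epsilon) works because |1/x - 1/2| = |x - 2| / (2x) and 2x > 2 whenever |x - 2| < 1.

Code:
import random

def f(x):
    return 1.0 / x

a, L = 2.0, 0.5

def g(eps):
    # step (2): delta as a function of epsilon
    return min(1.0, eps)

for eps in (1.0, 0.25, 1e-3, 1e-7):      # steps (1) and (3): ever smaller epsilons
    delta = g(eps)
    xs = (a + random.uniform(-delta, delta) for _ in range(10000))
    assert all(abs(f(x) - L) < eps for x in xs if x != a)
    print(f"eps = {eps}: delta = g(eps) = {delta} passed the sampled check")

The worked example with 3x - 5 further down the thread is exactly this game: exhibiting one such g (there, g(\epsilon) = \epsilon/3).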
 
  • #27
DavidWhitbeck said:
Yes, dividing by 0 is all you need to do to make a function undefined at a point. Removable singularities are good examples because they illustrate the point without being overly pathological.

Your "obvious" reasoning is wrong, f(x) \neq x + 1 in order to make your simplification you implicitly assumed that x\neq 1, and that's the entire point isn't it! Of course you lose the subtlety if you do bad math! lol

Good point, my bad. Nice trick - take any function, multiply it by (x-1)/(x-1) to get a function that is otherwise identical (or am I losing other fine details now?), but undefined at x=1.
 
  • #28
If I'm understanding this definition right, it implies continuity (or defines it...) I'm probably overthinking it if I'm making conclusions like that. I'll set it aside and try to figure it out another time.

Anyway, many thanks for the attempts at explaining.
 
  • #30
kts123 said:
If I'm understanding this definition right, it implies continuity (or defines it...) I'm probably overthinking it if I'm making conclusions like that. I'll set it aside and try to figure it out another time.

Anyway, many thanks for the attempts at explaining.

Continuity of a function f at a just means that the limit
\lim_{x \to a} f(x) = L
exists as in the definition of limit, but moreover that f is actually defined at a and that L = f(a).

So in summary: f is continuous at a means that \lim_{x \to a} f(x) = f(a). f is continuous means that it is continuous at a for all a in the domain.

But for non-continuous functions, the limit may still exist although the function value is different or not defined. For example,
f(x) = \begin{cases} x^2 & \text{ if } x \neq 1 \\ \sqrt{2}\pi & \text{ if } x = 1 \end{cases}
is not a continuous function, nevertheless the limit as x goes to 1 does exist (and is equal to 1, which is not f(1)).
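A tiny numeric illustration of that last example (my addition): the function values near 1 settle on 1, even though f(1) is something else entirely.

Code:
import math

def f(x):
    # the piecewise function from above: x^2 everywhere except at x = 1
    return math.sqrt(2) * math.pi if x == 1 else x * x

for h in (0.1, 1e-3, 1e-6):
    print(f"f(1 - {h}) = {f(1 - h):.8f}    f(1 + {h}) = {f(1 + h):.8f}")
print("but f(1) =", f(1))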
 
  • #31
Trambolin, for the first limit you wrote, you set the limit as x approaches infinity. The epsilon-delta proofs for these are a little different from the standard one.
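(For reference, this is the standard textbook version of that variant, not something stated earlier in the thread: \lim_{x \to \infty} f(x) = L means that for every \epsilon > 0 there exists an N such that x > N implies |f(x) - L| < \epsilon. The number N plays the role that \delta plays in the finite case: instead of "close enough to a", the condition becomes "far enough out".)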

For the second and third limits, you must first define a point a that x approaches.

In general, here's how you should approach it:

1. First, find a function f(x) and a point a, which is the point that you want x to approach.
2. Evaluate the limit - your result is L.
3. Set up the inequalities |f(x)-L|<\epsilon and 0<|x-a|<\delta.
4. Manipulate both inequalities so that you have the same terms inside the absolute value signs in both inequalities.
5. Set \delta=g(\epsilon), and from this, you can find a delta given any epsilon.
 
  • #32
Here's one easy example: Prove that the limit of 3x - 5 as x\rightarrow2 is 1.

Scratch work:
1. Our f(x) = 3x - 5, and a = 2.
2. It is easy to see, either by the problem statement or by evaluating the limit, that L is indeed one.
3. |f(x) - L|< ε if 0<|x - a|< δ, or |(3x-5) -1|< ε if |x - 2|< δ
4. 3|x - 2|< ε if |x - 2|< δ, which is equivalent to |x - 2|< ε/3 if |x - 2|< δ
5. Since the same terms are now inside both absolute value signs, we can see that we must set δ = ε/3. In other words, for whatever ε we pick, δ will be one-third of that number.

Formal proof:
Let δ = ε/3.
If 0<|x - 2|< δ = ε/3, then 3|x - 2|< ε, i.e. |3x - 6|< ε. Thus, |(3x - 5) - 1|< ε.
We can see that 0<|x - 2|< δ implies |(3x - 5) - 1|< ε, or equivalently, 0<|x - a|< δ implies |f(x) - L|< ε. QED.

Trambolin, if you were to give me any positive value of ε, we could now find a δ that would satisfy the proof by using δ = ε/3. Also, as you can see, the proof is basically just those 5 steps done backwards.
 
  • #33
Well actually, if you read carefully, it wasn't me who asked the question... I was trying to tickle the motivation channels of the original poster, and the limits (if they exist) are obvious from the simple functions that I specifically chose, but anyway thanks for the clarification.
 
  • #34
Thanks for the contribution, adartsesirhc. But I hope you did notice that this thread is, like, one month old, and the post you are referring to is on page 1 of (currently) 3.
 
  • #35
I'm sorry. I only meant to help. The second post was only to clear up any misconception as to the usage of the epsilon-delta proofs.
 
  • #36
delta epsilon explanation

kts123 said:
I can't get my head around the epsilon-delta definition of a limit. Unfortunately I don't have a teacher to ask (I'm teaching this to myself out of personal interest), so this forum is my last resort -- google hasn't been kind to me.

From what I've seen, I don't really understand how the definition means much of anything (visual examples included). All it seems to say is "there is an unspecified value which is greater than the difference between a function and a given value L, where that difference is greater than zero." The problem is, I don't see any need for L or f(x) to be anywhere near one another.

There are three things I would like to say. First, your concern over your lack of understanding of the delta epsilon definition of a limit shows that you have significant mathematical talent.

Second, here is the way I like to phrase the delta-epsilon definition:

DEFINITION: "A function f:R->R aproaches a limit L as x goes to c if and only if
For all epsilon > 0, there exists a delta > 0 such that |x-c| < delta implies that |f(x)-L| < epsilon."

Note that I didn't say anything about "an unspecified value of epsilon." I said, "for ALL positive values of epsilon."

It is an implication of the definition that it is only small values of epsilon that really count. This is because if you can find a "delta," call it delta1, for some small value of epsilon, call it epsilon1, then that value of delta will work for all epsilon greater than epsilon1.

In other words, if you are trying to apply the definition to prove that some number L is a limit as x -> c of some function f(x) and you succeed in finding a delta (delta1) that works for some value of epsilon (epsilon1), then that value of delta (delta1) will work for all epsilon2 > epsilon1. Hence you are finished with values of epsilon greater than epsilon1 and all you have to worry about now is the values of epsilon less than epsilon1.

One could state this as a Theorem:

Suppose that there exists epsilon1, delta1 > zero such that |f(x)-L| < epsilon1 for all x such that |x-c| < delta1, THEN for all epsilon2 > epsilon1 it is true that |f(x)-L| < epsilon2.

Proof: |f(x)-L| < epsilon1 < epsilon2. QED.

The third thing I would like to say is that for the three months of summer vacation before I went to college 45 years ago, I spent a lot of time trying to understand the delta-epsilon definition of a limit and of a derivative. I didn't feel I understood it after those three months. But, a couple of weeks into my freshman calculus class, I felt I really understood it. I don't remember much else about what happened back then, but I do remember that the Professor invited me to switch from the regular calculus class to the honors calculus class after the first month of class. I thanked him and said that I was learning too much in the regular class to switch, but could I take both? He said "of course." The point is that the many, many hours over many, many months of thinking about that definition served me well not only as an undergraduate but also as a graduate student. All phases of analysis (including PDE, functional analysis, and C* algebras) were vastly easier for me because I learned that definition. Same thing with mathematical logic and general topology.

Algebra, algebraic topology, and algebraic geometry were difficult for me, though, and still are. Not to mention number theory. I still don't have my belt all the way around Chern classes.

Deacon John
 
  • #37
Deacon John, can you tell me please, using your definition of limit, what IS the following limit?
1) lim 2x+1 as x --> 2, where f:N --> R, N is the natural numbers, and f(x) = 2x+1.
Does the following function have any limits within its domain, and if yes, how many?
f = {(1/n, 1 + 2^-n) : n ∈ N}
 
  • #38
What is your motivation in asking these questions? They seem fairly simple to me and Deacon John's definition of limit gives the same thing as the "usual" limit.

For f(x) = 2x + 1, as x goes to 2, f:N->R, the limit, by Deacon John's definition or any definition I am familiar with, is 5: since x is an integer, the only way we can have |x - 2| < \delta for small delta is to have x = 2, so the limit is just f(2) = 5.

For the second question, again since to be "close" to an integer, n must be that integer, every point in the range of the function is a limit point and, since n can be arbitrarily large, (0, 1) is also a limit point. That function has a countably infinite number of limit points.

The only difficulty I can see with Deacon John's definition of limit is that it uses just |x-c| < delta rather than 0 < |x-c| < delta. (And I always forget that myself!)

To point that out, you might ask "what is the limit of the function 'f(x)= x if x is NOT 0, f(0)= 1' as x goes to 0? (f:R->R)"

Using the strict wording of Deacon John's definition, that has NO limit since for x arbitrarily close to 0, f(x) takes on values arbitrarily close to 0 AND 1. Excluding |x-c|= 0, that is excluding x= 0 in this case, f(x) takes values arbitrarily close to 0 only and the limit is 0. (That difference is crucially important to the definition of the derivative- where we are always taking the limit of a fraction which is undefined at the limit value.)
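A small sketch of that last example (my own code, just to make the role of the "0 <" visible): with \epsilon = 1/4 and the candidate L = 0, the condition holds at every x with 0 < |x| < \delta = 1/4, but fails if we are forced to include x = 0 itself.

Code:
def f(x):
    return 1.0 if x == 0 else x   # f(x) = x away from 0, but f(0) = 1

eps = 0.25
delta = 0.25

test_points = [-delta / 2, -delta / 10, delta / 10, delta / 2]   # points with 0 < |x| < delta
print("deleted-neighborhood check:", all(abs(f(x) - 0.0) < eps for x in test_points))  # True
print("|f(0) - 0| =", abs(f(0.0) - 0.0), "which is not <", eps)                        # fails at x = 0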
 
  • #39
LAVRANOS said:
Deacon John, can you tell me please, using your definition of limit, what IS the following limit?
1) lim 2x+1 as x --> 2, where f:N --> R, N is the natural numbers, and f(x) = 2x+1.
Does the following function have any limits within its domain, and if yes, how many?
f = {(1/n, 1 + 2^-n) : n ∈ N}

Lavranos,

The definition that I gave was only for functions that map the real numbers into the real numbers.

The definition that I gave does not apply when the domain of the function is the natural numbers (N).

When the domain is the natural numbers, there are a whole host of definitions that are possible.

[For the advanced student: To see this, pick any point in the Stone-Čech compactification of N (call the point omega), pick any point of N, for example the number "3," and use your favorite method to identify "3" with "omega." If this is not immediately clear, you might not be an advanced student, but don't be discouraged, everybody was a beginner when they started out. I'm pretty sure that different points give different topologies. For example, I'm pretty sure that the S.C.c. contains a point in the closure of the even numbers but not in the closure of the odd numbers.]

Probably the most natural definition of a limit for your kind of functions is given by the discrete topology, where every subset of the natural numbers is both closed and open. To apply this requires a more sophisticated definition of a limit than I am willing to explain, but the result is that every function is continuous, and every value in the domain (N) and every value in the image (i.e., the set f(N)) of every function (defined on N, that is) is a limit.

The answer to your first question (assuming the discrete topology on N) is "5" because f(2) = 5. And, with the same assumption, every natural number is a "limit" in your sense of the word, i.e., a point in the domain of the function, where the function approaches a limit.

However, your second example is more likely to be associated with questions about the limit points of the image of the second function.

You did not specify the range for your second function, but the most natural choice would be the Cartesian product of the real numbers with themselves, namely RxR. I will assume that this is what you meant.

Your second function has the point (0,1) as a limit point of its image, but this point is not in the image of the second function.

In fact, the limit as x -> infinity of your second function is (0,1), and "infinity" is not in the domain of your second function.

I did not give a "delta - epsilon" definition for this kind of limit, but, there is one. I suggest you look it up. It should be in the same book where you find the one that I did give.

Or better, use your intuition to write down what it should look like, then look it up.

Hint: The "delta-epsilon" definition for the kind of of limit when n --> infinity does not have any "delta" in it, but, it does have an "epsilon."

[For the advanced student: the definition of "limit" that I am trying to motivate Lavranos to "discover" is equivalent to the definition when the topology of the "one point compactification" is put on the set N+ = N union {infinity}. I.e., the closed subsets of N+ are precisely the finite subsets and N+ itself. As far as I know, "N+" is not a standard notation.]

Cheers,

DJ
 
  • #40
THE FUNCTION f(x)=2x+1 where f:N-->R HAS NO LIMIT AT X=2 BECAUSE X=2 IS NOT A POINT OF ACCUMULATION, BUT IT IS CONTINUOUS AT X=2.
THE SECOND FUNCTION IS NOT MINE, IT IS A PROFESSOR'S FUNCTION, AND HE SAYS THAT THE FUNCTION CAN HAVE A LIMIT AT X=0, which is 1, and he proves that.
I SUGGEST you write my post on a piece of paper to make sure you don't make any mistakes, and go and check it with a couple of books.
But then again the question comes up: DO YOU KNOW HOW TO READ BOOKS PROPERLY?
Deacon John, the above is not for you; please go and see my posts in the thread <Discussion of valid method of proof> under General maths and I will ask a few more questions.
 