Proving the Limit of f(x) Is 0 in a Problem from Spivak's Calculus

  • Context: Graduate
  • Thread starter: Bleys
  • Tags: Calculus, Limit

Discussion Overview

The discussion revolves around proving that the limit of the function f(x), as x approaches a, is 0, where f(x) is defined in terms of a collection of pairwise disjoint finite sets A_n within the interval [0,1]. Participants work through the \delta - \varepsilon definition of the limit, addressing the behavior of f(x) at points inside and outside these sets.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • One participant suggests that for any neighborhood around a, there are finitely many elements of A_n, leading to the conclusion that f(x) can be made arbitrarily small.
  • Another participant proposes a proof assuming f(a)=0 and discusses the existence of a delta such that the neighborhood around a does not intersect with a finite union of sets B_N.
  • One participant clarifies that "the smallest n" means the smallest n such that A_n has an element in the given neighborhood of a.
  • Some participants discuss the implications of whether a is in any A_n and how that affects the limit proof, emphasizing the need to consider both cases.
  • One participant mentions the importance of the delta-epsilon definition and how to choose delta based on the distance to the nearest elements of the sets A_n.
  • Another participant provides an intuitive visualization of the proof, suggesting that a gap between a and the nearest elements of the relevant sets A_n always exists because those sets contain only finitely many points.

Areas of Agreement / Disagreement

Participants generally agree on the approach to proving the limit but express differing views on the specifics of the proof, particularly regarding the treatment of cases where a is in or out of the sets A_n. The discussion remains unresolved on certain details of the proof structure.

Contextual Notes

Some participants note the need for clarity on the properties of the sets A_n and the implications of the delta-epsilon definition, indicating that assumptions about the behavior of f(x) in relation to these sets may not be fully articulated.

Bleys
I came across a problem in Calculus by Spivak, and I'm having trouble formalizing the proof.
Let A_{n} be a finite set of numbers in [0,1], and suppose that if m \neq n then A_{n} and A_{m} are disjoint. Let f(x) be defined as
f(x)=1/n if x is in A_{n}, and f(x)=0 if x is not in any A_{n}. The question asks to prove that the limit of f as x goes to a is 0, for any a in [0,1].
Now I thought: for any given n, there are only finitely many elements of A_{n} in a neighborhood of a. Choose the smallest n such that A_{n} meets the neighborhood, say n_{0}. Then f(x)\leq1/n_{0} on that neighborhood. Restrict the neighborhood further so that none of the elements of A_{n_{0}} are in the interval, and choose the next minimal n, say n_{1}. Obviously f(x)\leq1/n_{1}\leq1/n_{0}. Doing this successively, f(x) will tend to 0 as x gets arbitrarily close to a.
I don't know how to prove this using the limit definition; can someone help me out with what \delta to choose?
 
The smallest such n with what property? You're not very clear about that. Here's a proof: I'm going to assume f(a)=0; the other case is only a slight modification and should be instructive for you to think about. Fix \varepsilon>0, then there exists N > 0 with 1/N < \varepsilon. Let B_N = \cup_{ n = 1 }^N A_n, so B_N is a finite set, and a \not \in B_N (why?). Now, let \delta > 0 be such that ( a - \delta, a + \delta ) \cap B_N = \emptyset (why does such a \delta exist?). Then if | a - x | < \delta, either x \in A_m for m > N, or f(x) = 0. In either case, |f(a) - f(x)|<1/N<\varepsilon, as desired.
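The \delta construction above can be made tangible with a concrete instance (my own illustration, not from the thread): take A_n to be the reduced fractions in [0,1] with denominator exactly n, so the sets are finite and pairwise disjoint and f becomes Thomae's function. The helper names `A`, `f`, and `find_delta` are invented for this sketch.

```python
from fractions import Fraction

def A(n):
    """A_n: reduced fractions in [0, 1] with denominator exactly n.
    These sets are finite and pairwise disjoint, as the problem requires."""
    return {Fraction(k, n) for k in range(n + 1)
            if Fraction(k, n).denominator == n}

def f(x):
    """f(x) = 1/n when x is in A_n; Fraction() reduces automatically,
    so the lowest-terms denominator of a rational x is exactly that n."""
    return Fraction(1, Fraction(x).denominator)

def find_delta(a, eps):
    """The delta from the proof: pick N with 1/N < eps, form the finite
    set B_N = A_1 U ... U A_N, and return the distance from a to the
    nearest point of B_N other than a itself."""
    N = int(1 / eps) + 1                                # guarantees 1/N < eps
    B_N = set().union(*(A(n) for n in range(1, N + 1)))
    B_N.discard(Fraction(a))                            # punctured neighborhood
    return min(abs(Fraction(a) - b) for b in B_N)

a, eps = Fraction(1, 2), Fraction(1, 10)
delta = find_delta(a, eps)
print(delta)  # 1/22: the nearest fraction with denominator <= 11 is 5/11

# spot-check: every rational x (denominators up to 50) in the punctured
# delta-neighborhood of a satisfies f(x) < eps
for q in range(1, 51):
    for p in range(q + 1):
        x = Fraction(p, q)
        if 0 < abs(x - a) < delta:
            assert f(x) < eps
```

The spot-check only samples rationals, since f vanishes off the A_n anyway; the point is that every x close enough to a either lies in no A_n or lies in some A_m with m > N, so f(x) < \varepsilon.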
 
Bleys said:
I came across a problem in Calculus by Spivak, and I'm having trouble formalizing the proof. [...] can someone help me out with what \delta to choose?

you're on the right track, you should break this down into 2 parts:
1) if a is in a set An
2) if a is not in a set An

given an epsilon larger than 0, you must consider all values f(x) = 1/n that fail the bound, i.e. those with 1/n >= epsilon. These come from the union (call it S) of the finitely many sets A_n with 1/n >= epsilon, since f(x) >= epsilon exactly for the elements of those sets. Now you've reduced the problem to taking delta to be the minimum distance between a and the members of S (excluding a itself). If a is a member of S, you can use the exact same argument, since we are only concerned with behaviour as x APPROACHES a. This minimum distance is always positive because S, being a finite union of finite sets, is finite.
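Stated compactly in \delta - \varepsilon form, the argument above amounts to the following sketch (using the post's set S, and assuming S \setminus \{a\} is nonempty; otherwise any \delta works):

```latex
\[
S \;=\; \bigcup_{n \,:\, 1/n \,\ge\, \varepsilon} A_n
\;=\; \bigcup_{n=1}^{\lfloor 1/\varepsilon \rfloor} A_n ,
\qquad
\delta \;=\; \min\{\, |a - x| \;:\; x \in S,\ x \neq a \,\} \;>\; 0 ,
\]
\[
0 < |x - a| < \delta
\;\Longrightarrow\;
x \notin S
\;\Longrightarrow\;
f(x) < \varepsilon .
\]
```

Here \delta is positive precisely because S is a finite union of finite sets, so its points cannot accumulate at a.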
 
The smallest such n with what property? You're not very clear about that.
Sorry, what I meant to say was the smallest n such that the set A_{n} has an element in the neighborhood of a.
so B_N is a finite set, and a \not \in B_N (why?)
f(a)=0, by definition of f, iff a is not in any A_n; in particular, a is not in B_N.
(why does such a \delta exist?)
Because B_N is finite.

Did you assume f(a)=0 because you had ( a - \delta, a + \delta )? Would 0 < | a - x | < \delta fix that? After all, since the limit we're looking at is 0, the expression |f(a) - f(x)|<\varepsilon becomes |f(x) - 0|=|f(x)|<\varepsilon, since the value of f at a doesn't matter.

[...] Now you've reduced the problem to taking delta to be the min distance between the members of that set (S) and a [...]
So since S will be finite (since only finitely many n satisfy 1/n \geq \varepsilon), that delta will work. Can I just not consider whether a is in the set or not? After all, like you said, we're interested in the limit, not continuity, so 0 < | a - x | < \delta would solve that.
Ok I think I get it. I was having trouble formulating all this with the \delta - \varepsilon definition. Thanks for all your help!
 
Bleys said:
[...] Can I just not consider whether a is in the set or not? After all, like you said, we're interested in the limit, not continuity. [...]
Ok I think I get it. I was having trouble formulating all this with the \delta - \varepsilon definition. Thanks for all your help!

It's a good idea to say how the limit works in both cases; you need to show that you've considered the case where a is in some A_n (it is a significant case), but in the end it uses the same argument.
The delta you are taking is \min \{ |a-x| : x \in \cup_{n=1}^{N} A_n, x \neq a \}, where N is large enough that 1/N < \varepsilon.

Here is an intuitive version of the proof:

|A1||A15||A20||A4|<----a------>|A7||A9||A10000||A124921894381241|

Any x will fall into one of the sets A_n (or none of them), so take delta to be the minimum distance between the point a and the sets that matter, namely those A_n with 1/n \geq \varepsilon. A gap between a and those sets always exists: together they contain only finitely many points, so the nearest one to a sits at a positive distance.
 
