Proving lim f(x) = 0 for a Problem from Spivak's Calculus

  • Thread starter: Bleys
  • Tags: Calculus, Limit
Bleys
I came across a problem in Calculus by Spivak, and I'm having trouble formalizing the proof.
Let A_{n} be a finite set of numbers in [0,1], and suppose A_{n} and A_{m} are disjoint whenever m \neq n. Let f(x) be defined as
f(x)=1/n if x is in A_{n}, and f(x)=0 if x is not in any A_{n}. The question asks to prove that the limit of f as x goes to a is 0, for every a in [0,1].
Now I thought: given an n, there are only finitely many elements of A_{n} in a neighborhood of a. Choose the smallest such n, say n_{0}. Then f(x)\leq1/n_{0}. Restrict the neighborhood further so that none of the elements of A_{n_{0}} are in the interval. Then choose the next n that is minimal. Obviously f(x)\leq1/n\leq1/n_{0}. Successively doing this, as the neighborhood shrinks, the bound on f(x) tends to 0.
I don't know how to prove this using the limit definition; can someone help me out with what \delta to choose?
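For reference, what I need to exhibit is: for every \varepsilon > 0 a \delta > 0 such that 0 < |x - a| < \delta implies |f(x) - 0| = |f(x)| < \varepsilon.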
 
The smallest such n with what property? You're not very clear about that. Here's a proof: I'm going to assume f(a)=0; the other case is only a slight modification and should be instructive for you to think about. Fix \varepsilon>0; then there exists an integer N > 0 with 1/N < \varepsilon. Let B_N = \cup_{n=1}^{N} A_n, so B_N is a finite set, and a \not \in B_N (why?). Now, let \delta > 0 be such that (a - \delta, a + \delta) \cap B_N = \emptyset (why does such a \delta exist?). Then if |a - x| < \delta, either x \in A_m for some m > N, or f(x) = 0. In either case, |f(a) - f(x)| = f(x) < 1/N < \varepsilon, as desired.
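If it helps to see the construction concretely, here is a small Python sanity check. It uses one specific instance of the setup, A_n = { p/n in [0,1] in lowest terms } (so f is Thomae's function); that instance and the names A, f, and delta_for are my own choices for illustration, not part of Spivak's problem.

from fractions import Fraction
from math import gcd, ceil

# Concrete instance: A_n = { p/n in [0,1] in lowest terms }, so each A_n is
# finite, the A_n are pairwise disjoint, and f(p/q) = 1/q in lowest terms.

def A(n):
    # The finite set A_n; A_1 = {0, 1} since 0/1 and 1/1 have denominator 1.
    if n == 1:
        return {Fraction(0), Fraction(1)}
    return {Fraction(p, n) for p in range(1, n) if gcd(p, n) == 1}

def f(x):
    # For a reduced Fraction p/q, x lies in A_q, so f(x) = 1/q.
    return Fraction(1, x.denominator)

def delta_for(a, eps):
    # The delta from the proof: pick N with 1/N < eps, form the finite set
    # B_N = A_1 u ... u A_N, drop a itself (the limit ignores the value at a),
    # and take the distance from a to the nearest remaining point.
    N = ceil(1 / eps) + 1          # guarantees 1/N < eps
    B = set().union(*(A(n) for n in range(1, N + 1)))
    B.discard(a)
    return min(abs(x - a) for x in B)

a, eps = Fraction(1, 2), Fraction(1, 10)
delta = delta_for(a, eps)

# Check the claim on all rationals with denominator up to 500: whenever
# 0 < |x - a| < delta we get f(x) < eps. (Irrational x give f(x) = 0 anyway.)
for q in range(1, 501):
    for p in range(q + 1):
        x = Fraction(p, q)
        if 0 < abs(x - a) < delta:
            assert f(x) < eps
print("delta =", delta)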
 
Bleys said:
I came across a problem in Calculus by Spivak, and I'm having trouble formalizing the proof.
Let A_{n} be a finite set of numbers in [0,1], and suppose A_{n} and A_{m} are disjoint whenever m \neq n. Let f(x) be defined as
f(x)=1/n if x is in A_{n}, and f(x)=0 if x is not in any A_{n}. The question asks to prove that the limit of f as x goes to a is 0, for every a in [0,1].
Now I thought: given an n, there are only finitely many elements of A_{n} in a neighborhood of a. Choose the smallest such n, say n_{0}. Then f(x)\leq1/n_{0}. Restrict the neighborhood further so that none of the elements of A_{n_{0}} are in the interval. Then choose the next n that is minimal. Obviously f(x)\leq1/n\leq1/n_{0}. Successively doing this, as the neighborhood shrinks, the bound on f(x) tends to 0.
I don't know how to prove this using the limit definition; can someone help me out with what \delta to choose?

You're on the right track; you should break this down into two cases:
1) a is in some set A_n
2) a is not in any set A_n

Given \varepsilon > 0, you must consider all the values f(x) = 1/n that fail the bound, i.e. those with 1/n \geq \varepsilon. So consider the union S of the sets A_n such that f(x) \geq \varepsilon for every element x of those sets. Now you've reduced the problem to taking \delta to be the minimum distance between the members of S (other than a itself) and a, that is, the distance from a to the nearest point of S. This minimum exists and is positive because only finitely many n satisfy 1/n \geq \varepsilon and each A_n is finite, so S is a finite set. If a is a member of S itself, you can use the exact same argument, since we are only concerned with the behaviour as x APPROACHES a.
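In symbols (assuming S contains points other than a; if not, any \delta > 0 works):

\delta = \min\{\, |x - a| : x \in S,\ x \neq a \,\}, \quad \text{where } S = \bigcup_{n \,:\, 1/n \geq \varepsilon} A_{n}.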
 
The smallest such n with what property? You're not very clear about that.
Sorry, what I meant to say was the smallest n such that the set A_{n} has an element in the neighborhood of a.
so B_N is a finite set, and a \not \in B_N (why?)
By definition of f, f(a)=0 if and only if a is not in any A_{n}; since we assumed f(a)=0, in particular a \not \in B_N.
(why does such a \delta exist?)
Because B_N is finite and does not contain a, the minimum distance from a to the points of B_N is positive.

Did you assume f(a)=0 because you had (a - \delta, a + \delta)? Would 0 < |a - x| < \delta fix that? After all, since the limit is 0, the expression |f(a) - f(x)| < \varepsilon becomes |f(x) - 0| = |f(x)| < \varepsilon, since the value of f at a doesn't matter.

Given \varepsilon > 0, you must consider all the values f(x) = 1/n that fail the bound, i.e. those with 1/n \geq \varepsilon. So consider the union S of the sets A_n such that f(x) \geq \varepsilon for every element x of those sets. Now you've reduced the problem to taking \delta to be the minimum distance between the members of S (other than a itself) and a.
So since S will be finite (since there is an n with 1/n < \varepsilon, only finitely many sets are involved), that \delta will work. Can I just not consider whether a is in the set or not? After all, like you said, we're interested in the limit, not continuity. So 0 < |a - x| < \delta would solve that.
Ok, I think I get it. I was having trouble formulating all this with the \delta-\varepsilon definition. Thanks for all your help!
 
Bleys said:
Sorry, what I meant to say was the smallest n such that the set A_{n} has an element in the neighborhood of a.
By definition of f, f(a)=0 if and only if a is not in any A_{n}; since we assumed f(a)=0, in particular a \not \in B_N.
Because B_N is finite and does not contain a, the minimum distance from a to the points of B_N is positive.

Did you assume f(a)=0 because you had (a - \delta, a + \delta)? Would 0 < |a - x| < \delta fix that? After all, since the limit is 0, the expression |f(a) - f(x)| < \varepsilon becomes |f(x) - 0| = |f(x)| < \varepsilon, since the value of f at a doesn't matter.
So since S will be finite (since there is an n with 1/n < \varepsilon, only finitely many sets are involved), that \delta will work. Can I just not consider whether a is in the set or not? After all, like you said, we're interested in the limit, not continuity. So 0 < |a - x| < \delta would solve that.
Ok, I think I get it. I was having trouble formulating all this with the \delta-\varepsilon definition. Thanks for all your help!

It's a good idea to spell out how the limit works in both cases; you need to show that you've considered the case where a lies in one of the sets (it is a significant case). But in the end, it uses the same argument.
The \delta you are taking is \min\{\, |a - x| : x \in \bigcup_{n=1}^{N} A_n,\ x \neq a \,\}, where N is the largest integer with 1/N \geq \varepsilon.

Here is an intuitive version of the proof:

|A1||A15||A20||A4|<----a------>|A7||A9||A10000||A124921894381241|

Any x's near a will fall into various sets A_n; take \delta to be the minimum distance between the point a and the sets carrying values that are too large (those A_n with 1/n \geq \varepsilon). This gap around a always exists, since only finitely many sets A_n carry values \geq \varepsilon and each of them is finite.
 