Why is a finite sub-cover necessary for proving continuity implies boundedness?

  • #1
Oats
1. The problem statement:
Let ##f:[a, b] \rightarrow \mathbb{R}##. Prove that if ##f## is continuous, then ##f## is bounded.

2. Relevant Information
This is the previous exercise.
Let ##A \subseteq \mathbb{R}## , let ##f: A \rightarrow \mathbb{R}##, and let ##c \in A##. Prove that if ##f## is continuous at ##c##, then there is some ##\delta > 0## such that ##f|A \cap (c - \delta, c + \delta)## is bounded.
I have already proved this result, and the book states to use it to prove the next exercise. It also hints to use the Heine-Borel theorem.
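For reference, one standard way to extract an explicit local bound (possibly different from the proof I actually wrote) is to take ##\epsilon = 1## in the definition of continuity at ##c##: this gives ##\delta > 0## such that
$$|f(x)| \leq |f(x) - f(c)| + |f(c)| < 1 + |f(c)| \quad \text{for all } x \in A \cap (c - \delta, c + \delta),$$
so ##K_c = 1 + |f(c)|## works as a bound on that neighborhood.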

3. The Attempt at a Solution:
Since ##f## is continuous, for each ##c \in [a, b]##, ##f## is continuous at ##c##. By the previous exercise, for each ##c \in [a, b]##, there is ##\delta_c > 0## such that ##f|[a, b] \cap (c - \delta_c, c + \delta_c)## is bounded, say by ##K_c##. Since, for each ##c \in [a, b]##, ##c \in (c - \delta_c, c + \delta_c)##, the collection ##\{(i - \delta_i, i + \delta_i)\}_{i \in [a, b]}## forms an open cover of ##[a, b]##. By the Heine-Borel theorem, this collection has a finite subcover. That is, there exist ##n \in \mathbb{N}## and ##q_1, \ldots, q_n \in [a, b]## for which ##(q_1 - \delta_{q_1}, q_1 + \delta_{q_1}), \ldots, (q_n - \delta_{q_n}, q_n + \delta_{q_n})## form an open cover of ##[a, b]##, and ##f## is bounded on each by ##K_{q_1}, \ldots, K_{q_n}##, respectively. Now take ##K = \max\{K_{q_1}, \ldots, K_{q_n}\}##. Let ##x \in [a, b]##. Then there is some ##h \in \{1, \ldots, n\}## for which ##x \in (q_h - \delta_{q_h}, q_h + \delta_{q_h})##, so that ##|f(x)| \leq K_{q_h} \leq K##. Hence, ##f## is bounded by ##K##.
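As a quick sanity check of the construction on a toy example (my own numbers, not from the book): for ##f(x) = x^2## on ##[0, 2]##, take ##\delta_c = 0.6## for every ##c##, so that ##f## is bounded on ##(c - 0.6, c + 0.6) \cap [0, 2]## by ##K_c = (c + 0.6)^2##. The three intervals centered at ##0##, ##1##, and ##2## already cover ##[0, 2]##, and
$$K = \max\{0.36,\ 2.56,\ 6.76\} = 6.76$$
does bound ##|f|## on all of ##[0, 2]##.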

I feel quite confident in most of the proof, but at the beginning I was a little iffy on exactly why we need to reduce the cover of ##[a, b]## to a finite one. I immediately sensed that that's what they were after with the hint to use the Heine-Borel theorem, but the actual necessity of the reduction itself is what irked me. I was wondering why we couldn't simply let ##K = \max\{K_c\}_{c \in [a, b]}##. Since then, I have come up with the following reasoning, on which I would greatly appreciate feedback: the reason we cannot simply choose a bound from an infinite collection of bounds is that the bounds may have no largest element. Sure, the function is bounded on arbitrarily small neighborhoods of each point, but there are uncountably many of these neighborhoods, and there is nothing stopping the possibility that the local bounds simply keep increasing as ##c## increases in ##[a, b]##. However, with the finitely many guaranteed by the Heine-Borel theorem, there would have to be a largest.
 
  • #2
Oats said:
However, with the finitely many guaranteed by the Heine-Borel theorem, there would have to be a largest.
Yes, that's the reason.

If you want to try to solidify your intuition further, try to prove the theorem for the function ##f:[a,b)\to \mathbb R##, i.e. where the domain is a half-open interval. It can't be done. A counterexample is the function ##f(x)=\frac1{b-x}##. It increases without limit as ##x\to b##. So the ##K_c##s will have no upper bound. The reason the proof doesn't work in this case is that a half-open interval is not compact, and so not every open cover of it has a finite sub-cover. In particular, the cover defined in the proof will have no finite sub-cover.
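To spell that out: whatever local bound ##K_c## works at ##c## must in particular satisfy ##K_c \geq f(c) = \frac{1}{b - c}##, and
$$\frac{1}{b - c} \to \infty \quad \text{as } c \to b^-,$$
so ##\sup_{c \in [a, b)} K_c = \infty## and the "take the max over all ##c##" idea has nothing finite to produce.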
 
