1. The problem statement:
Let ##f:[a, b] \rightarrow \mathbb{R}##. Prove that if ##f## is continuous, then ##f## is bounded.
2. Relevant Information
This is the previous exercise, which I have already proved; the book states to use it to prove the exercise above, and it also hints to use the Heine-Borel theorem.
Let ##A \subseteq \mathbb{R}##, let ##f: A \rightarrow \mathbb{R}##, and let ##c \in A##. Prove that if ##f## is continuous at ##c##, then there is some ##\delta > 0## such that ##f|A \cap (c - \delta, c + \delta)## is bounded.
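(For reference, and not from the book: one standard way to extract such a local bound is to apply the definition of continuity at ##c## with ##\varepsilon = 1##; the particular bound ##K_c = |f(c)| + 1## below is just one convenient choice, not necessarily the one the book uses.)
$$\exists \, \delta > 0: \quad x \in A \cap (c - \delta, c + \delta) \implies |f(x) - f(c)| < 1 \implies |f(x)| \leq |f(x) - f(c)| + |f(c)| < |f(c)| + 1 =: K_c.$$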
3. The Attempt at a Solution
Since ##f## is continuous, for each ##c \in [a, b]##, ##f## is continuous at ##c##. By the previous exercise (applied with ##A = [a, b]##), for each ##c \in [a, b]## there is ##\delta_c > 0## such that ##f|[a, b] \cap (c - \delta_c, c + \delta_c)## is bounded, say by ##K_c##. Since, for each ##c \in [a, b]##, ##c \in (c - \delta_c, c + \delta_c)##, the collection ##\{(c - \delta_c, c + \delta_c)\}_{c \in [a, b]}## forms an open cover of ##[a, b]##. By the Heine-Borel theorem, this collection has a finite subcover. That is, there exist ##n \in \mathbb{N}## and ##q_1, \ldots, q_n \in [a, b]## for which ##(q_1 - \delta_{q_1}, q_1 + \delta_{q_1}), \ldots, (q_n - \delta_{q_n}, q_n + \delta_{q_n})## form an open cover of ##[a, b]##, and ##f## is bounded on each of them by ##K_{q_1}, \ldots, K_{q_n}##, respectively. Now take ##K = \max\{K_{q_1}, \ldots, K_{q_n}\}##, which exists because the set is finite. Let ##x \in [a, b]##. Then there is some ##h \in \{1, \ldots, n\}## for which ##x \in (q_h - \delta_{q_h}, q_h + \delta_{q_h})##, so that ##|f(x)| \leq K_{q_h} \leq K##. Hence, ##f## is bounded by ##K##.
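To make the construction concrete (my own illustration, not from the book), take ##f(x) = x^2## on ##[0, 2]## with ##\delta_c = 1## for every ##c##:
$$K_c = (c + 1)^2, \qquad (q_1, q_2, q_3) = (0, 1, 2) \implies (-1, 1) \cup (0, 2) \cup (1, 3) \supseteq [0, 2], \qquad K = \max\{1, 4, 9\} = 9.$$
(Note the ##K## the proof produces need not be sharp; here the true bound on ##[0, 2]## is ##4##.)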
I feel quite confident in most of the proof, but at the beginning I was a little iffy on exactly why we need to reduce the cover of ##[a, b]## to a finite one. I immediately sensed that that's what they were after with the hint to use the Heine-Borel theorem, but the actual necessity of the reduction is what irked me. I was wondering why we couldn't simply let ##K = \max\{K_c\}_{c \in [a, b]}##. Since then, I have come up with the following answer, on which I would greatly appreciate feedback: we cannot simply choose the bound from the infinite collection of bounds because an infinite set of real numbers need not have a maximum, and its supremum may well be infinite. Sure, the function is bounded on arbitrarily small neighborhoods around each point, but there are uncountably many of these neighborhoods, and nothing a priori stops the local bounds ##K_c## from growing without limit as ##c## varies over the domain. With the finitely many intervals guaranteed by the Heine-Borel theorem, however, there must be a largest bound.
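One way to see that this worry is real (my own example, on a non-compact domain where Heine-Borel does not apply): take ##A = (0, 1]## and ##f(x) = 1/x##. Then ##f## is continuous and locally bounded at every point of ##A##, yet no single ##K## works:
$$\delta_c = \tfrac{c}{2}, \qquad x \in \left(\tfrac{c}{2}, \tfrac{3c}{2}\right) \implies |f(x)| < \tfrac{2}{c} =: K_c, \qquad \sup_{c \in (0, 1]} K_c = \sup_{c \in (0, 1]} \tfrac{2}{c} = \infty.$$
Here the open cover ##\{(c/2, 3c/2)\}_{c \in (0, 1]}## of ##(0, 1]## has no finite subcover: any finite subcollection with centers ##c_1, \ldots, c_n## misses the points in ##(0, \min_i c_i/2]##. That failure is exactly where the compactness of ##[a, b]## enters the argument above.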