Extending a uniformly continuous function on an open interval to a closed interval?

moxy

Homework Statement


Show that the function f: J → ℝ is bounded if f is uniformly continuous on the bounded interval J.


Homework Equations


J is a bounded interval, so say J = (a,b)

f is uniformly continuous on J, so for every ##\epsilon > 0## there exists a ##\delta > 0## such that for all ##s, t \in J = (a,b)##,
##|f(s) - f(t)| < \epsilon## whenever ##|s - t| < \delta##.

f: J → ℝ is bounded if there exists a real number M such that |f(x)| ≤ M for all x in J.


The Attempt at a Solution


I think that if I can extend f to the endpoints of J, then I can use the Extreme Value Theorem to say that f attains a min and max value, i.e. is bounded. So I need to define f(a) and f(b).

##f(a) = \lim_{n\to\infty} f\!\left(a + \frac{1}{n}\right)##
##f(b) = \lim_{n\to\infty} f\!\left(b - \frac{1}{n}\right)##

Clearly ##f## is defined on ##\left[a + \frac{1}{n},\, b - \frac{1}{n}\right]## for all ##n \in \mathbb{N}## large enough that ##a + \frac{1}{n} < b - \frac{1}{n}##.


Am I on the right track? I don't feel like I've used the fact that f is uniformly continuous on J. Is it because f is uniformly continuous that I'm able to define f(a) and f(b)?
 
Try to do this more simply. The idea is to pick points ##x_1, \dots, x_n## such that every ##x## in J is "close" to one of these ##x_i##. This plus uniform continuity will give you what you want.
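For concreteness, here is one way to flesh that out (a sketch; the particular points below are my own choice, and any finite set with small enough gaps works). Take ##\epsilon = 1## and let ##\delta > 0## be the corresponding ##\delta## from uniform continuity. Pick ##n \in \mathbb{N}## with ##\frac{b-a}{n} < \delta## and set
$$x_i = a + \left(i - \tfrac{1}{2}\right)\frac{b-a}{n}, \qquad i = 1, \dots, n,$$
the midpoints of the ##n## equal subintervals of ##(a,b)##. Every ##x \in (a,b)## lies in one of those subintervals, so ##|x - x_i| \le \frac{b-a}{2n} < \delta## for that subinterval's midpoint ##x_i##. Then
$$|f(x)| \le |f(x) - f(x_i)| + |f(x_i)| < 1 + \max_{1 \le i \le n} |f(x_i)|,$$
and the right-hand side is a bound ##M## for ##|f|## on all of ##J##, with no extension of ##f## needed.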
 
But I only know that J is bounded, not that it's closed and bounded. So can't I only say that J = (a,b) and not necessarily J = [a,b]? So I don't immediately know if f is defined at a or b, much less if it's continuous at a and b.
 
Okay, never mind, you changed your post. So... I should define a sequence of, say, rational numbers ##\{x_n\}##, with every ##x_n## in J, such that ##\lim_{n\to\infty} x_n = a##?

And similarly ##\{y_n\}##, with every ##y_n## in J, such that ##\lim_{n\to\infty} y_n = b##?

I think that I'm able to do this, but I don't see how it gets me any closer to showing that f is bounded.

EDIT:

I guess I can say that

##\lim_{n\to\infty} f(x_n) = f(a)##
##\lim_{n\to\infty} f(y_n) = f(b)##

(why these limits should exist at all is sketched below).

So f is defined on [a,b]. Do I still have to show it's continuous at a and b to use the extreme value theorem? I don't see how this approach is simpler or much different than the one in my original post.
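Here is why I think uniform continuity is what makes the limits exist (a sketch): given ##\epsilon > 0##, take the ##\delta > 0## from uniform continuity. Since ##x_n \to a##, the sequence ##(x_n)## is Cauchy, so there is an ##N## with ##|x_m - x_n| < \delta## for all ##m, n \ge N##, and hence
$$|f(x_m) - f(x_n)| < \epsilon \quad \text{for all } m, n \ge N.$$
So ##(f(x_n))## is Cauchy in ##\mathbb{R}## and converges, and I can *define* ##f(a)## to be its limit (similarly ##f(b)##). Interleaving two different sequences tending to ##a## shows the value doesn't depend on which sequence is used. That gives the extension, though continuity of the extended function at ##a## and ##b## still needs its own (similar) argument before the extreme value theorem applies.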
 