Sequence of functions, continuity, uniform convergence

fmam3

Homework Statement


Let (f_n) be a sequence of continuous functions on [a,b] that converges uniformly to f on [a,b]. Show that if (x_n) is a sequence in [a,b] and if x_n \to x, then \lim_{n \to \infty} f_n (x_n) = f(x)


Homework Equations


None


The Attempt at a Solution


I just want to double check whether my proof works! Any criticisms welcomed! :)

Since each f_n is continuous on the closed interval [a,b], each f_n is uniformly continuous on [a,b]. And since \lim x_n = x, (x_n) is a Cauchy sequence; and since each f_n is uniformly continuous on [a,b], it follows that (f_n (x_n)) is a Cauchy sequence.

Let \varepsilon > 0. Since (f_n(x_n)) is Cauchy, it converges to f(x_n), thus for some N, n > N implies |f_n (x_n) - f(x_n)| < \varepsilon / 2. And since the uniform limit of continuous functions is continuous, that is since f_n \to f uniformly and (f_n) is continuous on [a,b], it implies that f is continuous on [a,b]. And since \lim x_n = x and by the continuity of f, there exists some \delta > 0 such that |x_n - x| < \delta implies |f(x_n) - f(x)| < \varepsilon / 2.

Thus, by the inequality |f_n (x_n) - f(x)| = |f_n(x_n) - f(x_n) + f(x_n) - f(x)| \leq |f_n(x_n) - f(x_n)| + |f(x_n) - f(x)|, it follows that when n > N and |x_n - x| < \delta, we have |f_n (x_n) - f(x)| < \varepsilon /2 + \varepsilon /2 = \varepsilon. This completes our proof.
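Separately, just to convince myself the statement itself is plausible, I also ran a quick numerical check. Everything in it is made up purely for illustration (f_n(x) = x^2 + x/n on [0,1], which converges uniformly to f(x) = x^2, and x_n = 1/2 + 1/(n+2) \to 1/2); it's a sanity check of the claim, not part of the proof:

Code:
# Quick numerical sanity check of the claim lim f_n(x_n) = f(x); the example
# functions and sequence are made up purely for illustration:
#   f_n(x) = x^2 + x/n      -> f(x) = x^2 uniformly on [0,1],
#   x_n    = 1/2 + 1/(n+2)  -> x = 1/2.

def f_n(n, x):
    return x**2 + x / n

def f(x):
    return x**2

x = 0.5
for n in (10, 100, 1000, 10000):
    x_n = 0.5 + 1.0 / (n + 2)            # a sequence in [0,1] converging to 1/2
    print(n, abs(f_n(n, x_n) - f(x)))    # the gap shrinks toward 0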
 
fmam3 said:
it follows that (f_n (x_n)) is a Cauchy sequence.

It's not clear how you would conclude this. It's practically tantamount to what you are trying to prove.

thus for some N, n > N implies |f_n (x_n) - f(x_n)| < \varepsilon / 2.
Instead, I would try obtaining more directly |f_n (y) - f(y)| < \varepsilon / 2 for all y. Then take y=x_n.

The rest isn't too bad, although I would use more of an N_2 approach instead of a delta.
 
Billy Bob said:
It's not clear how you would conclude this. It's practically tantamount to what you are trying to prove.

Thanks for the reply and feedback!

Actually I looked at this question again today and realized the exact same point you noted --- by writing (f_n(x_n)) is Cauchy, it's practically just stating what I want to show and hence doesn't work.

I reworked another approach and here it is:

For contradiction, suppose \lim_{n \to \infty} f_n(x_n) \ne f(x). That is, \exists \varepsilon > 0, \forall N, \exists n > N such that |f_n(x_n) - f(x)| \geq \varepsilon.

Since (x_n) is a sequence in the closed and bounded set [a,b], by the Bolzano-Weierstrass Theorem there exists a convergent subsequence (x_{n_k})_{k \in \mathbb{N}}. And since \lim x_n = x and every subsequence of a convergent sequence converges to the same limit, we have \lim_{k \to \infty} x_{n_k} = x; that is, \forall \delta > 0, \exists N_0 such that \forall k > N_0 we have |x_{n_k} - x| < \delta, and furthermore, by the continuity of each f_n on [a,b], this gives |f_n(x_{n_k}) - f_n(x)| < \varepsilon / 2 for each n \in \mathbb{N}. Note that since \{n_k : k > N_0\} \subseteq \{n : n \in \mathbb{N}\}, k > N_0 implies |f_{n_k}(x_{n_k}) - f_{n_k}(x)| < \varepsilon / 2.

Since f_n \to f uniformly on [a,b], \exists N_1 such that |f_n(x) - f(x)| < \varepsilon / 2 for all n > N_1 and all x \in [a,b].

Thus, take k > \max\{N_0, N_1\}, so that n_k \geq k > \max\{N_0, N_1\}. By the inequality |f_{n_k} (x_{n_k}) - f(x)| = |f_{n_k} (x_{n_k}) - f_{n_k}(x) + f_{n_k} (x) - f(x)| \leq |f_{n_k} (x_{n_k}) - f_{n_k}(x)| + |f_{n_k}(x) - f(x)| < \varepsilon / 2 + \varepsilon / 2 = \varepsilon ---- a contradiction, since we assumed |f_n(x_n) - f(x)| \geq \varepsilon.
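Side note: just to picture the Bolzano-Weierstrass step for myself, here's a quick numerical sketch of the bisection idea on a finite prefix of a bounded sequence. Everything in it (the example x_n = \sin(n), the number of bisection steps) is an arbitrary choice for illustration only, not part of the proof:

Code:
import math

# Rough numerical picture of the Bolzano-Weierstrass step (a bisection sketch
# on a finite prefix of an arbitrary bounded example, x_n = sin(n) in [-1,1];
# an illustration of the idea, not a proof).

x = [math.sin(n) for n in range(1, 20001)]   # bounded sequence in [-1, 1]
lo, hi = -1.0, 1.0
idx = list(range(len(x)))                    # indices of the current subsequence

for _ in range(10):                          # halve the trapping interval 10 times
    mid = (lo + hi) / 2.0
    left = [i for i in idx if x[i] <= mid]
    right = [i for i in idx if x[i] > mid]
    # keep whichever half still contains most of the terms
    # (in the real proof: whichever half contains infinitely many)
    if len(left) >= len(right):
        idx, hi = left, mid
    else:
        idx, lo = right, mid

print(len(idx), lo, hi)                      # many indices remain, trapped in a tiny interval
print([round(x[i], 4) for i in idx[:8]])     # first few terms of the extracted subsequence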

Billy Bob said:
Instead, I would try obtaining more directly |f_n (y) - f(y)| < \varepsilon / 2 for all y. Then take y=x_n.

The rest isn't too bad, although I would use more of an N_2 approach instead of a delta.
Perhaps I'm missing something here, but can you explain this in a little more detail? Since the problem already has f_n \to f uniformly, the statement |f_n (y) - f(y)| < \varepsilon / 2, \forall y is already true without doing any work. The most difficult part, at least to me, is that we are trying to show that f_n(x_n) converges to something (i.e. f(x)), where f_n(x_n) is indexed by n \in \mathbb{N} in BOTH the sequence of functions and the sequence of numbers. This is in contrast to showing f_n(x) \to f(x), where only the sequence of functions is indexed and the input value x is fixed.

Or it could be I'm just talking total rubbish above. I would appreciate any of your thoughts :)
 
Billy Bob said:
It's not clear how you would conclude this. It's practically tantamount to what you are trying to prove.

Oh sorry, on reading what you wrote again, I think I'd misinterpreted your comment. My apologies. I thought the result was clear, but anyway, the statement is the following (even though using this approach in my original question would not work):
(1) If (x_n) is a Cauchy sequence in S and f is uniformly continuous on S, then (f(x_n)) is a Cauchy sequence.

The proof is quite straightforward. Let \varepsilon > 0. Since f is uniformly continuous on S, \exists \delta > 0 such that \forall x,y \in S with |x - y| < \delta, we have |f(x) - f(y)| < \varepsilon.

Since (x_n) is Cauchy, \exists N such that \forall n,m > N, we have |x_n - x_m| < \delta. Thus, by the uniform continuity of f, n,m > N implies |f(x_n) - f(x_m)| < \varepsilon, and thus (f(x_n)) is a Cauchy sequence.

The second statement that I'd used is:
(2) If f is a continuous function on a closed interval [a,b], then f is uniformly continuous on [a,b]. I'll skip the proof of this as it's essentially just an application of the Bolzano-Weierstrass theorem.

And combining (1) and (2), I concluded that (f_n(x_n)) is Cauchy. But the reason this would not work in my original question, I think, is that while we know that (f_n(x_n)) is Cauchy, we do not know what its limit is, and thus it is insufficient to prove the result.
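(And a throwaway numerical illustration of (1), with f(x) = \sqrt{x} on [0,1] and x_n = 1/n both picked arbitrarily for the demo:)

Code:
import math

# Illustration of (1): a uniformly continuous f maps a Cauchy sequence to a
# Cauchy sequence.  f and (x_n) below are arbitrary choices for the demo.

def f(x):
    return math.sqrt(x)   # uniformly continuous on [0, 1]

def spread(values):
    """Largest gap |a_i - a_j| within the given tail -- should be small."""
    return max(values) - min(values)

for N in (10, 100, 1000):
    tail_x = [1.0 / n for n in range(N, 2 * N)]   # a tail of x_n = 1/n
    tail_fx = [f(x) for x in tail_x]              # its image under f
    print(N, spread(tail_x), spread(tail_fx))     # both spreads shrink with N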
 
Instead of resorting to Cauchy sequences, there is a much simpler way to do this. Let e > 0. Probably the first result in any treatment of uniform convergence tells us that the hypotheses in this problem imply that f is continuous on [a,b]. In particular, f is continuous at x and since x_n approaches x, f(x_n) approaches f(x), so there exists N such that n > N implies |f(x_n) - f(x)| < e/2 (*). By uniform convergence, there exists M such that n > M implies |f_n(y) - f(y)| < e/2 for all y in [a,b]. In particular, |f_n(x_n) - f(x_n)| < e/2 (**). If n > max{N,M}, then by (*) and (**) and the triangle inequality, we have |f_n(x_n) - f(x)| < e, as desired.

*EDIT* Essentially you had the last step of the proof figured out. It remained to estimate |f_n(x_n) - f(x_n)| via uniform convergence and |f(x_n) - f(x)| via continuity of f.
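If it helps to see the two estimates separately, here's a quick numerical sketch. The particular functions and sequence are arbitrary choices for the demo (f_n(x) = x^2 + sin(nx)/n, which converges uniformly to f(x) = x^2 on [0,1], and x_n = 1/2 - 1/(n+2) -> 1/2); it only illustrates (*) and (**), it isn't part of the argument:

Code:
import math

# Numerical sketch of the two-threshold argument (illustration only; the
# example f_n, f and x_n are arbitrary choices, not from the problem).

def f_n(n, x):
    return x**2 + math.sin(n * x) / n   # -> x^2 uniformly on [0,1]

def f(x):
    return x**2

x = 0.5
eps = 1e-3
for n in (10, 100, 1000, 10000, 100000):
    x_n = 0.5 - 1.0 / (n + 2)                 # sequence in [0,1] converging to x
    cont_term = abs(f(x_n) - f(x))            # controlled by continuity of f    (*)
    unif_term = abs(f_n(n, x_n) - f(x_n))     # controlled by uniform convergence (**)
    total = abs(f_n(n, x_n) - f(x))
    print(n, cont_term, unif_term, total, total < eps)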
 
snipez90 said:
Instead of resorting to Cauchy sequences, there is a much simpler way to do this. Let e > 0. Probably the first result in any treatment of uniform convergence tells us that the hypotheses in this problem imply that f is continuous on [a,b]. In particular, f is continuous at x and since x_n approaches x, f(x_n) approaches f(x), so there exists N such that n > N implies |f(x_n) - f(x)| < e/2 (*). By uniform convergence, there exists M such that n > M implies |f_n(y) - f(y)| < e/2 for all y in [a,b]. In particular, |f_n(x_n) - f(x_n)| < e/2 (**). If n > max{N,M}, then by (*) and (**) and the triangle inequality, we have |f_n(x_n) - f(x)| < e, as desired.

*EDIT* Essentially you had the last step of the proof figured out. It remained to estimate |f_n(x_n) - f(x_n)| via uniform convergence and |f(x_n) - f(x)| via continuity of f.

That's what I'd thought too before I wrote my proof version 1 (the first post) and proof version 2 (the post after). However, I was quite hesitant about this for one single reason --- the closed interval [a,b] (more of a student mentality --- they don't usually include non-essential information in statements, so everything must somehow be used). If this result held for any interval (open or closed), then there would be no need to be explicit about writing [a,b] as part of the hypothesis of the problem. I'm actually thinking that the result does not hold for an open interval (a,b), since if \lim x_n = x and x \notin (a,b), then all the results of continuity go out the window. Thus, in my proof version 2, I chose to use a subsequential argument to explicitly make use of the fact that we're dealing with a closed interval and can therefore use the Bolzano-Weierstrass theorem.

What do you think?
 
Hmm, well when I encountered this problem before, the problem didn't state explicitly that (x_n) was a sequence in [a,b]. It was a fair assumption I think, since without it you can't really do much with the bound from uniform convergence. Since you're told explicitly that (x_n) is a sequence in the closed interval [a,b], then the limit is in [a,b], so I think the argument holds.
 
snipez90 said:
Hmm, well when I encountered this problem before, the problem didn't state explicitly that (x_n) was a sequence in [a,b]. It was a fair assumption I think, since without it you can't really do much with the bound from uniform convergence. Since you're told explicitly that (x_n) is a sequence in the closed interval [a,b], then the limit is in [a,b], so I think the argument holds.

I just thought about it and here's a simple example to illustrate my point about the importance of closed interval [a,b].

Suppose we have a sequence (x_n) defined by x_n = 1 / (n + 1) for all n; clearly (x_n)_{n \in \mathbb{N}} \subseteq (0,1) and x_n \to 0, but note that 0 \notin (0,1). And suppose we define the sequence of functions f_n(x) = \frac{x + n}{n}; naturally each f_n is continuous on (0,1), and it is easy to show that f_n \to 1 uniformly on (0,1), so the limiting function is f(x) = 1. Here f_n(x_n) = \frac{x_n}{n} + 1 = \frac{1}{n(n+1)} + 1 \to 1, but the point is that the quantity f(x) we are asked to compare against is f(0), which is not even defined, since 0 \notin (0,1). So on an open interval the statement \lim_{n \to \infty} f_n(x_n) = f(x) can fail to even make sense, simply because the limit x of the sequence (x_n) can escape the interval.

Hence, I think that we are forced to use the closed interval property, and we can do so by the Bolzano-Weierstrass theorem. What do you think? And thanks for your reply above!
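(And just to see a case where things genuinely break on an open interval, here's a throwaway numerical example, again made up purely for illustration: take f_n(x) = \sin(1/x) for every n, so trivially f_n \to f = \sin(1/x) uniformly on (0,1), and x_n = \frac{2}{(2n+1)\pi} \to 0 \notin (0,1); then f_n(x_n) = \pm 1 never settles down, and f(0) isn't defined anyway:)

Code:
import math

# Open-interval failure, with everything chosen just for this demo:
# f_n(x) = sin(1/x) for every n (so f_n -> f = sin(1/x) uniformly, trivially),
# x_n = 2/((2n+1)*pi) is a sequence in (0,1) with x_n -> 0, but 0 is not in
# (0,1), and f_n(x_n) = sin((2n+1)*pi/2) = +/-1 does not converge.

def f_n(n, x):
    return math.sin(1.0 / x)   # n is unused: every f_n is the same function here

for n in range(1, 9):
    x_n = 2.0 / ((2 * n + 1) * math.pi)
    print(n, round(x_n, 5), round(f_n(n, x_n), 5))   # values alternate -1, +1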
 
Hmmm, yes I agree that we need a closed interval, but I'm not sure if we necessarily need the Bolzano-Weierstrass theorem. It should be easy to prove directly from simple epsilon-delta arguments that if we have a sequence in [a,b], which by definition means that each term is in [a,b], then the limit must also be in [a,b].
 
snipez90 said:
Hmmm, yes I agree that we need a closed interval, but I'm not sure if we necessarily need the Bolzano-Weierstrass theorem. It should be easy to prove directly from simple epsilon-delta arguments that if we have a sequence in [a,b], which by definition means that each term is in [a,b], then the limit must also be in [a,b].

You know... actually, rereading my own proof version 2, I'm just making the exact same argument as yours (in fact, nearly word for word), only with everything switched from a plain vanilla sequence to a subsequence. Hmm... I guess I just thought too much about closed intervals, and whenever I see them, I start thinking about the beloved Bolzano-Weierstrass. Another reason I approached this problem with subsequences is that it's motivated by a similar proof of Dini's Theorem, in which the subsequential argument was essential.

But thanks for the input :)
 
Hmmm that's weird, is this problem from Spivak (probably stolen from Rudin)? I think the most difficult problem set I had during the spring quarter this year asked us to prove Dini's theorem, the problem you posted, and the converse of the problem you stated. Dini's theorem took a while, since I had never actually seen how to apply Bolzano-Weierstrass.
 
Haha... that's interesting. Dini's Theorem was actually just an exercise the instructor gave out on a problem set sheet, not something from a book.

But just for fun, let's prove Dini's Theorem. Before doing so, let's show the following lemma; the original Dini's Theorem then comes out immediately afterwards:

Suppose (f_n) is a sequence of continuous functions on a closed interval [a,b] such that, for each x \in [a,b], the sequence (f_n(x)) is nonincreasing. And suppose f_n \to 0 pointwise. Then f_n \to 0 uniformly.

------------------------------------------------------------------------------------------------

Here's the proof that I'd worked out for my problem set. Any comments again appreciated!

First, we'll show that f_n(x) \geq 0 for all x \in [a,b] and all n \in \mathbb{N}. Suppose not; then \exists N_0 and \exists x \in [a,b] such that f_{N_0}(x) < 0. Since (f_n(x)) is nonincreasing, n > N_0 implies f_n(x) \leq f_{N_0}(x) < 0, so |f_n(x)| \geq |f_{N_0}(x)| > 0 for all n > N_0. But since f_n \to 0 pointwise, there exists N_1 such that n > \max\{N_0, N_1\} implies |f_n(x)| < |f_{N_0}(x)| --- which is not possible.

For contradiction, suppose f_n \to 0 uniformly on [a,b] does not hold. That is, \exists \varepsilon > 0 such that \forall N, \exists n > N and \exists x \in [a,b] with |f_n(x) - 0| = f_n(x) \geq \varepsilon. Now we construct a sequence of numbers from [a,b] as follows. We claim that for each n \in \mathbb{N}, \exists x_n \in [a,b] such that f_n(x_n) \geq \varepsilon. Suppose this does not hold. Then \exists n_0 such that \forall x \in [a,b] we have f_{n_0}(x) < \varepsilon. But since (f_n(x)) is nonincreasing, n \geq n_0 implies f_n(x) \leq f_{n_0}(x) < \varepsilon for every x \in [a,b] --- which is not possible, as it contradicts the negation above, which guarantees arbitrarily large n and some x \in [a,b] with f_n(x) \geq \varepsilon.
 
(continue from above post)

Now, since (x_n)_{n \in \mathbb{N}} \subseteq [a,b], by the Bolzano-Weierstrass Theorem there exists a convergent subsequence, say (x_{n_k}), converging to some x_0 \in [a,b]. Since f_n \to 0 pointwise, in particular at x_0, \exists m \in \mathbb{N} such that |f_m(x_0) - 0| = f_m(x_0) < \varepsilon. And by the continuity of f_m, \lim_{k \to \infty} x_{n_k} = x_0 implies \lim_{k \to \infty} f_m(x_{n_k}) = f_m(x_0) < \varepsilon, so \exists K such that k > K implies f_m(x_{n_k}) < \varepsilon.

Thus, for k > \max\{K, m\} we have n_k \geq k > \max\{K, m\}, and again by the fact that (f_n(x)) is nonincreasing, f_{n_k}(x_{n_k}) \leq f_m(x_{n_k}) < \varepsilon ---- a contradiction, since the x_{n_k} were chosen so that f_{n_k}(x_{n_k}) \geq \varepsilon. This completes the proof.

---------

Dini's theorem is simply applying the above statement to g_n = f_n - f where f is the limiting function.
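As a quick numerical sanity check of the lemma (again just an illustration, with an arbitrary example satisfying the hypotheses: f_n(x) = x^n(1 - x) on [0,1] is continuous, nonincreasing in n for each fixed x, and tends to 0 pointwise, so the sup should tend to 0):

Code:
# Numerical sanity check of the lemma, with an arbitrary example sequence:
# f_n(x) = x**n * (1 - x) on [0,1] is continuous, nonincreasing in n for each
# fixed x, and tends to 0 pointwise, so the lemma predicts sup|f_n| -> 0.

def f_n(n, x):
    return x**n * (1.0 - x)

grid = [i / 1000.0 for i in range(1001)]        # grid of points in [0,1]
for n in (1, 5, 25, 125, 625):
    sup_on_grid = max(f_n(n, x) for x in grid)  # approximate sup norm of f_n
    print(n, sup_on_grid)                       # decreases toward 0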




Would you mind sharing how your class went about proving Dini's Theorem? :)
 
That looks like a good proof. There was a review session where my teacher spent over an hour trying to work out the proof (the instructor took the course as an undergrad at the same institution, but I guess he was a little rusty). Personally, I'm terrible at making the bookkeeping work out for the last part where you have to use the nonincreasing sequence hypothesis. But for the most part, this is more or less the proof some of us learned.

Here are a few comments. Once I negate a statement and it seems clear enough to me that I can generate a sequence, I usually just apply the Bolzano-Weierstrass theorem without further explanation, but this is a minor difference. I think you chose clearer notation than I did before, and your explicit use of continuity is probably a lot better than the argument I had before. I was more hand-wavy in mine I think, using the continuity of f_n to argue that f_n(y) < epsilon for all y close enough to x_0 (as you had labeled it). This is correct, but not entirely rigorous. I think you made everything clear and explicit and it looks correct, even though I'm pretty tired.
 
snipez90 said:
That looks like a good proof. There was a review session where my teacher spent over an hour trying to work out the proof (the instructor took the course as an undergrad at the same institution, but I guess he was a little rusty). Personally, I'm terrible at making the bookkeeping work out for the last part where you have to use the nonincreasing sequence hypothesis. But for the most part, this is more or less the proof some of us learned.

Here are a few comments. Once I negate a statement and it seems clear enough to me that I can generate a sequence, I usually just apply the Bolzano-Weierstrass theorem without further explanation, but this is a minor difference. I think you chose clearer notation than I did before, and your explicit use of continuity is probably a lot better than the argument I had before. I was more hand-wavy in mine I think, using the continuity of f_n to argue that f_n(y) < epsilon for all y close enough to x_0 (as you had labeled it). This is correct, but not entirely rigorous. I think you made everything clear and explicit and it looks correct, even though I'm pretty tired.

Thanks for your reply. When we went over this particular problem, the tricky part was actually how several statements had to be negated to obtain the desired properties (i.e. how f_n(x) \geq 0 was established and how the sequence (x_n) was obtained). But I agree with you --- once the sequence (x_n) is obtained, which in my opinion is actually the trickiest part, the rest just follows from Bolzano-Weierstrass, continuity, and the works.

Thanks for the input and this great dialogue!
 
For your "Dini lemma," I prefer a Heine-Borel approach instead of B-W. All the contradictions in the B-W make my head spin, plus it is hard to work out on the spot. Your "Dini lemma" is a great oral exam question.

First, by hypothesis, for each x, f_1(x)\ge f_2(x)\ge f_3(x)\ge\dots\ge0.

Let \epsilon > 0.

Fix x.

Since f_n(x) decreases to 0 as n increases, there exists N_x such that f_{N_x}(x) is small.

But f_{N_x} is continuous. Thus there exists an open neighborhood B_x of x, on which f_{N_x} is small.

Now you have an open cover by \{B_x\}. Apply Heine-Borel.

Can you fill in the details?
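(If a numerical picture helps: here's a rough sketch of the idea on a finite grid, reusing the throwaway example f_n(x) = x^n(1 - x) from a couple of posts back. It only eyeballs the "one index eventually works everywhere" conclusion; it is not a substitute for the covering argument you'd still need to write down.)

Code:
# Rough grid sketch of the Heine-Borel idea (illustration only; f_n below is
# the same arbitrary example used earlier in the thread, and a finite grid
# stands in for [a,b] -- the real proof needs the neighborhoods B_x and
# compactness to pass from finitely many points to the whole interval).

def f_n(n, x):
    return x**n * (1.0 - x)   # continuous, nonincreasing in n, -> 0 pointwise

eps = 0.01
grid = [i / 1000.0 for i in range(1001)]   # sample points of [0, 1]

def N_x(x):
    """Smallest n with f_n(x) < eps/2 (exists since f_n(x) decreases to 0)."""
    n = 1
    while f_n(n, x) >= eps / 2:
        n += 1
    return n

N = max(N_x(x) for x in grid)   # finitely many indices, so take the largest
print("largest N_x needed on the grid:", N)
# by monotonicity, f_N <= f_{N_x} < eps/2 at every grid point:
print("sup of f_N on the grid:", max(f_n(N, x) for x in grid))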
 