Uniform convergence and derivatives question

SUMMARY

The discussion centers on the theorem from Spivak's Calculus regarding the relationship between the derivative of the limit of a sequence of functions {fn} and the limit of their derivatives {fn'}. The theorem requires several assumptions: differentiability of {fn}, integrability of {fn'}, pointwise convergence to f, and uniform convergence of {fn'} to a continuous function g. Counterexamples, such as fn = sqrt(x^2 + 1/n^2) and fn = 1/n*sin(n^2 x), demonstrate that dropping these assumptions can lead to non-differentiable limits. The conversation also explores the implications of uniform convergence and the necessity of integrability for the derivatives.

PREREQUISITES
  • Understanding of uniform convergence in function sequences
  • Knowledge of differentiability and integrability of functions
  • Familiarity with the Mean Value Theorem
  • Basic concepts in real analysis, particularly regarding sequences of functions
NEXT STEPS
  • Research the implications of uniform convergence on differentiability in real analysis
  • Study counterexamples in the context of non-integrable derivatives
  • Explore the relationship between uniform convergence and pointwise convergence
  • Examine the Cauchy Integral Formula and its applications in complex analysis
USEFUL FOR

Mathematicians, students of real analysis, and anyone interested in the nuances of convergence and differentiability in function sequences.

Boorglar
In Spivak's Calculus, there is a theorem relating the derivative of the limit of the sequence {fn} with the limit of the sequence {fn'}.

What I don't like about the theorem is the huge amount of assumptions required:

" Suppose that {fn} is a sequence of functions which are differentiable on [a,b], with integrable derivatives fn', and that {fn} converges (pointwise) to f. Suppose, moreover, that {fn'} converges uniformly on [a,b] to some continuous function g. Then f is differentiable and f'(x) = lim n-->infinity fn'(x). "

Are really EACH of these assumptions necessary for this to be true? Are there counterexamples for any combination of missing hypotheses? With all these assumptions the proof is quite easy, and I suspect this might be the reason, but in this case, how many of these assumptions can we get rid of?

I've seen the counterexample of fn = sqrt(x^2+1/n^2), which converges uniformly to |x|, which is not differentiable. And also fn = 1/n*sin(n^2 x) which converges to 0 but the derivatives of fn do not always converge.
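Both counterexamples can be checked numerically. A quick sketch (the grid and sample values are my own choices, not from the thread):

```python
import math

# Counterexample 1: f_n(x) = sqrt(x^2 + 1/n^2) converges uniformly to |x|,
# since sup_x |f_n(x) - |x|| = f_n(0) = 1/n, yet |x| is not differentiable at 0.
def f(n, x):
    return math.sqrt(x * x + 1.0 / n**2)

xs = [i / 1000.0 for i in range(-1000, 1001)]   # grid on [-1, 1]
for n in (1, 10, 100):
    sup_gap = max(abs(f(n, x) - abs(x)) for x in xs)
    print(n, sup_gap)   # shrinks like 1/n

# Counterexample 2: f_n(x) = (1/n) sin(n^2 x) converges uniformly to 0
# (|f_n| <= 1/n), but f_n'(x) = n cos(n^2 x) gives f_n'(0) = n,
# so the derivatives do not converge at x = 0.
for n in (1, 10, 100):
    print(n, n * math.cos(n**2 * 0.0))   # grows without bound
```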

But what about counterexamples involving non-integrable derivatives, non-uniform convergence to a continuous g, or uniform convergence to a function g which is not continuous? And doesn't uniform convergence of the derivatives imply at least pointwise convergence of the functions? etc, etc... I think you get my point (no pun intended)...
 
That seems a bit stronger than it needs to be. Here is a more general theorem:

Let $f_n:[a,b]\rightarrow \mathbb{R}$ be a sequence of functions such that

1) $f_n$ is continuous on $[a,b]$,
2) $f_n$ is differentiable on $]a,b[$,
3) there exists an $x_0\in [a,b]$ such that $(f_n(x_0))_n$ converges,
4) the sequence $(f_n^\prime\vert_{]a,b[})_n$ converges uniformly.

Then

1) $(f_n)_n$ converges uniformly on $[a,b]$,
2) the uniform limit $f$ is differentiable on $]a,b[$,
3) $f_n^\prime(x)\rightarrow f^\prime(x)$ for all $x\in\,]a,b[$.

Note that in complex analysis, if we use complex differentiability, then this statement simplifies even more!
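As a concrete sanity check of this version, here is a toy sequence of my own (not from the thread) satisfying the hypotheses: $f_n(x) = x + \sin(nx)/n^2$ on $[0,1]$. The derivatives $f_n'(x) = 1 + \cos(nx)/n$ converge uniformly to $g = 1$, and $f_n(0) = 0$ converges, so $f_n$ should converge uniformly to $f(x) = x$ with $f' = 1 = \lim f_n'$:

```python
import math

# Toy sequence satisfying the hypotheses: f_n(x) = x + sin(nx)/n^2 on [0,1].
# Then f_n'(x) = 1 + cos(nx)/n -> 1 uniformly, and f_n(0) = 0 converges.
xs = [i / 500.0 for i in range(501)]   # grid on [0, 1]
for n in (10, 100, 1000):
    sup_f  = max(abs((x + math.sin(n * x) / n**2) - x) for x in xs)  # vs f(x) = x
    sup_fp = max(abs((1 + math.cos(n * x) / n) - 1.0) for x in xs)   # vs f'(x) = 1
    print(n, sup_f, sup_fp)   # both sup-norms shrink to 0, as the theorem predicts
```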
 
Thanks for this reply! I thought about it a lot and I found counterexamples when uniform convergence of {fn'} is not assumed, which means that this condition is in fact essential. Also, I see how we could simplify the theorem to your version, but I still have a problem:

Basically, if {fn} and {fn'} both converge uniformly on [a,b], I showed that the theorem is true. Now I need to show that uniform convergence of {fn'} together with convergence of {fn(x0)} implies uniform convergence of {fn}. This step seems difficult, and I could only prove it by assuming integrable fn'. The reason is that I can't make any link between fn' and fn if fn' is not integrable, since the FTC does not apply anymore...

Do you know how this step is proven for non integrable fn' ? (It's already quite hard to think of fn' which are not integrable, and they probably never appear in practice, but I simply want to know, out of curiosity and satisfaction).
 
Since $(f_n(x_0))_n$ converges, it is Cauchy, so for $p,q$ large enough

$$\|f_p(x_0)-f_q(x_0)\|\leq \frac{\varepsilon}{2}.$$

Apply uniform convergence of $f_n^\prime$ to find, again for $p,q$ large enough,

$$\sup_{y\in ]a,b[}\|f_p^\prime(y)-f_q^\prime(y)\|\leq \frac{\varepsilon}{2(b-a)}.$$

Apply the mean value theorem to $f_p-f_q$ to get, for every $x\in [a,b]$,

$$\begin{aligned}
\|f_p(x)-f_q(x)\| &\leq \|(f_p-f_q)(x)-(f_p-f_q)(x_0)\|+\|(f_p-f_q)(x_0)\|\\
&\leq |x-x_0|\,\sup_{y\in ]a,b[}\|f_p^\prime(y)-f_q^\prime(y)\| +\|(f_p-f_q)(x_0)\|\\
&\leq (b-a)\cdot\frac{\varepsilon}{2(b-a)}+\frac{\varepsilon}{2} = \varepsilon.
\end{aligned}$$

So $(f_n)_n$ is a uniform Cauchy sequence and thus uniformly convergent by completeness. Note that no integrability of $f_n^\prime$ is needed: the mean value theorem does the work the FTC did in Spivak's proof.
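The chain of inequalities can be spot-checked numerically. A sketch using a toy sequence of my own, $f_n(x) = x + \sin(nx)/n^2$ on $[0,1]$ with $x_0 = 0$ (the pair $p, q$ and the grid are arbitrary choices):

```python
import math

# Spot-check of the Cauchy estimate on [a,b] = [0,1], x0 = 0, for the
# toy sequence f_n(x) = x + sin(nx)/n^2 (my own example, not from the thread):
# |f_p(x) - f_q(x)| <= (b - a) * sup|f_p' - f_q'| + |f_p(x0) - f_q(x0)|.
def f(n, x):  return x + math.sin(n * x) / n**2
def fp(n, x): return 1 + math.cos(n * x) / n

xs = [i / 500.0 for i in range(501)]
p, q, x0, width = 7, 19, 0.0, 1.0     # width = b - a
sup_dp = max(abs(fp(p, y) - fp(q, y)) for y in xs)
bound = width * sup_dp + abs(f(p, x0) - f(q, x0))
ok = all(abs(f(p, x) - f(q, x)) <= bound for x in xs)
print(ok)   # True: the mean-value bound holds on the whole grid
```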
 
You need uniform convergence of f'n(x), and to make it easier you can restrict f'n(x) to be continuous.

As an example, f_k(x) = (1/k)sin(kx) converges uniformly on R to 0, but its derivative cos(kx) doesn't converge.
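A quick numeric look at this example (the evaluation point x = 1 and the tail range are my own choices): the functions shrink like 1/k, while the derivative values at a fixed point keep oscillating over nearly the full range [-1, 1]:

```python
import math

# f_k(x) = (1/k) sin(kx): |f_k(x)| <= 1/k, so f_k -> 0 uniformly on R.
for k in (1, 10, 100, 1000):
    print(k, abs(math.sin(k * 1.0)) / k)   # -> 0

# But f_k'(x) = cos(kx) does not converge: at x = 1, even a late tail of
# the sequence cos(k) still spreads over nearly all of [-1, 1].
tail = [math.cos(k * 1.0) for k in range(900, 1000)]
spread = max(tail) - min(tail)
print(spread)   # close to 2, so cos(k) is not settling down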

In the reals, you don't have any restriction on the derivative of a function based on the max/min values the function takes.

In complex analysis you have the Cauchy estimate and the Cauchy integral formula, which let you show that if a sequence of holomorphic functions converges locally uniformly to a function g, then g is holomorphic and the derivatives of the sequence converge to g'.
 
