Uniform convergence of integrals

bbkrsen585

Homework Statement



Hi All, I've been having great difficulty making progress on this problem.

Suppose ##g_n## converges to ##g## a.e. on ##[0,1]##, that ##g_n## (for every ##n##) and ##h## are integrable over ##[0,1]##, and that ##|g_n|\leq h## for all ##n##.

Define ##G_n(x)=\int_{[0,x]}g_n\,d\mu## and ##G(x)=\int_{[0,x]}g\,d\mu## for ##x\in[0,1]##.

Prove: ##G_n## converges to ##G## uniformly on ##[0,1]##.

2. The attempt at a solution

So, here's what I've got. We can use the dominated convergence theorem to show that ##G_n## goes to ##G## pointwise on ##[0,1]##. Since ##[0,1]## is compact, that should also give us something to work with.
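To spell out that pointwise step (just a sketch, using the notation above): for each fixed ##x\in[0,1]## we have ##|g_n\chi_{[0,x]}|\leq h## and ##g_n\chi_{[0,x]}\rightarrow g\chi_{[0,x]}## a.e., so the dominated convergence theorem gives
$$G_n(x)=\int_{[0,x]}g_n\,d\mu\rightarrow\int_{[0,x]}g\,d\mu=G(x)$$
for each fixed ##x## separately. The catch is that this says nothing yet about the rate of convergence being the same for different ##x##.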

Moreover, I was thinking we could use Egorov's theorem to cut out the small set on which ##g_n## does not converge to ##g## uniformly; a rough sketch of that route is below. I'm not sure how to handle that set otherwise.
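Here is roughly how I imagine the Egorov route might go (just an idea, I haven't checked the details): given ##\delta>0##, Egorov's theorem gives a measurable set ##E\subset[0,1]## with ##\mu(E)<\delta## such that ##g_n\rightarrow g## uniformly on ##[0,1]\setminus E##. Then
$$|G_n(x)-G(x)|\leq\int_{[0,1]\setminus E}|g_n-g|\,d\mu+\int_{E}2h\,d\mu,$$
where the first term is at most ##\sup_{[0,1]\setminus E}|g_n-g|##, and the second can be made small by choosing ##\delta## via the absolute continuity of the integral of ##h## (using that ##|g|\leq h## a.e., which follows from ##|g_n|\leq h##).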
 
How about a straightforward argument: by the dominated convergence theorem,
$$\int_{[0,1]}|g_n-g|\,d\mu\rightarrow 0$$
and since we have
$$|G_n(x)-G(x)|=\left|\int_{[0,x]}(g_n-g)\,d\mu\right|\leq\int_{[0,x]}|g_n-g|\,d\mu\leq\int_{[0,1]}|g_n-g|\,d\mu,$$
the convergence is uniform. I hope I haven't overlooked something basic.
 
Yes, I agree with your argument, but I'm just not seeing how this shows uniform convergence. The LDCT gives us that the integral of ##g_n## converges pointwise to the integral of ##g## on ##[0,1]##. How can we infer from this that ##G_n(x)## goes uniformly to ##G(x)##?

Maybe what I'm not seeing is that we have effectively bounded ##|G_n(x)-G(x)|## by something that definitely converges to zero (as you have shown). But does this imply that ##G_n(x)## converges to ##G(x)## uniformly? I guess my question is really about how to prove uniform convergence more generally. Thanks.
 
To prove uniform convergence, we want a bound that depends on ##n## only, in particular one that is independent of ##x##. In this case, we obtained a bound on ##|G_n(x)-G(x)|## that is valid for every ##x## simultaneously. By taking ##n## large enough, we can make that bound, and hence the difference, small for all ##x## in ##[0,1]## at once. The independence of the bound from ##x## is what is crucial here.
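To make the bookkeeping explicit (a short sketch, using the bound from the earlier reply): given ##\varepsilon>0##, the dominated convergence theorem gives an ##N## such that ##\int_{[0,1]}|g_n-g|\,d\mu<\varepsilon## for all ##n\geq N##. Then for every ##n\geq N##,
$$\sup_{x\in[0,1]}|G_n(x)-G(x)|\leq\int_{[0,1]}|g_n-g|\,d\mu<\varepsilon.$$
Since ##N## was chosen without any reference to ##x##, this is exactly uniform convergence of ##G_n## to ##G## on ##[0,1]##.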
 